<aside> <img src="/icons/burst_gray.svg" alt="/icons/burst_gray.svg" width="40px" />

Domains: Computer Architecture, Verilog HDL Design, Cache Design, RISC-V, Digital Design

</aside>

https://github.com/shashvatprabhu/SynapseCache

Overview

This project covers the design, implementation, and verification of a complete instruction-cache subsystem for an existing pipelined RISC-V CPU core. The goal was to improve processor performance by adding three cache architectures, each reducing memory-access latency in a different way: direct-mapped, N-way set-associative, and N-way set-associative with multi-word blocks.

attachment:aec50a59-226c-454d-8645-42046d0f3f9e:Screencast_from_09-23-2025_020040_PM.webm

Key Concepts

Cache Memory: High-speed storage that sits between the CPU and main memory, reducing access latency by holding frequently used instructions.

Direct-Mapped Cache: The simplest cache organization, in which each memory address maps to exactly one cache location. Access is fast, but the single placement choice makes the design susceptible to conflict misses.
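A direct-mapped lookup splits the address into tag and index fields: the low bits pick the line, and the stored tag must match the remaining bits for a hit. A minimal behavioral sketch in Python (illustrative only, not the project's Verilog; the 1024-line, one-word-per-line geometry is an assumption matching the design described below):

```python
NUM_LINES = 1024
INDEX_BITS = 10          # log2(NUM_LINES)

class DirectMappedCache:
    def __init__(self):
        # Each line holds (valid, tag, data).
        self.lines = [(False, 0, None)] * NUM_LINES

    def access(self, addr, mem):
        index = addr & (NUM_LINES - 1)   # low bits select the line
        tag = addr >> INDEX_BITS         # remaining bits identify the block
        valid, line_tag, data = self.lines[index]
        if valid and line_tag == tag:
            return True, data            # hit
        data = mem[addr]                 # miss: fetch from memory and fill
        self.lines[index] = (True, tag, data)
        return False, data
```

Two addresses that share the same low 10 bits (e.g. 5 and 1029) map to the same line and evict each other, which is exactly the conflict-miss behavior the set-associative designs address.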

Set-Associative Cache: A cache organization in which each memory address may map to any of several locations (ways) within a set, reducing conflict misses through more flexible placement.
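In a 4-way set-associative cache, a lookup compares the tag against all four ways of the selected set in parallel; on a miss, a replacement policy picks the victim way. A behavioral sketch in Python of the 4-way, round-robin scheme used in this project (the 256-set geometry is an assumption, 1024 words divided by 4 ways):

```python
NUM_SETS = 256           # assumption: 1024 words / 4 ways
WAYS = 4
INDEX_BITS = 8           # log2(NUM_SETS)

class SetAssociativeCache:
    def __init__(self):
        # Each set holds WAYS lines of (valid, tag, data).
        self.sets = [[(False, 0, None)] * WAYS for _ in range(NUM_SETS)]
        self.rr = [0] * NUM_SETS   # round-robin victim pointer per set

    def access(self, addr, mem):
        index = addr & (NUM_SETS - 1)
        tag = addr >> INDEX_BITS
        for valid, line_tag, data in self.sets[index]:
            if valid and line_tag == tag:
                return True, data          # hit in any way
        way = self.rr[index]               # miss: round-robin picks the victim
        self.rr[index] = (way + 1) % WAYS
        data = mem[addr]
        self.sets[index][way] = (True, tag, data)
        return False, data
```

Four addresses that conflict in a direct-mapped cache (same index, different tags) now coexist in one set; only a fifth conflicting address forces an eviction. Round-robin is cheap in hardware, a small counter per set, at the cost of sometimes evicting a recently used line.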

Approach and Workflow

  1. Design and Implementation Define cache parameters and develop parameterizable Verilog modules for three architectures: direct-mapped cache (1024 words), N-way set associative cache (4-way, round-robin replacement), and N-way multi-word cache (4-way, 4-word blocks with burst interface).
  2. Performance Analysis and Comparison Evaluate cache performance with a unified benchmark suite and analyze the trade-offs between architectures, measuring hit rates (0-97%), average access times (1.2-11 cycles), and memory-bandwidth efficiency (up to an 8× improvement).
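The trade-off analysis in step 2 rests on the standard average-memory-access-time (AMAT) relation: AMAT = hit time + miss rate × miss penalty. A small sketch with illustrative numbers (assumptions for demonstration, not the project's measured results):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in cycles."""
    return hit_time + miss_rate * miss_penalty

# Illustrative parameters: 1-cycle hit, 10-cycle miss penalty.
# At a 97% hit rate, AMAT is about 1.3 cycles;
# at a 0% hit rate, every access pays the full penalty (11 cycles).
best = amat(1, 0.03, 10)
worst = amat(1, 1.0, 10)
```

This is why small improvements in hit rate dominate the measured access-time range: the miss penalty is roughly an order of magnitude larger than the hit time, so AMAT is driven almost entirely by the miss rate.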