Digital Integrated Circuit Design Project
DNN accelerators are commonly used to speed up the inference phase of DNNs. They are designed to handle specific neural network operations with maximum efficiency, providing fast, low-latency inference. ASICs are ideal for edge devices where power efficiency and compactness are critical. They enable real-time inference directly on devices such as smartphones, IoT devices, and autonomous systems without relying on cloud-based processing. ASICs support custom neural network architectures and operations, ensuring that the hardware is well matched to the specific DNN workload. FPGAs can also be programmed to create custom accelerators for various stages of DNN processing, such as convolution, pooling, and fully connected layers. This allows for optimization specific to the neural network architecture and workload. FPGAs are often used for accelerating DNN inference (Tsai, Ho & Sheu, 2019). Their ability to execute highly parallel tasks with low latency makes them suitable for applications requiring real-time processing, such as autonomous driving, robotics, and video analytics.
DNNs are composed of multiple layers that transform input data into meaningful outputs. The fundamental operations of a DNN accelerator are matrix multiplication and addition; depending on the complexity of the DNN model, millions or billions of such operations are required per inference. In the digital domain, the central processing element that implements this functionality is the multiply-accumulate (MAC) unit. It is therefore important to design an efficient, fast, low-power MAC unit. In this project, you are expected to design a 36-bit MAC unit.
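To make the arithmetic concrete, below is a minimal behavioral sketch in C of the MAC operation underlying one output of a fully connected layer. The 16-bit operand width and the wrap-around behavior of the accumulator are illustrative assumptions for this sketch only; the actual operand, product, and accumulator widths, and the overflow handling, must follow the project specification for the 36-bit MAC unit.

/*
 * Behavioral sketch of a MAC-based dot product, as performed by a DNN
 * accelerator for one output neuron of a fully connected layer.
 * Assumption for illustration: 16-bit signed operands, 36-bit signed
 * accumulator with two's-complement wrap-around.
 */
#include <stdint.h>
#include <stdio.h>

#define ACC_BITS 36
#define ACC_MASK ((INT64_C(1) << ACC_BITS) - 1)

/* One MAC step: acc <- acc + a * b, wrapped to a 36-bit two's-complement value. */
static int64_t mac_step(int64_t acc, int16_t a, int16_t b)
{
    int64_t sum = acc + (int64_t)a * (int64_t)b;
    sum &= ACC_MASK;                         /* keep only the low 36 bits   */
    if (sum & (INT64_C(1) << (ACC_BITS - 1)))
        sum -= (INT64_C(1) << ACC_BITS);     /* sign-extend back to 64 bits */
    return sum;
}

int main(void)
{
    /* A single output neuron is a dot product of weights and activations,
       i.e. a chain of MAC operations. */
    int16_t weights[4]     = { 3, -2, 7,  1 };
    int16_t activations[4] = { 5,  4, -1, 9 };

    int64_t acc = 0;
    for (int i = 0; i < 4; i++)
        acc = mac_step(acc, weights[i], activations[i]);

    printf("dot product = %lld\n", (long long)acc);  /* 15 - 8 - 7 + 9 = 9 */
    return 0;
}

The sketch also illustrates a common design consideration: the accumulator is made wider than a single product so that many products can be summed before overflow becomes a concern; how the hardware MAC unit handles overflow (wrap, saturate, or guard bits) is a design decision to be justified in the project.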
Note: This is a continuation of the summer PURE project.