Deep learning has enabled rapid advances in the field of image processing. Learning-based approaches have achieved striking success over their traditional signal-processing counterparts across a variety of applications, such as object detection and semantic segmentation. This has driven the parallel development of hardware architectures optimized for real-time inference of deep learning algorithms. Embedded devices tend to have hard constraints on internal memory and must rely on larger (but comparatively much slower) DDR memory to store the large volumes of intermediate data generated while executing deep learning algorithms. The surrounding software systems must therefore evolve to exploit the optimized hardware, balancing compute time against data-movement cost. We propose such a generalized framework that, given a set of compute elements and a memory arrangement, devises an efficient method for processing multidimensional data to minimize the inference time of deep learning algorithms for vision applications.
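The compute/data-movement balance described above is commonly handled on DSPs with double-buffered (ping-pong) tiling: while one small on-chip buffer is being computed on, the next tile is transferred from slow DDR into the other buffer. The following is a minimal, hypothetical Python simulation of that scheduling idea only; all function names are illustrative and it does not depict the paper's actual framework, in which the transfer would be an asynchronous DMA overlapping the compute.

```python
def tile_schedule(total_len, tile_len):
    """Split a 1-D workload into (offset, size) tiles."""
    return [(off, min(tile_len, total_len - off))
            for off in range(0, total_len, tile_len)]

def process_double_buffered(ddr_in, tile_len, compute):
    """Simulate ping-pong tiling: while tile i is processed from one
    on-chip buffer, tile i+1 is 'DMA-ed' into the other buffer.
    (Here the copy is synchronous; on real hardware it overlaps compute.)"""
    tiles = tile_schedule(len(ddr_in), tile_len)
    bufs = [None, None]                     # two small on-chip buffers
    off, size = tiles[0]
    bufs[0] = ddr_in[off:off + size]        # prefetch the first tile
    out = []
    for i, (off, size) in enumerate(tiles):
        cur = i % 2
        if i + 1 < len(tiles):              # issue 'DMA' for the next tile
            noff, nsize = tiles[i + 1]
            bufs[1 - cur] = ddr_in[noff:noff + nsize]
        out.extend(compute(bufs[cur]))      # compute on the current tile
    return out

data = list(range(10))
result = process_double_buffered(data, 4, lambda tile: [x * 2 for x in tile])
```

The sketch assumes a 1-D workload for brevity; the multidimensional case in the paper would tile along several axes with the same overlap principle.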
Aaron Sequeira, Febin Sam, Anshu Jain, Pramod Swami, "Scalable and Efficient Orchestration of Machine Learning Workloads on DSPs with Multi-level Memory Architecture," in Electronic Imaging, 2024, pp. 255-1 - 255-5, https://doi.org/10.2352/EI.2024.36.10.IPAS-255