Automated driving functions, such as highway driving and parking assist, are increasingly being deployed in high-end cars with the goal of realizing self-driving capability, using deep learning (DL) techniques such as convolutional neural networks (CNNs) and Transformers. Camera-based perception, driver monitoring, driving policy, and radar and lidar perception are a few examples of functions built with DL algorithms in such systems. Traditionally, custom software provided by silicon vendors is used to deploy these DL algorithms on devices. This custom software is highly optimized for its supported features, which are limited, but it is not flexible enough to evaluate a variety of deep learning model architectures quickly. In this paper we propose using open-source deep learning inference frameworks to deploy any model architecture quickly, without any performance or latency impact. We have implemented the proposed solution with three open-source inference frameworks (TensorFlow Lite, TVM/Neo-AI-DLR, and ONNX Runtime) on Linux running on Arm.