We introduce Model Surgery, a novel approach for optimizing Deep Neural Network (DNN) models for efficient inference on resource-constrained embedded processors. Model Surgery tackles the challenge of deploying complex DNN models on edge devices by selectively pruning or replacing computationally expensive layers with more efficient alternatives. We examined the removal or substitution of layers such as Squeeze-and-Excitation, SiLU, Swish, HSwish, GeLU, and the Focus layer to create lightweight ``lite'' models. These lite models are then trained using standard training scripts for optimal performance. The benefits of Model Surgery are showcased through the development of several lite models that execute efficiently on the hardware accelerators of commonly used embedded processors. To quantify the effectiveness of Model Surgery, we compared the accuracy and inference time of the original and lite models by training and evaluating them on the ImageNet-1K and COCO datasets. Our results suggest that Model Surgery can significantly enhance the applicability and efficiency of DNN models in edge-computing scenarios, paving the way for broader deployment on low-power devices. The source code for Model Surgery is publicly available as part of the model optimization toolkit at https://github.com/TexasInstruments/edgeai-modeloptimization/tree/main/torchmodelopt.
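The layer-substitution idea behind Model Surgery can be illustrated with a minimal PyTorch sketch. This is a hypothetical example of replacing SiLU activations with hardware-friendly ReLU, not the toolkit's actual API; the function name `replace_silu_with_relu` and the toy model are assumptions for illustration.

```python
import torch.nn as nn

def replace_silu_with_relu(model: nn.Module) -> nn.Module:
    """Recursively swap SiLU activations for ReLU in place.

    A hypothetical illustration of one Model Surgery substitution;
    the real toolkit covers many more layer types.
    """
    for name, child in model.named_children():
        if isinstance(child, nn.SiLU):
            setattr(model, name, nn.ReLU(inplace=True))
        else:
            replace_silu_with_relu(child)  # recurse into submodules
    return model

# Toy model using SiLU, as found in many modern backbones.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.SiLU(),
    nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.SiLU()),
)
lite = replace_silu_with_relu(model)
```

After the swap, the lite model is retrained with the standard training recipe, since the substituted activations change the function the network computes.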
In this study, third-order polynomial regression (PR) and deep neural networks (DNNs) were used to perform color characterization from CMYK to CIELAB color space, based on a dataset of 2016 color samples produced with a Stratasys J750 3D color printer. Five output variables, including CIE XYZ, the logarithm of CIE XYZ, CIELAB, spectral reflectance, and the principal components of the spectra, were compared for printer color characterization performance. Ten-fold cross-validation was used to evaluate the accuracy of the models developed with the different approaches, and CIELAB color differences were calculated under the D65 illuminant. In addition, the effect of different training data sizes on predictive accuracy was investigated. The results showed that the DNN method produced much smaller color differences than the PR method but was highly dependent on the amount of training data. In addition, the logarithm of CIE XYZ as the output provided higher accuracy than CIE XYZ.
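A third-order polynomial regression from four CMYK channels can be sketched as a least-squares fit over all monomials up to degree three. The sketch below uses NumPy with random stand-in data (the real study fit 2016 measured samples); the helper name `poly_features` and the data shapes are assumptions for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=3):
    """Expand inputs into all monomials up to `degree`, bias included.

    For 4 CMYK channels and degree 3 this yields 1 + 4 + 10 + 20 = 35 terms.
    """
    n, d = X.shape
    cols = [np.ones(n)]  # bias term
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.stack(cols, axis=1)

# Hypothetical stand-in data: CMYK inputs and CIELAB-like targets.
rng = np.random.default_rng(0)
X = rng.random((200, 4))        # CMYK values in [0, 1]
Y = rng.random((200, 3)) * 100  # placeholder L*a*b* targets

Phi = poly_features(X, degree=3)             # 200 x 35 design matrix
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # least-squares coefficients
Y_hat = Phi @ W                              # predicted L*a*b* values
```

In the study, model accuracy would then be summarized by CIELAB color differences between predicted and measured values under ten-fold cross-validation.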
Images posted online present a privacy concern in that they may be used as reference examples for a facial recognition system. Such abuse of images violates privacy rights but is difficult to counter. It is well established that adversarial example images can be created for recognition systems based on deep neural networks. These adversarial examples can be used to disrupt the utility of the images as reference examples or training data. In this work, we use a Generative Adversarial Network (GAN) to create adversarial examples that deceive facial recognition, achieving an acceptable success rate in fooling the recognition system. We reduce the training time of the GAN by removing the discriminator component. Furthermore, our results show that knowledge distillation can be employed to drastically reduce the size of the resulting model without impacting performance, indicating that our approach could run comfortably on a smartphone.
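The knowledge-distillation step mentioned above typically combines a temperature-softened KL term against the teacher's outputs with ordinary cross-entropy on the labels. The PyTorch sketch below shows this standard objective (Hinton-style distillation); the function name, temperature, and weighting are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.9):
    """Standard knowledge-distillation objective (illustrative values).

    KL divergence between temperature-softened student and teacher
    distributions, plus cross-entropy against the hard labels.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to account for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy check: random logits for a batch of 8 over 10 classes.
s = torch.randn(8, 10, requires_grad=True)  # small "student" outputs
t = torch.randn(8, 10)                      # large "teacher" outputs
y = torch.randint(0, 10, (8,))
loss = distillation_loss(s, t, y)
loss.backward()
```

Minimizing this loss lets a much smaller student mimic the teacher's behavior, which is what enables the compact on-device model described in the abstract.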