Pages A12-1 - A12-5, © Society for Imaging Science and Technology 2020
Digital Library: EI
Published Online: January 2020
Pages 171-1 - 171-5, © Society for Imaging Science and Technology 2020
Volume 32
Issue 12

Quantification of food quality is a critical process for ensuring public health. Fish represents a particularly challenging case due to its highly perishable nature as a food. Existing approaches require laboratory testing, a laborious and time-consuming process. In this paper, we propose a novel approach for evaluating fish freshness by exploiting the information encoded in the spectral profile acquired by a snapshot spectral camera. To extract the relevant information, we employ state-of-the-art Convolutional Neural Networks and treat the problem as an instance of multi-class classification, where each class corresponds to a two-day period since harvesting. Experimental evaluation on individuals from the Sparidae family (Boops sp.) demonstrates that the proposed approach constitutes a valid methodology, offering both accuracy and effortless application.
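The class structure used in this abstract (one class per two-day period since harvesting) can be sketched as a simple label mapping. The period length and number of classes below are illustrative assumptions, not values taken from the paper:

```python
def freshness_class(days_since_harvest, period=2, num_classes=5):
    """Map days since harvest to a class index, one class per
    `period`-day window; days beyond the last window fall into
    the final class. Parameter values are illustrative only."""
    if days_since_harvest < 0:
        raise ValueError("days_since_harvest must be non-negative")
    return min(days_since_harvest // period, num_classes - 1)
```

A CNN trained on the spectral profiles would then predict this class index directly with a standard softmax output over `num_classes` classes.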

Digital Library: EI
Published Online: January 2020
Pages 172-1 - 172-7, © Society for Imaging Science and Technology 2020
Volume 32
Issue 12

CMOS image sensors play a vital role in the exponentially growing field of Artificial Intelligence (AI). Applications like image classification, object detection, and tracking are just some of the many problems now solved with the help of AI, and specifically deep learning. In this work, we target image classification to discern between six categories of fruits: fresh/rotten apples, fresh/rotten oranges, and fresh/rotten bananas. Using images captured from high-speed CMOS sensors along with lightweight CNN architectures, we show results on various edge platforms. Specifically, we show results using ON Semiconductor's global-shutter-based, 12 MP, 90 frames-per-second image sensor (XGS-12) and ON Semiconductor's 13 MP AR1335 image sensor feeding into MobileNetV2, implemented on NVIDIA Jetson platforms. In addition to the data captured with these sensors, we utilize an open-source fruits dataset to increase the number of training images. For image classification, we train our model on approximately 30,000 RGB images from the six categories of fruits. The model achieves an accuracy of 97% on edge platforms using ON Semiconductor's 13 MP camera with the AR1335 sensor. In addition to the image classification model, work is currently in progress to improve the accuracy of object detection using SSD and SSDLite with MobileNetV2 as the feature extractor. In this paper, we show preliminary results on the object detection model for the same six categories of fruits.
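An accuracy figure like the 97% reported above is usually accompanied by per-class accuracy over the six fresh/rotten categories. A minimal sketch of that bookkeeping, with label strings that are assumptions for illustration rather than the paper's actual class names:

```python
from collections import Counter

def per_class_accuracy(y_true, y_pred, classes):
    """Per-class and overall accuracy from parallel lists of
    ground-truth and predicted labels."""
    correct, total = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    per_class = {c: (correct[c] / total[c] if total[c] else 0.0)
                 for c in classes}
    overall = sum(correct.values()) / len(y_true)
    return per_class, overall
```

Breaking accuracy down per class is useful here because fresh/rotten pairs of the same fruit are the likeliest confusions, and an overall figure can hide a weak class.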

Digital Library: EI
Published Online: January 2020
Pages 173-1 - 173-6, © Society for Imaging Science and Technology 2020
Volume 32
Issue 12

Farmers do not typically have ready access to sophisticated color measurement equipment. The idea that farmers could use their smartphones to determine when and if crops are ready for harvest was the driving force behind this project. If farmers could use their smartphones to image their crops, in this case tomatoes, to determine their ripeness and readiness for harvest, their farming practices could be simplified. Five smartphone devices were used to image tomatoes at different stages of ripeness. A relationship was found to exist between the hue angles taken from the smartphone images and those measured by a spectroradiometer. Additionally, a tomato color checker was created using the spectroradiometer measurements. It is intended to be made of a material that makes it easy to transport into the field, and the chart is intended for use in camera calibration for future imaging. Different cloth materials were tested, with the eventual choice being a canvas material with a black felt backing; other possibilities are being investigated. The results from the smartphones and the charts will be used in further research on the application of color science in agriculture. Other possible future applications include monitoring progress relative to irrigation and fertilization programs and detection of pests and disease.
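The hue angles compared in this abstract are conventionally computed in CIELAB, where the hue angle h_ab is the angle of the (a*, b*) point. A minimal sketch, assuming CIELAB coordinates are already available from the smartphone or spectroradiometer processing pipeline:

```python
import math

def hue_angle_deg(a_star, b_star):
    """CIELAB hue angle h_ab = atan2(b*, a*), folded into [0, 360)."""
    return math.degrees(math.atan2(b_star, a_star)) % 360.0
```

Tracking this single angle per tomato gives a one-dimensional ripeness cue that can be compared directly between devices once each camera is calibrated against the color chart.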

Digital Library: EI
Published Online: January 2020
Pages 174-1 - 174-6, © Society for Imaging Science and Technology 2020
Volume 32
Issue 12

On a cattle farm, it is important to monitor the activity of cattle to assess their health condition and prevent accidents. Conventional methods used sensors to recognize cattle activity, but attaching sensors to the animals may cause stress. Cameras have also been used to recognize cattle activity, but it is difficult to identify individual cattle because they have similar appearances, especially black or brown cattle. We propose a new method to identify cattle and recognize their activity using surveillance cameras. The cattle are first recognized by a CNN deep learning method; face and body areas of the cattle, as well as sitting and standing states, are recognized separately at the same time. Image samples from day and night were collected to train the model to recognize cattle around the clock. Among the recognized cattle, initial ID numbers are assigned in the first frame of the video to identify each animal. Then particle filter object tracking is used to track the cattle. By combining cattle recognition and tracking results, the ID numbers of the cattle are carried over to the following frames of the video. Cattle activity is recognized using multiple frames of the video: in the face and body areas of the cattle, active or static activities are recognized, and the activity times for these areas are output as the cattle activity recognition results. Cattle identification and activity recognition experiments were conducted on a cattle farm with wide-angle surveillance cameras. Evaluation results demonstrate the effectiveness of our proposed method.
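The step of carrying ID numbers from recognized cattle into later frames amounts to matching each tracked box to its best-overlapping detection in the next frame. The greedy IoU matcher below is an illustrative stand-in for this bookkeeping, not the authors' particle filter implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def carry_ids(tracks, detections, threshold=0.3):
    """Carry each track's ID (dict: id -> box) to the best-overlapping
    detection in the next frame; unmatched detections get new IDs."""
    ids, used = {}, set()
    for tid, box in tracks.items():
        best_i, best_v = None, threshold
        for i, det in enumerate(detections):
            if i in used:
                continue
            v = iou(box, det)
            if v > best_v:
                best_i, best_v = i, v
        if best_i is not None:
            ids[best_i] = tid
            used.add(best_i)
    next_id = max(tracks, default=-1) + 1
    for i in range(len(detections)):
        if i not in ids:
            ids[i] = next_id
            next_id += 1
    return ids
```

A particle filter replaces the raw previous-frame box with a motion-predicted one, which makes the same matching step robust when cattle move or detections drop out for a few frames.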

Digital Library: EI
Published Online: January 2020
