Among hospitalized patients, getting out of bed can lead to fall injuries, 20% of which are severe cases such as broken bones or head injuries. To monitor patients' bedside status, we propose a deep neural network model, the Bed Exit Detection Network (BED Net), for bed-exit behavior recognition. BED Net consists of two sub-networks: a Posture Detection Network (Pose Net) and an Action Recognition Network (AR Net). Pose Net leverages state-of-the-art neural-network-based keypoint detection algorithms to detect human postures from color camera images. The output sequences from Pose Net are passed to AR Net for bed-exit behavior recognition. By using a pre-trained model as an intermediary, we train the proposed network on a newly collected small dataset, the HP-BED-Dataset. We present results for the proposed BED Net.
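The two-stage structure described above can be illustrated with a minimal sketch. The abstract does not give the internals of either sub-network, so everything here is a hypothetical stand-in: the function name `classify_bed_exit`, the keypoint index for the hip, and the displacement threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def classify_bed_exit(keypoints_seq, motion_threshold=0.5):
    """Toy stand-in for the AR Net stage: given a sequence of postures
    from a keypoint detector (the Pose Net stage), flag a bed-exit when
    the hip keypoint's accumulated displacement exceeds a threshold.

    keypoints_seq: T x K x 2 array of normalized (x, y) keypoints.
    Index 0 is assumed (hypothetically) to be the hip.
    """
    hip = keypoints_seq[:, 0, :]
    # Sum of per-frame Euclidean displacements of the hip keypoint.
    displacement = np.linalg.norm(np.diff(hip, axis=0), axis=1).sum()
    return "bed-exit" if displacement > motion_threshold else "in-bed"
```

A real system would replace the threshold rule with a learned sequence classifier; the sketch only shows the pipeline shape (per-frame keypoints in, a behavior label out).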
We present a novel acceleration strategy for the computer vision and machine learning fields, from both algorithmic and hardware-implementation perspectives. With our approach, costly mathematical operations such as multiplication can be greatly simplified. As a result, an accelerated machine learning method requires no more than ADD operations, which dramatically reduces processing time, hardware complexity, and power consumption. We illustrate the applicability on a HOG+SVM machine learning example, where the accelerated version achieves comparable accuracy on real datasets of human figures and digits.
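The abstract does not specify how multiplication is reduced to additions, but one classic add-only scheme in this spirit is Mitchell's logarithmic multiplier: approximate each operand's base-2 logarithm with its leading-one position plus a linear mantissa term, add the two approximate logs, and decode. The sketch below is that textbook technique, offered as one plausible interpretation rather than the paper's actual algorithm.

```python
def mitchell_log2(x):
    """Mitchell's piecewise-linear approximation of log2(x) for x >= 1:
    k + (x - 2**k) / 2**k, where k is the position of the leading one."""
    k = x.bit_length() - 1
    return k + (x - (1 << k)) / (1 << k)

def mitchell_multiply(a, b):
    """Approximate a * b using only additions in the log domain.
    The worst-case relative error of this scheme is about 11%."""
    s = mitchell_log2(a) + mitchell_log2(b)   # log2(a*b), approximately
    k = int(s)                                # integer part -> power of two
    return (1 << k) * (1 + (s - k))           # linear decode of the fraction
```

For example, `mitchell_multiply(3, 5)` yields 14.0 against the exact 15, and products of powers of two are exact; in hardware, both the log encode and the decode reduce to shifts and adds.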
In autonomous driving applications, cameras are a vital sensor, as they provide structural, semantic, and navigational information about the vehicle's environment. While image quality is a concept well understood for human viewing applications, it is not well defined for computer vision. Consequently, in systems where human viewing and computer vision are both outputs of one video stream, the subjective experience for human viewing has historically dominated computer vision performance when tuning the image signal processor. However, the growing prominence of autonomous driving and computer vision brings research on the impact of image quality in camera-based applications to the fore. In this paper, we provide results quantifying the accuracy impact of sharpening and contrast on two image feature registration algorithms and on pedestrian detection. We obtain encouraging results that illustrate the merits of tuning image signal processor parameters for vision algorithms.
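The two ISP parameters studied above can be simulated in a few lines, which is useful for reproducing this kind of experiment offline. The sketch below is a minimal NumPy-only illustration, not the paper's ISP pipeline: the 3x3 sharpening kernel and the gain-around-the-mean contrast model are common textbook choices, assumed here for demonstration.

```python
import numpy as np

# Standard 3x3 sharpening (unsharp-style) kernel; symmetric, so
# correlation and convolution coincide.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D convolution (no padding), NumPy only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def adjust_contrast(img, gain):
    """Scale pixel deviations from the image mean by `gain`,
    clipping to the 8-bit range [0, 255]."""
    return np.clip((img - img.mean()) * gain + img.mean(), 0, 255)
```

Running a feature registration or pedestrian detector on `convolve2d(img, SHARPEN)` and `adjust_contrast(img, gain)` for a sweep of settings reproduces the style of experiment the paper reports: sharpening introduces overshoot around edges, and contrast gain stretches or clips the intensity range, both of which change detector responses.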