Estimating pose from fiducial markers is a widely researched topic of practical importance in computer vision, robotics, and photogrammetry. In this paper, we aim to quantify the accuracy of pose estimation in real-world scenarios. More specifically, we investigate six factors that impact the accuracy of pose estimation: number of points, depth offset, planar offset, manufacturing error, detection error, and constellation size. Their influence is quantified for four non-iterative pose estimation algorithms, based on the direct linear transform, direct least squares, robust perspective-n-point, and infinitesimal planar pose estimation, respectively. We present empirical results that are instructive for selecting a well-performing pose estimation method and for mitigating the factors that degrade the rotational and translational accuracy of pose estimation.
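As a point of orientation, the direct linear transform (DLT) underlying the first of the four algorithms can be sketched as a homography estimation from point correspondences. The function name and synthetic data below are illustrative, not from the paper:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the
    direct linear transform, from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on vec(H).
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# Recover a known homography from four synthetic correspondences.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1.2, 0.1, 0.3],
                   [0.0, 0.9, -0.2],
                   [0.05, 0.02, 1.0]])
pts = np.hstack([src, np.ones((4, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:3]
H_est = dlt_homography(src, dst)
```

For a planar marker, the estimated homography is then decomposed (with the camera intrinsics) into the rotation and translation of the marker plane.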
We propose a deep learning-based approach for pig keypoint detection. In a nutshell, we explore transfer learning to adapt a human pose estimation model to pigs. In total, we tested three different models and ultimately trained OpenPose on the pig data. For training, the data are annotated in COCO format. Additionally, we visualize the pixel-level response of the network's part affinity fields (PAFs) on the test frames to highlight the model's learning capabilities. The trained model shows promising results and opens new doors for further research.
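The COCO keypoint format mentioned above stores each point as an (x, y, visibility) triple in a flat list. A minimal illustrative annotation, with invented keypoint names and coordinates (the field names follow the COCO specification):

```python
# A minimal COCO-style keypoint annotation. Visibility flags:
# 0 = not labeled, 1 = labeled but occluded, 2 = labeled and visible.
annotation = {
    "image_id": 1,
    "category_id": 1,
    "keypoints": [            # flat list of (x, y, v) triples
        250.0, 120.0, 2,      # e.g. snout (illustrative)
        300.0, 140.0, 2,      # e.g. shoulder (illustrative)
        410.0, 150.0, 1,      # e.g. tail base, occluded (illustrative)
    ],
    "num_keypoints": 3,       # count of labeled points (v > 0)
}

# Consistency check: num_keypoints must equal the number of triples with v > 0.
labeled = sum(1 for v in annotation["keypoints"][2::3] if v > 0)
```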