Additive manufacturing techniques have been the focus of research and technological advances in recent years, gaining the capability to fabricate parts with complex geometries easily, rapidly, and with high precision, and enabling the use of different materials, the emergence of new techniques, and a range of applications beyond prototyping. However, additive manufacturing is still affected by deficiencies and challenges, such as the absence of sensing and control during the fabrication process, whose resolution would yield a more reliable process and printed part. This paper presents the development of an inference process based on probabilistic graphical models to track the motion of the extrusion nozzle during printing using linear encoders.
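The abstract does not specify which probabilistic graphical model is used; one common instance of probabilistic inference over noisy encoder readings is a linear-Gaussian state-space model, i.e., a Kalman filter. The sketch below is a minimal single-axis illustration under that assumption (the function name and the noise parameters `q` and `r` are hypothetical, not from the paper):

```python
def kalman_track(measurements, q=1e-3, r=0.05):
    """Minimal 1D Kalman filter over a stream of encoder readings.

    q: process-noise variance (how much the nozzle can drift per step)
    r: measurement-noise variance of the linear encoder
    Returns the filtered position estimate after each reading.
    """
    x = measurements[0]  # initial state estimate
    p = 1.0              # initial estimate variance
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows with motion
        k = p / (p + r)       # Kalman gain: trust in the new reading
        x += k * (z - x)      # update estimate toward the measurement
        p *= (1.0 - k)        # shrink uncertainty after the update
        estimates.append(x)
    return estimates
```

Because each estimate is a convex combination of past readings, the filtered trajectory smooths encoder noise while following the nozzle's actual motion.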
An appearance model plays a crucial role in multi-target tracking. In traditional approaches, the two steps of appearance modeling, i.e., visual representation and statistical similarity measurement, are modeled separately. Visual representation is achieved through either hand-crafted features or deep features, and statistical similarity is measured through a cross-entropy loss function. A loss function based on cross-entropy (KL-divergence, mutual information) finds closely related probability distributions for the targets; however, when targets have similar visual representations, it ends up mixing them. To tackle this problem, we propose a synergetic appearance model named the Single Shot Appearance Model (SSAM), based on a Siamese neural network. The network is trained with a contrastive loss function to measure the similarity between targets in a single shot: given two target patches as input, it outputs a contrastive score based on their similarity. The proposed model is evaluated with an accumulative dissimilarity metric on three datasets, where it achieves promising quantitative results against three baseline methods.
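The abstract does not give the exact loss formulation used to train SSAM; a standard contrastive loss of the kind commonly used with Siamese networks (Hadsell et al.'s margin-based form, with a hypothetical margin of 1.0) can be sketched as:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """Margin-based contrastive loss over a pair of embeddings.

    same=True  (same target):      pulls the embeddings together,
                                   loss = 0.5 * d^2
    same=False (different targets): pushes them at least `margin`
                                   apart, loss = 0.5 * max(0, margin - d)^2
    where d is the Euclidean distance between the two embeddings.
    """
    d = np.linalg.norm(emb_a - emb_b)
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```

Unlike a cross-entropy objective over class distributions, this loss operates directly on embedding distances, so two visually similar targets can still be separated as long as their embeddings end up at least the margin apart.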