Point clouds generated from 3D scans of part surfaces consist of discrete 3D points, some of which may be erroneous outliers. Outlier points can be caused by the scanning method, part surface attributes, and data acquisition techniques. Filtering techniques to remove these outliers from point clouds frequently require a “guess and check” method to determine proper filter parameters. This paper presents two novel approaches that automatically determine proper filter parameters using the relationships among point cloud outlier removal, principal component variance, and the average nearest-neighbor distance. Two post-processing workflows were developed that reduce outlier frequency in point clouds using these relationships. These post-processing workflows were applied to point clouds with artificially generated noise and outliers, as well as to two real-world point clouds. Analysis of the results showed that both approaches effectively reduce outlier frequency when used in suitable circumstances.
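To make the underlying quantities concrete, the following minimal sketch (not the paper's exact workflow) implements a statistical filter driven by the average nearest-neighbor distance and reports principal component variances before and after filtering; the neighborhood size `k` and threshold `alpha` are illustrative assumptions.

```python
# Minimal sketch: statistical outlier filter based on the average
# nearest-neighbor distance, with principal component variances tracked
# before and after. `k` and `alpha` are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def pc_variances(points):
    """Variances of the cloud along its principal components."""
    centered = points - points.mean(axis=0)
    return np.sort(np.linalg.eigvalsh(np.cov(centered, rowvar=False)))[::-1]

def filter_outliers(points, k=8, alpha=2.0):
    """Drop points whose mean distance to their k nearest neighbors exceeds
    the global mean of that quantity by more than alpha standard deviations."""
    dists, _ = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    mean_nn = dists[:, 1:].mean(axis=1)                # average nearest-neighbor distance
    keep = mean_nn <= mean_nn.mean() + alpha * mean_nn.std()
    return points[keep]

rng = np.random.default_rng(0)
surface = rng.random((5000, 3)) * [1.0, 1.0, 0.01]     # thin, plate-like "scan"
outliers = rng.uniform(-1.0, 2.0, (200, 3))            # artificial outliers
cloud = np.vstack([surface, outliers])
print("variances before:", pc_variances(cloud))
print("variances after: ", pc_variances(filter_outliers(cloud)))
```

In the paper's workflows, parameters such as these are derived automatically from the variance and nearest-neighbor-distance relationships rather than fixed by hand as above.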
This article uses the concept of feature fusion to establish a deep learning model that can quickly recognize objects and to complete an anti-counterfeit label recognition system. The system is combined with training of the technology acceptance model (TAM) to evaluate user satisfaction with the anti-counterfeit label classification training. In this study, the fusion-based recognition program extracted feature sets for different categories of anti-counterfeit labels using multilayer convolutional neural networks (CNNs) of different depths. Using neighborhood components analysis, ten important feature sets from the different CNN models were selected and concatenated in parallel into a new small-scale feature fusion dataset. Naive Bayes and support vector machine methods then achieved efficient classification of the fused wine label image feature dataset. The feature fusion anti-counterfeiting label recognition system proposed in this article reached a maximum recognition accuracy of 99.29% with a data compression ratio of about 1/50; in addition to reducing training time, it maintained a high level of accuracy. This study established a TAM for the feature fusion anti-counterfeit label recognition system. The model was tested on 100 consumers, followed by a satisfaction evaluation and validation analysis using partial least squares structural equation modeling. The efficiency of the fusion-based deep learning model met the level of consumer satisfaction, which should help educate consumers in using such systems and enhance their willingness to promote and repurchase wine products in the future.
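As a sketch of the fusion pipeline, the code below keeps a small number of informative features from two stand-in CNN feature sets, concatenates them in parallel, and trains naive Bayes and SVM classifiers. Mutual-information-based selection is used here as a stand-in for the paper's neighborhood components analysis, all data are randomly generated, and `n_keep` and the feature dimensions are illustrative assumptions.

```python
# Hypothetical sketch of feature fusion: select informative columns from two
# CNN feature sets, concatenate them, and classify with SVM and naive Bayes.
# Mutual information stands in for neighborhood components analysis.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, 600)                               # 5 label categories
feats_a = rng.normal(size=(600, 512)) + labels[:, None] * 0.1  # stand-in CNN features
feats_b = rng.normal(size=(600, 1024)) + labels[:, None] * 0.1

def top_features(X, y, n_keep=10):
    """Keep the n_keep columns most informative about the class label."""
    return SelectKBest(mutual_info_classif, k=n_keep).fit_transform(X, y)

fused = np.hstack([top_features(feats_a, labels), top_features(feats_b, labels)])
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
for clf in (SVC(), GaussianNB()):
    print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))
```

The compression the abstract reports comes from this kind of reduction: each deep feature vector shrinks to a handful of selected components before classification.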
The aim of this work is to transfer a model trained on magnetic resonance images of human autosomal dominant polycystic kidney disease (ADPKD) to rat and mouse ADPKD models. A dataset of 756 MRI images of ADPKD kidneys was employed to train a modified UNet3+ architecture, which incorporated residual layers, switchable normalization, and concatenated skip connections for kidney and cyst segmentation tasks. The trained model was then subjected to transfer learning (TL) using data from two commonly utilized animal PKD models: the Pkhd1pck (PCK) rat and the Pkd1RC/RC (RC) mouse. Transfer learning achieved Dice similarity coefficients of 0.93±0.04 for kidneys and 0.63±0.16 for cysts (mean±SD) on the combined PCK+RC test datasets of animal images. We showcased the utilization of TL in situations involving constrained source and target datasets and achieved good accuracy in cases of class imbalance.
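For reference, the Dice similarity coefficient reported above can be computed from a predicted and a ground-truth binary mask as in this minimal sketch; the masks here are synthetic squares, not segmentation outputs.

```python
# Minimal sketch of the Dice similarity coefficient used to score kidney
# and cyst masks; `pred` and `truth` are binary segmentation masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1   # synthetic "truth"
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 15:45] = 1   # synthetic "prediction"
print(f"Dice = {dice(a, b):.3f}")
```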
Archeological textiles can provide invaluable insight into the past. However, they are often highly fragmented, and a puzzle has to be solved to re-assemble the object and recover the original motifs. Unlike common jigsaw puzzles, archeological fragments are highly damaged, and no correct solution to the puzzle is known. Although automatic puzzle solving has fascinated computer scientists for a long time, this work is one of the first attempts to apply modern machine learning solutions to archeological textile re-assembly. First and foremost, it is important to know which fragments belong to the same object. Therefore, features are extracted from digital images of textile fragments using color statistics, classical texture descriptors, and deep learning methods. These features are used to conduct clustering and identify similar fragments. Four different case studies with increasing complexity are discussed in this article: from well-preserved textiles with available ground truth to an actual open problem of Oseberg archeological tapestry with unknown solution. This work reveals significant knowledge gaps in current machine learning, which helps us to outline a future avenue toward more specialized application-specific models.
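As a hedged illustration of the fragment-grouping step, the sketch below combines simple color statistics with a classical gray-level co-occurrence texture descriptor and clusters the resulting feature vectors with k-means; the descriptor set, the random stand-in images, and the number of clusters are illustrative assumptions, not the study's exact configuration.

```python
# Hypothetical sketch: per-fragment color statistics plus a GLCM texture
# descriptor, clustered with k-means to group similar textile fragments.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def fragment_features(rgb):
    """Color means/stds plus GLCM contrast and homogeneity."""
    gray = rgb.mean(axis=2).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([
        rgb.reshape(-1, 3).mean(axis=0),            # per-channel color means
        rgb.reshape(-1, 3).std(axis=0),             # per-channel color spread
        [graycoprops(glcm, "contrast")[0, 0],
         graycoprops(glcm, "homogeneity")[0, 0]],
    ])

rng = np.random.default_rng(1)
images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
X = np.stack([fragment_features(im) for im in images])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels)                                        # cluster assignment per fragment
```

In the study, deep learning features are extracted alongside such classical descriptors, and the clusters are compared against ground truth where it exists.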
A continued challenge for preservation is the lack of objective data with which to make informed collection decisions. When considering a shared national print system, this challenge bears on decisions of withdrawal or retention, since catalog partners may not have data regarding the condition of others’ volumes. This conundrum led to a national research initiative funded by the Mellon Foundation, “Assessing the Physical Condition of the National Collection.” The project captured and analyzed condition data from 500 “identical” volumes held at five American research libraries to explore the following: What is the condition of book collections from 1840–1940? Can condition be predicted by catalog or physical parameters? What assessment tools might indicate a book’s life expectancy? Filling gaps in knowledge about the physicality of our collections is helping to identify at-risk collections and to explain cases of dissimilar “same” volumes based on the impact of paper composition. Predictive modeling and assessment tools are also used to improve the understanding of what is typical for specific eras.
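As a purely illustrative sketch of the predictive-modeling question (can condition be predicted from catalog or physical parameters?), the following fits a logistic regression on fabricated stand-in variables; no project data, variables, or findings are reproduced here.

```python
# Hypothetical sketch only: logistic regression of a condition label on
# stand-in catalog/physical parameters (publication year, paper pH,
# groundwood content). All data are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 500
year = rng.integers(1840, 1941, n)            # publication year from the catalog
ph = rng.normal(5.5, 1.0, n)                  # stand-in measured paper pH
groundwood = rng.random(n)                    # stand-in groundwood-pulp fraction
X = np.column_stack([year, ph, groundwood])
brittle = (ph + rng.normal(0, 0.5, n) < 5.0).astype(int)  # stand-in condition label

model = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(model, X, brittle, cv=5).mean().round(3))
```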
Size reduction of a point cloud or triangulated mesh is an intrinsic part of a three-dimensional (3D) documentation process, reducing the data volume and filtering out erroneous and redundant data obtained during acquisition. Further reduction, however, affects the geometric accuracy of the 3D data relative to the tangible object, and for 3D objects utilized in various cultural heritage applications, the small geometric properties of an object are as important as the large ones. In this paper, we investigate several simplification algorithms and the relevance of various geometric features to geometric accuracy during the reduction of a 3D object’s data size, and whether any of these features bear a particular relation to the results of an algorithmic approach. Different simplification algorithms were applied to several primitive geometric shapes at several reduction stages, and measured values for geometric features and accuracy were tracked across every stage. We then compute and analyze the correlation between these values to see the effect each algorithm has on different geometries, and whether some algorithms are better suited to a simplification process given the geometric features of a 3D object.
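The sketch below illustrates one such measurement loop: a primitive shape is decimated to successively smaller triangle budgets (here with Open3D's quadric decimation standing in for the algorithms the paper compares), and the geometric deviation from the original surface is estimated from sampled point-to-point distances. The reduction stages and sample counts are illustrative assumptions.

```python
# Hypothetical sketch: decimate a primitive to several triangle budgets and
# track the deviation from the original surface at each reduction stage.
import open3d as o3d
import numpy as np

original = o3d.geometry.TriangleMesh.create_sphere(radius=1.0, resolution=60)
ref_points = original.sample_points_uniformly(number_of_points=50000)

for fraction in (0.5, 0.1, 0.02):                    # successive reduction stages
    target = int(len(original.triangles) * fraction)
    simplified = original.simplify_quadric_decimation(target_number_of_triangles=target)
    test_points = simplified.sample_points_uniformly(number_of_points=50000)
    dists = np.asarray(test_points.compute_point_cloud_distance(ref_points))
    print(f"{target:6d} triangles: mean deviation {dists.mean():.5f}, max {dists.max():.5f}")
```

Tracking such deviations alongside geometric feature values at every stage yields the paired series whose correlations the paper analyzes.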
In this paper, we present a novel high-resolution projector photometric compensation method named HRPC. This method leverages a convolutional neural network architecture to compensate the projector input image before projection. The network incorporates multi-scale image feature pyramids and Laplacian pyramids to capture features at different levels. This enables scale-invariant learning of complex mappings among the projection surface image, the uncompensated projected image, and the ground truth image. Additionally, a non-local attention module and a multi-layer perceptron module are introduced into the bottleneck to enhance long-range dependency modeling and non-linear transformation abilities. Experiments on high-resolution projection datasets demonstrate HRPC’s ability to effectively compensate images with reduced color inconsistencies, illumination variations, and detail loss compared to state-of-the-art methods.
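As an illustration of the multi-scale decomposition such a network consumes, the following minimal sketch builds a Laplacian pyramid with OpenCV; the number of levels is an illustrative assumption, and this is not HRPC's actual implementation.

```python
# Minimal sketch of a Laplacian pyramid: each level holds the band-pass
# detail lost between one resolution and the next; the final entry is the
# low-frequency residual. Built with OpenCV's pyrDown/pyrUp.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Return [L0, L1, ..., residual], where Li holds scale-i detail."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)                 # band-pass detail at this scale
        current = down
    pyramid.append(current)                          # low-frequency residual
    return pyramid

img = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in projection image
for i, level in enumerate(laplacian_pyramid(img)):
    print(f"level {i}: {level.shape}")
```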
This project aims to digitally consolidate the 17th-century collection of Museum Faesch, overcoming physical accessibility issues. The goal is to unify the collection, previously dispersed among various institutions, enhancing its accessibility and promoting comprehensive understanding. By leveraging technology and a novel metadata management system, the project seeks to improve discoverability, user experience, and preservation of these valuable artifacts. By reuniting the dispersed collection, the endeavor not only fosters scholarly collaboration but also uncovers hidden narratives and historical contexts, offering a holistic view of Remigius Faesch’s culturally significant cabinet of curiosities.
Aerial work vehicles are widely used in a variety of aerial work scenarios. In these vehicles, energy saving is realized through telescopic motion control. Because personnel are carried on a telescopic boom, the work safety requirements are very high. The anti-rollover protection function is one of the important active safety technologies and is mainly realized through electrical, electronic, and programmable systems. At present, accurate and dynamic quantitative evaluation methods for the functional safety level are lacking; this problem can be addressed by a quantitative evaluation method based on the Markov model. In this paper, an evaluation method for a safety system with a heterogeneous redundant structure based on the Markov model is first studied. With this method, a Markov model is established for the active safety system of an aerial work vehicle, and safety parameters such as the probability of dangerous failure and reliability are calculated through numerical simulation. In this way, the designed safety system is shown to meet the design requirements of functional safety, and the evolution of the relevant safety parameters is followed dynamically through Markov simulation. Finally, with this method, the probability of dangerous failure of a complex safety system can be simulated and calculated so as to evaluate its safety parameters accurately and quantitatively.
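As a simplified illustration of the Markov evaluation, the sketch below propagates state probabilities through a three-state model (operational, safe failure, dangerous failure) with assumed failure and repair rates; a heterogeneous redundant architecture like the one studied would require a larger state space, and all rates here are illustrative.

```python
# Hypothetical sketch of a Markov safety evaluation: a three-state chain with
# assumed failure/repair rates, propagated over discrete one-hour steps to
# read off the probability of dangerous failure.
import numpy as np

lam_s, lam_d, mu = 1e-4, 2e-5, 1e-1   # safe/dangerous failure and repair rates (per hour)
dt = 1.0                               # one-hour discrete step

# States: 0 = operational, 1 = safe failure (repairable), 2 = dangerous failure
P = np.array([
    [1 - (lam_s + lam_d) * dt, lam_s * dt,  lam_d * dt],
    [mu * dt,                  1 - mu * dt, 0.0       ],
    [0.0,                      0.0,         1.0       ],   # absorbing state
])

state = np.array([1.0, 0.0, 0.0])      # start fully operational
for hour in range(8760):               # simulate one year
    state = state @ P
print(f"P(dangerous failure) after 1 year: {state[2]:.3e}")
print(f"Average rate estimate: {state[2] / 8760:.3e} per hour")
```

Repeating the propagation over time is what lets such an analysis follow the safety parameters dynamically rather than reporting only a single steady-state figure.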
Computed tomography (CT) images provide a wealth of anatomical information crucial for diagnosing femoral fractures. However, predicting these fractures poses challenges due to postural variabilities of the femur and device-related factors. This study introduces an approach for predicting femoral fracture from CT and mask images. The approach includes several stages: annotations for masks, the scaling iterative closest point (SICP) algorithm for registration, three-dimensional (3D) affine transformation of images, image histogram matching, and a two-channel 3D convolutional neural network (3DCNN). In the proximal femoral region, SICP is applied to adjust the size and posture of the point cloud by using 3D affine transformation to ensure alignment with the target point cloud. The 3D affine transformation, generated by SICP registration, is applied to the original CT and mask images, systematically normalizing variances in the femoral postures and sizes across different subjects. Image histogram matching is used to diminish the variances in image grayscale values that originate from the scanning devices. It redistributes the pixel grayscale distributions in CT images, aligning them more closely with a reference histogram. The two-channel 3DCNN takes as input CT images (i.e., the first channel) that have undergone 3D affine transformation and image histogram matching, along with their corresponding masks (i.e., the second channel), and delivers the probability of a fracture as its output. Results show that the predictive capability of the 3DCNN-based model is notable, achieving an accuracy of 91.299%, specificity of 91.551%, sensitivity of 91.071%, and an area under the curve of 0.973. In conclusion, this approach effectively minimizes the impact of irrelevant factors on prediction, optimally utilizing image information to assess the risks of femoral fracture. Moreover, this approach enhances the accuracy and reliability of fracture prediction.
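As an illustration of the histogram-matching stage, the following minimal sketch uses scikit-image's match_histograms to align the grayscale distribution of one volume to a reference; the arrays are random stand-ins for CT volumes.

```python
# Minimal sketch of histogram matching: redistribute the grayscale values of
# one volume to follow a reference histogram, reducing scanner-dependent
# intensity differences. The arrays are random stand-ins for CT volumes.
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(2)
reference = rng.normal(100, 30, (64, 128, 128))   # reference volume (HU-like values)
moving = rng.normal(160, 50, (64, 128, 128))      # volume from a different scanner

matched = match_histograms(moving, reference)
print("before:", moving.mean().round(1), moving.std().round(1))
print("after: ", matched.mean().round(1), matched.std().round(1))
```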