Safe and comfortable rail travel is only possible on track that is in the correct geometric position. For this reason, track tamping machines are used worldwide to carry out this important track maintenance task. Turnout tamping refers to a complex procedure for improving and stabilizing the track geometry in turnouts, which is usually carried out by experienced operators. This application paper describes the current state of development of a 3D laser line scanner-based sensor system for a new turnout-tamping assistance system that is able to support and relieve the operator in complex tamping areas. A central task in this context is digital image processing, which performs so-called semantic segmentation (based on deep learning algorithms) on the 3D scanner data in order to detect essential and critical rail areas fully automatically.
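The abstract does not specify the segmentation network. As a rough illustration of how semantic segmentation could be applied to range images assembled from stacked laser line profiles, the sketch below labels each pixel of a depth image with a track-related class using a small encoder-decoder in PyTorch. The architecture, class set, and image size are assumptions for illustration, not the system described above.

```python
import torch
import torch.nn as nn

class RailSegNet(nn.Module):
    """Minimal encoder-decoder for per-pixel segmentation of a range image
    assembled from stacked 3D laser line profiles (illustrative architecture,
    not the system described in the abstract)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: one 256x256 depth image built from scanner profiles.
model = RailSegNet(num_classes=4)  # assumed classes: background, rail, sleeper, ballast
depth = torch.rand(1, 1, 256, 256)
labels = model(depth).argmax(dim=1)  # per-pixel class map
print(labels.shape)  # torch.Size([1, 256, 256])
```

Such a per-pixel class map would then be post-processed to flag rail areas where tamping tools must not be lowered; that step is outside this sketch.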
Holostream is a novel platform that enables high-quality 3D video communication on mobile devices (e.g., iPhones, iPads) over existing standard wireless networks. The major contributions are: (1) a novel high-quality 3D video compression method that drastically reduces both 3D geometry and color texture data sizes so that they can be transmitted within the bandwidths provided by existing wireless networks; (2) a novel pipeline for 3D video recording, encoding, compression, decompression, visualization, and interaction; and (3) a demonstration system that successfully delivered video-rate, photorealistic 3D video content through a standard wireless network to mobile devices. The novel platform improves the quality and expands the capabilities of popular applications that already utilize real-time 3D data delivery, such as teleconferencing and telepresence. This technology could also enable emerging applications that require high-resolution, high-accuracy 3D video data delivery, such as remote robotic surgery and telemedicine.
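The compression scheme itself is not detailed in the abstract. As a minimal sketch of one common way to push per-frame 3D geometry through standard image codecs, the example below quantizes a depth map to 16 bits, splits it across two 8-bit channels, and round-trips it through lossless PNG with OpenCV. The near/far range and the packing layout are illustrative assumptions, not necessarily the encoding used by Holostream.

```python
import numpy as np
import cv2

def pack_depth_to_bgr(depth_m, near=0.3, far=3.0):
    """Quantize a float depth map (meters) to 16 bits and split the result
    across two 8-bit channels so it survives a standard image codec.
    Illustrative only; near/far are assumed scene bounds."""
    d = np.clip((depth_m - near) / (far - near), 0.0, 1.0)
    q = (d * 65535).astype(np.uint16)
    bgr = np.zeros((*q.shape, 3), dtype=np.uint8)
    bgr[..., 0] = q >> 8    # high byte
    bgr[..., 1] = q & 0xFF  # low byte
    return bgr

def unpack_bgr_to_depth(bgr, near=0.3, far=3.0):
    q = (bgr[..., 0].astype(np.uint16) << 8) | bgr[..., 1]
    return near + (q / 65535.0) * (far - near)

# Round trip: encode the geometry of one frame as a compressed PNG buffer.
depth = np.random.uniform(0.5, 2.5, (480, 640)).astype(np.float32)
ok, buf = cv2.imencode(".png", pack_depth_to_bgr(depth))
recovered = unpack_bgr_to_depth(cv2.imdecode(buf, cv2.IMREAD_COLOR))
print(buf.nbytes, np.abs(recovered - depth).max())  # bytes on the wire, max error
```

A lossless codec is used here because splitting bits across channels is fragile under lossy compression; the color texture stream, by contrast, could tolerate a standard lossy video codec.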
A three-dimensional mobile display based on the computer-generated integral imaging technique for real-world objects using three-dimensional scanning is proposed. The real three-dimensional information of the object is acquired by three-dimensional scanning; the specification of the lens array is then entered by the user, and the acquired and entered information is organized for further processing. According to the acquired information, a virtual three-dimensional model of the object is generated, and a virtual space including the virtual model, a lens array, and an imaging plane is configured. After the elemental image array is generated from the virtual model, the orthographic-view image and depth slices are reconstructed and displayed on the mobile display according to user interaction. The data organization and image rendering processes are carried out through cloud computing because of the limited performance of the mobile device. The approach is verified experimentally, and the experimental results confirm that the proposed method is an efficient way to display a three-dimensional visualization of a real-world object on a mobile display.
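As an illustration of the elemental image array generation step, the sketch below projects the points of a scanned object through each lenslet center (treated as a pinhole) onto the imaging plane behind the array. The lens pitch, gap, and pixel resolution are assumed parameters, and a sparse point set stands in for the full virtual model used in the paper.

```python
import numpy as np

def elemental_image_array(points, lens_pitch=1.0, gap=3.0,
                          lenses=(10, 10), pixels_per_lens=32):
    """Generate an elemental image array for a 3D point set (x, y, z in mm),
    treating each lenslet as a pinhole at z = 0 and the imaging plane at
    z = -gap. Minimal computer-generated integral imaging sketch."""
    ny, nx = lenses
    eia = np.zeros((ny * pixels_per_lens, nx * pixels_per_lens), dtype=np.uint8)
    # Lens centers on a regular grid, centered on the optical axis.
    cx = (np.arange(nx) - (nx - 1) / 2) * lens_pitch
    cy = (np.arange(ny) - (ny - 1) / 2) * lens_pitch
    for j, ly in enumerate(cy):
        for i, lx in enumerate(cx):
            for px, py, pz in points:  # object points at z > 0
                # Project the point through the lens center onto the image plane.
                u = lx + (lx - px) * gap / pz
                v = ly + (ly - py) * gap / pz
                # Map plane coordinates to pixels of this elemental image.
                col = int((u - (lx - lens_pitch / 2)) / lens_pitch * pixels_per_lens)
                row = int((v - (ly - lens_pitch / 2)) / lens_pitch * pixels_per_lens)
                if 0 <= row < pixels_per_lens and 0 <= col < pixels_per_lens:
                    eia[j * pixels_per_lens + row, i * pixels_per_lens + col] = 255
    return eia

# Example: a few points of a scanned object roughly 50 mm in front of the array.
pts = np.array([[0.0, 0.0, 50.0], [2.0, 1.0, 60.0], [-3.0, -2.0, 55.0]])
print(elemental_image_array(pts).shape)  # (320, 320)
```

In the proposed system this rendering, along with the data organization, would run in the cloud; the mobile device only displays the resulting elemental image array and the reconstructed views.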