Due to the limited availability of GPS-like signals indoors and the prevailing deployment of WLAN infrastructure in these environments, many state-of-the-art indoor positioning techniques operate on collections of WLAN signal measurements, called wireless fingerprints (or simply fingerprints), that relate quite uniquely to user locations. As WLAN infrastructure was not historically designed for localization, the research community has addressed several challenges to achieve robust operation of indoor positioning systems. While other problems still hinder the broad deployment of indoor navigators, the accumulated critical mass of scientific knowledge in this area is expected to drastically change indoor location awareness, similar to the GPS revolution in outdoor navigation. This paper reviews the main concepts of WLAN localization as a short introduction to this emerging transformative technology.
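To make the fingerprinting idea concrete (this sketch is illustrative and not taken from the paper), the following Python snippet estimates a position from an observed RSSI vector by weighted k-nearest-neighbour matching against a small offline radio map; the access points, coordinates, and signal values are all hypothetical.

```python
import numpy as np

# Hypothetical offline radio map: (x, y) reference position and the RSSI
# fingerprint (dBm) observed there for three access points.
radio_map = [
    ((0.0, 0.0), np.array([-40.0, -70.0, -80.0])),
    ((5.0, 0.0), np.array([-55.0, -50.0, -75.0])),
    ((5.0, 5.0), np.array([-70.0, -45.0, -60.0])),
    ((0.0, 5.0), np.array([-60.0, -65.0, -50.0])),
]

def locate(observed_rssi, k=3):
    """Weighted k-NN in signal space: closer fingerprints get larger weights."""
    dists = np.array([np.linalg.norm(observed_rssi - fp) for _, fp in radio_map])
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)
    coords = np.array([radio_map[i][0] for i in nearest])
    return tuple(np.average(coords, axis=0, weights=weights))

# Fingerprint measured online at an unknown location.
print(locate(np.array([-50.0, -55.0, -72.0])))
```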
With the rapid growth of digital image processing and its wide use in many applications, the ability to measure object dimensions and area remotely offers great advantages in engineering, industry, and personal use. This paper therefore proposes an approach to determine object dimensions from digital images. The system performs automated image-based measurement through an Android mobile application: the user takes an image of the object to be measured, such as a door or window. The system detects the object to be processed and then automatically derives the image scale from the camera zoom property to measure the dimensions of the desired object. The proposed method was applied and tested on images of different objects taken in different places and with different characteristics such as size, brightness, and angle. The accuracy reached 100% for determining object corners and dimensions. The system detected the objects in 81% of cases, with failures caused by noise and clarity issues. The dimension measurements show an error of ±0.9%, which varies with the focal length and zoom of the camera.
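As a hedged illustration of turning an image scale into physical dimensions (the paper's own scale derivation from the zoom property is not reproduced here), the sketch below applies the basic pinhole-camera relation: the object's real extent equals its extent on the sensor scaled by the distance-to-focal-length ratio. All numeric values are invented for the example.

```python
def object_size_mm(pixel_extent, focal_length_mm, distance_mm, pixel_pitch_mm):
    """Pinhole-camera estimate of an object's real-world extent.

    pixel_extent    -- measured extent of the object in the image, in pixels
    focal_length_mm -- effective focal length (depends on the zoom setting)
    distance_mm     -- distance from the camera to the object plane
    pixel_pitch_mm  -- physical size of one sensor pixel
    """
    size_on_sensor_mm = pixel_extent * pixel_pitch_mm
    return size_on_sensor_mm * distance_mm / focal_length_mm

# Hypothetical door: 1867 px tall in the image, 4.2 mm focal length,
# 3 m away, 1.5 micrometre pixel pitch -> roughly 2000 mm.
print(object_size_mm(1867, 4.2, 3000, 0.0015))
```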
Color is an important aspect of camera quality. Above all, in a visual effects (VFX) pipeline it is necessary to maintain a linear relationship between the pixel color in the recorded image and the original light of the scene throughout every step of the production pipeline. This means that the plate recorded by the camera must not be altered in any way ("do no harm to the plate"). Unfortunately, most camera vendors apply certain functions to the recorded RAW material during ingest, mostly to meet the needs of the display devices at the end of the pipeline, but they also add functions to establish a certain look that the camera company is associated with. Maintaining a linear relationship to the light of the scene enables compositing artists and editors to combine imagery from varying sources (mostly cameras of different vendors). If, for example, an action scene is filmed with an ARRI film camera to capture the performance of the principal actors, additional imagery may be derived from action cameras such as the GoPro Hero. It is also often desirable to have a less expensive camera at hand that can be moved around easily, for example to capture textures and imagery for clean plates. A critical aspect of the production workflow is that all the imagery from the different sources can be combined easily in editing and compositing without additional color correction. The goal of this paper is to calculate the positions of the patches of the GretagMacbeth color checker chart [1] (now the X-Rite color chart) from an image recorded by the Blackmagic Production Camera and compare them to reference data sets based on those provided by the manufacturer and on spectral data measured under the same lighting conditions. As a result, a tendency can be obtained as to whether the camera can be used inside the AMPAS ACES workflow.
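As a simple illustration of the kind of per-patch comparison described (not the paper's actual procedure or data), the snippet below computes the CIE76 colour difference between hypothetical camera-derived and reference Lab values for a few ColorChecker patches.

```python
import math

# Hypothetical Lab values per patch: (reference, derived from the camera image).
patches = {
    "dark skin":  ((37.99, 13.56, 14.06), (38.40, 12.90, 13.20)),
    "light skin": ((65.71, 18.13, 17.81), (64.90, 18.80, 18.40)),
    "blue sky":   ((49.93, -4.88, -21.93), (50.60, -5.30, -22.80)),
}

def delta_e_cie76(lab1, lab2):
    """Euclidean distance in CIELAB (the CIE76 colour difference)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

for name, (reference, measured) in patches.items():
    print(f"{name:10s}  dE76 = {delta_e_cie76(reference, measured):.2f}")
```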
Depth from focus (DfF) algorithms rely on a scene-invariant series of images captured at different focus settings to evaluate the distance between objects in the scene and the camera. One limitation of this technique is the slight "focus zoom" caused by standard lenses, where focus is achieved by lens translation. Focus zoom impacts the performance and complexity of DfF estimation algorithms because it requires a costly spatial transform for image registration. Liquid crystal (LC) lenses and liquid lenses do not rely on lens translation for focusing, which makes them good candidates for processing-inexpensive DfF techniques. On the other hand, DfF distance resolution depends on the number of images acquired under the constraint of scene invariance, which in turn calls for fast frame rates and hence fast focusing. LC lenses are not the fastest lens technology available, and a careful characterization of both the control-vs.-focus relationship and the focusing speed is therefore required in order to define the acquisition system specifications. This paper presents both a system and a method to control and characterize a focus-tunable lens. We developed a dedicated methodology, driver, and algorithms to control experimental LC lenses in order to evaluate their compliance with the application and compare them with commercial off-the-shelf (COTS) liquid lenses. Our experimental system controls, captures, and processes images to measure the speed limitations of these lenses. We discuss the performance of the LC lenses, compare them with liquid lenses, and show an example of depth-map extraction with both lens technologies.
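To make the DfF principle concrete, here is a minimal, hypothetical sketch (not the authors' pipeline): a Laplacian-based focus measure is evaluated per pixel across a registered focal stack, and the index of the sharpest frame serves as the depth label. With calibrated focus steps, those indices would be mapped to metric distances.

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack, window=9):
    """Per-pixel index of the sharpest frame in a focal stack.

    stack -- float array of shape (n_frames, H, W), one grayscale image per
             focus step, assumed perfectly registered (no focus zoom).
    """
    focus_measure = np.empty_like(stack)
    for i, img in enumerate(stack):
        lap = ndimage.laplace(img)                               # sharpness cue
        focus_measure[i] = ndimage.uniform_filter(lap * lap, size=window)
    return np.argmax(focus_measure, axis=0)                      # (H, W) index map

# Hypothetical stack of five 120x160 frames (random placeholder data).
stack = np.random.rand(5, 120, 160)
depth_index_map = depth_from_focus(stack)
```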
Current wearable camera and computer technology opens the way for the preservation of every printed, computer-mediated, and spoken word that an individual has ever seen or heard. Text images acquired autonomously at one frame per second by a 20-megapixel miniature camera, together with recorded speech, both with GPS tags, can be uploaded and stored permanently on available mobile or desktop devices. After culling redundant images and mosaicking fragments, the text can be transcribed, tagged, indexed, and summarized. A combination of already developed methods from information retrieval, web science, and cognitive computing will enable selective retrieval of the accumulated information. New issues are raised by the potential advent of microcosms of personal information at a scale of about 1:1,000,000 of the World Wide Web.
Demographic prediction is a very important component of building mobile user profiles that can help improve personalized services and targeted advertising. However, demographic information is often unavailable due to user privacy concerns. This paper presents technologies and algorithms to build demographic prediction classifiers based on mobile user data such as call logs, app usage, Web data, and so on. To associate those data with demographic information, we implemented a system that consists of two parts: a mobile application for data collection, with a web infrastructure for user survey administration (e.g., gender, age, marital status, and so on), and classifiers to predict demographic information. In the demographic prediction, we focus on user interests, which are semantically extracted from Web data rather than from other mobile data. To capture user interests more precisely, an advanced topic model called ARTM (Additive Regularization of Topic Models) is used. Using user interests as features and deep learning, the experimental results show that our system achieves demographic prediction accuracies on gender, marital status, and age as high as 97%, 94%, and 76%, respectively.
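As a hedged sketch of the general approach of feeding topic-model output into a classifier (the paper uses ARTM topics with a deep-learning classifier; here a plain logistic regression and random placeholder data stand in), consider:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical data: each row is a user's interest profile as a distribution
# over 20 topics (the kind of output a topic model such as ARTM produces);
# the label is the surveyed demographic attribute (e.g. gender, encoded 0/1).
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(20), size=500)    # 500 users x 20 topic proportions
y = rng.integers(0, 2, size=500)            # surveyed labels (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("demographic prediction accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```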
As mobile devices capable of multimedia services are increasingly used, video data transmission consumes a significant amount of wireless channel bandwidth. At the same time, the sizes and resolutions of mobile devices vary widely. In this paper, we investigate the optimal transmission of video data depending on the size and resolution of mobile devices, including smartphones, tablets, and notebooks. A series of subjective tests indicates that the bandwidth consumed by multimedia services can be significantly reduced by considering the size and resolution of the displays and the content characteristics.
With the mandatory introduction of the May 2011 directive for the reassessment of bridges in Germany, the administrations of the federal and state governments have the duty to prove the stability of their bridge stock. Verification of bridge stability must take the newly increased traffic loads into account. Particularly for older bridges, the verification can often only be achieved if the calculative surplus load capacity of the original structural design is taken into account in the recalculation. One option for considering these reserves is the exact determination of the dead weight of the bridge. This case study demonstrates how the problem can be solved in practice. In order to determine the dead weight of a concrete bridge, its volume has to be calculated. As a first step, a 3D laser scanner is used to record the internal geometry of a hollow-box bridge girder. For the determination of the thickness of the concrete members, the non-destructive ultrasonic echo technique is applied. The structure must be segmented into approximately equidistant parts in order to carry out an economical and efficient investigation. The segmentation of the point cloud, carried out in a 2D model, was described in the first part of the publication. The subject of this presentation is the merging of the 2D cross sections into a 3D model, from which the weight of the bridge can be calculated.
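As a simplified, hypothetical illustration of the final computation step (integrating segment-wise cross-section areas into a volume and a dead weight; all numbers are invented, and in the real workflow the areas come from the merged laser-scan/ultrasonic model):

```python
# Hypothetical cross-section areas (m^2) at stations along the bridge axis (m).
stations_m = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
areas_m2   = [6.10, 6.05, 5.98, 6.02, 6.07, 6.12]

# Trapezoidal integration of area over length gives the girder volume.
volume_m3 = sum(
    0.5 * (areas_m2[i] + areas_m2[i + 1]) * (stations_m[i + 1] - stations_m[i])
    for i in range(len(stations_m) - 1)
)
weight_t = volume_m3 * 2.5   # assumed density of reinforced concrete, t/m^3

print(f"volume ~ {volume_m3:.1f} m^3, dead weight ~ {weight_t:.1f} t")
```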
The number of health-promotion apps downloaded from app stores increases every year. These so-called eHealth apps offer users a great opportunity to improve their health status proactively and to monitor it continuously. However, these positive properties also entail risks, in particular when users disclose, in addition to their personally identifiable information, some of their health-related data. Nowadays, apps in general are increasingly criticized in the media, and the privacy and security of user data are a particular focus [24,25]. The aim of this study is to analyze what risks the daily use of Android eHealth apps may pose to user data. The security investigation focuses on three basic security-relevant aspects. The first is the evaluation of the permissions required by the providers as well as the transparency towards the users. Furthermore, the storage of user data is analyzed, in particular the readability of the data stored in databases and in generated text files. The third critical focus of this study is the monitoring of data traffic: the background traffic is checked for possible hidden advertising companies, for encrypted or unencrypted communication protocols, and for responding provider servers.