Nowadays, almost everyone owns one or more cameras. Digital camera technology is evolving rapidly, and its performance and capacity are constantly growing. With the pervasiveness of digital recording devices, personal digital data is growing exponentially. However, organizing and locating a specific photograph collection within such a huge amount of digital data is difficult and time consuming when prebuilt descriptions are lacking.

In 2004, Kuo et al. proposed a multimedia description schema system based on MPEG-7 technology called PARIS (Personal Archiving and Retrieving Image System). It was designed to integrate spatial and temporal information of multimedia content into an MPEG-7 based semantic description. With this description architecture, PARIS envisioned continuously capturing and archiving personal experience as audio-visual recordings and utilizing potential social networking annotations provided by third-party services.

We carry out an ongoing project with the PARIS architecture, using a semi-automatic crowdsourcing annotation capability enabled by third-party social networking services. While most current smartphone cameras are equipped with GPS recording features, semi-automatic recording of human-recognizable, text-based spatial data is still unavailable. In this paper we explain our proposed methodology for adding text-based location descriptions to multimedia content with the aid of current social networking services.
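As a minimal illustration of one preliminary step in geotag-based annotation (not the paper's own implementation), the sketch below converts EXIF-style GPS coordinates, stored as degrees/minutes/seconds with a hemisphere reference, into the signed decimal degrees that reverse-geocoding services typically expect. The function name and example coordinates are illustrative assumptions.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert degrees/minutes/seconds plus a hemisphere reference
    ('N', 'S', 'E', 'W') into signed decimal degrees.

    Southern and western hemispheres are negative by convention.
    """
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    if ref in ('S', 'W'):
        decimal = -decimal
    return decimal

# Example: 25°2'2.4"N, 121°33'54"E (roughly central Taipei)
lat = dms_to_decimal(25, 2, 2.4, 'N')
lon = dms_to_decimal(121, 33, 54.0, 'E')
print(round(lat, 4), round(lon, 4))  # → 25.034 121.565
```

The resulting latitude/longitude pair could then be submitted to a third-party location service to retrieve a human-readable place name, which is the kind of text-based spatial description the paper proposes embedding in the MPEG-7 annotation.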
Po-Yen Chen and Pei-Jeng Kuo, "Archiving of Personal Digital Photograph Collections with a MPEG-7 Based Geotag Related Annotation Methodology," in Proc. IS&T Archiving 2012, 2012, pp. 107-110. https://doi.org/10.2352/issn.2168-3204.2012.9.1.art00025