Pages A05-1 - A05-8,  © Society for Imaging Science and Technology 2019
Digital Library: EI
Published Online: January  2019
Pages 526-1 - 526-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

In this paper, we reveal the impact of a fixed synchronization pattern on the halftone image under direct binary search (DBS) processing, and we propose an improved watermarking method, extended from a previously developed DBS-based watermarking method, that avoids this impact. The watermark and synchronization pattern are embedded adaptively into appropriate regions of the host image, providing excellent image quality and decent watermark capacity. The method resists print-and-scan attacks, and watermark detection additionally requires only the sizes of the watermark and the host image. Experimental results are presented for some special host images, including a sketch and a round logo, to demonstrate the flexibility of the method.

Digital Library: EI
Published Online: January  2019
Pages 527-1 - 527-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

Digital watermarking technologies are based on the idea of embedding a data-carrying signal in a semi-covert manner in a given host image. Here we describe a new approach in which we render the signal itself as an explicit artistic pattern, thereby hiding the signal in plain sight. This pattern may be used as-is, or as a texture layer in another image, for various applications. There is an immense variety of signal-carrying patterns, and we present several examples. We also present some results on the detection robustness of these patterns.

Digital Library: EI
Published Online: January  2019
Pages 528-1 - 528-6,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

The accuracy of face recognition systems is significantly affected by the quality of face sample images. Many existing no-reference image quality metrics (IQMs) can assess natural image quality by taking into account similar image-based quality attributes. A previous study showed that IQMs can assess face sample quality according to the biometric system's performance, and that re-training an IQM can improve its performance on face biometric images. However, that study used only one database, which contains only image-based distortions. In this paper, we extend the previous study by using multiple face databases, including the FERET color face database, and by applying multiple setups for the re-training process, in order to investigate how re-training affects the performance of no-reference image quality metrics on face biometric images. The experimental results show that the performance of an appropriate IQM can be improved across multiple databases, and that different re-training setups influence the IQM's performance.
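One common way to judge whether an IQM tracks face sample utility, in the spirit of the evaluation described above, is to correlate its quality scores with the recognition scores the face matcher produces on the same samples. The sketch below, with illustrative names and synthetic data, shows that idea; it is not the paper's evaluation protocol.

```python
import numpy as np

def iqm_utility(quality_scores, match_scores):
    """Pearson correlation between an IQM's output and matcher performance."""
    return float(np.corrcoef(quality_scores, match_scores)[0, 1])

rng = np.random.default_rng(1)
quality = rng.uniform(0, 1, 200)
# Hypothetical matcher scores that degrade with lower sample quality, plus noise
matches = 0.8 * quality + rng.normal(0, 0.05, 200)
score = iqm_utility(quality, matches)  # close to 1 when the IQM is predictive
```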

Digital Library: EI
Published Online: January  2019
Pages 529-1 - 529-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

Forensic investigations often have to contend with extremely low-quality images that can provide critical evidence. Recent work has shown that, although not visually apparent, information can be recovered from such low-resolution and degraded images. We present a CNN-based approach to decipher the contents of low-quality images of license plates. Evaluation on synthetically-generated and real-world images, with resolutions ranging from 10 to 60 pixels in width and signal-to-noise ratios ranging from –3.0 to 20.0 dB, shows that the proposed approach can localize and extract content from severely degraded images, outperforming human performance and previous approaches.
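A minimal sketch of synthetic degradation of the kind the evaluation describes: downsample a plate image to a target width and add white Gaussian noise at a chosen SNR. The function name and the nearest-neighbour resampling are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def degrade(image, target_width, snr_db, rng):
    """Resize to target_width (nearest neighbour) and add noise at snr_db."""
    h, w = image.shape
    scale = target_width / w
    ys = np.minimum((np.arange(int(round(h * scale))) / scale).astype(int), h - 1)
    xs = np.minimum((np.arange(target_width) / scale).astype(int), w - 1)
    small = image[np.ix_(ys, xs)].astype(float)
    signal_power = np.mean(small ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return small + rng.normal(0.0, np.sqrt(noise_power), small.shape)

rng = np.random.default_rng(0)
plate = rng.integers(0, 256, size=(40, 120)).astype(float)  # stand-in plate image
low = degrade(plate, 10, -3.0, rng)  # 10 pixels wide at -3 dB SNR
```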

Digital Library: EI
Published Online: January  2019
Pages 530-1 - 530-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

The authenticity of images posted on social media is an issue of growing concern. Many algorithms have been developed to detect manipulated images, but few have investigated the ability of deep-neural-network-based approaches to verify the authenticity of image labels, such as event names. In this paper, we propose several novel methods to predict whether an image was captured at one of several noteworthy events. We use a set of images from several recorded events, such as storms, marathons, protests, and other large public gatherings. Two strategies for applying a pre-trained ImageNet network to event verification are presented, with two modifications for each strategy. The first method uses the features from the last convolutional layer of a pre-trained network as input to a classifier; we also consider the effect of tuning the convolutional weights of the pre-trained network to improve classification. The second method combines many features extracted at smaller scales and uses the output of a pre-trained network as the input to a second classifier. For both methods, we investigated several different classifiers and tested many different pre-trained networks. Our experiments demonstrate that both approaches are effective for event verification and image re-purposing detection. Classification at the global scale tends to marginally outperform our tested local methods, and fine-tuning the network further improves the results.
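The first strategy can be sketched as: take the last convolutional layer's activations, pool them into a feature vector, and hand that vector to a downstream classifier. In the sketch below the activations are a random stand-in for a real pre-trained network's output, and the nearest-centroid classifier is purely illustrative (the paper tests several classifiers).

```python
import numpy as np

def gap_features(conv_activations):
    """Global average pooling: (channels, height, width) -> (channels,)."""
    return conv_activations.mean(axis=(1, 2))

def nearest_centroid(feature, centroids):
    """Assign the feature vector to the closest event-class centroid."""
    dists = np.linalg.norm(centroids - feature, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(2)
# Two hypothetical event classes with distinct mean activation levels
centroids = np.stack([np.zeros(512), np.ones(512)])
sample = rng.normal(1.0, 0.1, size=(512, 7, 7))  # activations resembling class 1
label = nearest_centroid(gap_features(sample), centroids)
```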

Digital Library: EI
Published Online: January  2019
Pages 531-1 - 531-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

Nowadays, digital images are used as critical evidence for judgment, but they can be forged using image processing tools with invisible traces and little effort. Hence, it is very important to determine the authenticity of these digital images. In this paper, we propose a novel approach that uses dictionary learning and sparse coding to detect digital image forgery. We experimented with two popular data sets to determine how effectively and efficiently our approach detects digital image forgery compared to previous approaches. The results show that our approach not only outperforms these approaches in terms of Precision, Recall, and F1 score, but it is also more robust against compression and rotation attacks. Also, our approach detects forgery significantly faster than previous approaches since it uses a sparse representation that dramatically reduces the feature dimensionality by a factor of more than 20.
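The dimensionality reduction the abstract credits to sparse representation comes from coding each image patch as a combination of only a few dictionary atoms. The toy orthogonal matching pursuit below sketches that sparse-coding step; it is an illustrative stand-in under simplified assumptions, not the authors' implementation.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit: code x using n_nonzero atoms of D."""
    residual = x.astype(float).copy()
    support = []
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients over the selected atoms and update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Orthonormal toy dictionary; the signal uses exactly two atoms
D = np.eye(8)[:, :6]
x = 2.0 * D[:, 1] - 1.0 * D[:, 4]
code = omp(D, x, 2)  # recovers the two-atom representation exactly
```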

Digital Library: EI
Published Online: January  2019
Pages 532-1 - 532-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

The advent of Generative Adversarial Networks (GANs) has brought about completely novel ways of transforming and manipulating pixels in digital images. GAN-based techniques such as image-to-image translation, DeepFakes, and other automated methods have become increasingly popular in creating fake images. In this paper, we propose a novel approach to detect GAN-generated fake images using a combination of co-occurrence matrices and deep learning. We extract co-occurrence matrices on three color channels in the pixel domain and train a model using a deep convolutional neural network (CNN) framework. Experimental results on two diverse and challenging GAN datasets comprising more than 56,000 images based on unpaired image-to-image translations (cycleGAN [1]) and facial attributes/expressions (StarGAN [2]) show that our approach is promising and achieves more than 99% classification accuracy on both datasets. Further, our approach also generalizes well and achieves good results when trained on one dataset and tested on the other.
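The feature-extraction step can be sketched as one pixel-pair co-occurrence matrix per color channel, stacked into a 3 x 256 x 256 tensor a CNN can consume. The horizontal-neighbour pairing below is an assumption for illustration; the paper does not owe it to this sketch.

```python
import numpy as np

def cooccurrence(channel, levels=256):
    """Count horizontally adjacent pixel-value pairs (a, b) in one channel."""
    left = channel[:, :-1].ravel().astype(np.intp)
    right = channel[:, 1:].ravel().astype(np.intp)
    mat = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(mat, (left, right), 1)  # unbuffered accumulation per pair
    return mat

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
tensor = np.stack([cooccurrence(img[:, :, c]) for c in range(3)])
```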

Digital Library: EI
Published Online: January  2019
Pages 534-1 - 534-11,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

The goal of this article is the construction of steganalyzers capable of detecting a variety of embedding algorithms and possibly identifying the steganographic method. Since deep learning today can achieve markedly better performance than other machine learning tools, our detectors are deep residual convolutional neural networks. We explore binary classifiers trained as cover versus all-stego, multi-class detectors, and bucket detectors in a feature space obtained as a concatenation of features extracted by networks trained on individual stego algorithms. The detector's accuracy in identifying the steganographic method is compared with that of dedicated detectors trained for a specific embedding algorithm. While detection accuracy degrades only slightly with an increasing number of steganographic algorithms as long as the embedding schemes are known, enabling the detector to generalize to previously unseen steganography remains a challenging task.
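The "bucket" feature space described above amounts to concatenating the feature vectors produced by per-algorithm networks into one vector for a final classifier. The sketch below uses random linear maps as stand-ins for those trained networks, purely to show the plumbing.

```python
import numpy as np

def make_extractor(rng, in_dim, out_dim=16):
    """Stand-in for a network trained on one stego algorithm: a linear map."""
    W = rng.normal(size=(out_dim, in_dim))
    return lambda x: W @ x

def bucket_features(x, extractors):
    """Concatenate per-algorithm features into one bucket feature vector."""
    return np.concatenate([f(x) for f in extractors])

rng = np.random.default_rng(5)
extractors = [make_extractor(rng, 64) for _ in range(4)]  # 4 stego algorithms
x = rng.normal(size=64)          # stand-in image representation
feat = bucket_features(x, extractors)  # 4 x 16 = 64-dimensional
```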

Digital Library: EI
Published Online: January  2019
Pages 535-1 - 535-11,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 5

The number and availability of steganographic embedding algorithms continues to grow. Many traditional blind steganalysis frameworks require training examples from every embedding algorithm, but collecting, storing, and processing representative examples of each algorithm can quickly become untenable. Our motivation for this paper is to create a straightforward, non-data-intensive framework for blind steganalysis that requires only examples of cover images and a single embedding algorithm for training. Our blind steganalysis framework addresses the case of algorithm mismatch, where a classifier is trained on one algorithm and tested on another, with four spatial embedding algorithms: LSB matching, MiPOD, S-UNIWARD, and WOW. We use RAW image data from the BOSSbase database and data collected from six iPhone devices. Ensemble classifiers with Spatial Rich Model features are trained on a single embedding algorithm and tested on each of the four algorithms. Classifiers trained on MiPOD, S-UNIWARD, and WOW data achieve decent error rates when tested on all four algorithms. Most notably, an ensemble classifier with an adjusted decision threshold trained on LSB matching data achieves decent detection results on MiPOD, S-UNIWARD, and WOW data.
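Of the four spatial algorithms named above, LSB matching is the simplest to state: where a pixel's least significant bit disagrees with the message bit, the pixel is moved +/-1 at random rather than having its LSB overwritten. The sketch below is a minimal illustrative embedder (boundary pixels 0 and 255 are forced inward), not the exact simulator used in the paper.

```python
import numpy as np

def lsb_match(cover, bits, rng):
    """Embed message bits in the first bits.size pixels via LSB matching."""
    stego = cover.astype(np.int16).ravel().copy()
    pos = np.arange(bits.size)
    disagree = (stego[pos] & 1) != bits   # pixels whose LSB must change
    idx = pos[disagree]
    step = rng.choice([-1, 1], size=idx.size)
    step[stego[idx] == 0] = 1             # cannot go below 0
    step[stego[idx] == 255] = -1          # cannot go above 255
    stego[idx] += step                    # +/-1 always flips the LSB
    return stego.reshape(cover.shape).astype(np.uint8)

rng = np.random.default_rng(4)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
bits = rng.integers(0, 2, size=256)
stego = lsb_match(cover, bits, rng)
```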

Digital Library: EI
Published Online: January  2019
