Cover
Title Page
SCIENCES
Image, Field Director – Laure Blanc-Feraud
Compression, Coding and Protection of Images and Videos, Subject Head – Christine Guillemot
Copyright
First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK, www.iste.co.uk
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA, www.wiley.com
© ISTE Ltd 2022
The rights of William Puech to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.
Library of Congress Control Number: 2021948467
British Library Cataloguing-in-Publication Data: A CIP record for this book is available from the British Library
ISBN 978-1-78945-026-2
ERC code: PE6 Computer Science and Informatics; PE6_5 Cryptology, security, privacy, quantum cryptography; PE6_8 Computer graphics, computer vision, multimedia, computer games
Foreword by Gildas Avoine
Foreword by Cédric Richard
Preface
1 How to Reconstruct the History of a Digital Image, and of Its Alterations
Quentin BAMMEY 1, Miguel COLOM 1, Thibaud EHRET 1, Marina GARDELLA 1, Rafael GROMPONE 1, Jean-Michel MOREL 1, Tina NIKOUKHAH 1 and Denis PERRAUD 2
1 Centre Borelli, ENS Paris-Saclay, University of Paris-Saclay, CNRS, Gif-sur-Yvette, France
2 Technical and Scientific Police, Central Directorate of the Judicial Police, Lyon, France
Between its raw acquisition from a camera sensor and its storage, an image undergoes a series of operations: denoising, demosaicing, white balance, gamma correction and compression. These operations produce artifacts in the final image, often imperceptible to the naked eye yet detectable. By analyzing those artifacts, it is possible to reconstruct the history of an image. Indeed, one can model the different operations that took place during the creation of the image, as well as their order and parameters. Information about the specific camera pipeline of an image is relevant by itself, in particular because it can guide the restoration of the image. More importantly, it provides an identifying signature of the image. A model of the pipeline that is inconsistent across the whole image is often a clue that the image has been tampered with. However, the traces left by each step can be altered or even erased by subsequent processing operations. Sometimes these traces are even maliciously masked to make a forged image seem authentic to forensic tools. While it may be easy to deliberately hide the artifacts linked to one step in the processing of the image, it is more difficult to simultaneously hide several artifacts from the entire processing chain. It is therefore important to have enough different tests available, each of them focused on different artifacts, in order to find falsifications.
We will therefore review the operations undergone by the raw image, and describe the artifacts they leave in the final image. For each of these operations, we will discuss how to model them to detect the significant anomalies caused by a possible manipulation of the image.
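As a toy illustration of this strategy, a block-wise noise-level map can expose regions whose noise statistics are inconsistent with the rest of the image. The sketch below is illustrative only: the function names, the difference-based noise estimator and the threshold are assumptions for this sketch, not the methods developed in this chapter.

```python
import numpy as np

def blockwise_noise_std(img, block=32):
    """Estimate a per-block noise level from horizontal pixel differences,
    using the median absolute deviation (MAD) as a robust std estimator."""
    # First differences largely cancel smooth image content and keep noise.
    d = (img[:, 1:] - img[:, :-1]) / np.sqrt(2.0)
    rows, cols = img.shape[0] // block, (img.shape[1] - 1) // block
    stds = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            r = d[i * block:(i + 1) * block, j * block:(j + 1) * block]
            stds[i, j] = 1.4826 * np.median(np.abs(r - np.median(r)))
    return stds

def suspicious_blocks(stds, z=3.0):
    """Flag blocks whose noise level deviates strongly from the global median."""
    med = np.median(stds)
    mad = 1.4826 * np.median(np.abs(stds - med)) + 1e-9
    return np.abs(stds - med) / mad > z
```

On an authentic image the map is roughly uniform; a spliced region shot with a different camera or exposure typically stands out as a cluster of flagged blocks.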
1.1. Introduction
1.1.1. General context
The Internet, digital media, new means of communication and social networks have accelerated the emergence of a connected world where perfect control over information has become utopian. Images are ubiquitous and have therefore become an essential part of the news. Unfortunately, they have also become a tool of disinformation, aimed at distracting the public from reality. Manipulation of images happens everywhere. Simply removing red eyes from family photos could already be called image manipulation, even though it merely aims to make an image taken with the flash on look more natural. Even amateur photographers can easily erase the electric cables from a vacation panorama and correct physical imperfections such as wrinkles on a face, not to mention the touch-ups applied to models in magazines. Beyond these mostly benign examples, image manipulation can lead to falsified results in scientific publications, reports or journalistic articles. Altered images can carry an altered meaning, and can thus be used as fake evidence, for instance to defame someone or to report a paranormal phenomenon. More frequently, falsified images are published and relayed on social media in order to create and spread fake news. The proliferation of consumer software tools and their ease of use have made image manipulation extremely easy and accessible. Some software even goes as far as automatically restoring a natural look to an image when parts of it have been altered or deleted. Recently, deep neural networks have made it possible to generate manipulated images almost automatically. One example is the site This Person Does Not Exist 1, which randomly generates faces of people who do not exist, yet are unexpectedly realistic.
The most surprising application is undoubtedly the arrival of deepfake methods, which allow, among other things, a face in a video to be replaced with that of another person (face swapping).
1.2. Describing the image processing chain
The main steps in the digital image acquisition process, illustrated in Figure 1.2, will be briefly described in this section. Other very important steps, such as denoising, are beyond the scope of this chapter and will therefore not be covered here.
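As a rough illustration of such a pipeline, the sketch below simulates a few of its stages (Bayer mosaicing, naive demosaicing, white balance, gamma correction and 8-bit quantization). All function names and parameter values are assumptions made for this sketch, and real camera pipelines are considerably more elaborate:

```python
import numpy as np

def mosaic_rggb(rgb):
    """Sample an RGB image (floats in [0, 1], even dimensions)
    on an RGGB Bayer pattern: one color value per pixel."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites (even rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites (odd rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return raw

def demosaic_nearest(raw):
    """Crude nearest-neighbor demosaicing: each sampled value
    is copied across its 2x2 Bayer cell."""
    h, w = raw.shape
    rgb = np.empty((h, w, 3))
    rgb[..., 0] = np.repeat(np.repeat(raw[0::2, 0::2], 2, 0), 2, 1)
    g = raw.copy()
    g[0::2, 0::2] = raw[0::2, 1::2]  # fill R sites from neighboring G
    g[1::2, 1::2] = raw[1::2, 0::2]  # fill B sites from neighboring G
    rgb[..., 1] = g
    rgb[..., 2] = np.repeat(np.repeat(raw[1::2, 1::2], 2, 0), 2, 1)
    return rgb

def pipeline(raw, wb=(2.0, 1.0, 1.5), gamma=2.2):
    """Demosaic, apply white balance gains, gamma-correct,
    then quantize to 8 bits."""
    rgb = demosaic_nearest(raw)
    rgb = np.clip(rgb * np.array(wb), 0.0, 1.0)  # white balance
    rgb = rgb ** (1.0 / gamma)                   # gamma correction
    return np.round(rgb * 255).astype(np.uint8)  # 8-bit quantization
```

Each of these stages leaves its own statistical signature: the 2x2 periodicity of the demosaicing, the channel-dependent scaling of the white balance and the banding introduced by quantization are exactly the kinds of artifacts the following sections exploit.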
1.3. Traces left on noise by image manipulation
1.4. Demosaicing and its traces
1.5. JPEG compression, its traces and the detection of its alterations
1.6. Internal similarities and manipulations
1.7. Direct detection of image manipulation
1.8. Conclusion
1.9. References
2 Deep Neural Network Attacks and Defense: The Case of Image Classification
2.1. Introduction
2.2. Adversarial images: definition
2.3. Attacks: making adversarial images
2.4. Defenses
2.5. Conclusion
2.6. References
3 Codes and Watermarks
3.1. Introduction
3.2. Study framework: robust watermarking
3.3. Index modulation
3.4. Error-correcting codes approach
3.5. Contradictory objectives of watermarking: the impact of codes
3.6. Latest developments in the use of correction codes for watermarking
3.7. Illustration of the influence of the type of code, according to the attacks
3.8. Using the rank metric
3.9. Conclusion
3.10. References
4 Invisibility
4.1. Introduction
4.2. Color watermarking: an approach history?
4.3. Quaternionic context for watermarking color images
4.4. Psychovisual approach to color watermarking
4.5. Conclusion
4.6. References
5 Steganography: Embedding Data Into Multimedia Content
5.1. Introduction and theoretical foundations
5.2. Fundamental principles
5.3. Digital image steganography: basic methods
5.4. Advanced principles in steganography
5.5. Conclusion
5.6. References
6 Traitor Tracing
6.1. Introduction
6.2. The original Tardos code
6.3. Tardos and his successors
6.4. Research of better score functions
6.5. How to find a better threshold
6.6. Conclusion
6.7. References