
Thread: Image compression optimization for tampering detection?

  1. #1
    Jarek (Member, Kraków, Poland)

    There has recently been a lot of talk about the dangers of deepfakes and, more generally, tampering with electronic media (e.g. https://www.wired.com/story/facebook...ect-deepfakes/ ) - which raises the question of whether something could be done from the (lossy) data compression perspective to improve the situation and help with tampering detection.

    Here is a paper claiming "the proposed approach increased image manipulation detection accuracy from 45% to over 90%":

    http://openaccess.thecvf.com/content...2019_paper.pdf
    https://github.com/pkorus/neural-imaging

    [Attached figure: "Training NIPs Optimized for Manipulation Detection" (NIP - neural imaging pipeline, FAN - forensics analysis network)]
    Maybe this kind of optimization should be considered as an (optional) feature for new image/video formats like JPEG XL?
    Last edited by Jarek; 21st September 2019 at 14:00.


  2. #2
    Jarek (Member, Kraków, Poland)
    A fresh paper about such tampering-resistant image compression: https://openreview.net/pdf?id=HyxG3p4twS
    It has a very nice "soft quantization" idea - quantization made continuous (differentiable) so the neural network can be trained efficiently.

    However, I have doubts about the safety of such tampering protection - the classification is based only on statistical characteristics of the image, so an attacker could tamper with the image and then introduce distortion that matches those characteristics.
    To get something really safe, it seems (?) cryptography is needed: e.g. extract some robust features, encrypt them with a private key of e.g. the camera's manufacturer, and store the encrypted features using robust steganography ... but such robustness is really tough to achieve.
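    As a toy illustration of what such "robust features" could look like (my own sketch of the general idea, not the method of either paper), an average-hash style digest changes little under mild noise or recompression but flips many of its bits under real edits:

    Code:
    import numpy as np

    def average_hash(img: np.ndarray, size: int = 8) -> int:
        """Toy robust feature: downscale to size x size by block averaging,
        threshold at the mean, pack the bits into a 64-bit integer."""
        h, w = img.shape
        img = img[:h - h % size, :w - w % size].astype(np.float64)
        bh, bw = img.shape[0] // size, img.shape[1] // size
        blocks = img.reshape(size, bh, size, bw).mean(axis=(1, 3))
        bits = (blocks > blocks.mean()).flatten()
        return int("".join("1" if b else "0" for b in bits), 2)

    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, (256, 256)).astype(np.float64)
    noisy = np.clip(original + rng.normal(0, 2, original.shape), 0, 255)
    # Hamming distance: small for mild noise, large for a real edit of the content
    print(bin(average_hash(original) ^ average_hash(noisy)).count("1"))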

    Any interesting approaches for image compression with tampering protection?

  3. #3
    Shelwien (Administrator, Kharkov, Ukraine)
    > e.g. extract some robust features, encrypt them with private key of e.g. camera's manufacturer,
    > and store such encrypted features using robust steganography ...

    I'd rather say "remove features used by steganography, normalize image, compute image hash,
    then encode signed image hash into the image using steganography".
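    A minimal sketch of that order of operations (hypothetical helper names; a keyed HMAC stands in for the real public-key signature, and the pixel LSB plane stands in for "the features used by steganography"):

    Code:
    import hashlib, hmac
    import numpy as np

    KEY = b"stand-in for the camera's signing key"

    def clear_stego_channel(img: np.ndarray) -> np.ndarray:
        # "remove features used by steganography": here, zero the pixel LSBs
        return img & 0xFE

    def normalize(img: np.ndarray) -> np.ndarray:
        # placeholder; a real pipeline would fix size, colour space, etc.
        return img

    def sign_and_embed(img: np.ndarray) -> np.ndarray:
        base = normalize(clear_stego_channel(img))
        tag = hmac.new(KEY, base.tobytes(), hashlib.sha256).digest()   # 256-bit "signature"
        bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
        out = clear_stego_channel(img)
        out.flat[:bits.size] |= bits                                   # write the tag into the LSBs
        return out

    def verify(img: np.ndarray) -> bool:
        base = normalize(clear_stego_channel(img))
        expected = hmac.new(KEY, base.tobytes(), hashlib.sha256).digest()
        stored = np.packbits(img.flat[:256] & 1).tobytes()
        return hmac.compare_digest(expected, stored)

    img = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
    protected = sign_and_embed(img)
    print(verify(protected))                   # True
    protected[10, 10] ^= 0x80                  # any edit to the visible content ...
    print(verify(protected))                   # ... breaks verification: False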

    > but such robustness is really tough to achieve.

    That can be solved by ECC.
    For example http://ollydbg.de/Paperbak/
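    For a flavour of how this works (a textbook Hamming(7,4) toy, nothing to do with Paperbak's actual code), any single flipped bit in a 7-bit block can be located and corrected from the parity checks:

    Code:
    def hamming74_encode(d):
        """d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """c: 7-bit codeword (possibly with one flipped bit) -> 4 corrected data bits."""
        c = list(c)
        # each parity check covers the 1-based positions whose index has that bit set
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4,5,6,7
        syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the wrong bit, 0 if none
        if syndrome:
            c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    word = [1, 0, 1, 1]
    code = hamming74_encode(word)
    code[5] ^= 1                          # simulate one bit flip in the channel
    print("corrected:", hamming74_decode(code) == word)   # True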

    > Any interesting approaches for image compression with tampering protection?

    I think the only uncertain topic here is the choice of steganography target.
    But that actually depends on expected image transformations.
    Surviving e.g. printing on paper in dithered b/w can be tough.

    Well, obviously we'd need a good psychovisual model to estimate
    the value of image features.

  4. #4
    Jarek (Member, Kraków, Poland)
    Without distortion the problem is trivial: e.g. an ECC-protected lower-resolution copy of the image can be written steganographically, allowing one to locate and partially repair potential tampering.
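    A sketch of that idea with my own simplifications (pixel LSBs as the carrier, no ECC): the embedded low-resolution copy lets the verifier point at the tiles that no longer match it.

    Code:
    import numpy as np

    BLOCK = 8   # the image is split into BLOCK x BLOCK tiles

    def tile_means(img: np.ndarray) -> np.ndarray:
        h, w = img.shape
        return img.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).mean(axis=(1, 3))

    def embed_thumbnail(img: np.ndarray) -> np.ndarray:
        """Hide a coarse copy (one byte per tile) of the image in its own pixel LSBs."""
        thumb = tile_means(img).astype(np.uint8)       # 16x16 thumbnail for a 128x128 image
        bits = np.unpackbits(thumb.flatten())
        out = img & 0xFE
        out.flat[:bits.size] |= bits
        return out

    def locate_tampering(img: np.ndarray, tol: float = 8.0) -> np.ndarray:
        """Flag tiles whose current mean deviates from the embedded thumbnail."""
        side = img.shape[0] // BLOCK
        thumb = np.packbits(img.flat[:side * side * 8] & 1).astype(np.float64)
        return np.abs(tile_means(img) - thumb.reshape(side, side)) > tol

    rng = np.random.default_rng(0)
    protected = embed_thumbnail(rng.integers(0, 256, (128, 128), dtype=np.uint8))
    protected[64:80, 64:80] = 255                      # paste something over one 16x16 region
    print(np.argwhere(locate_tampering(protected)))    # -> the four modified tiles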

    ECC handles only very simple errors like bit flips and erasures; even the simplest synchronization channel ( https://en.wikipedia.org/wiki/Deletion_channel ) is already a big problem. For general image distortions it seems an impossible task: it requires robust steganography, e.g. a neural network trained to retrieve the same message after some set of deformations, while ensuring that hiding this message does not introduce essential distortion into the original image.

    Also, extracting robust features (to be encrypted with a private key and stored steganographically) is a tough problem. An additional difficulty is that an attacker who knows these features shouldn't be able to tamper with the image without modifying them ...

  5. #5
    Shelwien (Administrator, Kharkov, Ukraine)
    > ECC is rather only for very simple errors like bit-flips and erasures,

    Yes, but that should be enough.
    Just fill the quantization noise with a repeated signature.
    It might work even without ECC, but I think a repeated signature plus ECC would be more
    reliable than the signature alone, even with fewer copies.
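    A rough sketch of that (with my simplifications: the "quantization noise" is just the pixel LSB plane, and recovery is a per-position majority vote over the repeated copies):

    Code:
    import numpy as np

    def embed_repeated(img: np.ndarray, sig_bits: np.ndarray) -> np.ndarray:
        """Overwrite the LSB plane with as many full copies of the signature as fit."""
        out = img & 0xFE
        n = out.size // sig_bits.size * sig_bits.size
        out.flat[:n] |= np.tile(sig_bits, n // sig_bits.size)
        return out

    def recover_majority(img: np.ndarray, sig_len: int) -> np.ndarray:
        """Majority vote, per bit position, over all embedded copies."""
        n = img.size // sig_len * sig_len
        copies = (img.flat[:n] & 1).reshape(-1, sig_len)
        return (copies.mean(axis=0) > 0.5).astype(np.uint8)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    sig = rng.integers(0, 2, 256, dtype=np.uint8)       # e.g. a 256-bit signed hash
    stego = embed_repeated(img, sig)

    flips = (rng.random(stego.shape) < 0.05).astype(np.uint8)   # flip 5% of the LSBs
    damaged = stego ^ flips
    print(np.array_equal(recover_majority(damaged, 256), sig))  # majority vote still wins: True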

    > for image distortions it seems an impossible task:

    Actually I think we should discuss different applications separately.
    1) tampering protection - proof that the image is a real photo produced by a camera.
    Doesn't actually require steganography, camera hardware just has to sign it with RSA/ECDSA (see the sketch after this list).
    2) watermarks (for DRM etc)
    3) storage/communication
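
    A sketch of application 1, assuming the Python `cryptography` package and simply signing the compressed file's bytes; in a real camera the private key would sit in a secure element and the manufacturer would publish the matching public key ("photo.jpg" is just a placeholder):

    Code:
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    private_key = ec.generate_private_key(ec.SECP256R1())     # lives inside the camera
    public_key = private_key.public_key()                      # published by the manufacturer

    image_bytes = open("photo.jpg", "rb").read()               # the compressed file as produced
    signature = private_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))

    try:
        public_key.verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
        print("authentic: exactly what the camera wrote")
    except InvalidSignature:
        print("modified after capture")

    Of course this only proves byte-for-byte integrity - any recompression, resize or crop breaks the signature, which is where the robust-feature and steganography ideas come in.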

    > requires robust steganography, e.g. a neural network trained to retrieve the same message

    I don't think that NN is a good idea here.
    It can be helpful when dealing with a specific image type maybe,
    but I don't see how it can be used universally.

    I'd rather use something like this:
    https://www.researchgate.net/publica...olar_Transform

    > after some set of deformations, and that hiding this message does not bring
    > essential deformation to the original image.

    Well, a good solution can be built by reverse-engineering a 3D model from an image.
    Then we can use steganography on the model and render an image from it again.

    > Also retrieving robust features (to be encrypted with private key and
    > stored steganographically) is a tough problem.

    Encoding a signature as quantization noise shouldn't be much of a problem.

    Also keep in mind that signature generation and checking don't really have to be symmetric.
    Like, on a PC we can keep checking for signature with multiple rotation angles and zoom levels.

    > Additional difficulty here is that the attacker knowing these features
    > shouldn't be able to make tampering without modifying them ...

    That's supposedly prevented by storing a signed hash of original image.


  6. #6
    Jarek (Member, Kraków, Poland)

    "Repeated signature" corresponds to N-modular redundancy ECC, which is generally very suboptimal.
    OK, there is an advantage in being placed in multiple regions, but image deformations are usually spread over the entire image - damaging all such copies.

    Indeed there are various applications. An ideal solution here would be (?): someone makes, let's say, a video using parts of other people's images (crop, rotation, downsampling, distortions) - a perfect tool would be able to decompose the scene into those source images and classify which of them were additionally tampered with ... A natural approach is comparing against a database of all images, but maybe some tampering protection could help with this task without a database - which would always be incomplete anyway.

    There are some successful NN-based robust steganography systems like http://www.matthewtancik.com/stegastamp
    However, they probably learn relatively simple encoding schemes - directly using more sophisticated tools from information theory might lead to better schemes, e.g. based on the Kuznetsov-Tsybakov problem.

    Sure, automatic decomposition into a 3D scene would be great for many applications (like video compression), but it seems we are still far from that (?) ... also, a single image is often insufficient, and there are problematic cases like paradoxical images ...

    Regarding hiding information in quantization choices, like rounding up or down: this is a natural approach, but achieving robustness is difficult even when using only low frequencies.
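    For reference, the basic "round up or down" trick (quantization index modulation) looks roughly like this; the coefficients below just stand in for e.g. low-frequency DCT coefficients, and as noted, the hard part is keeping it robust once the image gets processed further:

    Code:
    import numpy as np

    STEP = 16.0   # quantization step; a larger step is more robust but more visible

    def qim_embed(coeff: float, bit: int) -> float:
        """Round to the even-multiple lattice for bit 0, to the half-shifted lattice for bit 1."""
        offset = STEP / 2 if bit else 0.0
        return np.round((coeff - offset) / STEP) * STEP + offset

    def qim_extract(coeff: float) -> int:
        """Return the bit whose lattice has the nearest point to the coefficient."""
        d0 = abs(coeff - np.round(coeff / STEP) * STEP)
        d1 = abs(coeff - (np.round((coeff - STEP / 2) / STEP) * STEP + STEP / 2))
        return 0 if d0 <= d1 else 1

    rng = np.random.default_rng(0)
    coeffs = rng.uniform(-100, 100, 64)                 # stand-ins for low-frequency DCT coefficients
    bits = rng.integers(0, 2, 64)
    marked = np.array([qim_embed(c, b) for c, b in zip(coeffs, bits)])
    noisy = marked + rng.normal(0, 1.0, 64)             # mild distortion, well below STEP/4
    decoded = np.array([qim_extract(c) for c in noisy])
    print("bit errors:", int((decoded != bits).sum()))  # typically 0 for mild noise; grows with distortion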

    A hash of the original image is difficult to confirm when only a distorted version of the image is available.
