
AI-powered image forgery detection

Periodic Reporting for period 1 - QI (AI-powered image forgery detection)

Reporting period: 2019-08-01 to 2019-11-30

The threat of image forgery and DeepFake (DF) video has accelerated. Since we began the H2020 application, the concern for, and impact of, DF in media and social media has exploded.

For the purposes of this report, we define DF as the Artificial Intelligence (AI) assisted manipulation and/or ex nihilo creation of images and video, shifting the burden from humans to the AI. DF is now not only an economic threat but a growing concern for both national and societal security.

Quantum Integrity (QI) has a significant scientific lead over its competitors and has anticipated the threat of DF; further links to press reports can be found at

Pressing this advantage, and using feedback from customers, QI has fine-tuned its forgery detection systems and is developing a Deep Learning (DL) based Generative Adversarial Network (GAN) in which a forgery creator works in competition with a forgery detector, staying one step ahead of new attack methods and DF. These algorithms can be customer-specific detectors, tailored to particular needs and use cases, or more generic detectors destined for B2C use. Furthermore, thanks to the DL forgery creator, QI's AI can be trained in hours to detect new forms of attack, as opposed to the weeks needed with current technologies.
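The creator-versus-detector dynamic described above can be illustrated with a toy adversarial loop. This is a minimal sketch of the general GAN principle, not QI's actual system: the one-dimensional "images", the linear creator, the logistic detector and all learning rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins: "authentic" samples come from N(4, 1); the forgery
# creator starts out producing samples near 0 and must learn to mimic them.
REAL_MEAN, BATCH, LR, STEPS = 4.0, 64, 0.05, 2000

# Forgery creator g(z) = a*z + b and forgery detector D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.0, 0.0

for _ in range(STEPS):
    z = rng.standard_normal(BATCH)
    real = REAL_MEAN + rng.standard_normal(BATCH)
    fake = a * z + b

    # --- detector step: push D(real) toward 1 and D(fake) toward 0 ---
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
    grad_c = (-(1 - d_real) + d_fake).mean()
    w -= LR * grad_w
    c -= LR * grad_c

    # --- creator step: push D(fake) toward 1, i.e. fool the detector ---
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1 - d_fake) * w          # dL_creator / d fake
    a -= LR * (grad_out * z).mean()
    b -= LR * grad_out.mean()

fake = a * rng.standard_normal(10_000) + b
print(f"creator output mean: {fake.mean():.2f} (target {REAL_MEAN})")
```

As the creator drifts toward the authentic distribution, the detector is forced to sharpen in response; this mutual pressure is the "one step ahead" mechanism the report describes.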

The B2B offering is aimed at streamlining workflow and saving time and money for companies receiving digital images: insurance, KYC, warranty management and HR are examples. Our tool is tailor-made for specific use cases and verticals to address their particular problems.

A more generic B2C product will be developed to allow the public to detect and defend against malicious DeepFake images or videos.

The overall objective is to become the leader in the space of image and video forgery detection.
We have been caught up in a rising tide since the beginning of this project.
Though we have been studying this issue since 2016, we did not foresee the magnitude of DeepFake's impact back then.

Where we were focussing on still images and business, we now see a bigger potential market in media and social media.

Putting the market aside, the social impact of a tool that allows citizens, in effect, to police media themselves and check the integrity and veracity of images and videos will return power to the people.

We are now looking at go-to-market strategies for B2C options; this was not so pertinent at the beginning of the project.
To date, and after an extensive state-of-the-art analysis conducted by MMSPG labs at EPFL, ours is the only technology focussing on nimble, adaptable and changing use cases.

The DeepFake scare to date has centred on famous people being made to say things they never said.

When we approached the issue of what was then image forgery for business, a problem that has since warped into DeepFake detection, we did so thinking outside the box.

Our competition is focusing on solving specific issues, mostly with a priori knowledge:
e.g. analysing hours of Donald Trump speeches in order to say that a DeepFake of Trump is fake based on micro-gestures, eye movements or hand gestures.

We focussed on a Deep Learning AI that can be trained quickly to detect any new type of forgery and that does not rely on prior knowledge.
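The "retrain in hours, no prior knowledge" claim boils down to fine-tuning a generic detector on examples of a new attack as soon as they become available. A minimal sketch with a logistic detector over two hypothetical image features; the feature clusters, sample sizes and training schedule are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_batch(mean, n):
    """Draw n 2-D feature vectors around `mean` (stand-ins for image features)."""
    return np.asarray(mean, dtype=float) + 0.3 * rng.standard_normal((n, 2))

def train(weights, bias, x, y, lr=0.5, steps=300):
    """Plain logistic-regression gradient descent; label y = 1 marks forgeries."""
    for _ in range(steps):
        p = sigmoid(x @ weights + bias)
        weights -= lr * x.T @ (p - y) / len(y)
        bias -= lr * (p - y).mean()
    return weights, bias

def accuracy(weights, bias, x, y):
    return ((sigmoid(x @ weights + bias) > 0.5) == y).mean()

# Pre-train on authentic images (near [0, 0]) vs a known attack (near [3, 0]).
x0 = np.vstack([make_batch([0, 0], 200), make_batch([3, 0], 200)])
y0 = np.r_[np.zeros(200), np.ones(200)]
w, b = train(np.zeros(2), 0.0, x0, y0)

# A new attack type appears along a different feature axis (near [0, 3]);
# the pre-trained detector has never seen anything like it.
x_new, y_new = make_batch([0, 3], 50), np.ones(50)
before = accuracy(w, b, x_new, y_new)

# "Retrain in hours": a short fine-tune on a mix of old and new examples.
x1, y1 = np.vstack([x0, x_new]), np.r_[y0, y_new]
w, b = train(w, b, x1, y1)
after = accuracy(w, b, x_new, y_new)
print(f"new-attack detection rate: {before:.2f} -> {after:.2f}")
```

The key point is that no hand-built model of any particular target (e.g. a specific politician's gestures) is needed: a small labelled batch of the new forgery type is enough to shift the detector's decision boundary.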

In effect, we are ahead of the competition due to the years of thought and research that have gone into our development.

Europe will have the most powerful DeepFake detection tool for its cybersecurity.
DeepFake threat