- Noise-coded illumination hides invisible video watermarks inside patterns of light for forgery detection
- The system remains effective across varied lighting, compression levels, and camera-motion conditions
- Forgers would need to reproduce several matching code videos to evade detection
Researchers at Cornell University have developed a new method for detecting AI-manipulated or AI-generated video by embedding coded signals in light sources.
The technique, known as noise-coded illumination, hides information in apparently random fluctuations of light.
Each embedded watermark carries a low-fidelity, time-stamped version of the original scene under slightly modified lighting. When a forgery is made, the manipulated areas no longer match these coded versions, revealing evidence of alteration.
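To make the idea concrete, here is a minimal sketch of that comparison, assuming a key-seeded ±1 brightness code and a single greyscale light source. The helper names (`light_code`, `demodulate`, `flag_mismatched_blocks`) are hypothetical; this illustrates the principle rather than the Cornell team's published pipeline.

```python
import numpy as np

def light_code(key: int, n_frames: int) -> np.ndarray:
    """Key-seeded pseudorandom +/-1 brightness modulation (a hypothetical
    stand-in for the actual noise-coded illumination signal)."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_frames)

def demodulate(frames: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Correlate each pixel's brightness over time with the code to pull the
    hidden, low-fidelity scene image out of the light fluctuations."""
    c = code - code.mean()                           # zero-mean code
    return np.tensordot(c, frames, axes=1) / len(c)  # (T, H, W) -> (H, W)

def flag_mismatched_blocks(frames: np.ndarray, code: np.ndarray,
                           block: int = 16, tol: float = 0.5) -> np.ndarray:
    """Mark blocks where the visible footage no longer agrees with the image
    hidden in the lighting; such blocks are candidates for tampering."""
    hidden = demodulate(frames, code)       # coded version of the scene
    visible = frames.mean(axis=0)           # what the footage actually shows
    h, w = hidden.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for bi, i in enumerate(range(0, h - block + 1, block)):
        for bj, j in enumerate(range(0, w - block + 1, block)):
            a = hidden[i:i + block, j:j + block].ravel()
            b = visible[i:i + block, j:j + block].ravel()
            corr = np.corrcoef(a, b)[0, 1]  # low correlation => mismatch
            flags[bi, bj] = (not np.isfinite(corr)) or (corr < tol)
    return flags
```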
The system works either through software for computer displays or by attaching a small chip to standard lamps.
Because the embedded data appears as noise, it is extremely difficult to detect without the decoding key.
The approach relies on information asymmetry, ensuring that those attempting to create deepfakes lack access to the unique embedded data needed to produce convincing forgeries.
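Continuing the sketch above on synthetic data, demodulating with a guessed key recovers essentially nothing, which is the asymmetry described here: without the secret code, a forger cannot even see the hidden version they would need to match. The values below are made up for illustration.

```python
# Toy continuation of the sketch above (synthetic data, illustrative only).
T, H, W = 600, 64, 64
rng = np.random.default_rng(0)
scene = rng.random((H, W))                          # stand-in static scene
code = light_code(key=1234, n_frames=T)             # secret illumination code
frames = (scene[None, :, :] * (1.0 + 0.05 * code[:, None, None])
          + rng.normal(0.0, 0.1, size=(T, H, W)))   # modulated light + sensor noise

right = demodulate(frames, code)                    # recovers roughly 0.05 * scene
wrong = demodulate(frames, light_code(999, T))      # noise-dominated

print(np.corrcoef(right.ravel(), scene.ravel())[0, 1])  # strong correlation
print(np.corrcoef(wrong.ravel(), scene.ravel())[0, 1])  # weak correlation
```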
The researchers tested their method against a range of manipulation techniques, including deepfakes, composites, and changes to playback speed.
They also evaluated it under varied environmental conditions, such as different lighting levels, degrees of video compression, camera motion, and indoor and outdoor settings.
In every scenario, the coded-light technique retained its effectiveness, even when the alterations occurred at levels too subtle for human perception.
Even if a forger learned the decoding method, they would have to reproduce several code-matching versions of the footage.
Each of these would have to line up with the hidden light patterns, a task that greatly increases the difficulty of producing undetectable video forgeries.
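In the same toy setup, a crude splice that ignores the hidden light pattern is flagged immediately by the block comparison, which is why a forger would need to fabricate a consistent code video for every region they alter. Again, this follows the sketch's assumptions rather than the published method.

```python
# Paste foreign content over one corner of every frame, ignoring the light code.
forged = frames.copy()
forged[:, :16, :16] = (rng.random((16, 16))
                       + rng.normal(0.0, 0.1, size=(T, 16, 16)))  # unmodulated patch

flags = flag_mismatched_blocks(forged, code, block=16)
print(flags.astype(int))   # only the spliced 16x16 corner block should be marked
```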
The research addresses an increasingly urgent problem in digital media authentication: with sophisticated editing tools widely available, people can no longer assume that a video faithfully represents reality.
Although methods such as checksums can detect that a file has changed, they cannot distinguish harmless compression from deliberate manipulation.
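As a minimal illustration of that limitation using Python's standard hashlib (the byte strings are made up), the checksum reports that the bytes changed, but it changes in exactly the same way whether the cause was routine re-encoding or a malicious edit.

```python
import hashlib

original = b"frame bytes of the original video"      # made-up stand-in content
recompressed = b"same scene, re-encoded harmlessly"  # benign change
manipulated = b"same scene, with a face swapped in"  # deliberate forgery

for label, data in [("original", original),
                    ("recompressed", recompressed),
                    ("manipulated", manipulated)]:
    print(label, hashlib.sha256(data).hexdigest()[:16])
# All three digests differ, so the checksum alone cannot tell the benign
# recompression apart from the malicious edit.
```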
Some watermarking technologies require control over the recording equipment or the original source material, making them impractical for wider use.
Noise-coded illumination could be integrated into security suites to protect sensitive video feeds.
This form of built-in authentication could also help reduce the risk of identity theft by protecting personal or official video recordings from undetected tampering.
Although the Cornell team acknowledged the strong protection their work offers, they said the broader challenge of deepfake detection will persist as manipulation tools evolve.