DARPA to fund AI-driven research to help officials root out deepfakes

The US Defense Advanced Research Projects Agency (DARPA) is planning to fund a research program aimed at developing technologies that can automatically detect deepfake media. Deepfakes, which use artificial intelligence and machine learning techniques to manipulate visual and audio content, are a growing concern for governments and businesses around the world. DARPA's program, called MediFor (Media Forensics), will focus on developing systems that can detect deepfakes with a high degree of accuracy.

The MediFor program will bring together researchers from academia, government, and industry to develop advanced machine learning algorithms that can automatically detect deepfakes. The algorithms will be trained on large datasets of real and manipulated media, allowing them to learn the subtle statistical artifacts that distinguish genuine content from forgeries.
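To make the training idea concrete, here is a minimal toy sketch of that approach: a classifier trained on labeled real and manipulated examples. Everything below is illustrative and synthetic; it is not DARPA's or MediFor's actual method. The "manipulated" images simply contain an unnaturally uniform patch, a stand-in for the local statistical anomalies forensic detectors look for, and the features and logistic-regression model are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(manipulated: bool) -> np.ndarray:
    """Synthetic 32x32 grayscale image. 'Manipulated' images get a
    flattened patch, mimicking the local statistical anomalies that
    forensic detectors search for in tampered media."""
    img = rng.normal(0.5, 0.15, (32, 32))
    if manipulated:
        # Replace a region with its mean: unnaturally uniform texture.
        img[8:24, 8:24] = img[8:24, 8:24].mean()
    return img

def features(img: np.ndarray) -> np.ndarray:
    """Cheap forensic-style features: global variance plus the spread
    and minimum of local (8x8 block) variances."""
    blocks = img.reshape(4, 8, 4, 8).transpose(0, 2, 1, 3).reshape(16, 64)
    block_var = blocks.var(axis=1)
    return np.array([img.var(), block_var.std(), block_var.min()])

# Build a labeled dataset: 0 = genuine, 1 = manipulated.
X = np.array([features(make_image(bool(m))) for m in [0, 1] * 200])
y = np.array([0, 1] * 200)

# Logistic regression via plain gradient descent on normalized features.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xn @ w + b)))   # sigmoid probabilities
    grad = p - y                           # gradient of log loss
    w -= 0.1 * Xn.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((1 / (1 + np.exp(-(Xn @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Real detectors operate on far richer signals (compression traces, lighting inconsistencies, physiological cues) and use deep networks rather than hand-built features, but the core recipe, supervised training on genuine versus manipulated examples, is the same.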

The DARPA program will be divided into two phases. The first phase will focus on developing algorithms that can detect deepfakes with a high degree of accuracy, while the second phase will focus on technologies that can identify the source of a deepfake. Source attribution would help investigators track down the people behind deepfakes created for malicious purposes such as spreading disinformation or interfering with political campaigns.

The MediFor program is part of DARPA's ongoing efforts to develop technologies that can protect against emerging threats. The agency has previously funded research into cybersecurity, robotics, and other cutting-edge fields. DARPA hopes that the MediFor program will help ensure that deepfake technologies are not used to undermine democracy or cause other forms of harm.