Scientists at the State University of New York, Korea are exploring new ways to detect fake images of faces, whether machine-generated or human-made. The study was published as a research paper available in the ACM Digital Library.
More about the Study Carried Out at the State University of New York, Korea
The researchers mainly used ensemble methods to detect images created by generative adversarial networks (GANs), and applied pre-processing techniques to improve the detection of images edited by humans with Adobe Photoshop.
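The paper's exact architecture is not described here, but the ensemble idea itself is simple to illustrate: several independently trained detectors each score an image, and their scores are combined. The sketch below shows soft voting over hypothetical per-model fake-probability scores (the model names and numbers are made up for illustration, not taken from the study).

```python
import numpy as np

# Hypothetical fake-probability scores for 4 images from three
# separately trained detectors (names and values are illustrative).
scores = {
    "detector_a": np.array([0.92, 0.10, 0.55, 0.81]),
    "detector_b": np.array([0.88, 0.20, 0.40, 0.95]),
    "detector_c": np.array([0.97, 0.05, 0.70, 0.60]),
}

def soft_vote(model_scores):
    """Average per-model probabilities, then threshold at 0.5."""
    avg = np.mean(list(model_scores.values()), axis=0)
    return avg, (avg >= 0.5).astype(int)

avg, labels = soft_vote(scores)
# labels marks each image as fake (1) or real (0) by majority opinion.
```

Averaging probabilities rather than hard votes lets a very confident model outweigh two borderline ones, which is one common reason ensembles outperform any single detector.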
Over the last few years, rapid advances in image processing and machine learning have enabled the creation of numerous fake images and fake identities. These images can be used to spread false and dangerous information across the internet and other media platforms. They may also be designed to bypass image-detection algorithms and go undetected by recognition tools. These factors have led scientists to research how to distinguish fake images from real ones.
According to Shahroz Tariq, one of the researchers who carried out the study, fake images have been researched for a substantial time. However, prior studies have mainly focused on photos edited by humans with Photoshop tools. A recent study by Karras et al. demonstrated a generative adversarial network (GAN) that can produce highly realistic human face images, which people could easily use to create fake identities on the Internet.
The research carried out by Tariq and his colleagues focused on detecting both computer-generated and human-edited fake photos of faces using deep learning techniques. To do this, the researchers developed a neural network classifier and trained it on a dataset of real and fake images.
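The train-a-classifier-on-labeled-images workflow can be sketched in miniature. The study uses deep neural networks on real photographs; the toy example below substitutes a logistic-regression classifier trained by gradient descent on synthetic stand-in "images" (random vectors where fakes get a slight brightness offset), purely to show the supervised-training loop, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: flattened 8x8 "images"; label 1 = fake.
# Fake samples get a +0.5 brightness offset so the toy task is learnable.
real = rng.normal(0.0, 1.0, size=(200, 64))
fake = rng.normal(0.5, 1.0, size=(200, 64))
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal logistic-regression classifier trained with full-batch
# gradient descent (a placeholder for the paper's deep network).
w = np.zeros(64)
b = 0.0
lr = 0.1
for _ in range(300):
    p = sigmoid(X @ w + b)          # predicted fake probability
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Training accuracy on the synthetic set.
acc = np.mean((sigmoid(X @ w + b) >= 0.5) == y)
```

In practice the real/fake dataset, the network architecture, and the optimizer all matter far more than this sketch suggests; the point is only the shape of the pipeline: labeled examples in, a decision boundary out.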