Truth. It's a popular five-letter word. As many a dictionary defines it, truth is "that which is true or in accordance with fact or reality." But what if someone could swoop in and alter your reality? What if a simple algorithm could perfectly alter your so-called truth? Unfortunately, this is no longer a what-if question. News, videos, and so much more can be faked today, and AI and machine learning are making it easier every day.
Before we get into solving the problem, it might be valuable to take a glimpse at the problem itself. Deep fakes are a common example of fake news, the popular case being an altered video of Barack Obama's voice. If I can't watch a video and know that those words were spoken by a given person, what really can I believe? Faking a video, even more so than faking text, is a blatant violation of trust and of our belief in what we see, an optical illusion of sorts.

But it's not just videos. You've probably talked to a friend who told you a story, only to learn later that he was lying about the whole thing. This isn't exactly uncommon: humans often lie when it suits them. Unfortunately, news and media are often part of this trend. Corporate pressure, bribery, and all sorts of conflicts of interest undermine the objectivity of the news and its information.
Yet whether we look at videos or text, the common thread is an exploitation of our trust: a blatant lie, hidden behind some intention. Naturally, this raises the question: what can we do about it? Interestingly enough, the very technology responsible for the fakes can often be used to counter them just as well.
Research today has led to some amazing results in defending ourselves from fake news. Simons from MIT's CSAIL writes, "Researchers from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) and the Qatar Computing Research Institute (QCRI) believe that the best approach is to focus not only on individual claims but on the news sources themselves. Using this tack, they've demonstrated a new system that uses machine learning to determine if a source is accurate or politically biased." Investigating the source itself lets us analyze its track record and, over time, build a blacklist of sources that just can't be trusted. But what about the videos? Do we have a defense mechanism for those?
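To make the idea concrete, the source-level approach can be sketched as a tiny text classifier. To be clear, this is an invented illustration, not the actual CSAIL/QCRI system: the word lists, the scoring rule, and the "track records" below are all assumptions for demonstration only.

```python
# Illustrative sketch of scoring a headline against a source track record.
# The word lists and the overlap-based rule are invented for this example;
# the real CSAIL/QCRI system uses far richer features and a trained model.
from collections import Counter

# Toy track records: past headlines from sources already judged
# reliable or unreliable.
reliable_past = "officials confirmed the audited figures in a statement"
unreliable_past = "shocking secret insiders reveal the hidden truth"

reliable_words = Counter(reliable_past.split())
unreliable_words = Counter(unreliable_past.split())

def score_source(headline: str) -> str:
    """Classify a headline by word overlap with each track record."""
    words = headline.lower().split()
    r = sum(reliable_words[w] for w in words)  # Counter returns 0 for missing words
    u = sum(unreliable_words[w] for w in words)
    return "reliable" if r >= u else "unreliable"

print(score_source("officials confirmed the figures"))  # -> reliable
print(score_source("shocking secret they reveal"))      # -> unreliable
```

The point of the sketch is the shift in framing: instead of fact-checking one claim, we score new content against a source's accumulated history.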
Thankfully, we do. As Horowitz from PC Mag puts it, "Vijay Thaware and Software Development Engineer Niranjan Agnihotri write that they have created a tool to spot fake videos based on Google FaceNet. Google FaceNet is a neural network architecture that Google researchers developed to help with face verification and recognition. Users train a FaceNet model on a particular image and can then verify their identity during tests thereafter."
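FaceNet's core idea, mapping each face image to an embedding vector and verifying identity by how close two embeddings are, can be sketched in a few lines. The vectors and threshold below are made-up placeholders; in a real system they would come from the trained network.

```python
# Sketch of FaceNet-style verification: a trained network maps each face
# image to an embedding vector, and two faces "match" if their embeddings
# are close. The vectors and threshold here are invented placeholders.
import math

def embedding_distance(a: list, b: list) -> float:
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(a: list, b: list, threshold: float = 0.6) -> bool:
    """Verification rule: match if the embeddings are within the threshold."""
    return embedding_distance(a, b) < threshold

# Placeholder embeddings standing in for network output.
enrolled = [0.1, 0.9, 0.3]      # embedding of the enrolled face
genuine = [0.12, 0.88, 0.31]    # new frame of the same person
impostor = [0.8, 0.2, 0.5]      # frame from a manipulated video

print(same_person(enrolled, genuine))   # -> True
print(same_person(enrolled, impostor))  # -> False
```

A deep-fake detector built on this idea flags a video when frames that should show the enrolled person produce embeddings that drift too far from the enrolled one.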
While these systems are partially functional and accurate, lots of work remains to be done. Without near-complete accuracy, we stay vulnerable to ever-better deep fakes, since the technology for building them will keep improving too.
Interestingly, the research being done to detect this sort of thing is actually quite involved. According to The Conversation, blink-rate detection, facial cues, and tonal analysis are rapidly being put to use in better detectors. Not only are these detectors important for you and me, but for governments and the media too! It's a huge risk if a president is deep-faked and the whole country begins to antagonize him. So detection is essential and needs to happen now.

We're entering a truly risky period in human history. Tools of technology can be used for good and for evil, and knowing humans, evil is going to be a big issue. So it's up to the good people around us to keep working and building these tools to fight lies. It's almost a moral issue, one in which objectivity is the goal and technology is both the enemy and the hero. From here on out, it's just a race, a race that one can only hope the good people win.