He was referring to the practice of “bacha bazi,” an Afghan term meaning “boy play,” which refers to sexual relationships between older men and young boys, often orphans or boys from very poor families, who are used as sex slaves by wealthy and powerful Afghans.888 U.S. soldiers were reportedly told to ignore such abuse because it is part of the culture in regions of the Middle East.889 This abomination is a whole other issue, but the point is that the CIA actually proposed making a deepfake of Saddam Hussein as a pedophile, thinking it would incite people to rise up and overthrow him, because if such a video were real, people in a civilized culture would do just that.
Nvidia, a video graphics card company, has created an AI so powerful that it can automatically change the weather in video footage, making a clip of a car driving down a road on a sunny day appear as if it had been shot in the middle of winter, with a few inches of snow on the ground and the leaves missing from the trees.890 The same technology can take photos of cats or dogs and change them to look like a different breed, and can change people’s facial expressions from happy to sad, or anything in between.891
Nvidia’s AI can even generate realistic pictures of people who don’t actually exist by taking features from actual photos and combining them into a composite that is almost impossible to tell is fake.892 The website ThisPersonDoesNotExist.com uses this technology to display a different fake photo every time you visit it, most of them looking like HD photos of ordinary people.
AI can now create 3D models of people from just a few photographs, and while it may be fun to put a character that looks just like you into your favorite video game, the capacity for nefarious abuses of this technology is vast.
In November 2016, Adobe (the creator of Photoshop) demonstrated what they called Adobe Voco, or Photoshop-for-voices, which can generate realistic-sounding audio, making it sound like someone said something they never actually said. The software works by inputting samples of someone’s voice; it can then create fake audio files of that same voice saying whatever is typed onto the screen.893
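As an illustration only, the same sample-in, text-in, audio-out workflow the author describes can be sketched with the open-source Coqui TTS library and its XTTS voice-cloning model; this is not Adobe’s unreleased Voco, and the file names here are hypothetical placeholders:

    # Minimal voice-cloning sketch using the open-source Coqui TTS library (pip install TTS).
    # This illustrates the general technique described above, not Adobe Voco itself.
    from TTS.api import TTS

    # Load a pretrained multilingual voice-cloning model.
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # "target_voice.wav" is a hypothetical short recording of the voice to imitate.
    tts.tts_to_file(
        text="Whatever is typed here comes out spoken in the cloned voice.",
        speaker_wav="target_voice.wav",
        language="en",
        file_path="cloned_output.wav",
    )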
Dr. Eddy Borges Rey, a professor at the University of Stirling, said, “It seems that Adobe’s programmers were swept along with the excitement of creating something as innovative as a voice manipulator, and ignored the ethical dilemmas brought up by its potential misuse.” 894
He continued, “Inadvertently, in its quest to create software to manipulate digital media, Adobe has [already] drastically changed the way we engage with evidential material such as photographs. This makes it hard for lawyers, journalists, and other professionals who use digital media as evidence.”895 Google has created similar software called WaveNet that generates realistic-sounding human speech by modeling samples of people actually talking.896
In May 2019, a group of machine learning engineers released an audio clip they created using their RealTalk technology that sounded like podcaster Joe Rogan talking about investing in a new hockey team made up of chimpanzees.897 It wasn’t perfect, but if you didn’t know it was fake before you heard it, you might be fooled into thinking it’s real. The researchers admitted, “the societal implications for technologies like speech synthesis are massive. And the implications will affect everyone.”898
“Right now, technical expertise, ingenuity, computing power and data are required to make models like RealTalk perform well. So not just anyone can go out and do it. But in the next few years (or even sooner), we’ll see the technology advance to the point where only a few seconds of audio are needed to create a life-like replica of anyone’s voice on the planet. It’s pretty f*cking scary,” the creators wrote on their blog. 899
They went on to list some of the possible abuses this technology may be used for, “if the technology got into the wrong hands.” These include, “Spam callers impersonating your mother or spouse to obtain personal information. Impersonating someone for the purposes of bullying or harassment. Gaining entrance to high security clearance areas by impersonating a government official,” and “An ‘audio deepfake’ of a politician being used to manipulate election results or cause a social uprising.” 900
They raise some great points. What’s to stop people from creating deepfakes of politicians, CEOs of major corporations, or popular YouTubers that make them appear to say racist, hateful, or violent things, then claiming the clip came from a coworker or a “friend” who secretly recorded it, or from an old YouTube video once uploaded to someone’s channel that they later deleted?
National Security Concerns
In July 2017, researchers at Harvard, backed by the U.S. Intelligence Advanced Research Projects Activity (IARPA), published a report titled Artificial Intelligence and National Security in which they detailed the growing risk of deepfake forgeries, saying, “The existence of widespread AI forgery capabilities will erode social trust, as previously reliable evidence becomes highly uncertain,” and laid out some of the horrific possibilities that are right around the corner.901
The report then quotes part of an article one of the researchers wrote for Wired magazine about these dangers, saying, “Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited. But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.” 902
The article continues, “When tools for producing fake video perform at higher quality than today’s CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem.” 903
The Artificial Intelligence and National Security report goes on to warn that, “A future where fakes are cheap, widely available, and indistinguishable from reality would reshape the relationship of individuals to truth and evidence. This will have profound implications for domains across journalism, government communications, testimony in criminal justice, and of course national security… In the future, people will be constantly confronted with realistic-looking fakes.” 904
It concludes that, “We will struggle to know what to trust. Using cryptography and secure communication channels, it may still be possible to, in some circumstances, prove the authenticity of evidence. But, the ‘seeing is believing’ aspect of evidence that dominates today—one where the human eye or ear is almost always good enough—will be compromised.” 905
Elon Musk is funding a non-profit organization called OpenAI, which is trying to ensure that the creation of artificial intelligence will be “safe,” but the group created an AI tool so powerful that they won’t release it to the public, out of concern that it could produce forgeries and fake news articles so realistic they would be difficult to distinguish from real ones. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” the organization wrote on their blog.906
Others are equally concerned. Sean Gourley, founder and CEO of Primer, a company that mines social media posts for U.S. intelligence agencies to track issues of concern and possible threats, warns, “The automation of the generation of fake news is going to make it very effective.”907