Recently, amid all the Twitter memes and hashtags, something big was trending.
Del Harvey, vice president of trust and safety at Twitter, announced a draft of how the social-media platform plans to handle “synthetic and manipulated” media that purposely tries to mislead or confuse people. In her blog post, Harvey said, “Twitter may: place a notice next to Tweets that share synthetic or manipulated media; warn people before they share or like Tweets with synthetic or manipulated media; or add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.”
One concept that has been getting lots of attention lately helped to inspire Twitter’s announcement: deepfakes.
A deepfake is an artificial intelligence-generated video that shows someone saying or doing something that he or she never actually did. These altered videos use neural networks – computing systems that learn to perform tasks by considering examples and recognizing patterns – to overlay someone’s face onto other footage. For example, Facebook’s Mark Zuckerberg was deepfaked this year when a video showed up online of him preaching about Facebook’s powers – but it wasn’t actually Zuckerberg. The video had been altered to look like him.
As Zuckerberg and other deepfaked celebs made news in recent months, the deepfake discussion also escalated at the University of Pennsylvania in Philadelphia, Pa., where Knowledge@Wharton High School is based. During PennApps XX in September – one of the world’s largest college hackathons for computer programmers and others to collaborate intensively on software projects – a team of four local students took home the grand prize, beating out some 250 teams from more than 750 high schools and colleges. Their winning project was DeFake, described as “a browser extension that uses machine learning to assess the likelihood that a given video has been subtly manipulated.” It’s an app designed to detect deepfakes.
Wharton Global Youth caught up with Sofiya Lysenko, a senior at Abington Senior High School in Pennsylvania and a leader of the DeFake team, to learn more about deepfake technology.
“I think what is so fascinating about deepfakes is how difficult they are to detect with known computational methods,” noted Lysenko, who has competed in PennApps for several years and has also won attention for other machine learning projects. When she was 14, for instance, she created a program that could predict the next mutation in the Zika virus. “Our team was stuck in the very beginning about how to resolve [deepfake detection]. We investigated several methods, which ultimately we found to be unsatisfactory, until we tried and were successful with the final methods of the project,” added Lysenko, who created DeFake’s machine-learning algorithm that helps determine if a video is fake or real. “Machine learning and computer vision [a field of computer science that enables computers to see, identify and process images in the same way that human vision does] are becoming interesting topics to learn because of all the applications that stem from them, such as deepfakes.”
Inspired by Lysenko’s deep research into deepfakes, here are a few additional truths to help demystify this infamous technology:
- Still not quite sure how this works? Michael Kearns, a computer and information science professor at the University of Pennsylvania, recently suggested that the process to create a deepfake is like a “personal trainer” for software. Speaking with The Christian Science Monitor in October, Kearns explained how a deep-learning application compares one image with another to identify distinguishing characteristics and then uses that information to create a synthetic image. Each time the program successfully identifies the differences between a fake image and a real one, “the next fake it produces becomes more seemingly authentic” – or in better shape, as the personal trainer analogy suggests (a minimal code sketch of this feedback loop appears after this list). As deepfakes become more and more indistinguishable from the real thing, Kearns added this warning: “Be ever vigilant.”
- Growing concerns about deepfakes – and even DeFake’s recent hackathon grand-prize victory – have lots to do with the race for president in the U.S. “Deepfakes are a threat that needs to be detected due to the possibility that this could be used as a quick and deceptive form of misinformation as we approach the 2020 U.S. Presidential elections,” said Lysenko. In fact, DeFake describes the motivation for its machine-learning project like this: “The upcoming election season is predicted to be drowned out by a mass influx of fake news. Deepfakes are a new method to impersonate famous figures saying fictional things, and could be particularly influential in the outcome of this and future elections. With international misinformation becoming more common, we wanted to develop a level of protection and reality for users.” Alex Wang, a University of Pennsylvania freshman studying computer and information science in the School of Engineering and Applied Science, provided this context in an opinion piece in the Penn Political Review: “Much of the concern surrounding deepfakes centers around the 2020 election due to the existence of both large datasets and motivations to target political figures. What would the public reaction be if a doctored video of Elizabeth Warren disparaging Mexican immigrants were to be released?…Would it be legal for Joe Biden’s campaign to create negative deepfakes of opposition candidates?”
- In the universe of Internet interaction, deepfakes have much broader implications about how we communicate and how we make decisions based on what we believe to be true. Wharton management professor Saikat Chaudhuri, who is also the executive director of Wharton’s Mack Institute for Innovation Management, recently interviewed Sherif Hanna, vice president of ecosystem development at Truepic, a photo and video verification platform, on his SiriusXM radio show, Mastering Innovation. Hanna, whose company has developed a solution to verify the source of images, has given a great deal of thought to this issue of misrepresentation, noting that the website “thispersondoesnotexist.com” presents a completely AI-generated image every time you hit the refresh button. “We as a society and as a world at large depend on photos and videos in almost every aspect of daily life…at the same time there’s a rising tide of threats against those photos and videos that we’ve come to rely on and there’s a decline in trust of what you see,” said Hanna. “The danger for society is losing consensus around the shared sense of reality. That’s kind of a big deal if we can’t all agree on what it is that happened because we can’t agree that we trust the photos and videos that document events. It becomes very difficult to make joint decisions as a society if everyone’s perception of what happened differs substantially.”
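To make Kearns’s “personal trainer” analogy concrete, here is a minimal sketch of an adversarial training loop of the kind used in generative adversarial networks (GANs), written in Python with PyTorch. It is purely illustrative and not the code behind any actual deepfake tool: a generator produces fakes, a discriminator tries to tell fake from real, and each round of feedback makes the next fake harder to spot. The toy data (samples from a Gaussian) and the tiny network sizes are assumptions chosen only to keep the example runnable.

```python
# Illustrative only: a miniature adversarial loop on toy 1-D data, mirroring the
# "personal trainer" feedback cycle described above. Real deepfake systems use
# far larger image and video models, but the training dynamic is the same.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data the generator must learn to imitate: a shifted Gaussian.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) The "trainer" (discriminator) learns to separate real samples from fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) The generator adjusts so its next fakes fool the discriminator more often.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples cluster near the real data's mean (about 4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

The same tug-of-war is why detection is so hard: any detector that reliably catches today’s fakes effectively becomes the “trainer” for tomorrow’s.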
Truepic and the PennApps project DeFake are working to restore and preserve truth – at least in what we see. Lysenko, who plans to pursue a career in research by leading her own research group and teaching at a university, as well as developing technologies within startups and companies, calls machine learning “truly a super power when it comes to solving some of the hardest challenges today.”
Skeptics, like Wang, point out that this superpower can also serve to make the challenge even greater, calling deepfake detection a losing battle. “The ongoing battle to detect deepfakes is a perfect mirror for the technology itself: as algorithms to detect deepfakes improve, deepfake creators adapt to changes by generating even more realistic ones.”
Is what you’re viewing real? “Be ever vigilant.”
Related Links
- DeFake Devpost
- Penn Today: Deepfake Detector Wins PennApps XX
- The Christian Science Monitor: Surviving the First Deepfake Election
- Penn Political Review
- Mastering Innovation Interview with Sherif Hanna
- Mashable: The Deepfake Apocalypse Never Came
Conversation Starters
What are deepfakes and why are they making headlines? Will deepfakes change your behavior on the Internet?
How did humor help give rise to deepfakes? Have they crossed an important line?
Sherif Hanna says, “The danger for society is losing consensus around the shared sense of reality.” Discuss and debate this topic in small groups. Is AI threatening our very existence as seekers of truth? Use the articles in the Related KWHS Stories tab to help inform your discussion. Is it possible to restore trust on the Internet?
Many say that deepfakes haven’t played a role in the U.S. election after all. What happened? Check out the Related Links with this article for a Mashable article about deepfakes during the election.
I believe that there is a direct correlation between the dangers of deepfakes and the way people are swayed by fake news in the media today. If we take a look at the Coronavirus Pandemic that is happening now, we can see a prime example of this; mass hysteria has broken out due to the influence of the media, which in turn has led to panic buying and vast economic consequences. So, the more people there are blindly listening to whatever the news tells them, the more dangerous deepfakes become. Essentially, these people will begin feeding fuel to the fire. From a recent Model UN conference I took part in, I learned that due to Moore’s Law the rise of AI in the coming years is essentially undeniable. Thus, I believe it’s important that we turn to solutions such as project DeFake that attempt to reduce the number of people who fall “victim” to deepfakes. Furthermore, rather than directly addressing how we can minimize the number of deepfakes being shared and circulated, I think we should be asking ourselves how we can change the general public’s perception of the media so they can learn how to recognize and properly respond to this type of fake news.
Not only are deep fakes very realistic and hard to distinguish from real videos, they have a tremendous influence on public opinion. As Sofiya Lysenko said, deep fakes are dangerous and could cause voters to be misled with false information, ultimately influencing election results. The ways technology can be used to manipulate and sway the opinions of others reminded me of a presentation I did on the ethics of photo manipulation in journalistic practices and public opinion for Future Business Leaders of America. While researching for my speech, I found an interesting theory proposed by Dr. Allan Paivio. His theory is called the “Dual-Coding Theory,” and it states that humans have two cognitive processes that function separately. One process is verbal and the other is non-verbal, like an image or video, and the reaction of one can trigger a response from the other. So something like the word “dog” would cause an image of a fluffy, four-legged friend to pop into most people’s heads.
The Dual-Coding Theory can be applied to deep fakes in political campaigns. People may release deep fake videos of presidential candidates saying things that cause viewers to believe the candidate says and does things they do not agree with. Then the next time they hear that candidate’s name mentioned, the image of the deep fake video they saw previously will come to mind, and they will associate the negative things in the deep fake video with the candidate. This could leave a permanent impression on the voter and cause them to not vote for that candidate in the upcoming election because of the false information they have about the candidate from the deep fake video. In order for the integrity of the United States’ elections to be maintained, this manipulation needs to be stopped. For my business project we also reached out to the president of the National Press Photographers Association (NPPA), Akili Ramsess, to ask her how to stop photo manipulation. She referred us to the code of ethics outlined by the NPPA, in which I found many things that could also be applicable to deep fakes, such as principle five: “While photographing subjects do not intentionally contribute to, alter, or seek to alter or influence events.” I believe that there should be more organizations like Lysenko’s “DeFake” that find deep fakes, as well as regulate companies and people that work with deep fakes. Technology is omnipresent in our society today, and it needs to be regulated in order to preserve our rights.
Replying to Andrew O
Hey Andrew, I find your insight and unique experiences quite intriguing, and it got me thinking: What is the most effective way of stopping and preventing deep fakes? You proposed more companies and organizations similar to Lysenko’s “DeFake.” However, although that might seem to work at first glance, I don’t believe it is sustainable or practical to have private organizations try to stop the mass spread and production of deepfakes. What I propose instead is the passing of legislation that makes it a felony to aid in and produce deepfakes. This way, the effort to stop deepfakes is not only nationwide, but also supported and funded by the U.S. government. The government could enforce this piece of legislation by creating its own federally funded agency that takes oversight and control in preventing the use and spread of deepfakes.
Having the government step in isn’t a new idea. In 2018, Senator Ben Sasse submitted a bill called the Malicious Deep Fake Prohibition Act of 2018. The bill was read twice in the Senate but did not pass; it was deemed to have too many holes and to be insufficient.
Reason.com provides a good example of why the bill that Senator Ben Sasse proposed doesn’t work. Part of the bill states that it prohibits the distribution of an audiovisual record with the intent that the distribution would facilitate tortious conduct. Let’s say, for example, that Bob throws an old phone with a deepfake on it at an angry neighbor. This action might count as an act of facilitating and distributing deepfakes, and Bob might even end up with a ten-year felony because it “facilitates violence,” a punishment that is too extreme. This scenario demonstrates how the proposed bill may produce undesired consequences and needs to be refined.
Now, there are some very important points that you bring up. For example, you talk about how deepfakes can easily sway the mind, especially in politics. Using the Dual-Coding Theory that you explained, we know that even one image or video of a deepfake of a political candidate can forever brand a negative image into a person’s brain whenever they hear of or think of that candidate. That is the exact reason why some states are contemplating and are in the process of passing their own deepfake prohibition acts. However, not all states are, and the process is extremely slow. What we can and should do is have the United States Congress take another look at the 2018 bill proposed by Senator Sasse, refine it, and add in parts that are missing or vital to the success of this legislation. For example, it could narrow down and specify clearly what counts as a deepfake, as well as outline specific punishments for specific deepfake crimes. Only then can our country work together to prevent the making and distribution of deepfakes. With our government stepping in, there will be no need for a variety of separate privately funded organizations trying to combat the problem of deepfakes on their own. Like you mentioned in your final sentence, technology is everywhere nowadays, and we must put a leash on it in order to preserve our rights. The best way to do that is to have Congress pass legislation that does so.
Replying to Ivan Z
Hello Ivan, I found your idea of taking legislative action on the deepfake problem and making it a felony very interesting, and thought it was a great course of action to expand upon even four years later. As deepfakes continue to grow exponentially in use, whether for comedic purposes on Instagram reels or with more malicious intent to humiliate a specific target, it is more apparent nowadays that the internet’s overwhelmingly rapid capacity to distribute media strongly overpowers any kind of legal or legislative action.
Realistically, it’s not really possible to put a leash around a scattering mass that multiplies in size with every second that passes. After all, to compare it to something similar, not every person who has involved themselves with internet piracy has been charged, even though there are clear laws that specifically discourage and condemn piracy of copyrighted media. Whether it be pirating games or TV shows, immeasurable numbers of users have dipped their toes into this illegal activity, because it’s easy to do and free. Simply put, the law can’t catch them all because there are too many! One may contemplate this: rather than putting a leash around the individual users of these programs, the government should cut the problem off at the roots and aim directly at the distributors and creators of deepfake software.
Sure, there is also a plethora of this kind of software, but unlike catching individual users, targeting each online tool, especially the major ones that people are likely to use, instantly eliminates a hub of illegal activity. So as developers and distributors of these programs are charged or detained, and more and more users have nowhere else to turn to for making deepfakes, there should eventually be no users and no daring developers left. In theory, that is. In reality, while it would serve to eliminate some sources of deepfake activity, deepfake creation would still be rampant, as such websites can be seen reviving again and again like zombies, just as in internet piracy. So that really leaves us with enforcing a harsher punishment, as you said, a felony. But this, as anyone could easily realize, hinders the development of A.I. technology and scientific progress as a whole, which is not fair to those who wish to use this branch of A.I. for potential good.
This shows both sides of the spectrum, creating a nuanced topic that requires a tedious solution. On one hand, deepfake technology is dangerous waters most likely to be used for harm, but on the other hand, it is a powerful tool that can enable many benefits such as low-budget companies being able to generate advertisements without paying a lot of money. But this can also be seen as an act of A.I. replacing human jobs such as advertisement actors, film directors, voice actors, and special effects directors, while big mega corporations do the same simply to cut costs. But then does that mean the low-business startups should not be given the opportunity at all? Is it unethical? Is it moral? Is it fair? Is it dystopian? And so we go down the rabbit hole of just one of many examples that seems to have no definitive conclusion. Deepfake technology is dangerous and murky waters that in my opinion bring about more potential harm than benefit to technological advancement and social construct.
It is a topic so intricate that it branches out to all disciplines of business ethics and philosophy that I think can only be solved through unrealistic utopian methods regarding the harmony of humanity. But this much is clear: there is a boundary that need not ever be crossed, and there has to be harsh border patrol.
Deep fakes will change your behavior on social media because they remind you of how much prejudice there is. For example, deepfaking is when false information is presented on a subject. Trump also mentioned it in a speech because there is a lot of fake news in the air and nobody knows what to believe, which is a ridiculous situation for U.S. citizens to be in.
Responding to Andrew O., I want to agree with his point about the “Dual-Coding Theory,” but also add on and stress the importance of a single story and how it can affect the premise of the “Dual-Coding Theory.” I want to emphasize this detail and push back on his point after a recent speech I gave at my school about “The Power of Fake News in Our Society.” Although Andrew’s point about the “Dual-Coding Theory” and human behavior is generally correct, we can’t ignore the factor of a single story, which can change how people respond. This was brought up as a valuable point by Chimamanda Ngozi Adichie in her TED Talk, where Adichie mentions that, in her own personal experience, she encountered many false narratives about her when she first moved to college. She was generally misinterpreted by everyone because of false information about her native country, Nigeria. This should be noted in Dr. Allan Paivio’s theory, as his “Dual-Coding Theory” neglects the factor of a single-story perspective: one can’t demystify deep fakes that easily if they are heavily influenced by only one story throughout their lifetime. This consequently might explain why people may not think logically and instead irrationally fight for their beliefs. This is a complicated yet powerful topic to discuss, because neglecting the factor of different viewpoints can cause consequential dilemmas.
An example of these intricacies is a recent news story about a father who read an article about a sex-trafficking ring supposedly operating in a local coffee shop he lived close to. He took action based on the false information he read by storming into the cafe and shooting multiple rounds into the air. Luckily, no one was hurt, but this clearly demonstrates how the “Dual-Coding Theory” isn’t necessarily correct: if the father didn’t have kids, or if an average person had been reading the article, they wouldn’t have taken this drastic measure to go to the cafe and shoot rounds into the air. This presents a flaw in Paivio’s theory, as many people in our society don’t generally follow the “Dual-Coding Theory”; they are usually motivated to act based on their experience with the topic. Therefore, I don’t fully believe that technology should be used to change one’s perspective forcefully. Instead, it should be used to educate people to make more rational decisions. The “Dual-Coding Theory” fails to mention how different perspectives vary, as we all grew up with different experiences, leading us to make decisions based on our own separate experiences. Therefore, I believe the most effective way to demystify and stop the spread of false news is to educate people more. We can’t simply judge others based on how they will generally respond according to the “Dual-Coding Theory,” as people will be influenced by their own experiences.
A relatable example of how education, rather than directly silencing someone’s voice, can ease these tensions can be seen in the different views held by people in different socioeconomic classes. People in the middle-to-upper classes might view dogs as adorable animals, while others, particularly in impoverished countries, might view these animals as beaten-up, stray, malnourished, homeless creatures that crawl the streets. So given this example, we should not be so steadfast in changing our society’s views so rapidly, as we all have different views. Therefore, we should all be rational and reconsider this situation before we dismiss others based on our general opinions. Although deep fakes are ravaging our society harshly, most of their influence results from distinctive views between people.
If we are all exposed to the diversity of views surrounding a topic, we may be more inclined to come to an agreement. In conclusion, I disagree with Andrew O.’s point about judging others with technology based on society in general: doing so is unreasonable, as minorities among us still hold different views because we are all unique.
Deepfake technology, driven by advancements in deep learning and generative models like GANs, presents both opportunities and challenges. To mitigate the risks associated with deepfakes and enhance their positive applications, several improvements can be implemented. First and foremost, enhanced detection algorithms are essential. By refining machine learning models to detect subtle inconsistencies in deepfake videos, the accuracy of identifying fake content can be significantly improved. Techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can be optimized for this purpose.
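As a rough illustration of the CNN-based detection described above, here is a minimal sketch in Python with PyTorch of a small convolutional classifier that scores a single video frame as real or fake. The network size, the 64x64 face-crop input, and the random stand-in batch are assumptions made only to keep the example self-contained; this is not the architecture used by DeFake or any particular detector.

```python
# Minimal sketch of a frame-level deepfake classifier (illustrative only).
# Assumes 64x64 RGB face crops labeled real (0) or fake (1).
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, 1))

    def forward(self, x):
        return self.head(self.features(x))  # raw logit; higher means "more likely fake"

model = FrameClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a random stand-in batch
# (real use would require a labeled dataset of face crops).
frames = torch.randn(8, 3, 64, 64)            # batch of 64x64 RGB frames
labels = torch.randint(0, 2, (8, 1)).float()  # 0 = real, 1 = fake
loss = criterion(model(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference time, a sigmoid turns the logit into a per-frame "fake" probability.
with torch.no_grad():
    prob_fake = torch.sigmoid(model(frames)).squeeze(1)
print(prob_fake)
```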
In conjunction with better detection algorithms, robust authentication systems can ensure the integrity of digital content. Digital watermarking can embed invisible markers within videos to verify their origin and authenticity, while blockchain technology can provide a decentralized ledger to track the creation and distribution of content, making it difficult to alter without detection.
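The watermarking and ledger idea can be illustrated with a toy provenance chain: hash each video file when it is captured, and link each record to the hash of the previous record so that tampering anywhere breaks the chain. The sketch below is a simplified, single-machine stand-in, not Truepic’s actual system and not a real blockchain, and the class and function names are invented for this example.

```python
# Toy illustration of content-provenance tracking: each video file is hashed,
# and each record links to the hash of the previous record, so altering any
# entry (or any registered file) is detectable.
import hashlib
import json
import time

def file_fingerprint(path: str) -> str:
    """SHA-256 hash of the raw video bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.records = []

    def register(self, path: str, source: str) -> dict:
        prev = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "file_hash": file_fingerprint(path),
            "source": source,
            "timestamp": time.time(),
            "prev_record_hash": prev,
        }
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return record

    def verify(self, path: str) -> bool:
        """True only if the file's current hash matches a registered record."""
        current = file_fingerprint(path)
        return any(r["file_hash"] == current for r in self.records)

# Usage sketch (file paths are hypothetical):
#   ledger = ProvenanceLedger()
#   ledger.register("clip_from_camera.mp4", source="verified capture app")
#   ledger.verify("clip_downloaded_later.mp4")  # False if the clip was altered
```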
Additionally, improving GAN models can also help reduce the creation of malicious deepfakes. Refining the training process to include ethical guidelines and constraints can prevent misuse. Furthermore, advancements in GAN architecture, such as integrating attention mechanisms and better loss functions, can enhance the quality and safety of generated content.
User education and awareness tools also play a vital role in this landscape. By developing browser extensions or mobile apps that analyze videos in real-time and provide authenticity scores, individuals can be empowered to identify fake content. Ethical AI frameworks can guide researchers and developers, outlining best practices for creating deepfakes for legitimate purposes while setting boundaries to prevent harmful applications.
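As a sketch of what the “authenticity score” in such a browser extension or app might boil down to, the snippet below aggregates hypothetical per-frame fake probabilities (for instance, from a frame classifier like the one sketched earlier) into a single score and a plain-language verdict. The thresholds and function names are illustrative assumptions, not how DeFake or any real tool computes its score.

```python
# Hypothetical aggregation step for a browser-extension-style checker:
# turn per-frame "fake" probabilities into one authenticity score and a verdict.
from statistics import mean

def authenticity_score(frame_fake_probs: list[float]) -> float:
    """1.0 = looks authentic, 0.0 = looks manipulated."""
    if not frame_fake_probs:
        raise ValueError("no frames were analyzed")
    return 1.0 - mean(frame_fake_probs)

def verdict(score: float) -> str:
    # Thresholds are arbitrary placeholders for illustration.
    if score >= 0.8:
        return "Likely authentic"
    if score >= 0.5:
        return "Possibly manipulated -- verify the source"
    return "Likely manipulated"

# Example: mostly low per-frame fake probabilities yield a reassuring score.
probs = [0.05, 0.10, 0.08, 0.20, 0.07]
score = authenticity_score(probs)
print(f"{score:.2f}: {verdict(score)}")
```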
Moreover, collaborative research and the establishment of international standards can ensure a unified response to deepfake challenges. Shared research initiatives and public databases of deepfakes and real videos can aid in training more effective detection algorithms. This collaboration must be supported by regulation and policy development. Governments and regulatory bodies need to enforce policies that address the creation and distribution of deepfakes, mandating the use of authenticity markers in digital content and penalizing harmful deepfakes. Supporting research and development of detection technologies through policy can further help manage the risks.
By focusing on these improvements, deepfake technology can be better managed, ensuring its benefits are harnessed while minimizing its risks. This approach will help maintain public trust in digital media and prevent the harmful impacts of malicious deepfakes.