AI and the Human Brain: Holding On to Our Humanity

by Diana Drake
[Image: A digital illustration of a human brain with glowing neural connections and intersecting lines, representing neural activity and connectivity.]

Each year we celebrate the voices of high school students around the world as they reflect on Wharton Global Youth content during our Comment and Win competition.

Zhenghao (Eric) Z., a high school senior from Potomac, Maryland, in the U.S., was particularly intrigued by neuroscience, the study of the brain and nervous system. In his comment on the article Neuroeconomics: Getting to the Root of How We Think, Eric wrote, “Neuroscience, with its ability to decode the human mind, is poised to redefine business strategies and societal norms… Envision a society where knowledge can be directly uploaded into our minds, rendering the traditional, time-consuming learning process obsolete. This could revolutionize efficiency in our world, but it also raises questions about potential vulnerabilities.”

We hear you, Eric. In particular, the revolutionary age of artificial intelligence, when computers are learning tasks in ways that mimic human intelligence, is igniting a firestorm of questions about the roles of the human brain and machines, the brain-computer interface, and how the two will work together. Can humanity and technology coexist safely and successfully?

GIGO and More Food for Thought

The Wharton Neuroscience Summit, led by the Wharton School’s Neuroscience Initiative, brought together brain and business experts to discuss Human Brain Capital in the Artificial Intelligence Era.

The Brain Capital Alliance defines brain capital as an economic asset that prioritizes brain health and brain skills to accomplish socioeconomic objectives. The panel discussed the power of human thought amid rampant algorithmic innovation. One acronym for your tech toolbox? GIGO. Garbage in, garbage out suggests that the quality of the output from an AI model is directly related to the quality of the input data. As humans, we can analyze the information we’re receiving and potentially filter the garbage before making decisions. Machines, for the most part, cannot, which can lead to inaccuracies and misinformation.
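The GIGO idea can be sketched in a few lines of code. Here is a minimal, hypothetical Python example (the survey data is invented for illustration, not from the summit): the same simple averaging “model” produces a nonsense answer when fed unfiltered garbage, and a sensible one when a human-style sanity check cleans the input first.

```python
def average_rating(ratings):
    """Return the mean of a list of ratings."""
    return sum(ratings) / len(ratings)

# Hypothetical survey responses on a 1-5 scale;
# the 999 entries are data-entry errors -- the "garbage."
raw = [4, 5, 999, 3, 4, 999, 5]

# A system that ingests everything blindly amplifies the garbage...
naive = average_rating(raw)  # about 288.4, a nonsense "average rating"

# ...while filtering out invalid values first gives a sensible answer.
clean = average_rating([r for r in raw if 1 <= r <= 5])  # 4.2

print(naive, clean)
```

The point is not the arithmetic but the principle: the model is identical in both cases, so the quality of the output is determined entirely by the quality of the input.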

What other issues surface as we envision human brain capital alongside AI? In the spirit of evidence-based learning, here are five direct quotations from the summit’s brain and business experts. How do they stimulate your thinking?

Human smarts. “Computers can do certain things really well. They can store information more reliably longer-term. They have more stable memories than we have. They can do pattern recognition — similarity matching. We should try to think about ways in which we can leave all of that work to be done by machines, and we do what they cannot do. A lot of the conversation around AI is centered on human replacement…I think the more inspiring thing is to think about what they cannot do and focus on that. It’s not about humans being dumb; it’s about discovering what makes us smart, what we can do better.” –Stefano Puntoni, marketing professor, the Wharton School

Connections. “I’m exceedingly concerned when I hear people say that they are going to get a different perspective by asking a computer. I want you to experience it. I want you to go into the world. I want you to get off the computer and go have that physical and mental connection with people. If we start to think that we can get all the answers and perspectives from a large language model…that is very dangerous. I worry most that people are going to get overconfident in this space and miss out on something really important.” –Missy Cummings, Director, Mason Autonomy and Robotics Center, George Mason University

Realistic expectations required. “We haven’t had food insecurity in America because there wasn’t enough computing power. When it comes to the problems we have and the societal changes we need to be making, there’s this waiting-for-Superman idea: oh, now AI is going to fix it. No. AI is going to do what we’ve trained it to do, and we’ve created a society and a whole bunch of ‘garbage in’ for these systems that we’re just going to amplify. These systems are not saviors or solutions; they are amplifiers of what we’re doing.” –Scott Sandland, CEO, Cyrano, artificial empathy and strategic linguistics

Emotions make us human. “This past year, two studies were done where they had patients who opted to get difficult news by a chatbot, like a cancer diagnosis. They rotated ChatGPT and real physicians on the backend of this chatbot, and when they evaluated patient perspectives, the patients found the GPT to be more useful, more empathetic and more human than the physicians. AI can be a highly manipulative tool. And so, going to the place where you need to be to experience humanity is really important. But not losing our humanity in the process of building these technologies is probably the single highest important thing that we have to do right now.” –Michael Jabbour, Chief Innovation Officer, U.S. Education, Microsoft

Systemic change. “In a future where I’m determining if people get smarter or dumber because of AI, I want to know if they’ve fixed the education system. Are people still going to college because they have to, and they want the grade because they have to get the grade? Or do they actually want to learn? If they want to learn, they can learn better with ChatGPT. But if they’re only there because they have to go to get a job, and they’re cheating their way through by using ChatGPT because you have to tick a box, [then I’m less optimistic]. I’m in the academic system, but I think it’s broken. AI is going to make us dumber if we keep going the way we’re going, but it gives us the opportunity to get smarter if we fix the underlying problems of the system that it’s in.” –Gale Lucas, Assistant Professor, University of Southern California Institute for Creative Technologies

How we all think about the potential and power of machine learning will influence human-AI interaction, agreed the panel. We need to have realistic expectations.

“I have a call to action,” said Lucas. “Somebody come up with the answer to this question: What is the new paradigm (model) for AI? When the personal computer came out, people had crazy expectations about what the PC could do until they made it a desktop. When they made it a desktop, people said, I know what a desk can do. This is just like the papers in my desk, and they could think about it that way. What are we going to interact with AI through to give us that paradigm that sets our expectations? How do we teach people not to think it’s the smartest thing ever and trust what it says? It’s got to be calibrated to what it can actually do, not what’s in our heads, which is that it’s going to be perfect and right because it’s a computer.”

Now it’s your turn. How do you answer Dr. Lucas’s call to action?

Lisa Rothstein, cartoonist for the New Yorker and illustrator extraordinaire, lent her services to the Wharton Neuroscience Summit, providing free-form, on-the-fly visual representations for each session. Take that, AI!

Conversation Starters

What does GIGO stand for and why is it important in a conversation about artificial intelligence?

Which quote resonates most with you and why?

Dr. Lucas said, “In a future where I’m determining if people get smarter or dumber because of AI, I want to know if they’ve fixed the education system.” What does she feel are the shortcomings of the system as it now exists? How is that related to generative AI?
