AI and the Human Brain: Holding On to Our Humanity

by Diana Drake
[Illustration: a human brain with glowing neural connections and intersecting lines, representing neural activity and connectivity.]

Each year we celebrate the voices of high school students around the world as they reflect on Wharton Global Youth content during our Comment and Win competition.

Zhenghao (Eric) Z., a high school senior from Potomac, Maryland, in the U.S., was particularly intrigued by neuroscience, the study of the brain and nervous system. In his comment on the article “Neuroeconomics: Getting to the Root of How We Think,” Eric wrote, “Neuroscience, with its ability to decode the human mind, is poised to redefine business strategies and societal norms… Envision a society where knowledge can be directly uploaded into our minds, rendering the traditional, time-consuming learning process obsolete. This could revolutionize efficiency in our world, but it also raises questions about potential vulnerabilities.”

We hear you, Eric. In particular, the revolutionary age of artificial intelligence, when computers are learning tasks in ways that mimic human intelligence, is igniting a firestorm of questions about the respective roles of the human brain and machines, the brain-computer interface, and how the two will work together. Can humanity and technology coexist safely and successfully?

GIGO and More Food for Thought

The Wharton Neuroscience Summit, led by the Wharton School’s Neuroscience Initiative, brought together brain and business experts to discuss “Human Brain Capital in the Artificial Intelligence Era.”

The Brain Capital Alliance defines brain capital as an economic asset that prioritizes brain health and brain skills to accomplish socioeconomic objectives. The panel discussed the power of human thought amid rampant algorithmic innovation. One acronym for your tech toolbox? GIGO. “Garbage in, garbage out” suggests that the quality of the output from an AI model is directly related to the quality of the input data. As humans, we can analyze the information we’re receiving and potentially filter out the garbage before making decisions. Machines, not so much, which can lead to inaccuracies and misinformation.
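
Curious to see GIGO in action? Here is a minimal, hypothetical sketch in Python (our illustration, not anything presented at the summit), assuming scikit-learn is installed: the same simple model is trained once on clean labels and once on partly scrambled “garbage” labels, and the garbage-fed version scores measurably worse on the same test data.

```python
# Toy illustration of GIGO ("garbage in, garbage out"): train the same
# model on clean labels and on partly corrupted labels, then compare
# accuracy on identical test data. Illustrative sketch only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A simple synthetic binary classification dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Garbage in": randomly flip 30% of the training labels.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

for name, labels in [("clean labels", y_train), ("30% corrupted labels", noisy)]:
    model = LogisticRegression().fit(X_train, labels)
    acc = accuracy_score(y_test, model.predict(X_test))
    # "Garbage out" shows up as lower accuracy for the corrupted run.
    print(f"{name}: test accuracy = {acc:.2f}")
```

The point of the sketch is the panel’s point: the model itself never changes; only the quality of what it is fed does.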

What other issues surface as we envision human brain capital alongside AI? In the spirit of evidence-based learning, here are five direct quotations from the summit’s brain and business experts. How do they stimulate your thinking?

Human smarts. “Computers can do certain things really well. They can store information more reliably longer-term. They have more stable memories than we have. They can do pattern recognition — similarity matching. We should try to think about ways in which we can leave all of that work to be done by machines, and we do what they cannot do. A lot of the conversation around AI is centered on human replacement…I think the more inspiring thing is to think about what they cannot do and focus on that. It’s not about humans being dumb; it’s about discovering what makes us smart, what we can do better.” –Stefano Puntoni, marketing professor, the Wharton School

Connections. “I’m exceedingly concerned when I hear people say that they are going to get a different perspective by asking a computer. I want you to experience it. I want you to go into the world. I want you to get off the computer and go have that physical and mental connection with people. If we start to think that we can get all the answers and perspectives from a large language model…that is very dangerous. I worry most that people are going to get overconfident in this space and miss out on something really important.” –Missy Cummings, Director, Mason Autonomy and Robotics Center, George Mason University

Realistic expectations required. “We haven’t had food insecurity in America because there wasn’t enough computing power. When it comes to the problems we have and the societal changes we need to be making, there’s this waiting-for-Superman idea: oh, now AI is going to fix it. No. AI is going to do what we’ve trained it to do, and we’ve created a society and a whole bunch of ‘garbage in’ for these systems that we’re just going to amplify. These systems are not saviors or solutions; they are amplifiers of what we’re doing.” –Scott Sandland, CEO, Cyrano, artificial empathy and strategic linguistics

Emotions make us human. “This past year, two studies were done where they had patients who opted to get difficult news by a chatbot, like a cancer diagnosis. They rotated ChatGPT and real physicians on the backend of this chatbot, and when they evaluated patient perspectives, the patients found the GPT to be more useful, more empathetic and more human than the physicians. AI can be a highly manipulative tool. And so, going to the place where you need to be to experience humanity is really important. But not losing our humanity in the process of building these technologies is probably the single highest important thing that we have to do right now.” –Michael Jabbour, Chief Innovation Officer, U.S. Education, Microsoft

Systemic change. “In a future where I’m determining if people get smarter or dumber because of AI, I want to know if they’ve fixed the education system. Are people still going to college because they have to, and they want the grade because they have to get the grade? Or do they actually want to learn? If they want to learn, they can learn better with ChatGPT. But if they’re only there because they have to go to get a job, and they’re cheating their way through by using ChatGPT because you have to tick a box, [then I’m less optimistic]. I’m in the academic system, but I think it’s broken. AI is going to make us dumber if we keep going the way we’re going, but it gives us the opportunity to get smarter if we fix the underlying problems of the system that it’s in.” –Gale Lucas, Assistant Professor, University of Southern California Institute for Creative Technologies

How we all think about the potential and power of machine learning will influence human-AI interaction, agreed the panel. We need to have realistic expectations.

“I have a call to action,” said Lucas. “Somebody come up with the answer to this question: What is the new paradigm (model) for AI? When the personal computer came out, people had crazy expectations about what the PC could do until they made it a desktop. When they made it a desktop, people said, I know what a desk can do. This is just like the papers in my desk, and they could think about it that way. What are we going to interact with AI through to give us that paradigm that sets our expectations? How do we teach people not to think it’s the smartest thing ever and trust what it says? It’s got to be calibrated to what it can actually do, not what’s in our heads, which is that it’s going to be perfect and right because it’s a computer.”

Now it’s your turn. How do you answer Dr. Lucas’s call to action?

Lisa Rothstein, cartoonist for The New Yorker and illustrator extraordinaire, lent her services to the Wharton Neuroscience Summit, providing free-form, on-the-fly visual representations for each session. Take that, AI!

Conversation Starters

What does GIGO stand for and why is it important in a conversation about artificial intelligence?

Which quote resonates most with you and why?

Dr. Lucas said, “In a future where I’m determining if people get smarter or dumber because of AI, I want to know if they’ve fixed the education system.” What does she feel are the shortcomings of the system as it now exists? How is that related to generative AI?

10 comments on “AI and the Human Brain: Holding On to Our Humanity”

  1. AI’s power depends entirely on the quality of its input. The phrase “garbage in, garbage out” perfectly sums this up. What worries me most is how broken our education system is. Without real reform, AI will only amplify existing problems rather than solve them.
    Professor Gale Lucas put it well when she said, “AI is going to make us dumber if we keep going the way we’re going, but it gives us the opportunity to get smarter if we fix the underlying problems of the system that it’s in.” This quote stood out to me because it highlights that technology alone won’t save us; meaningful change in how we learn is essential.

  2. As a person who grew up without the influence of AI, I am worried by how quickly the line between human intelligence and AI has blurred. Although I, too, am guilty of using AI to shortcut assignments or Google searches, it comes with the guilt of knowing that the thoughts on my paper aren’t completely and wholly mine. To add to this, the concern of privacy and digital security is not one that is spoken about enough when discussing AI. Recently, I used ChatGPT to search for scholarship information specifically tailored to me. Without giving any details about me or who I am, ChatGPT was able to accurately predict information using my previous inputs. It led me to begin thinking about what we are risking while using AI. Having the world at our fingertips is a larger responsibility than many of us realize, and this article has shifted my perspective with its discussion of “brain capital.” Instead of sitting and worrying about what could be done, we should focus on our “brain capital” and what we can do rather than what we can use AI to do. Focusing on ourselves can be an empowering movement, pushing us into a greater future rather than one in which we let technology control our lives.

    Another concern of mine was how easily we fall into the deception of AI. Being unable to tell the difference between human empathy and an AI chatbot is a serious concern. It speaks volumes about capabilities of AI that we had never known before, and it worries me because we are losing our skills in empathy, kindness, and creativity. We are willing to outsource decisions and humanity to technology rather than doing it ourselves, which is causing us to lose our abilities to reflect, reason, and relate to others: the skills that enabled us to build up nations in the first place. We are using AI to think for us rather than with us, which is detrimental to education and to society as a whole. Preparing for the future means looking at how we can use technology to elevate our thinking rather than to take it away.

  3. GIGO stands for “Garbage In, Garbage Out.” LLMs are trained on massive datasets. If the data is fake, the output will be fake; if the data is biased, the output will be biased. The internet is full of garbage, and that’s where AI pulls its knowledge from. AI isn’t flawless. As humans, we must use our own judgment instead of relying completely on AI, no matter how advanced it seems, because there are things we can do that AI never will. Missy Cummings’ quote about connections hits the nail on the head. AI can’t offer multiple perspectives the way people can. If everyone gets their perspective from a computer, we’ll all end up thinking alike. Relationships and connections with others are irreplaceable. Over-relying on AI could stifle our creativity and harm society’s mental health.

    From a student’s perspective, the current education system isn’t fully adapted to AI. Many students use AI for homework, essays, or even to cheat on tests. They don’t learn anything, switch off their thinking, and waste time without gaining real knowledge or skills. The education system needs to change fast to prepare future generations for work and life. With generative AI, tasks can seem effortless, but that’s the problem. I believe schools should teach us to use AI as a tool. It’s awesome for speeding up work or studying, but only if used properly. People aren’t used to such rapid changes, and adapting to this is a major challenge for society.

  4. We’ve all done it: we’ve all gone and had an argument with someone (whether or not it was really all that important) to the point that the argument blows out of proportion. After a long time of constant bickering, you both finally come to the only conclusion: “What were we talking about in the first place?” That’s precisely where we currently stand in the argument over AI and our humanity. How can we argue about whether we have our humanity when we’ve barely touched on what makes us human?

    People struggle to realize that AI cannot steal the truth of humanity, but it can replicate being humane. Michael Jabbour describes how ChatGPT came across as more human when it generated a response with more empathy. In reality, that’s the act of being humane, not human. The physicians gave an actual, authentic human response. The difference is that humans cannot always empathize and give you the answer you want, which is shown all throughout history. Humanity is simply the act of being human; we’re an imperfect race, one without all the right answers. That is why we now find ourselves constantly turning to AI: it always has the perfect answer. It eliminates the hardship of finding the answer, but our experiences got us to where we are today. AI is today’s threat because it was tomorrow’s perfect promise, the one thing humanity isn’t.

    Our struggle is what sets us apart. In the end, who faces the consequences and the sabotage? Only humans can truly feel the destruction set upon themselves, no matter how much AI mimics humanity. To never despair and always have the answer is the life that AI promises. It sounds great in theory, but it destroys the value and meaning of everything. To never achieve, create, struggle, or acknowledge yourself. To be neither good nor bad, and unable to choose based on your own mental turmoil. That is what it means to lose your humanity.

    Even as I’m writing this, I get increasingly frustrated. I can’t immediately brainstorm the right idea, the perfect way to get my point across. But if we allow ourselves to struggle and make those connections, we will always outgrow AI in some way.

    Our humanity is something AI can never take from us; we just fail to recognize it.

  5. Fire. Automobiles. The internet. And now AI.

    The history of humanity has been defined by new ideas, all of which seem revolutionary in their respective time period. At the dawn of humanity, fire brought us warmth and fear. Half a century ago, the internet changed the way we as humans interacted and developed—in ways both positive and negative. For some, the web meant being able to collaborate with innovative minds across the globe. For others, such as myself, the internet was a way to see my grandma’s face despite living 8,290 miles away. Unfortunately, the World Wide Web also opened new methods of crime, such as hacking and cyberbullying.

    In the 21st century, AI poses a similar situation for humanity: limitless benefits coupled with harrowing dangers. To avoid being a cynic, let me start off with the positive outlook. As aptly stated by Mary Meeker, the “Queen of the Internet,” AI is the fastest-growing tech in human history. I myself can tell you that within two years, I went from someone without a clue of what ChatGPT was to a 16-year-old developing LLMs on my computer. But this technology is more than an outlet for curious coders—it’s an innovation advancing humanity across every aspect of society. Whether it be generating novel protein structures in minutes (a task which used to take years at a time), recovering ancient languages once believed to be lost, or just helping you figure out what to make for dinner, AI is truly making an impact in every sector.

    However, this magical cure-all comes with its side effects. My mother always told me to never trust anything with a mind of its own. Though her words made me live in fear of Siri for much of my childhood, I take them with a grain of salt in the present as someone both utilizing and developing AI extensively. Artificial Intelligence is like a sponge. The more liquid you soak it in, the larger it grows. However, if the water you immerse it in is dirty, the sponge itself will dispense nothing but dirt as well. As the article aptly states, AI follows the principle of GIGO. This is why we see models displaying extensive bias against certain groups: it is simply reflecting the same bias seen in society. In 2018, Amazon attempted to implement an AI algorithm for hiring, but the project was immediately scrapped after visibly discriminating against female candidates—a reflection of the gender inequality in STEM fields.

    Simply put, just like us humans, AI is not perfect. To blindly trust AI is to operate without reasoning in the real world. To stop questioning AI is to lose your humanity. Remember, AI stands for ARTIFICIAL intelligence—it does not reflect us humans as a whole. So, I urge everyone reading this comment to keep my mother’s advice in mind: use AI, but use your own reasoning more. For in the end, it is our brain that makes us the superior primate, not our computers.

  6. While this article mentioned surface-level issues with A.I., what really startled me was the depth to which I began questioning the direction our world is headed.
    Firstly, I would like to agree that, as the article mentions, there is undoubtedly a line between A.I. and us. No matter how developed A.I. becomes, the truth remains that it is but a creation of mankind; it is artificial. But many people question: if this tool were made by us, for us, then what’s the big issue? Well, I’m not trying to be cliche, but I would argue that A.I. can simply never imitate the fire in our hearts. So A.I. can help us, but it will never be able to be us or replace us, which many fear. I believe what Prof. Stefano Puntoni mentioned, “it’s about discovering what makes us smart, what we can do better,” is absolutely correct, in that we should focus on what we can do as humans and leave to A.I. what it does as well as, or better than, us. Ultimately, this robot was trained based on us and can never do more than imitate us. Therefore, when it comes to actually living, does inputting all your questions into a machine and using its answers really make you feel alive? Does using these A.I. tools really make us feel human?
    Moving on to my question for those of us reading this article, as well as those quoted in it: what does it mean to be human? Does it derive from the emotions we have? The genuine feeling of sorrow when we lose one who is precious to us? Or is it the delicate skin and beating heart that we have, the complex body that each Homo sapiens walks around in? Whatever it is, how do we protect this humanity when developing A.I.? How can we harness the power of A.I. without losing ourselves in the process? When I start to wonder about these questions, I get lost.
    To provide a guiding light for those of us who are lost, we must address the root cause of why A.I. use is so common. Our never-ending greed drives us forward, creating and developing more and more. At the same time, many Gen Z youth are crumbling under the pressure to succeed or meet standards. We are human. We must recognize the flaws within us. If not, these flaws will drive us to grasp for more while robbing others of all they have.

  7. If used mindfully, AI can be a very useful and beneficial tool to enhance the learning experience in academia. AI provides many options that can be curated to students’ needs and to the different ways students learn. For example, I learn best through repetition, so using ChatGPT to give me practice problems for a topic in math or to quiz me on Spanish vocabulary caters to my needs. ChatGPT is not perfect, and it makes errors from time to time, but keeping that in mind, even actively looking for errors when being quizzed can be an engaging study strategy. AI usage in learning could be a valuable change to the education system that could benefit students and teachers alike. As long as students are taught what is appropriate when using AI and how to use it mindfully, AI could be a very advantageous, personalized learning tool that can amplify the scholastic experience.

    • I appreciate you sharing your optimistic view of using AI in academia. I can’t disagree that it can be helpful, as I use AI regularly to maximize the efficiency of my learning. Still, your comment made me wonder: how could we safeguard human cognitive resilience if AI were fully implemented in the education system? I would argue that AI could easily be misused by many, which would guarantee its transformation from a tool that is supposed to assist into a tool that keeps future generations dependent. What are the ways we could secure our critical skills while using AI efficiently, considering that we risk erasing struggle, which is proven to deepen understanding and spark curiosity? I’m looking forward to hearing your thoughts on that.

  8. Isn’t it unsettling how quickly humans began to rely on AI, seeing it not even as a tool but, foolishly, as a solution in itself? I strongly stand by the argument that humanity needs to redefine what to expect from AI and how to separate those expectations from the ones we hold for humanity itself.
    Dr. Lucas’s call to invent a paradigm that frames AI the way the desktop defined computing seems quite quaint to me in comparison with something I heard during a lecture a few months ago: the question “How do we retain power over entities more powerful than us, forever?” posed by Stuart Russell, which has stayed with me until now.
    In order to get on the right pathway toward solving Lucas’s challenge, I dare to refer to Russell’s concept of “assistance games,” in which machines always remain uncertain to some extent about human objectives. This leads me to think that instead of looking for a paradigm that frames AI, we should find a way to build systems that never fully rely on their own grasp of human welfare, in order to ensure a secure future for all.
    Regarding Michael Jabbour’s point about patients finding AI more empathetic than human beings, we should ask why that is. From my perspective, this situation arose because humans seek comfort over experience, effort, and authenticity. It ties to the fact that emotional depth and empathy are demanding, and in a world where students ask ChatGPT to list their perspectives, there is no place for true human compassion.
    Perhaps the real concern isn’t finding a paradigm that will tame AI, but confronting a system that pushes humans to depend on AI, even in cases of moral choices or empathy. If we don’t face the underlying issues of that system, no interface will protect our race from building tools that exemplify our blind spots.
    Thank you, Wharton Global Youth, for engaging me in reflecting on those urgent questions.

  9. I felt that the article was very thought-provoking, but I disagree with the idea that students might prefer learning from AI, such as ChatGPT, rather than learning from college professors.

    The article suggested that students who want to learn can perhaps learn better from ChatGPT than from learning in college. I disagree with this statement. As I read through the article, I noticed a big contradiction. GIGO, or “Garbage In, Garbage Out,” was a key concept in the article. The article seemed to imply that students go to college because they have to, and that people don’t actually want to learn in college. Even though this might apply to some people, I think that most people do genuinely want to go to college to learn. As mentioned in the article, ChatGPT has a lot of information that is of a “garbage” quality. Many students, including myself, don’t want to learn from an AI that simply gives back “garbage” information, or at best, information of an ordinary quality. Most students will want to learn from professors, who offer a better quality of education compared to AI such as ChatGPT.

    I believe that many students will want to learn from professors in college rather than learn from AI or generative AI such as ChatGPT. I want to think about it this way. Let’s say that there is a $100 bill lying on a street. It’s been left there for some time, and no one has picked it up. AI will think that, since no one has picked it up, it is very likely that the $100 bill is fake, and leave it at that without examining it closely. However, a professor will pick it up and examine it closely. This is the difference. Although AI may have some good, high-quality information, it is a collection of all (good/high-quality & bad/low-quality) information, so it may not give out the best information to you. At the end of the day, AI will always be a secondhand source that is a collective of good and bad information. On the other hand, a professor will research, filter, and examine information. Then, the professor will present the high-quality information to the students, which is often exclusive to the college.

    What concerns me about information and AI in the future is that good, high-quality information will not be produced by AI, in which case “Garbage In, Garbage Out” will become a reality in our world. I believe that in our future, sooner or later, we will have to pay money or pay for a subscription in order to get good, high-quality information on the internet. Many prestigious sources, such as The Washington Post and The Wall Street Journal, just to name a few, require a payment in order to read their journals or articles. Databases such as JSTOR have a free version, but their paid version gives extensive access to their content. I believe that on top of the many websites that already require you to pay, more and more websites will start to require payment or a subscription to view their content. I also believe that there will be less high-quality information that AI can use. I think that websites would not like their high-quality information to be stolen for use by AI, sometimes even without any credit, which is why more websites will want to start requiring people to pay for their information and content. Then, AI will function in a “Garbage In, Garbage Out” style, which is very concerning, since a lot of people rely on AI for research purposes, decision-making, and so on.

    When thinking about how we can hold on to our humanity, I think that it is very important to consider what differentiates us humans from AI. I believe that the key differentiator between AI and humanity is our soul. I think our soul is about our identity, our passion and desire, as well as our religious beliefs. Our passion can only be satisfied by working hard for it and improving at that one thing that you are passionate about. As a Japanese person, I grew up hearing a Japanese proverb: “好きこそ物の上手なれ.” This is saying that if you love something, you will become good at it. One time, someone told me that I wouldn’t be a good chess player because I started playing chess later in my childhood, because I kept losing games after I started learning, and because I didn’t take chess classes. AI would probably tell me that I won’t become a good chess player as well, because AI likes to determine things based on facts and quantitative information. But what that person and AI didn’t consider was that I had a strong passion for chess. In 3 years, I went from a complete beginner to the top 1% of chess players on chess.com. Another good example would be entrepreneurs. AI will propose, support, and encourage ideas that are similar to past successful business ideas, because AI is based on probability and has information on past cases. AI will not support a radical, new business idea that is an outlier to past cases of successful businesses. AI will not see, or be able to understand, the passion and drive that someone has to make a new business idea successful. Take Steve Jobs, for instance. At the time he invented the iPhone, there were no similar products. However, he was very passionate about building the iPhone, as he managed every detail of it and saw it as a revolution, not simply a product. This goes to show that even if a business idea or product is new and radical, if you have passion, it is more likely to succeed. AI is not able to take into account someone’s passion because AI doesn’t have a soul, unlike us humans. I would also point out that AI will not have the ability to make decisions based on intuition, such as believing that someone’s new, radical business idea will be successful because that person is passionate about it.

    Additionally, AI will not be able to have any religious beliefs. I think about god, the afterlife, reincarnation, and karma, but AI doesn’t. If something that you do isn’t ethical but will benefit you in the short term, AI may suggest that you do it. AI may suggest you take action without any wisdom that is based on religion. Religion won’t be a central part of decision-making and actions for AI, unlike for us humans. Religion often teaches us that if we do good, even if it means sacrificing ourselves, we will be rewarded in some way, whether through god, the afterlife, reincarnation, etc.

    With all that being said, I think that there are a few ways to make our AI better. First, I think that AI should try to get information from trusted sources. I think AI should have a filtration system in which, when it comes across information, it compares that information against fact-checking websites. In addition, I believe that we should also input a lot of books into the AI’s database. Since AI can’t truly get life experiences and gain wisdom, books will be a decent but not perfect substitute for “getting life experiences,” as well as a way to better learn about human connections and emotions. However, books will not be able to truly replace our process of getting life experiences.

    AI will continue to grow, and if I’m being honest, it scares me. But I will keep searching for ways to differentiate myself from AI.
