AI and the Human Brain: Holding On to Our Humanity

Each year we celebrate the voices of high school students around the world as they reflect on Wharton Global Youth content during our Comment and Win competition.
Zhenghao (Eric) Z., a high school senior from Potomac, Maryland, in the U.S., was particularly intrigued by neuroscience, the study of the brain and nervous system. In his comment on the article Neuroeconomics: Getting to the Root of How We Think, Eric wrote, “Neuroscience, with its ability to decode the human mind, is poised to redefine business strategies and societal norms… Envision a society where knowledge can be directly uploaded into our minds, rendering the traditional, time-consuming learning process obsolete. This could revolutionize efficiency in our world, but it also raises questions about potential vulnerabilities.”
We hear you, Eric. In particular, the revolutionary age of artificial intelligence, when computers learn tasks in ways that mimic human intelligence, is igniting a firestorm of questions about the respective roles of the human brain and machines, the brain-computer interface, and how the two will work together. Can humanity and technology coexist safely and successfully?
GIGO and More Food for Thought
The Wharton Neuroscience Summit — led by the Wharton School’s Neuroscience Initiative — brought together brain and business experts to discuss Human Brain Capital in the Artificial Intelligence Era.
The Brain Capital Alliance defines brain capital as an economic asset that prioritizes brain health and brain skills to accomplish socioeconomic objectives. The panel discussed the power of human thought amid rampant algorithmic innovation. One acronym for your tech toolbox? GIGO. Garbage in, garbage out suggests that the quality of the output from an AI model is directly related to the quality of the input data. As humans, we can analyze the information we’re receiving and potentially filter the garbage before making decisions. Machines, not so much, which can lead to inaccuracies and misinformation.
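Since GIGO anchors so much of the discussion that follows, here is a minimal, hypothetical sketch of the principle in Python, using the scikit-learn library. The dataset, model choice, and corruption rate are illustrative assumptions, not anything presented at the summit: the same algorithm is trained twice, once on clean labels and once on systematically mislabeled ones.

```python
# A minimal sketch of "garbage in, garbage out": the same learning
# algorithm, trained once on clean labels and once on corrupted ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic stand-in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean input data -> a reasonable model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Garbage" input data: systematically mislabel 60% of one class,
# the way biased or sloppy data collection might.
rng = np.random.default_rng(0)
garbage_y = y_train.copy()
positives = np.where(y_train == 1)[0]
flipped = rng.choice(positives, size=int(0.6 * len(positives)), replace=False)
garbage_y[flipped] = 0
garbage_model = LogisticRegression(max_iter=1000).fit(X_train, garbage_y)

# Same algorithm, same test set; only the input quality differs.
print("accuracy with clean data:  ", clean_model.score(X_test, y_test))
print("accuracy with garbage data:", garbage_model.score(X_test, y_test))
```

The second model is not a worse algorithm; it is the same algorithm fed worse data, and its accuracy suffers accordingly. That is the panel’s amplifier idea in miniature: the system faithfully reproduces whatever quality, or bias, its inputs contain.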
What other issues surface as we envision human brain capital alongside AI? In the spirit of evidence-based learning, here are five direct quotations from the summit’s brain and business experts. How do they stimulate your thinking?
Human smarts. “Computers can do certain things really well. They can store information more reliably longer-term. They have more stable memories than we have. They can do pattern recognition — similarity matching. We should try to think about ways in which we can leave all of that work to be done by machines, and we do what they cannot do. A lot of the conversation around AI is centered on human replacement…I think the more inspiring thing is to think about what they cannot do and focus on that. It’s not about humans being dumb; it’s about discovering what makes us smart, what we can do better.” –Stefano Puntoni, marketing professor, the Wharton School
Connections. “I’m exceedingly concerned when I hear people say that they are going to get a different perspective by asking a computer. I want you to experience it. I want you to go into the world. I want you to get off the computer and go have that physical and mental connection with people. If we start to think that we can get all the answers and perspectives from a large language model…that is very dangerous. I worry most that people are going to get overconfident in this space and miss out on something really important.” –Missy Cummings, Director, Mason Autonomy and Robotics Center, George Mason University
Realistic expectations required. “We haven’t had food insecurity in America because there wasn’t enough computing power. When it comes to the problems we have and the societal changes we need to be making, there’s this waiting-for-Superman idea: oh, now AI is going to fix it. No. AI is going to do what we’ve trained it to do, and we’ve created a society and a whole bunch of ‘garbage in’ for these systems that we’re just going to amplify. These systems are not saviors or solutions; they are amplifiers of what we’re doing.” –Scott Sandland, CEO, Cyrano, artificial empathy and strategic linguistics
Emotions make us human. “This past year, two studies were done where they had patients who opted to get difficult news by a chatbot, like a cancer diagnosis. They rotated ChatGPT and real physicians on the backend of this chatbot, and when they evaluated patient perspectives, the patients found the GPT to be more useful, more empathetic and more human than the physicians. AI can be a highly manipulative tool. And so, going to the place where you need to be to experience humanity is really important. But not losing our humanity in the process of building these technologies is probably the single highest important thing that we have to do right now.” –Michael Jabbour, Chief Innovation Officer, U.S. Education, Microsoft
Systemic change. “In a future where I’m determining if people get smarter or dumber because of AI, I want to know if they’ve fixed the education system. Are people still going to college because they have to, and they want the grade because they have to get the grade? Or do they actually want to learn? If they want to learn, they can learn better with ChatGPT. But if they’re only there because they have to go to get a job, and they’re cheating their way through by using ChatGPT because you have to tick a box, [then I’m less optimistic]. I’m in the academic system, but I think it’s broken. AI is going to make us dumber if we keep going the way we’re going, but it gives us the opportunity to get smarter if we fix the underlying problems of the system that it’s in.” –Gale Lucas, Assistant Professor, University of Southern California Institute for Creative Technologies
How we all think about the potential and power of machine learning will influence human-AI interaction, agreed the panel. We need to have realistic expectations.
“I have a call to action,” said Lucas. “Somebody come up with the answer to this question: What is the new paradigm (model) for AI? When the personal computer came out, people had crazy expectations about what the PC could do until they made it a desktop. When they made it a desktop, people said, I know what a desk can do. This is just like the papers in my desk, and they could think about it that way. What are we going to interact with AI through to give us that paradigm that sets our expectations? How do we teach people not to think it’s the smartest thing ever and trust what it says? It’s got to be calibrated to what it can actually do, not what’s in our heads, which is that it’s going to be perfect and right because it’s a computer.”
Now it’s your turn. How do you answer Dr. Lucas’s call to action?
What does GIGO stand for and why is it important in a conversation about artificial intelligence?
Which quote resonates most with you and why?
Dr. Lucas said, “In a future where I’m determining if people get smarter or dumber because of AI, I want to know if they’ve fixed the education system.” What does she feel are the shortcomings of the system as it now exists? How is that related to generative AI?
AI’s power depends entirely on the quality of its input. The phrase “garbage in, garbage out” perfectly sums this up. What worries me most is how broken our education system is. Without real reform, AI will only amplify existing problems rather than solve them.
Professor Gale Lucas put it well when she said, “AI is going to make us dumber if we keep going the way we’re going, but it gives us the opportunity to get smarter if we fix the underlying problems of the system that it’s in.” This quote stood out to me because it highlights that technology alone won’t save us; meaningful change in how we learn is essential.
I want to start off by saying that I really appreciate your view on this topic, Tanisha. I especially like how you emphasized GIGO as the lens through which to view what role AI has in our education. I completely agree that without real reform in terms of how we learn and source our information, AI is going to risk amplifying our shortcomings instead of correcting them.
Your focus on reforming the education system resonates deeply with a concern of mine: AI lacks intrinsic judgment, even when it is powered by vast data. I have realized that AI oftentimes can’t distinguish between a flawed data stream and a credible, trusted source. That is exactly why I truly believe it is crucial for humanity (teachers, students, institutions, etc.) to act as curators. We should filter and elevate the highest-quality information so that AI does not hand us garbage mixed in with gold.
In addition, I would like to build on your point: we need to safeguard and prioritize the human qualities that AI will not be able to replicate, such as our passion, our intuition, and our soul. I went through the process of learning chess through dedication and intrinsic motivation, and I have clearly seen how far human determination can take people beyond what algorithms or AI are able to predict or imagine. AI may be well trained on the typical patterns of success, but it surely can’t sense urgency or authentic drive the way humans can.
So, as we strive for systematic change, I think there are two main goals that we should keep in mind:
The first goal is to create educational systems that incentivize curiosity, effort, and critical analysis in our children, not just rote output.
The second goal is to use AI as a tool to assist and amplify, but never to replace the human effort to question, experiment, and push our boundaries even further.
By reforming our education system and growing our inner drive together, I am sure we will shape a future where AI can enhance but never diminish our humanity.
As a person who grew up without the influence of AI, I am worried by how quickly the line between human intelligence and AI has blurred. Although I, too, am guilty of using AI to shortcut assignments or Google searches, it comes with the guilt of knowing that the thoughts on my paper aren’t completely and wholly mine. To add to this, the concern of privacy and digital security is not one that is spoken about enough when discussing AI. Recently, I used ChatGPT to search for scholarship information specifically catered toward me. Even though I gave no details about myself or who I am, ChatGPT was able to accurately infer information from my previous inputs. It led me to begin thinking about what we are risking while using AI. Having the world at our fingertips is a larger responsibility than many of us realize, and this article has shifted my perspective with its discussion of “brain capital.” Instead of sitting and worrying about what could be done, we should focus on our “brain capital” and what we can do rather than what we can use AI to do. Focusing on ourselves can be an empowering movement, pushing us into a greater future rather than one in which we let technology control our lives.
Another concern of mine is how easily we fall into the deception of AI. Being unable to tell the difference between human empathy and an AI chatbot is a serious concern. It speaks volumes about capabilities of AI we had never known before, and it worries me because we are losing our skills in empathy, kindness, and creativity. We are willing to outsource decisions and humanity to technology rather than doing the work ourselves, which is causing us to lose our abilities to reflect, reason, and relate to others: the skills that enabled us to build up nations in the first place. We are using AI to think for us rather than with us, which is detrimental to education and to society as a whole. Preparing for the future means looking at how we can use technology to elevate our thinking rather than to take it away.
While reading through numerous articles and comments, this one in particular captured my interest. Your transparency is crucial in supporting your viewpoint on this complicated topic. AI is the most powerful tool humanity has ever created. But it is still a tool, and that is the most crucial part of all: we have to use it properly to retain our skills and abilities.
Today, it seems that most people can be divided into two distinct groups: those who heavily oppose the use or even the idea of AI, and those who accept AI as a fundamental part of life. Personally, I believe that we can responsibly collaborate with AI to strengthen our skills rather than replace them. It would be foolish to ignore that AI offers many advantages. Its speed and efficiency in handling complex tasks outmatch human capability by a wide margin. The use of AI can be seen in many fields, most remarkably in healthcare.
AI is now being used to assist radiologists in interpreting scans. It acts as a second set of eyes that direct doctors to areas that they may have overlooked. Additionally, AI has been used to detect the location and source of strokes in patients, directing physicians and increasing accuracy and efficiency. AI is constantly assisting healthcare workers by optimizing patient flow, improving surgeries, and aiding in detection of issues in patients. The collaboration between healthcare workers and AI systems demonstrates the advantages that AI can bring. AI is not taking away human connections; it is saving human lives instead.
Many businesses fail to recognize the advantages of utilizing AI or implement it incorrectly. Technology has historically improved life, and AI shouldn’t be viewed much differently. AI is literally being used to save lives in hospitals. As long as we retain our humanity and responsibly collaborate with AI, it will become an essential partner in our daily lives. The impact and improvement that AI has made and will make in human lives cannot be overstated.
I agree with what you said about AI making it feel like our work isn’t fully ours. I’ve also used AI for some quick help and felt like I was cheating myself. And I do also think that the answer is not to avoid it completely—I think we need to find balance: figure out how to implement AI without losing the human within us.
The “brain capital” idea from the article really stood out because it made me understand how our creativity, problem-solving ability, and people skills are still our greatest asset. AI can do things fast, but it can’t think or feel like we do. If we let it handle the boring, repetitive, and mundane stuff, we can focus on what we humans are best at—being creative, empathetic, and innovative. This is where the benefits truly start dominating the disadvantages, much in the same way as the “break-even point” for a company.
If you put it in terms of SWOT, AI’s power is data and speed, but it lacks emotions and original thought. Humans are the opposite—we’re not as fast, but we can imagine, create, feel, and relate to others. If we use both together, we can combine those strengths, and AI won’t replace our brain capital; instead, it will actually help us build on it.
For example, we can use AI to summarize dense research so that we can focus on developing original ideas, or to analyze patterns in data while we focus on making decisions and creating solutions. In school, it can help brainstorm essay outlines or simulate different problem-solving approaches, while we write the essay or solve the problem and add our own creativity. In a workplace setting, AI can also draft reports and automate repetitive tasks so that people can spend more time on the important things: strategy, innovation, and collaboration.
I agree that we can’t let AI do all the thinking for us, but if we let it think with us, we can actually use it to make ourselves smarter and more capable. We’ll be able to use our “brain capital” to its fullest.
GIGO stands for “Garbage In, Garbage Out.” LLMs are trained on massive datasets. If the data is fake, the output will be fake; if the data is biased, the output will be biased. The internet is full of garbage, and that’s where AI pulls its knowledge from. AI isn’t flawless. As humans, we must use our own judgment instead of relying completely on AI, no matter how advanced it seems, because there are things we can do that AI never will. Missy Cummings’ quote about connections hits the nail on the head. AI can’t offer multiple perspectives the way people can. If everyone gets their perspective from a computer, we’ll all end up thinking alike. Relationships and connections with others are irreplaceable. Over-relying on AI could stifle our creativity and harm society’s mental health.
From a student’s perspective, the current education system isn’t fully adapted to AI. Many students use AI for homework, essays, or even to cheat on tests. They don’t learn anything, switch off their thinking, and waste time without gaining real knowledge or skills. The education system needs to change fast to prepare future generations for work and life. With generative AI, tasks can seem effortless, but that’s the problem. I believe schools should teach us to use AI as a tool. It’s awesome for speeding up work or studying, but only if used properly. People aren’t used to such rapid changes, and adapting to this is a major challenge for society.
We’ve all done it: we’ve all had an argument with someone, whether or not it was really all that important, to the point where it blows out of proportion. After a long stretch of constant bickering, you both finally arrive at the only conclusion left: “What were we talking about in the first place?” That’s precisely where we currently stand in the argument over AI and our humanity. How can we argue about whether we have our humanity when we’ve barely touched on what makes us human?
People struggle to realize that AI cannot steal the truth of humanity, but it can replicate being humane. Michael Jabbour describes how ChatGPT came across as more human when it generated a response with more empathy, when in reality that is the act of being humane, not human. The physicians gave an actual, authentic human response. The difference is that humans cannot always empathize and give you the answer you want, which is shown all throughout history. Humanity is simply the act of being human; we’re an imperfect race, one without all the right answers. That is why we now find ourselves constantly turning to AI: it always has the perfect answer. It eliminates the hardship of finding the answer, but our experiences got us to where we are today. AI is today’s threat because it was tomorrow’s perfect promise, the one thing humanity isn’t.
Our struggle is what sets us apart. In the end, who faces the consequences and the sabotage? Only humans can truly feel the destruction set upon themselves, no matter how much AI mimics humanity. To never despair and always have the answer is the life that AI promises. It sounds great in theory, but it drains the value and meaning from everything. To never achieve, create, struggle, or acknowledge yourself; to be neither good nor bad, and unable to choose based on your own mental turmoil: that is what it means to lose your humanity.
Even as I’m writing this, I get increasingly frustrated. I can’t immediately brainstorm the right idea, the perfect way to get my point across. But if we allow ourselves to struggle and make those connections, we will always outgrow AI in some way.
Our humanity is something AI can never take from us; we just fail to recognize it.
Hey Megan, your comment was one of those that makes you stop and rethink everything. I’ve been focused on the usual issues like jobs and cheating, but your point about the difference between AI being humane and a person being human is the most poetically interesting idea I’ve come across in this whole debate.
What’s interesting is that this is making me re-evaluate my own thinking. Just two weeks ago for this competition, I wrote a response about AI in medicine, and my whole argument was that doctors were safe because AI lacks “true” empathy. After reading your comment and thinking again about that quote where patients found a chatbot more empathetic than a real doctor, I realized my old argument was too simple. It’s not about AI lacking feelings; it’s about the unnerving perfection of its performance.
To begin, this made me realize that AI doesn’t just replicate our behavior; it puts a new kind of pressure on us. I started to think of it like a perfect mirror. Its flawlessness makes us more aware of our own imperfections. For example, if you’re a doctor being compared to a machine that never says the wrong thing, I have to imagine you’d get nervous. It seems logical that you might start acting less like yourself and more like the machine: more scripted, less spontaneous, and afraid to make a human error.
Furthermore, this led me to think about what makes human empathy so valuable in the first place. I believe it all comes down to risk. When a person shows empathy, they make themselves vulnerable. They risk feeling your pain, saying the wrong thing, and sharing in the emotional weight of a moment. There’s something at stake. An AI, on the other hand, risks nothing. For this reason, its empathy feels like counterfeit currency; it looks real and can be spent just like the real thing, but it’s fundamentally worthless because nothing of value backs it up.
This all brings me to Dr. Lucas’s call to action in the article, where she asks for a new paradigm for AI. Your comment helped me come up with one: I believe the best paradigm for AI is cognitive scaffolding. In construction, scaffolding is a temporary tool that helps you build something strong. Once the structure is built, you take the scaffolding down. I think AI should be used in the same way for our minds. It can be a powerful tool to help us learn and create, but if we never take it down, if we use it to avoid every mental struggle, then our own skills in critical thinking, problem-solving, and creativity will never get stronger.
Lastly, I completely agree that AI will never take our humanity. I think the real danger is quieter and more personal. My concern now is that we will become so reliant on this scaffolding that we’ll forget how to build anything for ourselves. I believe the risk isn’t that we will be replaced, but that we will trade the rewarding, difficult process of becoming wise for a tool that only helps us appear smart.
(Even at the end of this comment, I still can’t get over your wordplay with humane and human; it really “drove the point home” in such a poetic way. Really well done!)
Fire. Automobiles. The internet. And now AI.
The history of humanity has been defined by new ideas, all of which seem revolutionary in their respective time period. At the dawn of humanity, fire brought us warmth and fear. Half a century ago, the internet changed the way we as humans interacted and developed—in ways both positive and negative. For some, the web meant being able to collaborate with innovative minds across the globe. For others, such as myself, the internet was a way to see my grandma’s face despite living 8,290 miles away. Unfortunately, the world wide web also opened new methods of crime, such as hacking and cyberbullying.
In the 21st century, AI poses a similar situation for humanity: limitless benefits coupled with harrowing dangers. To avoid being a cynic, let me start off with the positive outlook. As aptly stated by Mary Meeker, the “Queen of the Internet,” AI is the fastest-growing tech in human history. I myself can tell you that within two years, I went from someone without a clue of what ChatGPT was to a 16-year-old developing LLMs on my computer. But this technology is more than an outlet for curious coders—it’s an innovation advancing humanity across every aspect of society. Whether it be generating novel protein structures in minutes (a task which used to take years at a time), recovering ancient languages once believed to be lost, or just helping you figure out what to make for dinner, AI is truly making an impact in every sector.
However, this magical cure-all comes with its side effects. My mother always told me never to trust anything with a mind of its own. Though her words made me live in fear of Siri for much of my childhood, I take them with a grain of salt in the present as someone both utilizing and developing AI extensively. Artificial intelligence is like a sponge. The more liquid you soak it in, the larger it grows. However, if the water you immerse it in is dirty, the sponge itself will dispense nothing but dirt as well. As the article aptly states, AI follows the principle of GIGO. This is why we see models displaying extensive bias against certain groups: they are simply reflecting the same bias seen in society. In 2018, Amazon attempted to implement an AI algorithm for hiring, but the project was immediately scrapped after the system visibly discriminated against female candidates—a reflection of the gender inequality in STEM fields.
Simply put, just like us humans, AI is not perfect. To blindly trust AI is to operate without reasoning in the real world. To stop questioning AI is to lose your humanity. Remember, AI stands for ARTIFICIAL intelligence—it does not reflect us humans as a whole. So, I urge everyone reading this comment to keep my mother’s advice in mind: use AI, but use your own reasoning more. For in the end, it is our brain that makes us the superior primate, not our computers.
Hi Shreya, thank you for your thought-provoking comment and for sharing your mother’s advice about AI. I’ve read your comment over and over, and I really love the way you use visual descriptions to paint a picture of artificial intelligence.
I share the viewpoint you mention at the beginning: with continuous human evolution, we constantly see technology advance and reshape our lives, yet always with potential costs to humanity. I still remember watching Apple’s 2007 launch event online, where it introduced a full-screen phone, and seeing society react to this new paradigm shift with excitement and disbelief (I watched the launch years later, as I wasn’t born yet when it first aired). Now, seeing the development of artificial intelligence feels no different from any of humanity’s previous revolutions.
With artificial intelligence following the principle of garbage in, garbage out (a process you thoughtfully described with the simile of a sponge), a question arose for me: why can’t we use this principle and constantly innovate as humanity to avoid cases like Amazon’s use of AI for hiring? Just as Scott Sandland mentioned, we could see AI as a mirror that reflects and amplifies the quality of our ideas. If we input thoughtful, well-researched, or creative concepts into AI systems, then instead of amplifying its ‘dirt,’ it will help humanity spark new directions of thought that would be difficult to reach alone. This healthy process encourages us to engage critically with information and makes humanity better, rather than passively consuming AI’s thoughts.
However, it’s crucial for humanity to maintain our own unique thoughts and ideas in the current advancement of Artificial Intelligence.
While this article touched on surface-level issues with A.I., what really startled me was the depth to which I began questioning the direction our world is headed.
Firstly, I would like to agree that, as the article mentions, there is undoubtedly a line between A.I. and us. No matter how developed A.I. becomes, the truth remains that it is but a creation of mankind; it is artificial. But many people ask: if this tool were made by us, for us, then what’s the big issue? Well, I’m not trying to be cliche, but I would argue that A.I. can simply never imitate the fire in our hearts. So A.I. can help us, but it will never be able to be us or replace us, which many fear. I believe what Prof. Stefano Puntoni mentioned, “it’s about discovering what makes us smart, what we can do better,” is absolutely correct: we should focus on what we can do as humans and leave to A.I. what it is as good as (or better than) us at. Ultimately, this robot was trained on us and can never do more than imitate us. Therefore, when it comes to actually living, does inputting all your questions into a machine and using its answers really make you feel alive? Does using these A.I. tools really make us feel human?
Moving on to my question for those of us reading this article as well as those quoted in the article: what does it mean to be human? Does it derive from the emotions we have? The genuine feeling of sorrow when we lose one who is precious to us? Or is it the delicate skin and beating heart that we have, the complex body that each Homo sapiens walks around in? Whatever it is, how do we protect this humanity when developing A.I.? How can we harness the power of A.I. without losing ourselves in the process? When I start to wonder about these questions, I get lost.
To provide a guiding light for those of us who are lost, we must address the root cause of why A.I. use is so common. Our never-ending greed drives us forward, creating and developing more and more. At the same time, many Gen Z youth are crumbling under the pressure to succeed or meet standards. We are human. We must recognize the flaws within us. If not, these flaws will drive us to grasp for more while robbing others of all they have.
Hi Brandon, I resonate with many of the points you make as well as many of the questions you pose. While I agree that AI is an artificial creation and cannot necessarily “feel” emotions or have “the fire in our hearts”, I would argue that rather than an imitation of humanity, AI is a mirror of human wisdom. AI reflects not only our knowledge, linguistics, and creativity, but also our morals, desires, and aspirations. Thus, AI is a product of humanity, and its responses reflect our human emotions.
An interesting question arises on whether using AI makes us feel human. I would argue that obtaining answers from an AI is no different than searching for answers online or reading a book, again because AI reflects the human wisdom we can find through the internet or through literature. But more importantly, why do we need AI to “feel human”? On a purely utility-focused level, using AI for answering questions, organization, management, productivity-related tasks, and so on would have little impact on our human emotions, as it simply takes over mundane and simple tasks. But on a deeper level, I would argue that for better or worse, AI really can make us “feel alive”. While AI in the present may be clunky and often generates cliché responses, that doesn’t negate its capability to represent and replicate human emotions.
In some cases, AI may actually be better than real humans in highly emotional settings. The article mentions studies that showed that patients found ChatGPT more empathetic than real doctors. Despite not being able to “feel” the emotional turmoil patients go through, AI, in a way, is far more selfless, with its only goal being to reassure and comfort the patient with an optimal formula, in contrast to physicians who have their own busy lives, their own struggles, and their own flaws. This is not to discount the incredibly difficult and crucial work physicians do, or the importance of human-to-human interactions, but rather to provide a fresh perspective on how AI has the capacity to convey emotion and project empathy.
Another way to look at this is to ask: what makes us happy? Does it have to be through another human capable of feeling emotions the same way we do? Not at all. People can derive happiness and emotion in a wide variety of ways: reading books or watching movies can give us a roller coaster of emotions; achieving success or working hard on a project can give us great satisfaction; even more simply, eating a good meal or petting a furry animal can make us happy. Finally, there are many people who adore idols they’ve never met in person, people who cheer on athletes thousands of miles away, and even people who find happiness in fictional characters. All of these emotions that we feel are real, even though the happiness doesn’t come from interacting with a real person. All of this suggests that maybe what makes us happy is simpler than we make it out to be, and that there is no reason why interactions with AI can’t give us happiness, validation, or the ability to feel emotions.
Finally, what does it mean to be human? This may be hard to accept, but we really are no more than a rather sizable bag of different molecules. Our emotions, our senses, our thoughts are only chemical signals, originally made only for our survival. Are we really different from the lines of code that make up AI? We push AI away, claiming that we are different because we can “feel” emotions, that we have capacity for creativity, and other numerous reasons. Yet we fail to see that AI is the mirror image of ourselves that we have created. How can we claim to be so different from something we don’t quite understand when we can hardly even understand ourselves?
My stance is simple. Whether AI can or will surpass us is something we may never know. Whether it will guide us to a better future or destroy our lives, we will never know. But one thing is clear: AI has already become the shadow under our feet. It is blurry and uncertain, yet outlines our shape, something we can no longer run away from. Rather than reject AI, we should embrace it as a part of us. A part of our humanity.
If used mindfully, AI can be a very useful and beneficial tool for enhancing the learning experience in academia. AI provides many options that can be curated to students’ needs and to the different ways students learn. For example, I learn best through repetition, so using ChatGPT to give me practice problems for a topic in math or to quiz me on Spanish vocabulary caters to my needs. ChatGPT is not perfect and makes errors from time to time, but keeping that in mind, and even actively looking for errors while being quizzed, can be an engaging study strategy. AI usage in learning could be a valuable change to the education system that could benefit students and teachers alike. As long as students are taught what is appropriate when using AI and how to use it mindfully, AI could be a very advantageous, personalized learning tool that amplifies the scholastic experience.
I appreciate you sharing your optimistic view of using AI in academia. I can’t disagree that it can be helpful, as I use AI regularly to maximize the efficiency of my learning. Still, your comment made me wonder: how could we preserve human cognitive resilience if AI were fully implemented into the education system? I’m eager to argue that AI could easily be misused by many, which would guarantee its transformation from a tool that is supposed to assist into a tool that keeps future generations dependent. What are the ways we could secure our critical skills while using AI efficiently, considering that we risk erasing struggle, which is proven to deepen understanding and spark curiosity? I’m looking forward to hearing your thoughts on that.
Isn’t it unsettling how quickly humans began to rely on AI, seeing it not even as a tool but foolishly as a solution itself? I strongly stand by the argument that humanity needs to redefine what to expect from AI and how to separate it from the expectations towards humanity itself.
Dr. Lucas’s challenge to invent a paradigm that frames AI the way the desktop defined computing seems quite quaint to me in comparison with something I heard during a lecture a few months ago: the question “How do we retain power over entities more powerful than us, forever?” pressed by Stuart Russell, which has stayed with me ever since.
To get on the right pathway toward solving Lucas’s challenge, I dare to refer to Russell’s concept of “assistance games,” in which machines are designed to remain uncertain, to some extent, about human objectives. This leads me to think that instead of looking for a paradigm that frames AI, we should find a way to build systems that never fully trust their own grasp of human welfare, in order to ensure a secure future for all.
Regarding Michael Jabbour’s point about patients finding AI more empathetic than human beings, we should ask why that is. From my perspective, this preference arises from humans seeking comfort over experience, effort, and authenticity. It ties to the fact that emotional depth and empathy are demanding, and in a world where students ask ChatGPT to list their perspectives, there is little room left for true human compassion.
Perhaps the real concern isn’t finding a paradigm that will tame AI, but confronting a system that pushes humans toward dependence on AI, even in cases of moral choice or empathy. If we don’t face the underlying issues of the system, no interface will protect our race from building tools that amplify our blind spots.
Thank you, Wharton Global Youth, for engaging me in reflecting on those urgent questions.
I felt that the article was very thought-provoking, but I disagree with the idea that students might prefer learning from AI, such as ChatGPT, rather than learning from college professors.
The article suggested that students who genuinely want to learn can perhaps learn better from ChatGPT than from college. I disagree with this statement. As I read through the article, I noticed a big contradiction. GIGO, or “Garbage In, Garbage Out,” was a key concept in the article. The article seemed to imply that students go to college because they have to, and that people don’t actually want to learn in college. Even though this might apply to some people, I think that most people genuinely want to go to college to learn. As mentioned in the article, ChatGPT has a lot of information that is of a “garbage” quality. Many students, including myself, don’t want to learn from an AI that simply gives back “garbage” information, or at best, information of average quality. Most students will want to learn from professors who give a better quality of education than AI such as ChatGPT.
I believe that many students will want to learn from professors in college rather than from AI or generative AI such as ChatGPT. I want to think about it this way. Let’s say there is a $100 bill lying on a street. It’s been left there for some time, and no one has picked it up. AI will think that, since no one has picked it up, it is very likely the $100 bill is fake, and leave it at that without examining it closely. However, a professor will pick it up and examine it closely. This is the difference. Although AI may have some good, high-quality information, it is a collection of all information, both high-quality and low-quality, so it may not give you the best information. At the end of the day, AI will always be a secondhand source that is a collective of good and bad information. On the other hand, a professor will research, filter, and examine information. Then, the professor will present the high-quality information to the students, which is often exclusive to the college.
What concerns me about information and AI in the future is that AI may stop producing good, high-quality information, in which case “Garbage In, Garbage Out” will become a reality in our world. I believe that in the future, sooner or later, we will have to pay for a subscription in order to get high-quality information on the internet. Many prestigious sources, such as the Washington Post and the Wall Street Journal, to name a few, require payment in order to read their journals or articles. Databases such as JSTOR have a free version, but their paid version gives extensive access to their content. I believe that on top of the many websites that already require you to pay, more and more websites will start to require a payment or subscription to view their content. I also believe that there will be less high-quality information that AI can use. I think that websites would not like their high-quality information to be taken for use by AI, sometimes even without any credit, which is why more websites would want to start requiring people to pay for their information and content. Then, AI will function in a “Garbage In, Garbage Out” style, which is very concerning since a lot of people rely on AI for research, decision-making, and so on.
When thinking about how we can hold on to our humanity, I think it is very important to consider what differentiates us humans from AI. I believe that the key differentiator between AI and humanity is our soul. I think our soul is about our identity, our passion and desire, as well as our religious beliefs. Our passion can only be satisfied by working hard for it and improving at the one thing we are passionate about. As a Japanese person, I grew up hearing the proverb “好きこそ物の上手なれ,” which says that if you love something, you will become good at it. One time, someone told me that I wouldn’t be a good chess player because I started playing chess later in my childhood, because I kept losing games after I started learning, and because I didn’t take chess classes. AI would probably tell me the same, because AI likes to determine things based on facts and quantitative information. But what that person and AI didn’t consider was that I had a strong passion for chess. In three years, I went from a complete beginner to the top 1% of chess players on chess.com.
Another good example is entrepreneurs. AI will propose, support, and encourage ideas that resemble past successful business ideas, because AI is based on probability and on information about past cases. AI will not support a radical, new business idea that is an outlier among past cases of successful businesses, and it will not see or understand the passion and drive that someone has to make that new idea succeed. Take Steve Jobs, for instance. At the time he invented the iPhone, there were no similar products. However, he was very passionate about building the iPhone; he managed every detail of it and saw it as a revolution, not simply a product. This goes to show that even if a business idea or product is new and radical, passion makes it more likely to succeed. AI is not able to take someone’s passion into account because AI doesn’t have a soul, unlike us humans. I would also point out that AI will not have the ability to make decisions based on intuition, such as believing that someone’s new, radical business idea will be successful because that person is passionate about it.
Additionally, AI will not be able to have any religious beliefs. I think about god, the afterlife, reincarnation, and karma, but AI doesn’t. If something you do isn’t ethical but will benefit you in the short term, AI may suggest that you do it. AI may suggest you take action without any wisdom based on religion. Religion won’t be a central part of decision-making and action for AI, unlike for us humans. Religion often teaches us that if we do good, even if it means sacrificing ourselves, we will be rewarded in some way, whether through god, the afterlife, reincarnation, etc.
With all that being said, I think that there are a few ways to make our AI better. First, I think AI should get its information from trusted sources. AI should have a filtration system in which, when it comes across information, it compares that information against a fact-checking website. In addition, I believe that we should input a lot of books into AI’s database. Since AI can’t truly get life experiences and gain wisdom, books will be a decent but not perfect substitute for “getting life experiences,” as well as for learning about human connections and emotions. However, books will never truly replace our process of getting life experiences.
AI will continue to grow, and if I’m being honest, that scares me. But I will keep searching for ways to differentiate myself from AI.
Dear Taisei,
Thank you for your insightful comment! I will begin by agreeing with you and stressing that we should always scrutinize the downside of any new technology, and AI is no exception. At the same time, I see room for nuance in your arguments regarding AI. In this comment, I want to briefly offer my own perspective on the issues that you touched upon.
First, I think there are good reasons to question the “garbage-in, garbage-out” (GIGO) idea that you repeatedly mention in your comment. Over the past few years, as AI technology has developed, model outputs have improved in quality, not the opposite. Why? It is because higher-quality data sets are being used to train AI models. Alongside improvements in computing technology and the development of new AI architectures, I think that future developments will only lead to the creation of more accurate, not less accurate, models. Furthermore, while GIGO could be a possible scenario, there is just no reason for AI companies to ever build models that take in low-quality data and output low-quality responses. After all, if one company produces a model trained on “garbage data,” consumers will simply switch to models by different companies that produce a better output. Thus, through technological advancement and competition, AI models should gradually improve over time. That being said, there is no doubt that real professors offer a level of engagement, mentorship, and nuance that AI simply cannot match. But if there was something that you missed during a lecture, or you need a quick refresher later, ChatGPT serves as a universally accessible tool that can give you an accurate answer in seconds.
Second, I think that the apocalyptic scenario where all “good” content becomes placed behind a paywall is rather unrealistic. Firstly, I do not think that websites will go paid simply because they are the only sources that can provide “good” information. Reputable sites have multiple revenue streams—ads, donations, sponsorships, etc.—so a paywall is not their only path to financial sustainability. The competition that exists between these sites would drive them to lower costs as much as possible for consumers. In addition, many high-quality information outlets (like NPR and the BBC) are government funded, which makes them unlikely to ever implement a paywall. It is also highly doubtful that websites will institute paywalls because they are afraid of their information being stolen by AI. Already, many news sites block large-scale scraping by bots using tools like robots.txt exclusions, rate-limiting, and CAPTCHAs—while keeping their services accessible for human users (see the sketch below). Overall, I think that there are just too many uncontrollable variables for us to make the call that AI-produced content will lead to an “information dark age” where everything is behind a paywall.
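As a small aside on those scraping defenses: a robots.txt exclusion is simply a published list of rules that well-behaved crawlers volunteer to obey. Here is a hypothetical sketch using Python’s standard urllib.robotparser module; GPTBot is OpenAI’s published crawler name, while the site and URL are made up for illustration.

```python
from urllib.robotparser import RobotFileParser

# Rules a news site might publish at https://example.com/robots.txt:
# exclude one known AI crawler sitewide, leave the site open to everyone else.
rules = [
    "User-agent: GPTBot",  # OpenAI's web-crawler user-agent
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
]

rp = RobotFileParser()
rp.parse(rules)

article = "https://example.com/news/story"
print(rp.can_fetch("GPTBot", article))       # False: the AI crawler is asked to stay out
print(rp.can_fetch("Mozilla/5.0", article))  # True: ordinary readers are unaffected
```

Note that robots.txt is advisory rather than enforced, which is why, as noted above, sites pair it with rate-limiting and CAPTCHAs.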
Third, while I agree that AI cannot experience things like humans do, AI does not need to. The purpose of AI is to extend human capability, not replace human emotion. It would seem rather ridiculous to say that if, for example, I had a brilliant business idea that I am enthusiastic about, an AI model could dissuade me simply because it calculated that I would have a low chance of success. If I gave up that easily, could I even claim that I was “passionate” about the idea? It seems a bit contradictory to say that knowledge of a low chance of success would prevent a truly passionate person from following their passion.
Fourth, regarding faith and AI ethics, I would offer another perspective. First, I just want to reiterate that I am not aiming to criticize your (or anyone’s) religious beliefs; everyone has a right to believe what they want to believe. But this also means that while some people may not use AI because it does not have religious beliefs, other people might be perfectly comfortable under the same circumstances. Also, I do not think that AI’s lack of religious grounding is a huge concern. As with any tool, its human creators are responsible for setting clear ethical boundaries—as they do in current models (for example, ChatGPT cannot generate hateful or discriminatory speech).
Regarding your suggestions, I would just like to modify them slightly. Definitely, AI should be trained on reliable data—and that is increasingly being done. On the matter of training AI with books, I would agree with you that doing so is beneficial for AI, though I would like to point out the potential ethical challenges associated with training data from books. Much book training data is derived from shadow libraries, raising legitimate copyright and consent issues. As a matter of principle and best practice, AI should be trained on data that is legally obtained and properly licensed. Additionally, I do not see why AI should be made more human. In my opinion, as long as AI models return accurate, high-quality responses to inquiries and follow basic ethical principles (which can be hard-coded), they have served their purpose.
Ultimately, AI is best viewed as a tool whose value depends on how wisely we deploy it.
To close out my comment, I would like to echo Professor Lucas’s comment. Fundamentally, AI should be used as a tool to aid learning, and nothing more. Beyond the question of whether AI can replace a college professor, which it obviously cannot, Professor Lucas’s comment really gets at the heart of the issue: no matter where someone decides to get an education, they must actually learn, not just mindlessly rely on AI as a mental crutch. AI is a learning tool, not to lean on, but to spark deeper thinking. If you rely on AI, then it does not matter if you study at home or at a world-renowned university—you are not going to learn anything either way.
Taisei, thanks again for your detailed comment; I had a lot of fun writing this response! I would love it if you, or anyone else, would take the time to read my (somewhat wordy) response and offer your own perspectives on the issue!
Hello there Taisei! I thoroughly enjoyed your take on the article AI and the Human Brain: Holding On to Our Humanity; it was captivating and hooked me from the start. Your opening lines especially stood out, where you mention that you disagree with the idea that students may prefer to learn from AI rather than from college professors. I agree with your disagreement.
From the perspective of a high school student, I have witnessed firsthand the accessibility, ease, and speed of gaining knowledge from AI. It’s very easy to see why we overworked and stressed students turn to AI so openly, as it takes away some of our struggles in academics, especially in rigorous courses. But AI, as its own name suggests, is artificial, and not “authentically” human. AI cannot provide us the same perspectives and experiences that a human professor can.
Recently, I discussed with a college counselor the question, “Why do people go to college?” Naturally, one may think: to pursue higher education, to pave a way for a future career, or simply to find more opportunities. But college is more than that. Humans are social creatures; we gravitate toward being around others. College is one more step into surrounding yourself with people who have commonalities with you as well as differences, people you can learn from through their various perceptions and experiences, and a place where you create your professional and social network. More than that, it’s where you learn to truly, independently think for yourself, a purpose that AI defeats. On this path, just as you mentioned: “most students will want to learn from professors who actually give a better quality of education compared to AI.” By relying on AI for education, one will miss out on many important experiences: eureka moments in class, connections with peers, discussions that could lead to the next big thing, or even friendships with people who could be your soulmates.
I want to second your point that “at the end of the day, AI will always be a secondhand source that is a collective of good and bad information.” Research indicates that AI is imperfect, just like humans are. The illusion of its perfection comes from how easily accessible it is, how easily it can provide us something that we would normally have to put in so much effort for. In the end, it is simply a product of work, based on what is fed to it through numbers and analytics. Your example about chess makes me think of my own experience as a high school athlete. AI cannot make a judgment of my athletic performance, because statistics are only one part of the picture and do not display the full portrait of an athlete. A coach, on the other hand, can. Only humans can see your hard work, your passion, and your potential.
The fear of AI is unnecessary. AI will never become human, and humans will never become AI. Humans are complex individuals, filled with a diversity that a piece of engineering cannot replicate. As a student, an athlete, and a friend, I hope that many will follow in Taisei’s footsteps in ensuring that we are seen as who we are: as humans.
Hi Taisei,
Thank you for sharing the proverb “好きこそ物の上手なれ.” It resonated with me, as did your reflections on soul, intuition, and passion as uniquely human traits. Indeed, learning thrives when it is driven by true passion and not obligation.
I agree with highlighting AI’s “Garbage In, Garbage Out” issue and identifying the contradiction between “garbage” information and students attending college because they have an obligation to. If AI depends on the data that we feed it, and that data is increasingly biased or shallow, then we’re not actually creating intelligence but rather recycling assumptions. Reading this reminded me of a related idea I learned from a friend, Hannah Arendt’s concept of the “banality of evil.” The “banality of evil” describes how evil actions can be committed by ordinary people not driven by malice, but rather a lack of critical thinking and an over-reliance on established systems. Arendt introduced this term in her discussion of the trial of Adolf Eichmann, a Nazi bureaucrat and major organizer of the Holocaust. Arendt argued that Eichmann’s evil was rooted in his inability to think critically about the consequences of his actions: an illustration of the danger of individuals abandoning moral judgement and simply conforming to authority. As you mentioned, AI is a collection of both high and low quality information. An uncritical acceptance of AI output can lead to normalization of misleading content. Although we may not expect this acceptance to lead to overtly “evil” or harmful actions, the harm that this consumption of low quality content can cause may be more damaging than expected. An over-reliance on AI can certainly result in a gradual erosion of critical thinking, thus shaping misinformed and unjust outcomes. Though this harm may not be dramatic or intentional, it’s insidious, echoing the banality of evil.
I additionally agree with your concern about the future of high-quality information! In the future, information untouched by AI may become increasingly exclusive, as you stated, possibly deepening inequality in terms of who has access to reliable learning. As Sandland puts it, AI is an amplifier. Therefore, allowing AI to dominate learning while the most reliable information becomes paywalled creates a two-tiered educational system: one for those who can afford access to top-tier data, and one for those left with diluted or unreliable knowledge through AI tools. This challenges the notion of equity in learning.
It’s true that better data can make AI more accurate, but we should also consider what happens when AI becomes the default method of learning. While your solutions to make AI better through improving the quality of information are good ways to enhance its reliability, I also think that it’s important to acknowledge the problem of an overdependence on AI. Integrating AI into education in a way that treats it as a scaffold for human learning rather than a shortcut can allow us to use it responsibly rather than becoming overly reliant on it. For instance, teachers can incorporate lessons on how AI works and what its limitations are, allowing students to use AI more thoughtfully. This echoes Cummings’ warning against students turning to AI instead of seeking real-world, human experiences. Rather than constantly aiming to “fix” AI or make it flawless, perhaps a more meaningful goal is to accept that it will always have certain limitations. We shouldn’t expect AI to replace uniquely human characteristics such as critical reflection and connection. Instead, it’s important to design systems in which AI supports but does not replace the roles of human thinking.
Taisei,
First of all, thank you for sharing such a thoughtful and passionate perspective. Your writing raised important questions about the future of learning, ethics, and what it truly means to be human in the age of AI. I genuinely admire your honesty and your ability to bring your personal story, like your journey with chess, into a larger reflection on technology. That said, I wonder if perhaps you’re viewing things just a bit too cynically. What if we looked at AI not as a threat to humanity, but as a tool that, when used responsibly, can actually strengthen what makes us human?
You mentioned that AI lacks ethics, passion, and the ability to innovate, and that it can’t understand a radical vision like Steve Jobs’s iPhone or see the potential in someone driven by soul. But I’d gently argue that AI doesn’t need to replace those things. It just needs to support them. For instance, tools like AlphaFold have allowed researchers to predict protein structures in hours, work that once took years. Imagine someone with Jobs’s passion paired with that kind of computational power. The combination of creativity and AI’s speed could lead to even more breakthroughs, not fewer.
You also raised an important point about the soul. And I agree, AI doesn’t have one. But that’s exactly the point. It’s not supposed to. We are. It’s our job to bring ethics, compassion, and belief systems into how we use tools like AI. Expecting a machine to have a moral compass is like expecting a hammer to understand architecture. If AI gives unethical advice or outputs biased information, the issue isn’t the tool. It lies in how we choose to use it. As Michael Jabbour put it, “Not losing our humanity in the process of building these technologies is probably the single most important thing.” The burden of ethics still belongs to us.
On the topic of innovation, you said that AI won’t support bold, original ideas because it relies on probability. That’s true to some extent, but the same could be said for most people. Humans often resist unfamiliar ideas, sometimes more stubbornly than any machine. AI may not take leaps of faith, but it can test ideas faster, simulate outcomes, and provide more data than ever. It doesn’t need to replace intuition; it helps amplify it. Passion and AI are not opposites. They can work together. In fact, because AI can retrieve vast amounts of knowledge across any field, it can also spark new interests by exposing people to ideas and disciplines they might not have encountered otherwise.
And yes, GIGO is a real concern, but only if we treat AI as an unquestioned authority. As the article notes, “Humans can analyze the information we’re receiving and potentially filter the garbage before making decisions.” With even basic digital literacy, students are no longer passive recipients. They become active drivers of how AI gets used.
In fact, when we consider AI’s overwhelming data-gathering ability, it’s clear that the barrier to high-level analysis, the kind you rightly admire in professors, has dropped significantly. If students can think critically and verify information, they can now engage in synthesis and insight once reserved for advanced scholars. That’s not a flaw in the system. It is a leap forward. Far from weakening academia, AI helps democratize it.
Which brings me to learning itself. You’re right, professors offer refined, curated knowledge. But AI isn’t trying to replace that. As Dr. Gale Lucas said, “If they want to learn, they can learn better with ChatGPT.” AI can’t replicate mentorship or emotional connection, but it can expand access and scale in a way no teacher ever could. Why not use both?
In the end, your caution isn’t misplaced. It’s thoughtful and necessary. But I’d encourage you to think of AI not as a rival to humanity, but as a mirror and magnifier. It reflects who we are, and when used wisely, it enhances what we already do best. You’re absolutely right that AI can’t replace soul, passion, or wisdom. But maybe it can help passionate people like you reach your vision faster.
Thanks again for your brilliant take, Taisei. Your skepticism is valid and needed. But maybe, just maybe, there’s also room for a bit of optimism.
Warmly,
Yehoon
AI is the theoretical fire that mirrors the real fire that took us out of the prehistoric era.
I grew up like most kids my age, with a TV blasting Dora and a tablet loaded with every imaginable game and YouTube video. I got my first phone when I was around 8 (a hand-me-down from my sister). It didn’t have a SIM card, so I spent most days after school surfing the internet. All things are good in safe measure, but there is danger in their unwise use.
I was a kid handed fire and told it was the world at my fingertips. From cartoons and lullabies to news reports and unfiltered violence, all of this, everything on the known internet down to the last binary digit, is being processed by AI.
AI is, in the same way I was, a child being taught how to behave, being taught what it means to be “more useful, more empathetic, and more human.” There is a difference between the fire and the one who starts it, but in existing they mirror each other. In the same way I am often fooled by a comment a friend makes and go along with the assumption that it’s true until proven wrong, AI will also be fooled. “Garbage in, garbage out”: my track coach always tells us that you get out what you put in. He uses it in a sports context, but I’d say it applies to everything. Bad food makes you sick, and lots of protein makes you strong. We must feed the fire good wood, lest it grow dim and do nothing but smoke up the area.
The quote by Stefano Putoni in the article connects with me deeply: “It’s about discovering what makes us smart, what we can do better.” The concept of intelligence is intangible, like some blurry, faraway object that can only be faintly outlined. To be smart is to know that you shouldn’t put your hand on the stove, and to know that your friend won’t appreciate it if you cancel plans at the last minute. To be smart is to know that 2 + 2 is 4, but to choose to agree with your four-year-old nephew when he says it’s 5 and refuses to listen further. To be smart is to angle a paintbrush so that it perfectly captures the light falling through the window. Artificial intelligence can never be smart in any way that is meaningful. By meaningful I mean full of meaning: to express happiness in a hello and sadness in a goodbye. To be smart is to know that maybe now is not the time to be completely honest, because the truth might hurt, and the concept of hurting is far too familiar.