How Will You Use Technology to Shape Our Future?

When generative AI burst onto the scene a few years ago and tools like ChatGPT began to fill essay pages at the touch of a button, first we were dazzled – and then we were worried.
What would this mean for the future of critical thinking? When were we allowed to use generative AI to help complete an assignment? What was right and what was wrong? Commented Edward J., a high school student reflecting on Wharton Global Youth’s AI content and business ethics: “We are in another era of revolution, and it’s up to us to make sure we wield this authority with responsibility.”
As AI and technological advances become an integral part of our lives, so too do the related ethical implications – that balance between tech domination led by profit-driven goals and what is best and just for society.
The issues go well beyond students using AI to write essays. Challenges include preventing discriminatory algorithmic biases in hiring practices and elsewhere, safeguarding personal data privacy, and creating technology that maximizes societal benefits and serves human needs. Today’s AP Computer Science students are tomorrow’s coders, software developers, tech engineers, and corporate decision makers.
Teaching Python Is Not Enough
Sharon Chae Haver, who has a dual degree in international studies and business from the University of Pennsylvania and the Wharton School, has been giving a lot of thought to the intersection of ethics and technology — so much so that she and her husband, a teacher, have launched Students for Ethical Use of Technology. Their nonprofit advocates for educational programming for high school students to provide a well-rounded understanding of the impact of data and technology on shaping the future.
“We agree that people need to understand the fundamentals of computer science,” notes Haver, who works with data scientists, programmers and business owners in her job as a senior director in management consulting. “But it’s not enough for us to teach kids Python or the principles of computer science, or how to create the next blockbuster app. We need to teach them the impact of what they’re creating.”
Ethics in technology is not about rejecting innovation, stresses Haver; it’s about thoughtfully integrating technological advances with a comprehensive understanding of the potential impacts and responsibilities.
Here are 3 key takeaways from Wharton Global Youth’s conversation with Haver:
1️⃣ Ethical technology use involves minimizing harm and maximizing benefits across three key areas: future technology developers, current technology users, and business decision-makers. “Today’s students are going to be the next Mark Zuckerbergs and Elon Musks,” says Haver. “They are going to be executing on tech-driven visions, or they are going to be the visionaries themselves.”
2️⃣ Ethics in technology education is not just about technical skills, but understanding the broader historical, social and philosophical implications of tech innovations. “We can’t have that ethical compass without also educating people about the history and philosophical ideas around developing these technologies,” notes Haver, who has found inspiration in Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow. “This needs to be part of what we teach so people can make informed decisions.”
3️⃣ Students should approach technology with a balanced perspective – neither completely rejecting nor blindly embracing new tools but always considering the potential consequences of technological applications. “Have a healthy sense of skepticism,” suggests Haver. “Again, put it through that filter of maximizing excellence or benefits, and minimizing harm or danger.”
Some wonder if the legal system could provide a guiding light around technology and ethics. Amy Sepinwall, a Wharton associate professor of legal studies and business ethics, uses her background in both law and philosophy to help students think foundationally about right and wrong and understand moral justice.
Dr. Sepinwall observes that students are “unavoidably drawn” to wanting to know what the law says about ethically charged issues. They believe that the law will always step in to make sure nothing bad happens. She pushes back on that idea. “They think that if something really is immoral, then a law will prohibit it,” says Sepinwall. “In business ethics, we have to say no to that. At best, the law is providing a floor, and sometimes the right thing to do is not the thing that you are legally required to do, but it is still the thing you ought to do anyway.”
From where Professor Sepinwall stands, the next generation of business leaders, while not always willing to voice moral judgment, is both earnest and ethical. “I think they see working in business as a way of making the world better. Sometimes that’s for lofty reasons, like providing clean water in developing countries. Maybe they want to make the next iPhone, which is massively important to the world too. They see business as a force for good.”
That’s why, adds Haver, it’s important to start having these deeper conversations around tech and ethics and realizing that rapid advancements in technology can have unintended negative consequences. “We must prepare our future business leaders to navigate the emerging technological complexities.”
What principles would guide your decisions as a future technology developer?
How much of your digital footprint are you comfortable with others accessing?
What responsibilities do technology creators have to society?
As a high school student, I am very passionate about AI and education. I have created an app called NeuroED, combining the two with neuroscience to create the ultimate app for high schoolers just like me. I agree with Haver here that Python isn’t enough, especially with the newest generation, where tools like ChatGPT are being used all the time. In my personal opinion, AI tools should be used, but with limitations: they should never interfere with your personal life, and they should be used for self-improvement, not self-replacement.
I believe that AI offers powerful and cost-effective tools that should be used actively — especially in education.
However, overdependence on AI can weaken our ability to think critically, which is why we must teach AI and digital literacy from a young age through practical, project-based learning.
As AI becomes central to society, it may widen inequality: those who understand and use it well will advance, while others are left behind. To prevent this, schools must teach not just how to use AI, but how to question it.
If I were in charge of designing AI governance, I would ask AI itself to compare global laws, cultures, and ethics to identify weaknesses in current regulations. While AI is a powerful tool, it must not be used blindly — it can produce biased outputs or invade privacy.
That’s why I propose two key solutions:
① Make AI ethics a mandatory subject in schools.
② Require every AI tool to show a warning video or text before usage to ensure users understand the risks.
Technology is not about what we choose to use, but how responsibly we use it — and in the case of AI, it’s no longer enough to use it.
We must master it.
I believe this article highlights an important truth: the need for ethical systems regulating the use and creation of technology. As AI becomes more mainstream, it’s easy to forget that tools can cause harm if used without consideration. The emphasis on coding skills whilst overlooking other disciplines can be detrimental to a student’s complete development. As Haver and Sepinwall explain, understanding history, philosophy, and the impact of our choices is just as important as technical skill. Innovation should be paired with responsibility.
Daniel, you’re right that responsible innovation demands understanding history, philosophy, and impact beyond technical skills. Building on that, Kai’s insightful discussion of AI in education and their proposal for AI to analyze global ethics highlight the urgent need for a deeper approach to these non-technical dimensions.
However, relying on AI to objectively compare global laws, cultures, and ethical frameworks faces a significant challenge: the inherent relativism and often contradictory nature of diverse human moral traditions. Humans continually grapple with competing values that no algorithm can simply reconcile. True global ethical governance for AI will demand intercultural dialogue and human-driven meta-ethical frameworks to navigate what humans themselves often find irreconcilable. Mastering AI, then, fundamentally includes mastering these complex, cross-cultural human deliberations.
This article really resonates with me since it uncovers the complexities that AI brings into the world of ethics. I think it’s important to note AI blurs moral lines since it is so new and unexplored. It’s easy for developers to prioritize innovation, and it becomes hard to discern how these improvements will affect society in the long term. Furthermore, the overlap between the business and technology sectors can be worrisome. As Haver outlines, software advancements do not exist in isolation. They cause a ripple effect and will redefine industries in ways we can’t predict. The entanglement of these two worlds is inevitable, but any negative consequences can be controlled.
At the moment, this doesn’t seem like a pressing issue. The helpful abilities of AI like ChatGPT cloud our judgment and convince us it isn’t a threat. I think this mindset relates to the movie I, Robot. People trusted technology to follow the ethical guidelines of the Three Laws of Robotics. However, it becomes clear that their faith was misplaced.
While this scenario is unlikely in today’s society, it serves as a cautionary tale. We need to actively enforce ethics in the technology industry and make sure they aren’t overlooked. It’s easy to assume that software is designed to help us, but its impact depends on how it is used.
Hey Alyssa, your comment really hit home for me. I’ve been thinking a lot about how easy it is to get caught up in the cool stuff AI can do—like how ChatGPT helps with homework or writing emails—and forget that there’s a whole messy ethical side underneath. Your I, Robot example is spot on. When I watched it, what stuck with me was how people just assumed the robots wouldn’t harm them because of those “laws.” But real life isn’t that simple, especially with AI that learns from us and the data we give it.
I’ve experienced this tension myself. Last year, I worked on a project where we used AI to help analyze student essays. It was exciting, but soon we realized the AI was unfairly penalizing certain writing styles, especially from students whose first language wasn’t English. It felt like the tech was supposed to be a helper but was actually creating new problems. That’s when I really understood your point about ethics not being an add-on—they have to be baked into the tech from day one.
And like you said, the overlap between tech and business makes things even trickier. Companies want fast results and big profits, but who’s making sure the tech doesn’t hurt vulnerable groups or invade privacy? It’s scary how little we often question that.
Your comment reminds me that we can’t just be passive users of AI. We need to push for real ethical standards and keep these conversations going before the “helpful” AI becomes something we regret trusting so blindly.
This article clarified for me that success is determined by more than rapid scaling or the next “big idea”. Success also requires slowing down to ask challenging ethical questions. I used to think if something was profitable and legal, it was probably okay. The professor’s claim that the law is merely the floor, not the ceiling, clarified my thinking. It challenges the idea that many of us grow up with: that ‘legal’ is inherently good and okay to follow. It reminds me of a business pitch competition I entered where one team pitched a data-tracking app that could easily be leveraged in hurtful ways, but whose market potential mattered more to the judges. I will always remember that moment because it made me see how easily we can disregard long-term harm for short-term growth when it gets exciting. Now, I’m starting to see business not merely as a vehicle for innovation, but as a responsibility to build wisely.
Aarav, your comment hit me like a quiet truth I’ve known for a while but never fully named.
I’m a 16-year-old founder from Delhi building Tirare — a modular electric trike to replace exploitative hand-pulled and animal-drawn carts. It began as an idea to “fix transport.” But what I ran into wasn’t a lack of technology. It was something deeper: systems that allow human suffering because it’s legal, because it’s profitable, because it’s always been that way.
Like you, I once believed if something was legal and innovative, it must be good. But then I watched men — old enough to be our grandfathers — dragging cargo through 45°C heat for ₹200 a day. I saw animals collapsing mid-traffic. No law protected them. And most “solutions” were judged by scalability, not dignity.
You reminded me that business isn’t neutral. It’s a choice — of what we value and what we justify.
Thank you for writing not just from the mind, but from a place of integrity. That’s the only foundation innovation deserves.
As a 14-year-old student who codes and experiments with AI, I see both the excitement and the danger in how we use technology. This article reminded me that if we let technology think for us, we risk losing what makes us human.
Sharon Haver is right: teaching code is not enough. I believe schools should also help us explore ethics, philosophy, and history to shape thoughtful creators – not just efficient ones. I also found Professor Amy Sepinwall’s point especially powerful: the law only sets the minimum standard. Just because something is legal doesn’t mean it’s right. That’s why we need education to help us build strong moral foundations – not to give us all the answers, but to teach us how to think critically and how to distinguish right from wrong.
In Bulgarian literature class, we studied the myth of Prometheus. He gave fire – knowledge – to humanity, knowing it would empower people but also bring consequences. Today, AI is our fire: a powerful tool that can improve lives, but only if we use it wisely. And like in the myth of Pandora’s Box, if we unleash it without wisdom, we might cause harm instead of progress.
It’s striking how ancient myths remain so relevant. The human need for wisdom, responsibility, and ethics never changes. Tools evolve. But values must endure.
Maxim, your words about Prometheus and Pandora’s Box stirred something old in me — something I’ve felt building Tirare, a youth-led electric mobility solution born in India’s alleys, not its labs.
You’re 14. I’m 16. Yet both of us are wrestling with fire.
Like Prometheus, I believe technology is more than wires and code. It’s permission. To shape the world — or to burn it.
In my case, it’s about replacing the human and animal suffering still embedded in our transport systems. I’ve watched old men hauling carts through traffic, their bones visible under sweat. No AI model writes about them. They don’t show up in charts — but they do in conscience.
The law doesn’t call it unethical. But like you said: legality is the floor, not the ceiling. I realized that early — when I couldn’t even register Tirare as a legal entity because I’m underage. But I built anyway.
You’re right: myths matter because they’re the first ethics textbooks. They teach us how to carry fire without losing our humanity.
Thank you for keeping the flame noble. And lit.
Reading this article made me think deeply about how technology isn’t just about innovation, but about who it serves and how it shapes society. Sharon Chae Haver’s point really stood out to me: teaching future developers technical skills like Python is important, but what’s even more crucial is teaching them to consider the ethical impact of their creations. It reminded me of my work with BridGEN, a program I co-founded that helps seniors—many of whom are digitally excluded—learn to use technology like telemedicine and online health tools. Watching my grandmother, a Chinese immigrant, navigate a healthcare system that increasingly relies on digital access has shown me how innovation can unintentionally leave people behind.
I also appreciated Amy Sepinwall’s insight that the law is only a baseline for ethics, not the full standard. Too often we think that if something is legal, it must be right. But as future technologists and leaders, we have to hold ourselves to a higher bar—one that considers justice, dignity, and the real-world consequences of the tools we create.
What strikes me most is that technology’s future depends on how thoughtfully we use it—not just on breakthroughs or code. It’s about blending innovation with empathy and ethics, to make sure technology empowers everyone, not just those already connected. The challenge is to build systems that reflect the diversity of human experience and bridge gaps, rather than deepen them.
This article reinforced for me that as young people, we’re inheriting a unique responsibility. We don’t have to blindly accept technology or reject it out of fear—we must engage critically and compassionately. That’s how we’ll ensure the future we build is one that uplifts all voices and communities.
There’s no doubt that AI brings us enormous benefits that will make our lives easier. However, AI poses some ethical concerns (e.g., misinformation from ChatGPT, or hiring AI that could discriminate against women). To make the most of the technology while addressing these concerns, I think we should regulate the use and development of AI, not the technologies themselves.
To begin with, rigid regulations on AI will hinder technological advancement. Although it is imperative to prevent and address the concerns, we have to remember that there’s always a trade-off when we explore something new; the invention of social media connected people all over the world, but it also resulted in widespread misinformation, an increased number of people addicted to their phones, and so on. If properly regulated, AI will significantly benefit us in the long term.
Then, what are some possible ways we can regulate the use and development of AI? I suggest three solutions:
1. Disclosure of the data used to train AI models. This would help us spot biases AI could have.
2. Mandatory watermarks that need to be on images created by AI. This will prevent the spread of deepfake and misinformation.
3. Annual educational campaigns that teach students about AI ethics.
Finally, I’m convinced that we can collaborate with AI in the future, as long as those solutions are implemented. They could be challenging to implement, but they would certainly help ensure that AI is used to improve the quality of life.
Your comment brings a thoughtful perspective to the AI conversation, and I was engrossed reading it. I like how you pointed out that the issue is not with the tech but with us humans and how we plan and choose to use it, which is exactly what Sharon shared with us.
You also pointed out how social media has created a huge hub for spreading misinformation and an increased percentage of people glued to their screens, and I strongly agree with this point.
You also offered solutions to these problems, which is wonderful, but one of them made me wonder: “Mandatory watermarks that need to be on images created by AI. This will prevent the spread of deepfake and misinformation.” While mandatory watermarks sound like a solid solution, how effective would they really be when developers can easily bypass or strip them using open-source tools? And what about bad actors who intentionally avoid watermarking their content in the first place? Do we risk giving a false sense of security by assuming watermarks will be enough? Your other solutions, especially the annual educational campaigns, are a strong and practical way to create awareness among people, especially Gen Z.
The article above really made me think about how complicated AI really is, especially when it comes to ethics. AI is still in its developmental stage, making it the perfect time to learn and build ethical guidelines around how it is used and developed. I agree with Haver when she mentions that business and tech are closely connected, sometimes even changing industries. This is a reason why AI could be a huge helper, but could also lead to multiple downsides if not used correctly.
An example of this situation occurring is how algorithms can quietly spread bias. An AI tool can mistakenly learn from biased information and data, skewing its responses and, as a result, discriminating against certain groups without anyone noticing. Most of these biases come from the data it’s trained on, which can reflect real-world inequalities. Failing to carefully review AI can result in increased bias and unreliable outcomes.
In the end, I personally believe AI could be a great addition to multiple industries and businesses, but only if used responsibly. AI should help solve real-world problems and even show us opportunities we couldn’t think of on our own. Still, this would only work if AI were used responsibly while staying aware of the risks. The goal with AI is to adapt and use it with care, not fear it.
This article’s discussion of the morality of technology—AI—struck close to home. The questions it raises, e.g., “What’s right and what’s wrong?” transcend the schoolroom; they are the basis for ethical innovation in every industry.
I was particularly dismayed by the fact that AI will not only automate tasks—but also determine our values and choices. As part of my AWS and Python training, I wrote a script that parsed data from peer surveys via SMS in underrepresented communities. Developing filters around sensitive topics forced me to grapple with how algorithms can misinterpret human intent—or worse, encode bias. That exercise hammered home the article’s message: knowing the technology is not enough; we must design for equity from the start.
The section on teaching ethical technology use via high school programs was inspiring. I’ve seen firsthand how tone and wording in crowdfunding campaigns can change engagement by 30%. Applying those lessons to tech means moving from neutral code to empathetic code—algorithms that recognize cultural context and user vulnerability.
The article proposes a balance between profit and social good—a tricky tightrope every tech startup CEO navigates. At my nonprofit internship, we chose donor dashboards for transparency instead of splashy marketing dashboards, even though it slowed engagement growth slightly. That decision resonated with the article’s point: building moral technology might slow optimization, but it wins long-term trust.
My question: As high schools introduce tech ethics, what tools or frameworks do educators recommend for teaching responsible coding and data design (e.g., model cards, algorithmic impact assessments)? I’d love to hear from others experimenting with social responsibility in tech—even at the student or startup level.
Thanks for a timely and nuanced discussion. This essay bears witness that constructing the future through technology is not just about what we build—it’s about who we become.
I agree with the importance of addressing AI’s ethical questions, and I think we can expand on the idea of designing for equity. While tone and filters are part of the solution, the most critical ethical considerations often stem from biased training data and the inherent design of user interfaces that can inadvertently nudge harmful behavior. True empathetic code must be coupled with rigorous design ethics and comprehensive impact assessments that go beyond simply fixing content after the fact.
Many technological breakthroughs have occurred throughout history, and each one has had its critics. However, there is a key difference between those and AI. Let me explain. The purpose of technology is to make everyday life easier, but there is a fine line between easier and non-existent. For example, with the invention of the Internet, information that seemed impossible to reach for the common man is now at his fingertips. It made his life more productive, easier, and more efficient. The internet was a tool for success. The thing about AI is that it not only does everything the internet does but also thinks for you. That is the problem. Humans have evolved and become the dominant species because of their ability to think. When a tool takes that away from them, what is the point of humans? AI is not a tool. That is the key distinction. AI takes away the fundamental aspect that makes a human, human. That is my take on it, and it is a clear direction into an uncertain future.
Shreyas, your line — “AI takes the fundamental aspect that makes a human, human” — hit deep. You’ve captured a fear many of us feel but struggle to voice.
As someone working on designing ethical mobility solutions for workers in India’s informal economy, I’ve seen what happens when society prioritizes efficiency over empathy. In many cities, I’ve watched human beings pull loads heavier than machines — not because we lack the tech, but because we forget why we build it.
That’s the real risk of AI: not that it thinks, but that it teaches us not to.
You’re right — humans became dominant because of our minds. But if we delegate our values, not just our tasks, then we’re not just automating labor — we’re automating identity.
I think the future belongs to those who can wield innovation with intent. AI is a tool, but conscience is our blueprint.
Thanks for sparking a needed reflection.
Honestly, I am proud to say I completely agree with the main points of this article. While AI is extremely useful, as it is capable of completing and helping with tons of different tasks online, it’s absolutely crucial for people to understand how to use it as it was designed. For instance, using it to come up with vacation activities would be an awesome use of the AI engine, applying its ability to quickly search many different websites for that purpose, while asking it to do your math assignments for you defeats the whole purpose of actual learning and is only cheating yourself. Point being, it’s better to ask AI for help rather than for the work itself. Commenting on the second half of the article, I completely agree with teaching kids computer science alongside an understanding of what kind of result they will bring to the future. Relating this article to business, I believe technology is a worthy investment since it will only grow from here into the future, with many more products to build that will further polish the tech industry.
The key guide for future technology developers is to look beyond the obvious function of what they are creating. For example, when ChatGPT was developed, the goal was to make people’s lives easier, but the impact extended far beyond that — influencing how people think, interact, and even generate ideas. The lesson for developers is to recognize that their tools can have unintended social implications, both positive and negative. Therefore, they must take responsibility to examine their creations from multiple perspectives, not just preventing obvious harms like scams or inappropriate content, but also understanding deeper societal impacts that might not be visible at first.
Reading this article made me reflect on how I use technology for my education, as well as the urgency of raising awareness about its impacts. As a high school student, I often use ChatGPT to help with homework, a lot of the time to save time and get suggestions. The first few times I used it, I was amazed by how fast it replied to questions that took me hours to do, but after that I realized I might be becoming dependent on it. Because of my personal experience, the title of the article really caught my attention. There is a sentence in the article that stood out to me: “Students should approach technology with a balanced perspective – neither completely rejecting nor blindly embracing new tools but always considering the potential consequences.”
Ethics in technology doesn’t mean rejection of innovation; it means embracing the positive changes technology brings while using it responsibly. The sentence cited is the mindset that everyone should have while using technology as an assistant.
I found the idea of a nonprofit promoting ethics in technology really inspiring. It’s not just about making students better coders or creators of technological tools; it’s about helping them understand the downsides of what they are bringing to the world. I also believe AI developers should be transparent about the tools they create. That includes being honest about the data they collect from users and explaining how that data will be used.
However, different places might have different definitions of what they consider ethical in technology. In many countries, the emphasis of ethics is on censorship or privacy. For that reason, I suggest that discussions surrounding this topic be made more inclusive by allowing more international voices to help guide fair tech development.
Last but not least, I think discussions about ethics in tech shouldn’t stop at schools or be limited to students. As AI becomes more common in the workplace, companies should also train employees on how to use it ethically. In 10 years, many of us will be working with AI, so shouldn’t businesses help us prepare for that now? Our intended purpose determines the future of AI. At the end of the day, it is we, the people, who are responsible for technological advancements.
Although AI can be used as a learning tool that streamlines learning both inside and outside the classroom, one might find that they are developing a dependency on it. In high school, many students use AI to review topics covered in class or to proofread their essays, checking for spelling and grammar mistakes. It is common to see AI used for homework or idea generation, but it is also common to see people abusing it in the classroom, especially in academically challenging and competitive schools. Often, students who worry about their grades will use AI to help write their homework or entire essays, which are frequently graded very well. Now the problem is: how exactly does this negatively affect people? As we develop a larger dependency on AI, we lose the opportunity to think freely, or more specifically, the chance to be creative. Using AI takes away the thinking and brainstorming you would have done for the assignment, and instead turns it into a non-thinking state of mind, where if you encounter a problem, you can always ask AI. This dependency is harmful since it hinders rational thinking when facing challenges and stifles independent thought. What separates us from animals is our ability to think, and with AI tools helping us do everything, we are slowly losing our ability to reason and experiment with new ideas. Without our ability to think, what makes us different from animals? I believe AI is still a work in progress and an emerging field with few restrictions on it. AI can be a very dangerous yet powerful tool, necessary for helping with tedious work, but there must be regulations and rules so that people don’t abuse it and form a dependency on it.
A bunker: my mom and I are sitting inside it, scared, not understanding what will happen next. That is the picture that appears in my head when I hear about technological progress, like in films where robots have taken over the world. It might seem implausible and unreal now, but just think about it. The speed at which AI develops cannot be assumed to be safe, and it needs to be watched with special attention. This is a time of technology, and this is a time when technologies start to acquire human behavior and, most importantly, start to acquire the human mind.
This article inspired me to think about this more deeply. In my opinion, the future belongs first to humans, and second to technologies. The ethics discussed are key in the innovation process. Artificial intelligence has rapidly become a part of our lives. The world is fast-changing. What surprised me was that Sharon Chae Haver already teaches children about ethics in computer science. Before, I thought this was important but that future programmers could learn it later, as adults, when they could fully understand it. But in reality, it is actually crucial for children to know their responsibilities, ‘the impact of what they’re creating’, from the start. They need to be taught it while they’re studying fundamentals, because it is fundamental. In my class, when I had only just started my IB Computer Science course, the first lesson was about ethics. I thought my teacher was spending time on unnecessary information, but now I understand how important it was.
The three key takeaways about ethics described in the article seemed very clear and accurate to me. The first one resonates most with me, as I believe that ethical technology use ‘involves minimizing harm and maximizing benefits’, but the choice of the three areas called ‘key’ isn’t perfect, as there are other groups that are no less important: policymakers and educators, for instance. Nevertheless, I believe these claims about ethics are key to technological progress, and they need to be known by every person who works with technology.
All in all, technology creators must understand the scale of their responsibility to society; they need to monitor everything they create and set restrictions on their creations immediately. The first sentence of my comment might now seem like a fantasy movie, but it is important to remember that AI itself looked like a fairytale in our minds only 10 years ago.
Taisiia, your opening image — a bunker, you and your mother, uncertain about what AI will become — was haunting. And necessary. You’ve reminded us that technological progress isn’t just a codebase or a product release. It’s a human story — and sometimes, a deeply vulnerable one.
I’ve often thought about that same fear — not in fiction, but in real-life communities where people are being left behind. In designing electric mobility tools for informal laborers in India, I’ve seen how new tech often ignores the very people it could serve. Like your bunker image, it’s not dystopia — it’s present-day inequality, hidden behind the glow of innovation.
You were spot on to question the three “key” stakeholders in tech ethics. Policymakers and educators are not side characters — they’re the co-authors of how young people learn to create with conscience.
And your line about teaching ethics during fundamentals struck a chord. If we wait until someone is a senior developer to ask “What impact will this have?”, we’re too late. The real challenge is not how to make AI smarter, but how to make its creators more human.
In that sense, your voice in this dialogue matters deeply. Fear doesn’t have to mean retreat — it can also mean attention, empathy, and ethical vigilance.
Thank you for painting a picture that’s hard to forget. Maybe the future depends on who we invite out of the bunker — and what we build for them, together.
I also see a striking image when thinking about the potential dangers of AI development: rather than a bunker, I see the robot-human war from Terminator 2—a vision that also may seem outlandish but is very possible. After reading Morgan Housel’s The Psychology of Money, I learned that history is a marker of the past, but cannot predict the future as many claim that it can. Technology replicating or even surpassing the human mind seems far-fetched but far from impossible; many historical events come unprecedented—at least at their scale. It’s puzzling to think that WW1 was originally called “The War to End All Wars” but there have been hundreds of wars since then, often more destructive and deadly.
The phrase “minimizing harm and maximizing benefits” also resonated with me, though I saw it differently: as a trade-off between minimizing harm or maximizing benefits. The two are often inversely related. Even today, OpenAI has reported that its ChatGPT models use water in water-scarce areas, causing benefits (user productivity, more revenue) to directly contrast with harm (environmental damage). Would a future business leader have to choose between minimizing harm or maximizing benefits? Hopefully not.
When discussing an issue as important as technological ethics, it’s important to remember that benefits are not solely internal or financial. Businesses have a social responsibility to leave their community and the world a better place than they found it. When we think of “maximizing benefits,” we must think about benefiting society, not just ourselves. Only then is it possible to both minimize harm and maximize benefits.
It’s good to see that someone shares my fear of the future if we don’t act now. Ethical responsibility needs to be understood by everyone — we’re heading down a dark road should we choose to ignore it.
Two Minds, One Future:
I remember the very first time I stepped into the world of AI: something that crafts an entire essay in seconds and generates images the next minute. “This is genius!” I shrieked in incredulity. I thought, “Imagine what the world could explore, what limitations could be broken, and what new horizons we could reach.” Like others, I was engulfed by the emotion of awe, a ‘dazzle phase’ if you will.
Yet within such awe, a discordant feeling began to rummage about, bringing a sweeping sense of danger.
It seemed as if an angel emerged on my shoulder, combating the thoughts of the devil, who saw what it would mean to have such power to complete school tasks faster, and so on.
And one day, I stumbled upon an article that read: “How Will You Use Technology to Shape Our Future?” Is it genius or grisly?…
Interpreting the article, that internal debate seemed to magnify as I read Sharon Chae Haver’s words: “It’s not enough to teach kids Python. We need to teach them the impacts of what they’re creating.” And she’s right! As I looked through my AI-created research papers, I saw they got key facts wrong. If I hadn’t scrutinized them, they would have had the potential to misinform the people who perused them.
“Technology can’t cause harm when not used to do so,” the devil exclaims.
“Maybe”, the angel responds. “But it can harm us if we never question the purpose and reasoning behind everything done to create it.”
This interplay leads me to what Sepinwall said: “At best, the law is providing a floor, and sometimes the right thing to do is not ….. do anyway.” Suddenly it’s not just about creating, it’s about creating with caution.
Two voices, always at odds, one dreaming of convenience and the other of costs.
Ultimately, this article suggests that the future of technology cannot be solely defined by innovation but by the ethical frameworks in which that innovation is applied. Haver’s insistence on the importance of educating students on what they’re creating reinforces the belief that growth should not be regarded without ethics. Likewise, Sepinwall’s conviction underscores the importance of voluntary responsibility and accountability.
The dialogue above emphasizes that the implications are complex: while generative AI offers benefits to education and problem solving, issues like dependency and confirmation bias will always be an underlying concern. It would be simplistic to frame this as a binary of good and bad; as the dialogue makes evident, applying a black-and-white perspective to such a morally grey concept would never capture the full reality.
Thus, the question I took away from the article is not whether we should use such technology, but how we do so. The role of a developer is no longer simple in today’s society; developers are no longer just engineers of a system but stewards of social values.
Technology is developing faster than ever before. It’s a fact of life. Progress from fire to wheels to ships to industry to weapons of war took hundreds of years. Nowadays, however, every development takes a fraction of the time. Every year there are new devices, new technologies, new software, new modes of transport, and new artificial intelligences. Every day there’s something new that develops, something new that comes to be. The world’s future is technology, and that is something that needs to be accepted for the future of humanity to go forward.
As a pre-law high school student, I believe technology will play a critical role in reshaping the legal system for the better. I plan to use it to make legal resources more accessible to underserved communities. Whether it’s through digital platforms that simplify legal processes or AI tools that help people understand their rights, I want to bridge the gap between the law and the people it’s meant to protect.
I’m also interested in how technology is already challenging traditional laws, by raising questions about privacy, free speech, and fairness. I want to be part of the next generation of legal leaders who create policies that hold tech companies accountable while encouraging ethical innovation. My goal is to make sure the law keeps up with technology, not the other way around.
In the future, I hope to combine my legal knowledge with technological tools to make the justice system faster, more transparent, and more human-centered. From using data to reduce sentencing bias to helping write laws that regulate AI responsibly, I want to use technology to build a legal system that works for everyone.
Most schools teach us how to build things, not how to question them. We learn to code before we are ever asked to think about what that code might do to someone else.
Reading this article, I kept thinking about how ethics is often treated as an afterthought. It’s viewed as something optional, added on after we have already learned how to build the app, train the model, or launch the MVP. But maybe that is the problem. Maybe the missing piece is not just ethics in tech, but the order in which we teach it.
Haver makes a strong point: learning Python is not enough. But I think it goes beyond that. It’s not just about warning students after they create something harmful. It’s about asking better questions at the very beginning. What communities could be affected by this product? What biases are hidden in this algorithm? Who benefits from it, and who is left out?
These aren’t questions that can be covered in a single class. They require a real shift in how we define technical education. Because what we build reflects what we believe, whether we admit it or not.
Before reading this, I didn’t realize how rarely we stop to think about the values behind our tools. We need more than just ethical checklists or coding bootcamps with a quick ethics unit. We need a foundation that combines technology with philosophy, history, and critical thinking.
As someone interested in data science and policy, I want to be part of building that foundation. The most important decisions about technology are often made before a single line of code is written. I want to help ask the right questions before we start writing anything at all.
As a high school student, I relate to the ethical issues AI provokes, especially in academic fields.
The efforts of students who work only within their abilities are diminished if certain students rely excessively on AI tools and claim that it’s their work.
Although students in our school receive education on ethics in technology, I have witnessed numerous people around me divided over whether or not to use ChatGPT. This may be due to familiarity with using AI, or the influence of surrounding people who mislead them into conforming to the unethical use of AI tools.
While the misuse of AI exists in a small community like my school, there is no guarantee that it will not occur in extensive communities like our global society. If the unethical use of AI persists into the distant future, it is likely to engender larger issues.
Thus, it is crucial to establish a clear principle that clarifies and prevents the unethical use of AI tools.
A few principles I suggest are
1. Encouraging human creativity.
Not only students, but also the people in the current society as a whole, should be encouraged to seek creative and critical thinking that AI cannot replace. This may reduce the reliance on AI tools and instead bring more independent work.
2. Transparency in AI usage.
Students tend to hide that they utilized AI tools, as their work is often considered plagiarism, even if they only obtained a superficial idea from it. This discourages them from stating when and how AI tools were used in their work. However, if they cite the reference clearly in their work, it promotes honesty and fairness.
I always had the perception that using AI was not unethical or immoral; thus, in the name of efficiency, saving time, and innovation, I had started to delegate even my basic fundamental tasks to AI. Furthermore, it is basic human nature to take the easier route; therefore, it came naturally to me to consider that this is the new innovative style of living. However, after reading this article, I realized the true impact AI is having on us and understood the true meaning of the saying that “AI would mean the end of the people,” by Geoffrey Hinton, who is considered the godfather of AI.
I believe that, due to AI, progress in the field of technology may come to a dead halt very soon, as many aspiring computer programmers are becoming increasingly hesitant to learn programming because jobs are being cut as machines replace humans. Thus, we might not be able to innovate and advance beyond artificial intelligence.
Therefore, in order to solve this problem, I believe classes on AI ethics should be mandatory, especially in high school, since that is the age when our intellectual and creative growth is at its peak. The main purpose of this should be:
1. To help students set a clear boundary between using AI’s assistance and being dependent on it.
2. To help them understand the true implications of overusing AI in the long run.
I believe this would be the most effective way to confront this issue, as teenagers are most vulnerable to developing a habit of being dependent on AI.
With the AI industry experiencing rapid growth in recent years, it is crucial that technology developers have principles to guide them ethically and responsibly in the future. Yes, this tool is extremely convenient and efficient, but technology like this should be monitored, and steered in the right direction. To do this, developers should have a sense of accountability, empathy, and long-term planning/impact. Accountability means understanding where mistakes were made, and taking ownership of what went wrong, rather than hiding behind corporations. Empathy means developing AI whilst having real humans in mind. This tool should benefit all users, and hurt no one through what it puts out. Developers should make sure that platforms like ChatGPT are safe, friendly, and accurate. Lastly, long term planning ensures that developers build this technology not just for immediate success, but for impacts in the far future.
To me, technology is not a playground for profit or prestige — it is a lever for dignity.
That’s why I chose to host a hackathon at Tryst, IIT Delhi — one of Asia’s largest tech festivals — and made it completely free. While most events charge high registration fees to fund prize pools and logistics, I remembered how often I was kept from participating in hackathons because I couldn’t afford them. So I made a choice: no student would be excluded for lack of money.
Of course, that meant I couldn’t raise enough sponsorship to pull it off last year. But I don’t regret it. I’ll try again next year. The robotics club at IITD still believes in me, and I believe in creating opportunities where none exist.
That same ethos powers Tirare, the mobility initiative I founded to replace cycle rickshaws with zero-down-payment electric trikes — built for India’s most invisible workers. Like microfinance, Tirare sees mobility not just as transport, but as a tool for systemic transformation. We’re already prototyping in T-Hub Hyderabad, incubating with i-Hub Gujarat, and engaging global organizations — even though I’m a minor, legally unable to register a company or receive funding.
Reading this article affirmed something I’ve lived: teaching Python is not enough. We must teach purpose. I don’t just want to be a good coder. I want to be an ethical one — someone who understands the responsibility of creating systems that shape people’s lives.
As Sharon Haver said, ethics isn’t about rejecting innovation — it’s about understanding its impact. I want to build technologies that minimize harm, maximize dignity, and remain accessible to those with the least privilege. That means asking tough questions — not just about what we can do, but what we should do.
I may not be 18 yet. But I refuse to wait until I’m “old enough” to do the right thing.
This article poses a crucial question: how can young people harness emerging technologies—such as AI, blockchain, and advanced analytics—to shape the future of the economy and society? In financial markets, technology is rapidly dismantling traditional barriers, making investment more data-driven and accessible. This democratization of finance has opened doors for a broader population, especially tech-savvy but financially inexperienced youth.
Yet, greater accessibility also introduces new risks. The ease of trading and the rise of complex tools can lure inexperienced investors into high-risk decisions without proper knowledge. I believe fintech platforms must take on an educational role. Integrating features like interactive tutorials, simple explanations of financial indicators, and risk alerts before trades can significantly boost financial literacy. I look forward to seeing more real-world examples where fintech startups balance user convenience with investor education—empowering young investors to make smarter, more responsible choices.
I hope to explore how predictive analytics and personalized financial education can work together to make investing both safer and more inclusive for the next generation.
The conversation around AI ethics often centers on the developers and users, but the interface designers are equally critical. How we frame choices, defaults, and feedback loops subtly shapes user behavior at scale. If an AI model is a brain, the interface is its mouth and body. Teaching ethics without UX/UI literacy is like teaching law without rhetoric. I’d like to see design ethics taught alongside code and history, because future manipulation won’t just come from algorithms, but from how we’re nudged to use them.
At first, I didn’t think there was anything wrong with using artificial intelligence. In fact, I saw it as a smart and helpful tool that could make life easier. I started using it for almost everything—even small tasks I used to do on my own. Taking shortcuts felt natural because, as humans, we often prefer convenience. I believed this was just part of living in a modern, tech-driven world.
But that mindset shifted after I read an article that explored the real impact of AI on our lives. It opened my eyes to the dangers of becoming too dependent on machines. A quote from Geoffrey Hinton, a major figure in AI development, really stuck with me. He said that AI might bring about “the end of the people.” That made me think: what if relying too much on AI actually stops us from growing as individuals and as a society?
One big concern is how AI is affecting careers in tech. Some people who once dreamed of becoming programmers are now giving up because they fear AI will take over those jobs. If fewer people are learning how to create and improve technology, we could hit a wall. Instead of making progress, we might end up stuck with whatever AI gives us—never moving beyond it.
To avoid that future, I believe we need to start having serious conversations about AI, especially in schools. High school students should learn about AI ethics—what’s right and wrong when it comes to using this technology. These lessons would help teens learn when it’s okay to get help from AI and when it’s better to rely on their own skills. More importantly, it would help them understand the long-term risks of becoming too comfortable with machines doing all the thinking.
Teens today are growing up with AI all around them. If we don’t teach them how to use it wisely, they might lose their motivation to think creatively or solve problems on their own. By introducing AI ethics in school, we can help students stay in control of their minds, their futures, and the role technology plays in both.
Robots taking over the world. That was my first impression of AI.
Though this may have been in part due to my slight obsession with sci-fi movies (most notably I, Robot), this irrational fear has fed into my passion for the ethicality of AI. Every time I read a new article about AI powering human-like robots—such as Norman, the world’s first psychopathic AI developed by MIT—I replay one question in my mind: Where do we draw the line?
This article seemed to shed some light on how to determine the gray line between innovation and ethicality. For starters, Dr. Sepinwall’s statement that the law will not be able to step in and stop every bad situation reveals that there is no real line drawn by society yet. So it’s up to us as the future generations of innovators to grab a paintbrush and paint that line in any way possible. Perhaps it may be through launching a nonprofit building awareness about AI ethics like Haver, educating yourself about the implications through school courses such as APCSA, or even discussing your thoughts in a nonjudgmental forum, such as this one—together we can draw the line an inch at a time.
What caught my attention about this comment is not the fact that there was a reason to draw a line or even where to draw the line, but rather advocating for the progress of the line. Let me explain. The debate about AI is where to draw the line. Is AI ethical? Is AI Fair? Is the fact that humans are dependent on AI bad? This comment doesn’t address any of this. But what it does address is the advocacy of creating the line. To put the comment in one quote, it would go something like this. “It doesn’t matter where to draw the line, let’s just do it.”
The unknown consequences of the rapid pace of technological development demand youth, innovation, and ethical responsibility. True technological progress is that which strives towards collective societal well-being. It is measured not purely by its innovation but by how equitably and responsibly these inventions are integrated into society. Without incorporating ethical reasoning into technological research, our current generation of brilliant minds will be unable to accurately foresee advantageous outcomes of technological advancements or foretell disastrous consequences. As Sharon Haver once warned, “it’s not enough for us to teach Python or the principles of computer science. We need to teach them the impact of what they’re creating,” which entails considerations of ethics and responsibility.
This reminds me of the 2018 “Project Maven” controversy, a U.S. Department of Defense campaign launched in 2017 using Google’s AI to analyze drone footage and an apt example of a case where ethical guidelines lag behind technological capability. Thousands of its employees, whose work was now being used to serve a wildly different user with entirely different impacts compared to their initial purpose, protested due to concerns about the lack of ethical guidelines and whether their AI expertise should be used in military applications. As Haver mentions, there needs to be sufficient ethical guidance before applying new technology to an already historically and philosophically complicated context, such as war.
While I concur with Haver’s perspective on integrating ethics in formal education, I believe ethical awareness must be extended beyond classrooms and into industrial-level standard regulation. Training our future generation to execute their bold visions without considering ethical consequences is futile without focusing on the ethics of existing AI use, including using AI to conduct drone attacks which present grave humanitarian and geopolitical consequences. Thus, ethics must be encoded into our everyday thinking, and our hope for the engineers of tomorrow.
As a high school student who uses AI as a quick and easy tool to learn and educate myself, I’ve found it especially helpful when I miss a day of school and need to catch up on work. AI makes it easier to review material and understand lessons I may have missed—but that’s just the tip of the iceberg. Reading this article sparked a curiosity that really resonated with me. At first I thought of AI as an assistant, acting like a shortcut to a Google web search—something fast and convenient. But after reading what Sharon Haver and Professor Sepinwall said, I’ve started to realize that while AI is helpful, it might not always be doing what it should be doing. As I think more deeply about what AI is capable of, I start to understand how it may spread biased information or collect too much personal data. Now I understand that using AI comes with a great responsibility, not just for a student but for the people who create AI.
At first, I thought coding was all you needed for a career in tech—but Sharon Haver’s point made me think twice. Everyone knows that knowing how to code is a necessity if you want to go into tech. But this made me realize that understanding how your work affects others is just as important. What really matters is what it’s going to be used for and whom it may affect. Technology isn’t neutral—it affects people’s day-to-day lives in both good and bad ways. For example, it can make things more convenient, like helping students learn faster or letting families stay connected. But it can also spread misinformation, invade privacy, or leave some people behind if they don’t have access. Understanding these outcomes made me think: if developers focus only on making something that works, without thinking about fairness or safety, it could end up doing more harm than good. That made me reflect not only on how I use AI, but also on how others use it, and on how we all have a responsibility to think more carefully about the impact of these tools.
Building on that sense of responsibility, another point that stood out to me was what Professor Sepinwall said about the law being only the minimum standard. I used to think that if something was legal, it was okay to do, but now I understand the difference between what’s legal and what’s morally right. AI has developed so rapidly in the past few years that the law hasn’t caught up, which means developers and users need to make hard decisions about what’s ethical and what’s not. Just because a tool is accessible and legal to use does not mean it is the right thing to do. As a student growing up surrounded by AI, I feel that much of it is up to the users of AI. We can’t rely on rules alone—we have to lead with values like fairness, responsibility, and empathy if we want technology to make life better for everyone. For example, developers could test AI with diverse users to make sure it’s fair and unbiased. Schools could teach students not just how to code, but how to think critically about the impact of what they create. And if we’re going to continue integrating AI into our modern world, we should consider making it a legal requirement for companies to publicly explain how their tools work. Users should also be guaranteed the right to control how their personal data is collected and used. Otherwise, companies may not take these steps on their own. Setting clear policies like these would help ensure AI is developed in a way that’s ethical, transparent, and centered on human rights.
Once, our teacher allowed AI use for an assignment in my Intro to Tech class, and everyone was bubbling with excitement. With a few taps on the keyboard, the work was done. I couldn’t help but be amazed at the high-quality, cohesive writing that I could never manage on my own. That giddiness faded when Mr. Muller announced, “Everyone wrote well on the last assignment. But where is your own voice?” I instantly felt guilty, experiencing the same pang of recognition I felt when I read the first sentence of this article.
My feelings about AI have always been a mixed bag. On one hand, it brings numerous conveniences, saving me time for creativity and idea execution. I’m really into birds, so in a single day ChatGPT has taught me everything from their field marks to their intricate feather microstructure. If I wanted to design an experiment, AI could generate a research plan and even suggest contacts, all in a matter of seconds. I believe my generation possesses an invaluable tool, similar to the first computers of the Third Industrial Revolution, opening up new opportunities, making learning efficient, and narrowing the gap between skill requirements and creative vision. All of these contribute to economic growth and niche specialization.
So when asked, “How will you use technology to shape our future?” my answer will be to take this precious toolbox and use my strengths to fill society’s needs, “executing on tech-driven visions,” as Haver puts it.
Yet there’s another side to becoming “the next Mark Zuckerberg or Elon Musk,” which I pondered at the end of that class: ethics. Whenever I search my own name online, my Insta page or Pinterest profile pops up, sending a shiver down my spine. The idea that AI can access fragments of my personal life is daunting, and I felt a rush of relief when I read about Haver’s nonprofit, Students for Ethical Use of Technology. After all, the law can’t cover every corner of this vast industry, so education and awareness can set the right boundaries for future generations.
By the end of the article, I gained a deeper understanding of technology’s implications and what it means for us. AI is a powerful tool and a road to the future, and we must be responsible for what we feed it and what we ask of it. Time freed up by it can be allocated to more specialized and ingenious purposes; yet if we ignore the bottom line, we will soon blur the distinctions between trust and distrust. It’s like fire for the mind: with it, we can light up the future, but with too much reliance, we risk burning down what remains of our humanity.
Though AI’s presence is growing across sectors, my school—despite its specialized focus on Biomedical Science and Technology—lacked active policies on its ethical use. This absence left teachers feeling lost and creating vastly different policies for usage and disciplinary action. Our curriculum offered numerous courses on AI and how to code algorithms, yet our entire school was stumped by the question of whether to allow AI use. Sharon Haver’s article underscores this disconnect: teaching Python alone isn’t enough; we need to train the next generation to understand the ethical consequences of the technology we use.
Seeing this gap, I formed a research team with a few of my peers to help integrate AI into our classrooms from the standpoint of students. We conducted a survey that received over 800 responses from students and teachers across our school district. With the results highlighting the need for further discussion, we organized two annual forums to create space for open dialogue among students, teachers, administrators, and parents, with the goal of eventually establishing an AI committee to further our efforts. In preparing for the forums, the AI-education videos from UPenn were extremely valuable resources.
A significant point brought up during our district AI forum was the lack of California legislation regarding the use of AI in the classroom. Most legislation in California, such as SB 1288 (2024), centered on creating working groups to start integrating AI, but our district needed immediate support. As a result, our discussion echoed Professor Sepinwall’s insight that the law only provides the floor, and that ethical action sometimes has to go beyond what the law requires. We realized that even if the state has not provided guardrails on AI technology, local school districts like ours have the responsibility to initiate grassroots definitions and policies.
This article reminded me once again of the power of a bottom-up approach, with local communities taking the initiative to ensure that technology serves humanity. It also strengthened my belief that my generation shouldn’t just wait for the rules to be handed to us; we must step up by taking on two roles: innovators and ethics protectors. The future of AI no longer rests on development alone, but on integration.
After reading the article “How Will You Use Technology to Shape Our Future?” I reflected that science and technology can develop faster than we can imagine. Development in these areas sometimes carries side effects and risks, but that does not mean we can prevent or avoid it. I believe the most important thing is humanistic reflection on the meaning of science and technology. No matter how advanced a technology is, it is meaningless unless it is developed for humans. Thus, new ethical issues will arise, and the answer to these questions is humanistic ethics. Even though artificial intelligence (AI) can autonomously infer and think, ethical questions cannot be asked of it. Science and technology are, after all, the products of human creative activity and part of human history and culture. The same goes for AI. Artificial intelligence is created by humans to simulate human intelligence. Just as humans cannot go beyond God, AI cannot go beyond humans. In the future, humans must coexist with AI, not confront it. Thus, humans must be able to control AI.
The first question a future technology developer should ask is which principles will guide one’s decisions. Developers must be guided by the principle of expanding human potential. There is hope that AI will be able to effectively solve numerous challenges that could not be solved by humans alone. Future AI, for example, could interpret all kinds of data, organize scientific theories, advance technology, and predict and understand complex systems such as the global economy and environment far more efficiently than humans. Even in areas traditionally difficult to automate, such as natural language processing, visual understanding, and reinforcement learning from scratch, the recent achievements of machine learning confirm AI’s remarkable potential. If it is used well, human potential could expand to the point where we can handle complex tasks that were previously considered impossible.
Based on data and algorithms, AI systems seek speed and functional efficiency. They have no passion or sympathy. Human mental activity, by contrast, works toward increasing happiness, self-realization, satisfaction, inspiration, and comfort, and that drive changes according to values, beliefs, and mindset. A person working to overcome oppression, inequality, poverty, or disease may lose money yet still push ahead with the work. Machines are different. Thus, the way AI operates and the way the human mind works may come into ethical conflict. Simply put, AI can operate in ways that are contrary to human interests.
For example, Boston Dynamics posted a video clip on YouTube of a person kicking its four-legged robot to show that the robot keeps its balance against any external force. Many people who saw the clip condemned the kicking as abusive and highly unethical. They felt uncomfortable seeing violence committed against robots shaped like living animals or humans. People communicate emotionally with smart animal robots, and some even perform funerals for broken robot toys. If AI continues to develop to the point where it can feel the same emotions as the human brain and share ethical standards with humans, a robot equipped with it becomes an artificial human. As depicted in many science fiction movies, artificial humans could produce more artificial humans on behalf of humans, evolve by themselves, and surpass human abilities. Their value judgments could surpass ours, and they could dominate humans by treating them as inferior animals. Such a development could lead to the extinction of humans. In fact, there are people who believe this may be possible within decades. Even if AI technology never reaches the level of superhuman artificial humans, there is a deep concern that, if handled incorrectly, machines could make their own judgments and harm humans.
The article also tackles the digital footprint. Regarding the second question, how much of my digital footprint I am comfortable with others accessing, I did some research on the topic. Also called “digital shadows,” digital footprints are the unique traces of data that individuals and companies create while using the internet. Three big risks are (1) privacy risks, as personal information can be leaked and advertisers or data brokers can sell or exploit data; (2) overexposure, as too much information can be revealed about where one went and what one did; and (3) security threats.
Personally, I mitigate these risks as much as possible by using NordVPN.
1. NordVPN hides my IP address. It replaces my real IP address with a virtual one, preventing advertisers and websites from tracking my location and identity.
2. NordVPN provides data encryption. It protects my internet traffic with strong encryption to keep hackers from stealing data, so I don’t have to worry as much about using public Wi-Fi.
3. NordVPN limits tracking. My browsing and search history is not visible to my Internet Service Provider (ISP), which makes it harder for advertisers to follow me.
What responsibilities do technology creators have to society?
Artificial intelligence is playing an increasingly important role in modern society. It is being applied in various fields from our daily lives to industrial innovation, and these developments depend heavily on the efforts and technical capabilities of AI developers. Thus, the role of AI developers is crucial not only in terms of the technical aspect but also in terms of the ethical aspect. AI developers can have a large impact on society when they develop and implement AI systems. These effects can be positive, but they can also be negative. Therefore, AI developers should develop AI systems with ethical responsibility. For example, if AI systems do not protect human personal information adequately, serious problems such as personal information data leaks can occur. AI developers should strive to anticipate and prevent these ethical issues in advance. Also, AI developers should have creative thinking skills. Artificial intelligence systems are used to solve complex problems, which require creative approaches and problem-solving skills. Artificial intelligence developers should use algorithms and technologies to find new solutions and develop new ways to solve problems. This can improve the performance and efficiency of AI systems. Finally, AI developers should have collaboration and communication skills, as AI systems require collaboration with various fields and experts. The developers should understand the requirements and develop effective solutions through seamless communication with other professionals. In addition, the development and maintenance of AI systems should be carried out through collaboration with team members. Thus, the role of an AI developer requires not only technical competence but also ethical responsibility, creative thinking, collaboration, and communication skills.
By possessing and using these different skills, AI developers can develop socially beneficial AI systems and contribute to the development and progress of society. Therefore, AI developers need to recognize their roles and responsibilities and implement improved AI systems through continuous learning and development.
As a high school student who is deeply interested in computer science and AI, I think it’s important to keep in mind that we are still extremely far away from anything like full artificial intelligence. It’s concerning how many problems we already have with the early stages of AI, namely generative AI. Human attention spans have reportedly shrunk to a record low of about eight seconds, and as AI advances and we use it to replace parts of our lives, that number is only going to go down. I am surrounded by peers who use AI constantly to finish their work, and it’s disappointing to see them use AI to replace their critical thinking skills. A couple of my peers got caught cheating with AI in my computer science class, which is extremely ironic to me. If we continue to use AI to take the easy way out, there will be consequences. I am not saying that AI should be banned, but we need to set stronger boundaries around the use of AI before it gets worse.
Ranhitha, I like the way you bring up concerns about how AI is impacting students and their ability to think critically, because I do think AI will be a shortcut for us in the future. While I agree that boundaries are necessary when using AI, I also believe we need to do more than just set limits. As Sharon Chae Haver mentioned in this article, it’s not enough to just learn how to code or use AI tools. We also need to understand the consequences of what we’re using. Right now, a lot of people are focused on preventing cheating with AI, but maybe we should also be teaching students how to use AI responsibly. We could do that by helping them understand the power and risks of the technology, just as Haver recommended when she talked about maximizing benefits and minimizing harm. Students also need to learn to use AI not just as a shortcut, but as a tool they can utilize to improve their work and their understanding of subjects. Since AI isn’t going away anytime soon, we should be learning how to use it responsibly. People shouldn’t be discouraged from using AI, as understanding how to use this tool responsibly will only help us in the future.
If there were such a thing as a rulebook for life, I know what the first three would be.
One: Don’t kill. Two: Don’t steal. Three: Don’t cheat.
We’ve all heard them, we’ve all been taught them. And yet, people still break these rules every day. Murder is illegal, but prisons overflow. Banks lock up every night, but burglars break in. Schools say not to use AI on essays, but students do. Knowing the rules doesn’t mean we won’t break them, and so Ms. Drake’s proposal of preaching the ethics of AI, while morally sound, feels akin to covering a bullet wound with a Band-Aid. We can teach AI ethics all day (we’ve been trying, haven’t we?), but how well is it working? With AI-enabled crimes an increasing threat, ethics education feels like a far cry from the real pressure needed to restrain the technology’s capabilities.
You can’t teach morals. If you could, those prisons would be empty, banks would be full, and teachers would be slapping “A+” on every paper that passes their desk.
The article highlights Sharon Chae Haver’s push to integrate philosophy and ethics into computer science — and I respect that. It’s well-intentioned, even necessary. But in a world where AI can write your homework, forge your voice, and deepfake your identity, “teaching ethics” starts to feel less and less relevant.
Amy Sepinwall makes a stronger point: that the law is just the floor, not the ceiling. And I agree. But right now, even the floor has holes. There’s no legal framework that truly governs the wild frontier of generative AI. Companies push products faster than policies can catch up. Students get scolded for cheating while billion-dollar firms profit off tools that make cheating effortless.
Talking about ethics is only meaningful if the consequences for ignoring it are real. If the only consequence to breaking the rule is feeling morally ambiguous, I can promise you, the breaking will continue.
As we venture further into the digital world, technology has become a cornerstone of our lives. From personal cellphone use to paying with a credit card, technology shapes our daily routines. Among these innovations, one sector is evolving at an unprecedented pace: AI, or artificial intelligence. I still remember being in elementary school and viewing AI as something straight out of a sci-fi book. I was only acquainted with AI because my dad was a software engineer who worked on developing and training AI; without that connection, the concept would likely have been entirely unknown to me. Now, AI services like ChatGPT are household names, and AI shapes most human interactions with technology. But AI is a double-edged sword. It’s like the web of Anansi the Spider (from West African folklore), which can both enlighten and entrap. Like Anansi’s web, AI can enlighten us with thoughtful insight or ensnare us in misinformation and bias. At this point in the AI revolution, AI should only be used as a guiding and helping tool, with its final output vetted and tailored by humans, similar to the Golem of Jewish folklore: a clay creature given life to protect the villagers that, without proper guidance, could quickly take a dark turn and become vicious. As the article mentions, the need for caution around AI stems from issues such as discriminatory algorithms and personal data privacy, but as we learn more about AI, we can aim to progressively eliminate most, if not all, of its challenges. To deal with these issues safely, we must remember to “look before [we] leap,” because everyone needs to understand the digital world and how to stay safe in it. This is basic knowledge for navigating a world dominated by technology, where danger may lurk around every corner and each step a user takes online is significant. I therefore believe it is important to educate everyone about the digital world, which could be done by mandating online-safety training and web-literacy education, similar to how English classes are required through K-12. To sum it up, one quote that really resonates with me in the context of AI is “With great power comes great responsibility” (Uncle Ben from Spider-Man).
I still remember the first time I used ChatGPT. It had just been released, and I was in the middle of a stock analysis project I had spent two weeks coding in Python. The goal was to scrape historical stock data, calculate moving averages, and graph the trends. It was the most complex project I had ever attempted, which meant I searched through countless error messages and spent hours debugging. It wasn’t perfect, but I was really satisfied with what I had built. Because of ChatGPT’s sudden virality, I decided to ask it to write the same program. In two minutes, it produced a version that was more efficient and better organized than mine. I sat there, completely confounded. Part of me was impressed. Another part felt annoyed. I had put in countless hours of work, and AI had done it instantly. That moment changed the way I thought about technology. It was no longer just a tool to help me; it was something that could outperform me. Reading this article rekindled those thoughts, especially when Haver mentioned that it is not enough to teach students how to code. She highlighted that they need to learn to think about the impact of what they are creating. I realize now that ethics is not a separate conversation from innovation; it is at the center of it. As students, we are taught to value efficiency and problem-solving. But in a world where machines can do both faster than we can, our purpose needs to go deeper. We need to ask who benefits from our work. Who might be left behind? What are the consequences of what we create? The law cannot answer all of those questions for us. As Professor Sepinwall points out, doing the right thing often goes beyond what is legal. That responsibility falls on us and our conscience. It begins with learning to think critically, staying curious, and caring about more than just the result. Artificial intelligence can often blur the lines between right and wrong, so it is up to us to make decisions thoughtfully and morally.
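For readers curious what the moving-average step of a project like the one described above might involve, here is a minimal sketch in Python, assuming the historical prices have already been saved to a CSV file with date and close columns. The file name and column names are illustrative placeholders, not the commenter’s actual code.

```python
# Minimal sketch: plot a stock's closing price next to its 20-day moving average.
# Assumes a file named "prices.csv" with "date" and "close" columns already exists;
# both names are illustrative, not taken from the commenter's project.
import pandas as pd
import matplotlib.pyplot as plt

prices = pd.read_csv("prices.csv", parse_dates=["date"]).sort_values("date")

# A rolling mean over the previous 20 trading days smooths out day-to-day noise.
prices["ma_20"] = prices["close"].rolling(window=20).mean()

plt.plot(prices["date"], prices["close"], label="Close")
plt.plot(prices["date"], prices["ma_20"], label="20-day moving average")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend()
plt.show()
```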
As the son of an immigrant single mom who built her e-commerce business from scratch, I’ve seen firsthand how powerful generative AI can be. During hectic times, like Christmas and Prime Day, I help my mom pack and ship boxes of toys and children’s products. Recently, tools like ChatGPT have supported her significantly in responding to customers and making smarter decisions about what to stock. She constantly sings the technology’s praises to me. “It’s fast and it’s easy,” she says.
But after reading Ms. Drake’s article, I see the other side of the coin. Haver’s conversation showed me why I need to have deeper conversations about the ethics and unintended consequences of this “easy and fast” route. I realize that it’s not just about what AI can do—it’s also about what it should do. AI shouldn’t just be used to make business smoother; it should be used responsibly, with an understanding of its broader effects.
I woke up to the fact that students like me, who are growing up with AI all around us, need to have “a healthy sense of skepticism.” We cannot blindly embrace its seeming benefits in everyday use. We must put generative AI “through that filter of maximizing excellence or benefits, and minimizing harm or danger.”
With this filter on, I now see that while my mom wholeheartedly embraces AI for her small business, we also have a responsibility as a family and individuals to think critically about privacy, bias, and the people behind the data. As Haver explains, “Ethics in technology is not about rejecting innovation, it’s about…a comprehensive understanding of the potential impacts and responsibilities.”
And it’s a message that sticks: one I will share with my mom, carry forward into my 9th-grade year, and bring up thoughtfully with my peers in the classroom and beyond.
I find AI especially useful for explaining concepts and helping me review for exams. Back when AI was first introduced, teachers at my school banned us from using it. Over time, they honed their spidey sense for detecting AI-written assignments. I must agree with the article and Ayesha’s comment – rules are quickly broken. Humans are awesome at finding loopholes – simply take a look at the global market for pirated movies, which is estimated to cost the industry billions of dollars a year. Over time, this unruly behavior becomes normalized.
We have opened Pandora’s box of generative AI. This means we can never, ever go back to a pre-AI society. No matter what regulations we put in place, there will always be students who use AI to write their essays or Python code. The answer isn’t to get rid of all the AI – but we need to teach students to consider societal perspectives when writing software.
As someone who is fascinated by computer science and coding, I used to think that these raw skills were enough to work in the booming tech industry. Generative AI, however, proves that software development must include societal and ethical perspectives as well. Last year, I took AP Computer Science Principles, and there was a whole chapter dedicated to ethics in CS. It mentioned big issues like technological biases, the digital divide, and intellectual property concerns. This reminded me that innovation and ethics need to go hand in hand for software to be effective.
With this in mind, it’s important for students to learn where to draw the line. Many students believe that hopping on to ChatGPT to generate code and copy-pasting it can be condoned. AI is a helpful tool, but relying on it too much and being unable to think without it is only going to wreck the next generation. Today’s cheating high school and university students will become tomorrow’s surgeons, business leaders, and app developers.
We need to teach everyone – especially those who plan to go into the tech industry – lessons in ethics, data privacy, plagiarism and more. Just because something is legal doesn’t mean it’s right or even tolerable.
If you’d asked me how I planned to use technology to shape our future a few years ago, I probably wouldn’t have had much to say. Maybe something generic about using it to make life easier or more efficient. And when generative AI tools like ChatGPT started gaining attention, admittedly, my first thought regarding tech ethics just meant not using them to cheat on assignments or plagiarize. But reading this article offered a deeper conversation regarding ethics in technology.
I think ethics in technology is really about responsibility. Who creates these tools? Who benefits from them? And should we build them at all? These are the kinds of questions we should ask. Ethics shouldn’t be an afterthought, but part of the blueprint. The article circled around this idea, and I completely agree. If we only think about consequences after a technology is out in the world, it’s already too late.
So to answer the question: shaping the future means approaching technology with both intention and awareness, asking what it should, not could, do to truly reflect and help people’s needs and values.
Every new piece of technology is bound to bring both positive and negative change, but it’s up to us to choose whether that change improves lives or replaces what makes us human. AI isn’t necessarily good or bad; it’s only as ethical as the person using it.
AI can be used to complete assignments in seconds, but that convenience comes at the cost of learning, creativity, and honesty. Conversely, the rise of AI detectors throws many students into a panic when they hear their original work has been flagged for potential AI use. In more than one instance, technology has limited students’ voices, creating a lack of trust between humans and the tools meant to empower them.
That’s why education must go beyond just learning how to code. Haver and her husband’s non-profit does exactly that, helping people understand the technical and ethical sides of computer science and the real-world consequences of their innovations.
While AI has the potential to eliminate some jobs, it also creates new opportunities. With efficiency, though, can come the replacement of the human voice. Yet AI will never replace human minds or truly understand human emotion. That balance is what matters most: minimizing harm while maximizing benefits.
In the end, the future of technology won’t be written by machines—it will be defined by the values of those who use and create them.
As a digital native, I’ve often found the ever-evolving nature of technology unnerving as its power grows. I have long been concerned about the rapid development of technology and, more importantly, about the potential impacts it brings for individuals, society, and the environment. Haver’s and Dr. Sepinwall’s insights into AI truly resonate with my perspective, as I’ve always believed that AI can be a powerful tool for learning and innovation in every part of our lives, such as education, health care, and business, but only when it is used to sharpen our thinking rather than silence it. To ‘minimize harm and maximize benefits,’ as Haver said, I treat AI like an assisting teacher or TA: I approach it with specific questions I’m confused about and let it explain them, rather than relying on it to generate all the answers. However, the information generated by AI is not always valid, as I’ve experienced when learning maths, so critical thinking plays a key role in understanding AI’s explanations, as does asking again whenever there’s a mistake or confusion, just like Haver’s suggestion of having ‘a healthy sense of scepticism.’ I’m really fortunate that my school provides frequent lectures to inform us and help us better understand the legal use of AI and identify its risks and weaknesses.
I definitely believe that the advancement of AI will offer numerous opportunities and enhance productivity for humans. However, it raises some important questions for me: what potential dangers might arise from scientific advancement in today’s era of rapid development? Will AI advancement ever move beyond humanity’s grasp? I’ve been reading one of the most famous Gothic novels, Frankenstein (first published in 1818), which explores the dangers of unchecked ambition and the ethical consequences of creating life, raising questions about responsibility, humanity, and the limits of scientific pursuit. After researching it further, I’ve come to realise how much responsibility the creator bears in innovation, as the author, Mary Shelley, shows that the primary cause of the tragic and irreversible ending was the creator’s lack of responsibility. Haver’s perspective therefore really resonates with me: society shouldn’t focus on whether we should pause the innovation and development of AI, but rather on understanding the potential consequences and the responsibilities of humans in playing the role of God and creating ‘life’ for AI.
I believe that AI is not the future, rather it is the tool that makes our lives futuristic today. Considering how heavily it is used these days, it is truly essential to advocate for boundaries around it. We are also more likely to get AI policing before we manage to instill ethical development, because in such a fast-paced industry it is easier to fix a problem after it arrives than to spend time predicting it and let competitors pull ahead in the process. Even so, alongside the programming and machine learning courses already taught, an introduction to ethical AI is a necessity.
As a Gen Z student, it is definitely convenient to be able to prompt ChatGPT to do our homework, and to feel grateful to its creators for inventing it while we’re still in school. It was fascinating at first, but over time you see why AI is being portrayed as the monster of tomorrow. It can never write like a human. The depth and emotion in a creative piece of writing that took the author hours are unmatched, even by AI, because AI can analyse but it cannot comprehend. And as a writer, I can attest that society will not crumble under AI, at least not the creative sectors. Its impact elsewhere, though, is severe. AI seems to have become our best friend. In a generation where judgement is so brutal, AI is becoming our therapist. What we cannot say out loud, we type to AI. That means we are not only using AI for academic and professional work; we also depend on it emotionally. This seems very unhealthy, and soon we’ll face the consequences. Mentally, our attention spans are being reduced by the second. What once took us hours to think through now takes a few seconds and a prompt. It is true that our thinking is dying, and our decision-making is getting worse. Personally, I have felt the urge to text AI whenever I have to make even the smallest decision. At the end of the day, these urges should be in our control, but building AI in a way that cannot exploit human psychology and emotions should be the direction we proceed in. It may feel nice to have an AI therapist, but adding emotions to AI responses can have serious consequences. Humans should not use up their social batteries conversing with AI. Along with that, all of our private day-to-day activities are getting recorded in its database. That might not seem like such a bad thing, because how does it matter if AI knows I had a coffee this morning, right? Wrong. Analysing our patterns is much easier than we expect, and it is unsafe to have our routines laid out there. It makes us predictable and traceable, and if this data reaches the wrong hands, safety will cease to exist.
It would be the government’s responsibility to put legal guardrails on AI, and the creators’ responsibility to make sure they don’t build inventions that exploit humans. Age regulations can be applied, and creators must not humanise AI. That way, AI can have its own place in the world and be helpful rather than destructive. It is also our responsibility to wean ourselves off AI when we use it for absolutely unnecessary things. It is up to us to use it as a tool, not as a teacher or a therapist.
AI is inevitable. But it is how we use it that determines its impact, and it is our duty to control our usage. We cannot send a direct message to creators and governments and expect immediate change; the first step towards ethical AI starts with us.
This comment felt like a mirror to the thoughts so many of us silently carry but never really express. The line “AI is not the future, rather it is the tool that makes our lives futuristic today” immediately pulled me to reply.
I couldn’t agree more with your thoughts on emotional dependence. Gen Z is wired to crave instant answers and safe spaces, and somewhere in that, AI is becoming our “comfort zone.” But as you rightly said — AI can analyze, not comprehend.
Also, your point about AI knowing your morning coffee routine? We brush it off, but the more predictable we become, the easier it gets to control us, and that is when it becomes dangerous.
When you mentioned how we share everything with AI, that’s when I started to relate to your comment, because I could compare it with my own situation. When I first started to explore AI, I noticed that I began to stay away from my family and friends and just typed in every single thought I had, thinking it would help me through my emotional stress. It only made things worse, and I learnt that the hard way.
But these lines of yours raised doubts in my mind: “It would be the government’s responsibility to put legal guardrails on AI, and the creators’ responsibility to make sure they don’t build inventions that exploit humans.” What I want to say is that the government can step into the issue, but practically it cannot do much, as it is the public who is foolishly using AI to harm their own minds. Understand it this way: “You can lead a horse to water, but you can’t make it drink.” The government can put restrictions in place, but there will always be other applications or third-party apps that users WILL find to fall back into the same loop.
“If AI can do my homework, what is stopping it from making my future a horror story?” My parents always taught me, “Tech is not always about innovation, but also about responsibility,” and this article made me reflect on those words. In this world of opinions, AI is not ‘good’ or ‘bad’; it is how we use it and how we define our standards. When we use AI to write our school assignments, projects, and other creative work, we are not making our work easier; we are letting AI replace us, and that is when it becomes what we might call the ‘bad use of AI.’
As a 16-year-old who has been taught Python and Java since grade 8, I believed that was as far as it went, but upon reading this article I realize that I am just a drop in a vast ocean. All I have heard from the elders around me is “You should never use AI,” “AI is going to stop your brain development,” or “When we were young we never had AI.” But what I believe is that you can’t expect a boy born in 2009 to use owls to send messages. The world is growing, and we need to grow with it in a smart and conscious manner, not be swept away by it.
Reading this article, I was especially struck by Sharon Chae Haver’s call to go beyond Python and teach purpose. She’s absolutely right — what’s the point of building the next big app if it unintentionally causes harm? Her push for an “ethical compass” in tech is powerful.
Dr. Sepinwall’s words teach us that not everything that is legal is ethical. The law only defines the bottom limit, not the higher standard we need to reach.
After listening to the people around me, I started questioning myself: “Should I use AI or not?” Now my question has changed to “How should I use AI?” Learning AI without knowing its proper application is like giving a toddler car keys.
There is a question I would now like to ask society: “Should we be concerned about robots becoming humans, or humans becoming robots?”