Business ethics are the moral principles that guide corporate decision-making. Wharton’s Brian Berkey is a professor in this area, considering the principles that guide decision-making around critical economic justice issues like climate change and food security. His latest research, “How Should Autonomous Vehicles Redistribute the Risks of the Road?”, looks at how companies that produce expensive self-driving vehicles should program those cars to behave when they get into accidents with cars driven by humans.
In this video interview, Berkey talks about the intersection of business and social justice and explains how his training as a philosopher shapes his work.
An edited version of the interview appears below.
Wharton Global Youth Program: Brian Berkey is a Wharton assistant professor of business ethics and legal studies. One of his areas of study is environmental ethics and where it stands today within the context of climate change and the justice issues we are facing. He is here with us today to talk about his research interests. Hi, Brian. Thanks so much for joining us today.
Brian Berkey: Thanks for having me.
Wharton Global Youth: You work in the area of legal studies and business ethics. Can you talk generally about what that means?
Berkey: My areas of focus are mainly in business ethics and what that means is that I think about what people acting in roles within companies should do in their decision-making. So, if you’re the CEO of a company, you might face a decision about whether to adopt a policy that would lead your company to reduce its greenhouse gas emissions, but that would cost a significant amount of money and potentially reduce the profitability of the company somewhat. That’s an ethical decision, and what I think about are the principles that should guide decision-making in contexts of that kind.
Wharton Global Youth: What is your personal connection to climate change and business ethics?
Berkey: Well, climate change is an issue that affects everyone. It’s a big issue. It’s global. And so, for a long time I’ve been thinking about our obligations in the context of very large-scale global problems like poverty and so on. And climate change was a natural extension of some of the work that I had done in graduate school. I started working on climate change a bit later on, but it’s an issue that we all should care about and that raises deeply complicated ethical issues, both within business and more generally for our decision-making as individuals.
Wharton Global Youth: Do you think that climate change is actually one of the most pressing issues of today?
Berkey: So, I think it clearly is one of the most pressing issues.
Wharton Global Youth: Or the most pressing issue?
Berkey: Hard to say whether it’s the most pressing issue, but it’s obviously one of a relatively small number of large-scale global problems that require significant coordination among people, corporations, and political entities (countries, the UN [United Nations] and so on) to address. And so, it’s important for philosophers like me to think about the fundamental principles that should guide our thinking about these kinds of problems, what we should do, what our obligations to future generations are, and how we should try and work together to solve these really important problems.
Wharton Global Youth: So many teens and employee activists are demanding that businesses do more to address climate change. How do you see this playing out? Do you think that businesses are doing enough to meet climate change issues? And, do you think that climate change might put some businesses actually out of business?
Berkey: It’s a good thing that there are more and more activists who are trying to push companies to do more on the climate change issue. I think, overall, the business world is failing pretty miserably, currently, to do what needs to be done. I’ve been working on a paper with a colleague about this. I think there’s much more to do. I think the evidence suggests that current emissions trajectories are really nowhere near where they need to be if we’re going to meet even the two-degree warming goal, let alone, the more ambitious one-point-five-degree goal, which we have good reason to aim for if we can. In terms of whether climate change and the responses that might be made to climate change going forward will put some companies out of business, the answer is maybe. I’m not really sure. It should have that effect for, at least, fossil fuel companies that don’t take steps to transform their business models pretty radically. Otherwise, I think we’re not going to be able to meet the targets that we need to meet.
Wharton Global Youth: Moving out of the business sector and more into politics. Some countries have set certain economic policies, while others have refused to set the same level of economic policies. Do you think that this will have an impact going forward on how effectively we are able to address climate change?
Berkey: Because climate change is a global problem, we really do need cooperation from everyone to adequately address it. And so, it’s a real problem that some countries are reluctant to contribute what really is their fair share to efforts to reduce emissions and deal with the problem more generally. Hopefully, that will change going forward. It will be interesting to see what happens in the U.S. in particular with the upcoming election next year. As I’m sure many of you know, President Trump has initiated steps to withdraw from the Paris Climate Agreement. And, if he wins, that will come into effect in 2020. So, that would be pretty disastrous for the global coordinated effort to address climate change. We’re already in a pretty bad position. We’ll see how things play out.
Wharton Global Youth: Your latest research is about autonomous vehicles. What’s that about?
Berkey: I just finished up a paper that’s going to come out in a volume about the principles that should govern the programming of autonomous vehicles for conditions in which they will share the road with the human-driven vehicles that we’re familiar with. The worry that motivated the paper is that autonomous vehicles are programmed to prioritize the interests of their occupants, and autonomous vehicles are a luxury item that primarily wealthy people can afford, at least initially. The effect of that will be, basically, to shift a significant amount of the risk of death and injury from motor vehicle collisions away from richer people and onto everyone else on the road. I think this is a morally problematic possibility. And so, I argue that companies are obligated to program their autonomous vehicles in ways that don’t place the overwhelming proportion of the risks onto people outside of the vehicle. Sometimes, if it would minimize the total amount of harm, the cars are just going to have to be programmed to behave, in conflict situations, in ways that might end up harming the occupants.
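To make the tradeoff Berkey describes concrete, here is a minimal hypothetical sketch, not Berkey’s actual proposal: the maneuver names and all harm numbers are invented. An occupant-first policy picks whatever maneuver minimizes expected harm to the people inside the car; a total-harm-minimizing policy counts everyone on the road, which can mean accepting some risk for the occupants.

```python
# Hypothetical sketch of two crash-handling policies.
# All maneuvers and harm estimates below are invented for illustration.

def occupant_first(maneuvers):
    # Minimize expected harm to the vehicle's own occupants only.
    return min(maneuvers, key=lambda m: m["occupant_harm"])

def total_harm_minimizing(maneuvers):
    # Minimize expected harm to everyone on the road.
    return min(maneuvers, key=lambda m: m["occupant_harm"] + m["outsider_harm"])

# Toy conflict situation: swerving protects the occupants but
# endangers a pedestrian; braking risks a minor injury inside the car.
maneuvers = [
    {"name": "swerve", "occupant_harm": 0.1, "outsider_harm": 0.9},
    {"name": "brake",  "occupant_harm": 0.4, "outsider_harm": 0.1},
]

print(occupant_first(maneuvers)["name"])         # swerve
print(total_harm_minimizing(maneuvers)["name"])  # brake
```

The point of the sketch is that the two policies disagree exactly in the conflict situations Berkey is worried about: the occupant-first car swerves and shifts the risk to the pedestrian, while the harm-minimizing car brakes and accepts a smaller risk for its own passengers.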
Wharton Global Youth: Among all the justice-related issues that you’ve researched, what has been the most interesting to you?
Berkey: Every issue that I’ve written about is, of course, interesting and I’m motivated to do the research because I think the issues are important. The thing that I’ve spent the most time on is arguing that individuals should think of themselves as agents of justice in a fairly direct way. This goes against the mainstream thinking among political philosophers [who think that] justice is primarily a matter for governments and other large-scale institutions. In my view, individuals who benefit from unjust institutional arrangements can be obligated to redirect the benefits that they get from unjust systems to people who are disadvantaged by those systems. For example, if you’re a millionaire in the U.S. today, you might have an obligation of justice in my view to give away a significant portion of your money, voluntarily, even if the government isn’t doing things like raising taxes on you in order to fund the kinds of things that are required as a matter of justice, like universal health care or better education for disadvantaged groups, and so on.
Wharton Global Youth: That’s so interesting. What career options are there for people who are also interested in business ethics?
Berkey: One obvious career option is to become a professor. Business ethics is an area of academic study that crosses disciplinary boundaries. So, there are people based in business schools doing work in business ethics, people in philosophy departments doing work on issues related to economic justice more broadly and how this applies to business contexts. There are people like me who did a PhD in philosophy, but now teach in a business school. So, that’s the academic route where there are a range of options. But also, more and more companies are actually hiring people to think about ethics as part of the company decision-making process. This seems to have started in the tech industry where they’ve actually hired people in many companies with PhDs in philosophy — so people with the kind of training in ethics that I have. This hasn’t extended that far beyond the tech industry yet, although there are efforts to broaden it. Another option for people who are interested in business ethics is to do something like study philosophy maybe alongside business in college and then seek to find a job where the philosophical training in thinking about ethics and other kinds of issues would be particularly relevant and where you might have an opportunity to influence company decision-making in an ethical direction.
Wharton Global Youth: You mentioned that you got your PhD in philosophy. Could you talk a little bit about what it means to study philosophy?
Berkey: Philosophy is a little bit of a unique academic field. So, we attempt to answer questions that can’t necessarily be answered by doing things like gathering data or running experiments. So, for example, questions about how we should live that aren’t determined by looking at how people, in fact, live because people might be doing the wrong things. And, it’s characterized by an effort to think very carefully and slowly and systematically through the kinds of questions that we address, recognizing potential inconsistencies in our own views, listening carefully to the arguments and reasons offered by others, and trying to work together to think through complicated issues. [We hope to] arrive at more consistent, more well thought out, more systematically justified views on important, philosophical questions — like how we should live our lives and what kinds of principles should guide our decision-making.
Wharton Global Youth: What can we do if we’re concerned about the ethical practices of a company or an industry?
Berkey: One thing that, of course, consumers have done for a long time is to try and organize boycotts of companies that they think are behaving unethically in one way or another. We can limit our purchasing of products that we think are potentially produced in an unethical way or by companies that don’t treat their workers well, and so on. There are a lot of things that we can do in our individual lives that are aimed at trying to improve corporate, ethical behavior. A lot of times this requires coordination. Boycotts only work if a lot of people are involved. So, another thing we can try and do is have discussions with people in our lives about things that we think are problematic that are done by companies and what we think they should be doing differently and better. [We can] try and persuade other people to change their buying habits and change aspects of their own lives in ways that would contribute to addressing these issues.
Wharton Global Youth: So very interesting. Thank you so, so much for being here today with us.
Berkey: Thanks for having me.
Related Links
- Brian Berkey’s Autonomous Vehicle Research
- Wharton Legal Studies and Business Ethics
- Edmond J. Safra Center for Ethics
- New York Times: Business Ethics
Conversation Starters
Brian Berkey says, “Individuals who benefit from unjust institutional arrangements can be obligated to redirect the benefits that they get from unjust systems to people who are disadvantaged by those systems.” This makes us all responsible for making decisions that help society. What does he mean by this? What are some examples? How do you feel about this idea that the individual, and not government or other entities, should be responsible for creating a fairer playing field? Debate in a group or with your classmates.
In today’s data-obsessed world, there is room to consider decisions outside the numbers. How do Brian Berkey’s point of view and profession help to further this idea? Why can’t all decisions just be reduced to numbers and the story they tell?
Are you concerned about the ethical practice of a company or an industry? What actions have you taken to raise awareness or make your opinion known? Share your story with our readers in the comment section of this article.
While I am concerned about how businesses act, I haven’t really done anything about it because I don’t really know what to do at the moment.
I think it’s very interesting to apply philosophy to business. I believe that by analyzing human and corporate decisions, we gain a better understanding of whether these actions are right or wrong, good or bad. This is important because businesses have the power to make large impacts, and if there is something wrong with the data or ideas that a corporation is basing its decisions on, many people can be negatively affected. Furthermore, if that company is exposed later on, through boycotts or the media, its bad business practices could essentially destroy the business, regardless of whether the actions were intentional or not. For example, there was an AI resume reader tool that was trained on “successful resume applications” and would accept or reject an application based on keywords in the resume. The problem is that the training data was from years prior, which meant that a great majority of successful applications were from men. It was later found that the algorithm was heavily biased: if it saw the word “female” on an application, it was less likely to accept it, simply because this keyword rarely appeared in the training data. As a result, thousands of women were negatively impacted; they could have been completely qualified to work at a company, but they were rejected just because they were women. This example shows why it is important to be more cautious about business decisions even if it takes time, and I think that’s where ethics comes into play.
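The mechanism behind the resume example can be sketched in a few lines of toy code. This is entirely hypothetical data and a deliberately naive scoring rule, not the real system, but it shows how a model trained on historically skewed “successes” ends up penalizing an underrepresented keyword:

```python
from collections import Counter

# Hypothetical training set: resumes previously marked "successful".
# Because of historical hiring patterns, "female" barely appears.
successful_resumes = [
    "male engineering lead award",
    "male software project award",
    "male data engineering award",
    "female software project award",
]

# "Train" by counting how often each keyword appears among successes.
counts = Counter(word for r in successful_resumes for word in r.split())

def score(resume):
    # Score a new resume by summed keyword frequency in the training set.
    return sum(counts[w] for w in resume.split())

# Two equally qualified applicants, differing in a single word.
print(score("male software engineering award"))    # higher score
print(score("female software engineering award"))  # lower score
```

The two applicants are identical except for one word, yet the model ranks one lower purely because that word was rare in the historical data; no one ever told the model to discriminate.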
Ethics will play a major role in shaping business debates in the 21st century. With the fast-paced advent of artificial intelligence in nearly every business field, entrepreneurs have a lot of questions to answer before they can allocate work to big-data algorithms. And these questions will be far more challenging than the dilemmas they have faced until now. When entrepreneurs use artificial intelligence, they have to be specific about how the program will run, as in the AI resume reader tool mentioned above. The extent to which companies’ personal biases enter application-filtering algorithms will play a huge role in the shaping of future markets. Nationality, race, religion, gender and several other factors could become the basis of discrimination in recruiting employees. With the use of artificial intelligence, it will be far easier for a company to discriminate against applicants who do not belong to their desired pool, and business ethics will be vital in these situations. In a xenophobic country, will a company stop giving jobs to immigrants?
Companies have an even bigger problem to consider. As artificial intelligence overtakes humans in an increasing number of jobs, will companies be willing to provide employment to large numbers of people? Using machines and algorithms reduces companies’ production costs and yields better results. In such a situation, is it right to take away jobs from humans? These ethical questions will have a gigantic impact on the future economy. Is it ethical for a company to let AI do most of the company’s work? The answers to these questions will have significant influence over several macroeconomic factors like the employment rate and income inequality.
Will government intervention be required to stop companies from acting solely for profit? In a world that has favoured free markets as the way to provide the best results, will governments force companies to hire a certain number of employees (perhaps with guidelines on demographic composition as well)? These questions need special attention at a time when definite pessimism is the dominant economic thought and markets are failing to deliver the required results.
Lucy, I want to commend you for the fantastic comment you wrote. The specific example you gave solidifies your stance and your argument. Ethics in business and new tech fields is becoming a bigger topic with the rapid advancement of these industries, where laws cannot keep up. To be honest, it will be hard for the government to draw a clear dividing line between what is ethical and what is not: who gets the final word on ethical AI and machine learning?
Doing the right thing often seems easy, but in business, doing ethical things is almost always the arch-nemesis of making profits. This is largely because few companies are willing to sacrifice profit to make ethical choices; the choices they make instead include not installing new filters on their smokestacks, cutting down too many trees in one area, and so on. It is innate greed that stops us from making the ethical choice.
Lucy, the stellar example you gave shows a case where machine learning violated ethical codes and was biased against women. I am deeply involved in machine learning and coding, and I used TensorFlow to complete a research project. When big data is used to make decisions, there is bound to be bias in the original data set, as shown in Lucy’s example, and the bias will be amplified when machine learning is applied to make thousands of similar decisions. The only method to reduce the bias in data is to use diverse data sources with adequate representation of the population. Another example to contribute to this conversation is when Google Photos reportedly trained its human image recognition mostly on white males. When they trained on that data, this bias expanded, and people of other races were often misidentified or not classified as human at all. Projecting this kind of bias onto the development of self-driving vehicles, the same mistake could place many innocent lives at risk. This leads me to believe that we are not ready to fully rely on these technologies to make ethical and moral decisions. Maybe in the future, we might find a way to reduce the bias and improve the training methods to complete tasks more efficiently.
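The claim above, that representative training data reduces this kind of bias, can be illustrated with a toy sketch. Again, the data and the keyword-counting "model" are entirely hypothetical; the point is only to show the score gap between two otherwise-identical applicants shrinking as the training set becomes balanced:

```python
from collections import Counter

def train(resumes):
    # Naive keyword-frequency "model": count words in past successes.
    return Counter(w for r in resumes for w in r.split())

def gap(counts):
    # Score difference between two applicants who differ by one word.
    def score(text):
        return sum(counts[w] for w in text.split())
    return score("male software award") - score("female software award")

# Skewed history: 9 of 10 past successes mention "male".
skewed = ["male software award"] * 9 + ["female software award"] * 1
# Balanced, representative history.
balanced = ["male software award"] * 5 + ["female software award"] * 5

print(gap(train(skewed)))    # large gap: the model penalizes "female"
print(gap(train(balanced)))  # zero gap: representative data removes it
```

Real systems are far more complex than word counts, but the direction of the effect is the same: the model can only be as fair as the history it is trained on, which is why representative data sources matter so much.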
Following the topic of business ethics, I would like to introduce another rising topic: the relationship between big businesses and the environment. The debate over whether companies should be more ethical toward the environment or more profit-driven is heating up as the climate gets worse. Environmental advocates, like Greta Thunberg, argue that businesses have a responsibility to make eco-friendly decisions, even when doing so hurts profits. When companies choose not to do so, since there are no incentives, environmental advocates blame capitalism and businesses, but these advocates are not considering the alternative. They call out capitalism, but without capitalism, most businesses would likely be run by the government. The most notable example of a government running businesses is China, and most U.S. citizens would be very against some of the policies it enforces. I believe a solution to this problem is government tax cuts. Our government could provide tax cuts for eco-friendly practices so that companies are encouraged to take measures to protect our Earth.
Again, Lucy, your comment has inspired me to be cautious about the consequences that might come from machine learning. It begs the question: should we use AI and machine learning to make our decisions now, or do we need to wait until we have open-source data and full transparency in the technology industry for machine learning to become a viable technology humans can depend on? Even though this issue is difficult to tackle, I still believe that we should face the problem, since our society would advance much faster with the help of machines and AI. I hope my comments about the rising topic of environmental and business responsibility introduced a new facet of this discussion, left you pondering the correct role business should take given the impending danger of climate change, and gave you a sense of the potential problems with AI and machine learning and how they may be unethical.
It’s no secret that ethics is a controversial topic in business, especially when you factor in how, for years, ethics and financial gain have been like the same poles of a magnet: in short, they do not work together and repel each other. Tying into what you said, Lucy, I agree with your point that as you gain greater status, you naturally gain greater responsibilities, those being the kinds of decisions you make and whether a decision would benefit the majority of the people you are appealing to. I also agree with your other point about how banding together to oppose or support a stance gives an overall more assertive tone and impact. This is especially the case when you mention rejecting something as shortsighted as setting up AI resume readers only to take in recruits who fit a company’s idealized profile.
The software scans applications based on specific criteria set up by the employer, which means it eliminates those who lack some of those criteria rather than considering what more they could offer the company despite those disadvantages. In fact, statistics have shown that “hidden workers” (candidates who actively seek employment, yet are denied or discouraged because they lack something from the original set of credentials) fit into three categories: 63% juggle one or multiple part-time jobs (but would like a full-time one), 33% seek employment but have been unemployed for a long time, and the remaining 4% are classified as “missing from the workforce,” meaning they did not actively seek employment but are willing to work under the right circumstances (such as pregnant women or caregivers). This system is biased against certain groups and makes it harder for women or less well-off people to find jobs, even more so as we step further into a technologically advancing world.
Professor Berkey’s commentary on autonomous vehicles and their ethics provides another example of how technology can be biased and unreliable. His points align with your own concerns about technology impacting certain groups, especially those who are not as well off. He believes there should be principles that govern the programming of autonomous vehicles. His major concern is that while autonomous vehicles are considered a luxury for the rich, ordinary people on the road stand to become the greatest victims in accidents involving these self-driving cars. While the concept of autonomous cars seems exhilarating and highly advanced, there are crucial flaws and fears in blindly trusting an AI. According to the Brookings Institution, 61% of adult internet users said they would be uncomfortable riding in a self-driving vehicle, and 75% of individuals said they would prefer to drive a car themselves rather than ride in one that is self-driving. These statistics share a common thread: adults who are able to drive are cautious about trusting self-driving vehicles. Additionally, autonomous vehicles have been involved in more crashes: 9.1 crashes per million miles traveled, compared to 4.1 for conventional cars. While the AI is programmed to minimize casualties among its passengers, it cannot factor in all the possibilities of a collision with a pedestrian (in a life-or-death situation), which raises the question of whether these vehicles are actually worth the risk. Companies should realize what is most important: their clients. To minimize damage, they should focus on methods of avoiding such collisions altogether, such as swerving when the vehicle senses another one nearby, or even pulling over to avoid direct confrontation with something of the same size or with a living being. The autonomous vehicle market is expected to be worth a trillion dollars by 2025, according to estimates.
With the rising popularity of these self-driving cars, it’s even more crucial to make sure that they don’t collide with each other.
As business ethics experts are relied upon heavily for companies’ decision-making, there is a lot of pressure over who would be best fit for this position and whether they are making the “right” decisions. Professor Berkey and you, Lucy, have both mentioned that companies should take responsibility for their products in a way that doesn’t involve placing the risks onto other people. I agree with this statement, but it does scare me for the future that some companies don’t have similar beliefs. I believe that companies should take responsibility for their inventions. In terms of ethics especially, they should consider the purpose, intent, and consequences of their creations; otherwise, they will only invite accidents and legal consequences.
Personally, I think there should be more of a balance in how much influence technology has on our lives. The public is made aware of all the benefits of AI at first, but when people really experience it, they begin to notice its flaws and take matters into their own hands. Humans are the most aware of whether something is ethical or not, and when they find a purpose to fight for, voices can eventually be heard and changes can be made.
I believe that business ethics is ever more important to consider, with many companies venturing into fields that laws may not cover. For example, technology and the rise of artificial intelligence can be argued to have many benefits for humans or deemed invasive of our privacy and rights. One recent company currently in the spotlight is Clearview AI, an organization that has created a groundbreaking facial recognition app. Large organizations have refrained from developing this type of technology, but Clearview AI’s app is being used by hundreds of police agencies. Although, according to law enforcement, this app can help identify criminals, the existence of this program also takes away the privacy of billions of people. I strongly agree with Berkey when he says, “fundamental principles … should guide our thinking” and that we should consider our “obligations to future generations.” Despite AI revolutionizing our world, we shouldn’t take the consequences lightly. This “amazing” technology could become a tool for governments to gain absolute control over their people. Just because there are no laws in place regulating new innovations doesn’t mean it is right. The ultimate goal for a company is to generate value for its shareholders, but the process through which it achieves its goals shouldn’t be weighed lightly. Although I agree with Berkey’s point about individual justice, I believe that governments also play an important role. In the case of Clearview AI, the government should gauge the long-term effects it has on the entire population before utilizing it. I think raising awareness among friends and family is a great way to start fighting against unethical businesses. Although a high schooler like me may think that a fight against a large company is futile, if you light the fire in a few people to support your cause, the combined effort will create a significant impact, and possibly change the world for the better.
I agree with your claim and would like to add a different example to it. Like artificial intelligence, drones can be viewed as either a benefit or a breach of our privacy and rights.
On the surface, you may think that the future of drones is simply pizza being delivered straight to your house. What you may not have realized is that they could also be a means of increased criminal activity. Just imagine narcotics and weapons being smuggled by drone. In 2018, officials added prison breaks to the list of potential illegal uses of drones after a convict escaped prison using tools that were delivered by a compact drone. Technology has advanced so far that innovations once thought to be usable only under tight regulation by a heavily funded military can simply be bought by a civilian for a couple hundred dollars. The unknown and wild potential within our reach can lead to vast impacts if not carefully regulated.
On the other hand, the potential good that these drones bring cannot be denied. Just imagine drones by the dozen traveling around the world delivering supplies to people in need. Well, you don’t even have to imagine it, because in 2018, a drone delivered a rescue pod that saved two swimmers in Australia. There is no doubt that drones in the right hands will cause great things to happen, but they can have completely wrong consequences in the wrong hands. Drones are a double-edged sword held by individually unique wielders. The question now is how to regulate them, or whether we should simply ban them for the public. I believe that the government is in charge of deciding these actions, because it is the only body with the authority to do so. However, I also believe civilians should have an active role and be allowed to voice their opinions, since it affects their lives and privacy. Once the government and civilians can reach a compromise, we can effectively move forward as a society.
I find it hard not to respect those who factor ethics into their conduct. I am fascinated by the implementation of moral practices, yet find it unfathomably difficult to characterize them definitively. Because differentiating right from wrong is so nuanced, you must define it case by case. Yet there seems to be a common theme, a somewhat bleak reality, when describing ethics in the business world: the financial opportunity costs of morally driven administration.
Even in this interview, the prospect of ethical management is paired with that of financial loss. Seemingly, by acknowledging your demographic’s needs, you’ve forfeited to companies that can ignore them efficiently and cost-effectively. It’s saddening that profit is the well-known arch-nemesis of doing the right thing, but this is not an absolute truth, only a pattern. It is a testament to an imperfect system but not a defeated one. It is possible to benefit internally through external benevolence, and I think that’s something consumers and businesspeople alike should keep in mind. It’s no mantra, but I feel we too often default to a zero-sum mindset; entrepreneurship can and should be a positive-sum game.
Unfortunately, as an economy, we may not be there yet. I’m not a CEO, COO, CFO, or CMO. Believe it or not, I’m not associated with any high-authority three-letter acronym (maybe “IDK”; or “LOL” on a good day). Still, I see where Professor Berkey is coming from when he says [specifically with respect to climate change] that “the business world is failing pretty miserably.” He says that climate change should put the more stubborn fossil fuel companies out of business. I suppose it makes sense. If they must go down, why not by the monster they knowingly created? It all brings to mind the story of PG&E, the corporation dubbed climate change’s first bankruptcy.
I remember this chronicle made countless headlines throughout 2019. Pacific Gas & Electric Company was a compelling case. Despite having opted for relatively clean energy sources, it found itself at the center of a climate change scandal. How? Well, its demise wasn’t a leaky oil rig or a sudden shortage of dinosaur bones. Instead, it was a combination of seedy practices and gross negligence. Combine that with climate change, and you’ve created a perfect storm, one capable of taking down California’s biggest energy provider.
For a while, PG&E held membership in the Global Climate Coalition. For those who don’t know, the GCC worked to stall carbon regulations and spread misinformation about the science of climate change. Additionally, PG&E had forgone vital safety precautions for the sake of saving money. That’s not to say they did nothing: they removed a large number of dead trees and buried numerous power lines around their facilities, but they didn’t do enough.
In recent years, California has faced devastating droughts, a product of climate change, whose consequences did not bode well for PG&E’s stinginess. On more than one occasion, a tree colliding with a PG&E power line led to a spark. Unchecked, these sparks grew into some of the most devastating wildfires California had ever seen. Along with fines and plummeting stock, PG&E faced a class-action suit alleging the company was responsible for the almost ninety lives taken by these horrific wildfires.
PG&E is held responsible for the most destructive wildfire in California history, which destroyed over 14,000 homes and burned 100,000+ acres. Last year, the company declared bankruptcy. I’ve lived in Kentucky my entire life, but this incident sent metaphorical shock waves throughout the country. It’s more than a headline or a sad story; it’s a wake-up call. I’d even say it’s a call to action.
I believe PG&E may be the epitome of Professor Berkey’s remarks: the unyielding company brought down by climate change. Maybe the bankruptcy is precisely the poetic justice Berkey is warning about. PG&E disproved their very own pretenses, and though we all wish the circumstances had been less gut-wrenchingly devastating, this is not something we can forget.
Tragedies like this are precisely what makes business ethics so crucial. It’s why, as business people, we can’t see morals as the enemy. We need compassion just as desperately as we need change. PG&E was dubbed the first “casualty of climate change.” Who will be the last?