Leadership for the Future: Ethics and Leadership in a Tech World

By Rado Kotorov | July 22, 2016

When it comes to ethics in the tech world, we have two big topics to discuss. The first is identifying the ethical challenges, and the second is determining how we nurture ethical awareness in employees. Both are big topics to tackle, so I’ll address identifying ethical challenges today and will share more thoughts on the second point in the coming weeks.

To me, there are three emerging fundamental ethical issues: privacy, machine co-existence, and decision automation.

Privacy and Transparency

The first issue is well known: personal privacy. We live in a connected world that makes it possible to track nearly every one of our actions. We also live in a biotech world, which makes it possible to know intimate details about our current and future health status. DNA tests, as well as other medical tests, provide fairly reliable predictions of our future health. Thus, technology has made human beings completely transparent and, to a great extent, predictable. While transparency has big potential for good, we humans are deeply suspicious of it. Why?

Look at it from an evolutionary and survival perspective. When humans gained enough knowledge about certain animals to predict their behavior, they used this insight to hunt them or to raise them for food. Hence, we cannot easily overcome our fears about the impact of technology on our privacy. We fear that prediction will be used to exploit individuals for profit or for other goals contrary to their personal interests. We need a whole new approach and new regulations to eradicate this deep-seated, instinctive fear.


Machine Co-existence

The second emerging ethical issue centers on the co-existence of robots and Homo sapiens in the work environment. Up until now, we have co-existed with sentient beings in our workplace. We have cooperated with various animals to get certain jobs done, but we have not had to cooperate with intelligent machines. Historically, machines were tools operated by humans 100 percent of the time. But this is changing.

Robots are becoming increasingly autonomous, and humans must learn how to cooperate with them. For the first time, humans will not be the major force in the workplace. They will have to relinquish some of the control and power to robots. As in the case of privacy, this shift provokes deep evolutionary fears as relinquishing power has often led to extinction.

Given these fears, a whole new ethical framework around how to treat and cooperate with intelligent machines needs to emerge. Creating it won’t be without challenges, as ethics are largely about our dealings with humans, sentient beings, and nature. How do we incorporate artifacts and machines into our ethical considerations? If an employee gets angry, and then hits and breaks a robot’s arm, does the employee deserve to go to jail? There is damage inflicted, but there is no pain or emotional suffering. How do we deal with cases like that?

Decision Automation

The third ethical issue is largely about decision automation and the delegation of decisions to robots and machines. The real issue lies in whether delegating certain decisions to autonomous machines excuses us from ethical responsibility. To put it in more economic terms, does decision optimization make decisions value neutral? Many people argue that economic decisions are value neutral – if we want to achieve optimal results and the steps are deterministic (dictated by a mathematical formula), then there is no room for ethics. We simply have to do what we have to do. Adam Smith did not think so, but that is a separate discussion.

Let’s look at an extreme example. If a company produces the ultimate border patrol soldier to deter illegal immigration, who is responsible for the victims of this efficient machine? Is it just a job? Who is going to program it? How would the programmer feel? If we take this to an extreme and assume that programmers become solely concerned with the efficiency of the machine and not with its outcomes, are we creating a society in which people will become value neutral? A programmer may have the perfect excuse, as he or she was tasked only with producing an efficient machine. The office deploying the machine will similarly have a perfect excuse, as it was tasked only with turning the machine on. Everyone in this chain will have a perfect excuse not to take on ethical responsibility. And this is a slippery slope to a value-neutral world.

Throughout history, major technological changes have raised big ethical questions. So far, humanity has used ethics to overcome the naysayers and to reap the benefits of technologies rather than use them destructively. I am a firm believer that our ethics will evolve and provide deeply human answers to all these big issues.

While we’ve laid the groundwork here on some of the fundamental ethical challenges we’re sure to face in the years ahead, there is still much more to discuss on the topic. Stay tuned in the coming weeks for my thoughts on how we can nurture employees to make ethical decisions in light of these new challenges. I’d love to hear your thoughts on the ethical challenges that you think will emerge in the coming years as a result of technology if you’d like to comment below.