By Thomas Keil

With the rapid development of Artificial Intelligence (AI), more and more ethical and moral questions arise. One of the most common questions is the extent to which we want to let algorithms make decisions automatically.

From robo-advisors that optimize our investments to self-driving cars, modern analytics is full of possibilities that could make human decision-makers superfluous. What will be left for us to do? This discussion goes far beyond technological issues and must be conducted on a broad basis; it affects us all.

That’s why I am talking with the Heidelberg cultural anthropologist Dr. Stephanie Sommer from KulturBroker. She advises organizations on the implementation of research and development projects, on innovation management, and on dealing with cultural change. Her most recent projects have centered on digital transformation processes and technological progress.

Stephanie, what does “Artificial Intelligence” mean to you? Is it a promise or a threat?

I think that, among other things, AI means that software, in combination with “smart things”, will be able to do a whole range of activities that we genuinely thought would—and could—only ever be done by people.

It can see, hear, speak, compose poetry, give directions, filter potential employees and life partners, control industrial plants, means of transport and household appliances, conduct business and serve customers. Until recently, most people would have said that these activities could never be done by machines.

It is promising in several ways. It could, for example, reduce the number of tedious tasks we have to do, or promise profit maximization, cures for disease, a longer and healthier life, or victory in the fight against climate change.

That said, there is very little proof that this promise will ever be fulfilled. At the same time, at least in the German media landscape, there is concern about the effect of AI on work. We do not currently have a viable social model beyond gainful employment.

As a result, AI is perceived as a threat, especially where people are being replaced by AI applications in the workplace. The same applies if AI is able to replace what we might call special gifts, such as those of a composer or game player, or where professionals have worked for many years to acquire knowledge and skills.

In most cases, however, AI is now so deeply embedded in our everyday lives that its presence is taken for granted and barely even perceived. I am glad to see that there is a broader public debate opening up on the subject.

What are the really complex questions that we need to address?

On the one hand, AI fundamentally raises the question of how it changes our everyday lives, our coexistence and our humanity. On the other hand, AI applications raise practical, legal, moral and ethical issues. Some of these questions were discussed in a recent issue of Technology Review.

Let’s take the example of health insurance, which now uses AI applications to check whether I am doing enough for my health. Do I want to change my current lifestyle to stay healthy? Should I voluntarily submit to behavioral control by my insurer?

Is the health insurance company even allowed to do that? What happens to my data? Should health insurance companies and their AI be allowed to say what a healthy and desirable life looks like? What about those people who have no time or resources to take care of their health?

We are currently facing a whole host of questions like these. Previously unquestioned norms and values seem to be losing their validity. As a society, we need to renegotiate how we want to live and how we should deal with each other.

From the question of whom an AI application should serve, and for what purpose, further questions about human dignity and legality arise, moving into the realms of security, supervision, control, behavioral control, (data) sovereignty, privacy, and freedom.

That sounds very philosophical. So what is left for humans to do? Empathy?

If we are talking about AI replacing human capabilities and traits, I think it’s likely that one day AI will be able to understand and even show emotions.

What is already possible today, and is a reality in countries like Japan in many ways, is shown in the series “Homo Digitalis” [www.homodigitalis.tv]. The radical answer to your question would be: there will be nothing left that only humans can do.

However, I do not think that comparisons between people and intelligent machines are really helpful. The limits of the imaginable have repeatedly shifted towards machines in recent decades. When we get into comparisons, people can only lose out.

Ultimately, such comparisons lead to the loss or downgrading of the uniqueness of individuals. Instead, I think it makes more sense to treat AI applications as a form in their own right, one with which we can only partially compete. As AI penetrates ever further into our everyday world, we are challenged to develop new values and to rethink how we organize society.

What practical tips do you have for companies and ordinary citizens for dealing with the topic?

Businesses should consider AI projects in technological, economic, and socio-cultural contexts. This includes understanding how their products and services are perceived and used in the relevant context. They will need to integrate knowledge and experience from across departments and hierarchy levels, both from their own company and from external partners.

This will give them a series of different perspectives that will help them to deliver projects successfully. Through transparency and ethical action, companies can also position themselves as responsible actors. If they then communicate in an understandable language that conveys meaning, this will create the best conditions for trust and acceptance.

As citizens, I would recommend that we demand transparency and traceability from companies, the scientific community, public institutions and policymakers. Only in this way can we form sound opinions and make sound decisions. We should also work together to ensure that we, as humans, are protected against any unwanted consequences of AI and that we share in its benefits.

And how should we set a dialogue in motion?

The prerequisite for a successful dialogue is a common language. I see this as a particular responsibility of the scientific community and industry, who need to make sure that they are understood. For anyone who wants to know more, there are plenty of websites, blogs, books, social media sources and other contributions. I was recently impressed by the book Homo Deus by Yuval Noah Harari, but I would also recommend publications by Jaron Lanier, Yvonne Hofstetter, Eva Wolfangel and Erik Brynjolfsson.

More and more public agencies, associations and citizens’ initiatives are dealing with this topic and offering new forums. Here in Heidelberg, for example, there is a series of events run under the title “Digitality @ Heidelberg”, which is also available on YouTube. Where this kind of event is not already happening, it could be initiated. Discussion forums and comment columns also offer space for mutual exchange.

This article was republished with permission from SAS.