The Cultural Paradox of Artificial Intelligence

We are currently facing a whole host of questions like these. Norms and values that previously went unquestioned seem to be losing their validity. As a society, we need to renegotiate how we want to live and how we should treat each other.

From the question of whom an AI application should serve, and for what purpose, further questions arise about human dignity and legality, extending into the realms of security, supervision, control, behavioral control, (data) sovereignty, privacy, and freedom.

That sounds very philosophical. So what is left for humans to do? Empathy?

If we are talking about AI replacing human capabilities and traits, I think it’s likely that one day AI will be able to understand and even show emotions.

The series “Homo Digitalis” [www.homodigitalis.tv] shows what is already possible today, and what is, in many ways, already a reality in countries like Japan. The radical answer to your question would be: there will be nothing left that only humans can do.

However, I do not think that comparisons between people and intelligent machines are really helpful. In recent decades, the limits of what we imagined machines could do have repeatedly shifted in their favor. In such comparisons, people can only lose out.

Ultimately, such comparisons lead to the loss or downgrading of the uniqueness of individuals. Instead, I think it makes more sense to consider AI applications as a form of their own, one with which we are only partially competitive. As AI penetrates ever further into our everyday world, we are challenged to develop new values and to rethink how we organize society.

What practical tips do you have for companies and ordinary citizens on dealing with the topic?

Businesses should consider AI projects in their technological, economic, and socio-cultural contexts. This includes understanding how the resulting products and services are perceived and used in the relevant context. They will need to integrate knowledge and experience across departments and hierarchy levels, drawing both on their own company and on external partners.

This will give them a range of perspectives that helps them deliver projects successfully. Through transparency and ethical action, companies can also position themselves as responsible actors. If they then communicate in understandable language that conveys meaning, they will create the best conditions for trust and acceptance.

As citizens, I would recommend that we demand transparency and traceability from companies, the scientific community, public institutions, and politicians. Only in this way can we form sound opinions and make sound decisions. We should also work together to ensure that we, as humans, are protected against any unwanted consequences of AI and that we share in its benefits.

And how should we set a dialogue in motion?

The prerequisite for a successful dialogue is a common language. I see this as a particular responsibility of the scientific community and industry, who need to make sure they are understood. For anyone who wants to know more, there are plenty of websites, blogs, books, social media sources, and other contributions. I was recently impressed by the book “Homo Deus” by Yuval Noah Harari, but I would also recommend publications by Jaron Lanier, Yvonne Hofstetter, Eva Wolfangel, and Erik Brynjolfsson.

More and more public agencies, associations, and citizens’ initiatives are dealing with this topic and offering new forums. Here in Heidelberg, for example, there is a series of events run under the title “Digitality @ Heidelberg”, which is also available on YouTube. Where this kind of event is not already happening, it could be initiated. Discussion forums and comment columns also offer space for mutual exchange.

This article was republished with permission from SAS.