The Scariest Things AI Can Do Right Now

Artificial intelligence (AI) is progressing so rapidly that many of the things people initially became familiar with in science-fiction novels and films are now possible in real life.

However, those advancements mean AI also has unsettling capabilities. Here are six of them.

1. Fool Fingerprint Readers

Many smartphones have biometric fingerprint scanners, and before those arrived, high-security facilities regularly used fingerprint readers for access control.

Those methods seem secure since people’s fingerprints are unique. But researchers discovered it’s possible to trick fingerprint readers with either partial real fingerprints or prints digitally generated with AI. The scientists exploring this issue also found that each successful break-in taught them how to create more convincing fakes.

Even without AI, people could unlock smartphones that have fingerprint readers using comparatively low-tech methods, like a fingerprint mold captured in Play-Doh. Now that AI-generated fingerprints get past these systems too, people need to be more aware than ever that fingerprint-based security isn’t as foolproof as it seems.

2. Reach Gory Conclusions From Innocent Images

There’s a long-standing and valid argument that AI technology is only as good as the data from which it learns. A team at MIT proved that point when it created Norman, an AI psychopath. The researchers trained it on videos and graphic images of death gleaned from Reddit and, for comparison, trained another algorithm in the standard way.

The team then evaluated how both algorithms interpreted Rorschach inkblots. Whereas Norman saw only deathly images, the other algorithm noticed positive things, like a wedding cake or flowers.

This case study is particularly important for AI developers to keep in mind as they feed information into the algorithms that make AI technology work. The Norman case is extreme, but it shows how badly things can go wrong when the training data is biased.
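The underlying dynamic is simple enough to sketch in a few lines of code. The following is a hypothetical toy example in Python (using scikit-learn; the caption corpora and the describe helper are invented for illustration and are not the MIT team's method): the same similarity-based "describer," shown one ambiguous scene, returns whatever kind of caption dominates the corpus it learned from, much as Norman's grim training set colored its reading of the inkblots.

```python
# A toy illustration, NOT the MIT experiment: the corpora and the
# describe() helper below are invented for this sketch.
# The same similarity-based "describer" labels one ambiguous scene using
# whichever caption corpus it learned from, so skewed data -> skewed output.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def describe(scene, caption_corpus):
    """Return the caption from the corpus that best matches the scene."""
    vectorizer = TfidfVectorizer().fit(caption_corpus + [scene])
    corpus_vectors = vectorizer.transform(caption_corpus)
    scene_vector = vectorizer.transform([scene])
    best_match = cosine_similarity(scene_vector, corpus_vectors).argmax()
    return caption_corpus[best_match]

# Two tiny, made-up training corpora: one grim, one everyday.
grim_captions = [
    "a man lies still on the ground",
    "a dark figure falls from a bridge",
]
neutral_captions = [
    "a couple stands under a flower arch",
    "a cake sits on a table by the window",
]

ambiguous_scene = "a figure stands by a table in a dark room"
print(describe(ambiguous_scene, grim_captions))     # returns a grim caption
print(describe(ambiguous_scene, neutral_captions))  # returns an everyday caption
```

The point of the sketch is that nothing about the algorithm changes between the two calls; only the data does, and that alone determines how the same ambiguous input gets described.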

3. Choose to Harm Humans

A fear people frequently have about AI is that it could become so smart that it revolts against humans. Even before that day potentially comes, militaries are developing AI drones that decide whom to kill. Human pilots still confirm the actions of the drones used now, but models that don’t need human involvement to make lethal decisions are seemingly not far away.

Such developments recently spurred Google to clarify its AI principles, including a commitment that the company will not pursue AI projects that cause or are likely to cause overall harm. And even though killer robots don’t exist yet, a researcher recently built a robot that can decide to hurt humans.

Known as “The First Law” robot, the machine decides whether or not to prick a person’s fingertip, and it learned that it could avoid being turned off by choosing not to inflict that pain. In essence, it made decisions about causing human pain to serve its own interests.

4. Wrongly Identify People as Criminals or Peg Them for Future Crimes

AI outperforms humans at many tasks. Doctors, for example, use the technology to analyze patient data and diagnose illnesses faster than they otherwise could.

But there are other, more chilling uses of AI that make people worry about being accused of things they didn’t do. Many police departments are experimenting with software that offers real-time facial recognition: it scans the faces of people in a crowd and matches them against databases of wanted criminals. But the technology makes mistakes, particularly with people of color.