Artificial intelligence (AI) is progressing so rapidly that many capabilities people first encountered in science-fiction novels and films are now possible in real life.

However, those advancements mean AI also has unsettling capabilities. Here are six of them.

1. Fool Fingerprint Readers

Many smartphones have biometric fingerprint scanners, and before those arrived, high-security facilities regularly used fingerprint readers for access control.

Those methods seem secure since people’s fingerprints are unique. But researchers discovered it’s possible to trick fingerprint readers with either partial real fingerprints or ones digitally generated with AI. The scientists exploring this issue also found that each successful infiltration of a fingerprint reader helped them create more convincing fakes.

Even without AI, people could log into smartphones with fingerprint readers using comparatively low-tech methods, like a fingerprint captured in Play-Doh. Now that AI-generated fingerprints get past those systems too, people need to be more aware than ever that fingerprint-based security isn’t as fail-safe as it seems.

2. Reach Gory Conclusions From Innocent Images

There’s a long-standing and valid argument that AI technology is only as good as the data from which it learns. A team at MIT proved that point when it created Norman, an AI “psychopath” trained on videos and graphic images of death gleaned from Reddit. For comparison, the researchers also trained a second algorithm on standard data.

The team then evaluated how both algorithms interpreted Rorschach inkblots. Whereas Norman saw only deathly images, the other algorithm noticed positive things, like a wedding cake or flowers.

This case study is particularly important for AI developers to keep in mind as they feed information into the algorithms that make AI technology work. The Norman instance is extreme, but it shows how wrong things can go when the training data is biased.
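The same failure mode can be seen in a deliberately naive sketch (the word-frequency “model” and labels below are made up for illustration and have nothing to do with MIT’s actual system): a model that has only ever seen one kind of label can only ever reproduce that label, no matter what input it receives.

```python
from collections import Counter

def train(examples):
    """'Train' by memorizing label frequencies — a deliberately naive model."""
    return Counter(label for _, label in examples)

def predict(model, text):
    """Always predicts the most common training label, whatever the input."""
    return model.most_common(1)[0][0]

# Hypothetical corpora: one biased (every caption labeled 'grim'), one balanced.
biased_corpus = [("inkblot 1", "grim"), ("inkblot 2", "grim"), ("inkblot 3", "grim")]
balanced_corpus = [("inkblot 1", "grim"), ("inkblot 2", "cheerful"), ("inkblot 3", "cheerful")]

print(predict(train(biased_corpus), "a vase of flowers"))    # → grim
print(predict(train(balanced_corpus), "a vase of flowers"))  # → cheerful
```

Real models are vastly more sophisticated, but the principle scales: skewed training data skews every prediction downstream.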

3. Choose to Harm Humans

A fear people frequently have about AI is that it could become so smart that it revolts against humans. Even before that day potentially comes, military personnel are developing AI drones that decide who to kill. Human pilots still confirm the actions of drones used now, but ones that don’t need human involvement to make decisions about fatalities are seemingly not far away.

Such developments recently spurred Google to clarify its AI principles, including that the company will not pursue AI projects that cause or likely cause overall harm. Even though killer robots don’t exist yet, a researcher recently made one that can decide to hurt humans.

Known as “The First Law” robot, the machine decides whether to prick a person’s fingertip, and it learned that it could avoid being turned off by choosing not to inflict that pain. In essence, it made decisions about causing human pain to serve its own interests.

4. Wrongly Identify People as Criminals or Peg Them for Future Crimes

AI excels at many tasks compared to humans. Doctors, for example, use the technology to analyze patient data and diagnose illnesses faster than they otherwise could.

But there are other, more chilling uses of AI that make people concerned about being accused of things they didn’t do. Many police departments are experimenting with software that offers real-time facial recognition: it scans the faces of people in a crowd and matches them against databases of wanted criminals. It makes mistakes, though, particularly with people of non-Caucasian ethnicities.
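The matching step in such systems typically boils down to comparing numeric face “embeddings” against a watch list and flagging anything above a similarity threshold. A rough sketch (the embeddings, names, and threshold here are toy values invented for illustration, not taken from any real system) shows how a loose threshold flags an innocent look-alike:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical face embeddings: one watch-list entry and two bystanders.
watchlist = {"suspect_001": [0.9, 0.1, 0.3]}
crowd = {
    "bystander_A": [0.85, 0.2, 0.35],  # happens to resemble the suspect
    "bystander_B": [0.1, 0.9, 0.2],
}

THRESHOLD = 0.95  # a loose threshold treats look-alikes as matches

for person, emb in crowd.items():
    for target, ref in watchlist.items():
        score = cosine_similarity(emb, ref)
        if score >= THRESHOLD:
            print(f"FLAGGED: {person} matched {target} (score={score:.3f})")
```

Here bystander_A scores about 0.99 against the suspect and gets flagged despite being a different person, which is exactly the false-positive risk the paragraph above describes.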

Also, an AI system being developed in the United Kingdom reportedly found nearly 1,400 indicators that could predict future crimes, including more than two dozen exceptionally serious ones. The idea for an upcoming prototype is that the system will flag people with high-risk scores for committing crimes, then offer them counseling or similar services.

That technology understandably raises ethical questions. People are concerned that the AI might invade privacy and embarrass people by reaching out to them with help they don’t need.

Using AI for crime prevention is still in its early stages. But it’s crucial that concerned individuals continue to make their feelings known and emphasize that human professionals will always need to confirm what the AI suggests, especially with facial recognition systems that might incorrectly identify someone as a lawbreaker.

5. Mimic Human Voices

Google has made headlines recently with projects that use AI to recreate human voices with near-perfect accuracy. People were also shocked when a demonstration of the Google Duplex initiative showed AI so authentic that it can book appointments over the phone, even interjecting a natural-sounding “Mmm-hmm.”

Similar software developed by Baidu is impressive, too, because it can start recreating the human voice after listening to it for less than four seconds.

The smart speakers people use every day for various tasks respond to vocal cues, too. Some complementary skills for those gadgets let people buy things or even do e-banking. As in the fingerprint example above, individuals must realize that even a voice, with its inflection and accent, can be copied by AI.

6. Potentially Contribute to Reputational Damage

Celebrities have dealt with the aftermath of doctored or otherwise misleading images of themselves for years. Frequently, the distributors of the photos — primarily tabloids or gossip sites — claim they represent photographic evidence of events that could damage a star’s reputation, such as extramarital affairs.

But AI can now generate fake celebrity photos that are indistinguishable from real ones. Researchers achieved that feat after training the AI for only 20 days.

For now, those images have appeared only in labs for study purposes. But in an image-driven society where social media posts can quickly go viral, the public relations representatives of the near future may have their hands full issuing clarifications about fake pictures of their clients.

Exciting, Yet Eyebrow-Raising Times

These examples are undoubtedly thought-provoking. Although some of these technologies will help humans, others could spark catastrophes.

With that in mind, members of society should take care to exercise critical thinking skills even more than usual and realize technology sometimes has a dark side.