Technology is changing rapidly: autonomous vehicles, connected devices, digital transformation, the Internet of Things (IoT), machine learning, artificial intelligence (AI), automation. The list goes on. And it has only begun.
I am often asked, “What is next for SAS? What will the future of analytics look like in 20 years?” My answer is simple: I do not try to predict the future. Instead, I examine the trends in technology and look for disruptive forces that affect the role and approach to analytics.
As 2018 begins, I see two common characteristics of disruptive technology trends: intelligence and automation.
If we want smart factories, smart cities, smart automobiles and smart homes, we expect the systems that drive them to be intelligent. That requires active, real-time learning systems that can generalize and optimize from a common set of rules, are always on and can personalize.
Over the last decade, a tremendous amount of effort has gone into optimizing how algorithms train. As a result, AI has made impressive advances, supported by supervised deep learning that trains deep neural networks to perform narrow, single-domain tasks. Supervised learning requires that we tell the algorithm the correct answer (the label) as it is exposed to many examples (big data). Even so, it is powerful and can create systems with superhuman capabilities.
Scientists at Stanford University trained a neural network to diagnose skin cancer as accurately as board-certified dermatologists. It required nearly 130,000 medical images. Humans learn differently. We do not need such a large amount of data, but we cannot learn as quickly as machines. As a result, it is often faster to train an algorithm than it is to train a human expert.
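The supervised setup described above can be sketched in a few lines. Below is a minimal, illustrative perceptron that learns the logical OR rule purely from labeled examples; the data, function names and learning rate are assumptions for the sketch, not anything from a production system.

```python
# Minimal sketch of supervised learning: a perceptron adjusts its
# weights using the known correct answer (the label) for each example.
# All data and names here are illustrative.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Fit weights so that (w . x + b > 0) matches each label."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred           # supervision: compare to the label
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Labeled examples of logical OR: the "correct answers" drive learning.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 1, 1, 1]
```

The Stanford skin-cancer classifier is the same idea at vastly larger scale: richer features, deeper networks and roughly 130,000 labeled images instead of four.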
Still, AI is not intelligent; it is a breathtaking example of algorithmic technology. Precisely because algorithms learn differently than humans, they look at things differently. They can see relationships and patterns that escape us.
As Tom Gruber points out in his TED Talk, the conversation we should have is how machines and algorithms can make us smarter, not how smart we can make the machines. Maybe we do not let the algorithm operate the supply chain autonomously, but look to it to suggest optimizations given the current configuration or state of the system. Suggest the next move; it might surprise us.
Our focus is shifting to optimizing how unsupervised methods can relate the patterns of the world and take the best actions in complex environments. And these methods are achieving impressive performance.
DeepMind’s AlphaGo consisted of neural networks trained on expert moves and millions of games. DeepMind’s AlphaGo Zero system trained entirely by playing itself, starting only from the rules of the game. It beat AlphaGo 100 to none. The chess version, Alpha Zero, beat one of the best existing chess programs after only 24 hours of training.
Beginning this year, systems based on reinforcement learning with little supervision will go beyond game play. Supply chain optimization, customer journey, predictive maintenance, data center operation and building automation are just a few of the many domains where we have built rule-based systems. We can now quickly train systems that apply these rules more optimally than human-generated logic.
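Reinforcement learning of the kind behind AlphaGo Zero can be illustrated on a toy problem. The sketch below uses tabular Q-learning, where an agent learns from reward alone, with no labeled answers, which action to take in each state. The corridor environment, state count and hyperparameters are illustrative assumptions for the sketch.

```python
import random

# Minimal sketch of reinforcement learning (tabular Q-learning) on a
# toy corridor: states 0..4, the agent moves left or right, and reward
# arrives only at the goal state. No labels, only trial and reward.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # left, right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action index]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            # Q-learning update: bootstrap from the best next action.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
policy = ["right" if qs[1] > qs[0] else "left" for qs in q[:GOAL]]
print(policy)  # greedy policy learned from reward alone
```

Starting from zero knowledge, the agent discovers the optimal "move right" policy in every state, which is the same principle, at a tiny scale, as AlphaGo Zero learning Go from nothing but the rules and self-play.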
We live in the era of big data. Data volumes will continue to increase, amplified by growing connectivity between us and inanimate objects. And in this era, automation has become a necessity.