By Yue Qi, SAS
Sequence models, especially the recurrent neural network (RNN) and its variants, have gained tremendous popularity over the last few years because of their unparalleled ability to handle unstructured sequential data. These models are called "recurrent" because they work with data that occurs in a sequence, such as text data and time-stamped data. In this blog post, I'll be focusing on text data as an example. I hope to write more blog posts about other interesting applications using an RNN later, so tell me in the comments which applications you want to see discussed next.
Since most machine learning models are unable to handle text data, and text data is ubiquitous in modern analytics, it is essential to have an RNN in your machine learning toolbox. RNNs excel at both natural language understanding and natural language generation, including semantic analysis, translation, voice-to-text, sentiment classification, and image captioning. For example, a chatbot for customer service must first understand a customer's request, using natural language understanding techniques, and then either route the request to the correct human responders or use natural language generation techniques to produce replies that read as naturally as possible.
RNN models are good at modeling text data because they can easily identify and "remember" important words in a sentence, such as entities, verbs, and strong adjectives. The unique structure of an RNN evaluates the words in a sentence one by one, usually from the beginning to the end. It uses the meaning of the previous words to infer the meaning of the current word or to predict the most likely next word.
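The word-by-word processing described above can be sketched as a vanilla RNN forward pass. This is a minimal illustration in Python with NumPy, not SAS code from the article: the toy vocabulary, the parameter names (`W_xh`, `W_hh`, `b_h`), and the random initialization are all assumptions for demonstration; a real model would learn these weights from data.

```python
import numpy as np

# Hypothetical toy setup for illustration only.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat"]
vocab_size, hidden_size = len(vocab), 4

# Randomly initialized parameters (a trained model would learn these).
W_xh = rng.normal(scale=0.1, size=(hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

def one_hot(word):
    """Encode a word as a one-hot vector over the toy vocabulary."""
    v = np.zeros(vocab_size)
    v[vocab.index(word)] = 1.0
    return v

def rnn_forward(words):
    """Process words left to right, carrying context in the hidden state."""
    h = np.zeros(hidden_size)  # empty "memory" before the first word
    for w in words:
        # Each step mixes the current word with everything seen so far,
        # which is how the network "remembers" earlier parts of the sentence.
        h = np.tanh(W_xh @ one_hot(w) + W_hh @ h + b_h)
    return h

h_final = rnn_forward(["the", "cat", "sat"])
print(h_final.shape)  # (4,)
```

The final hidden state `h_final` summarizes the whole sentence; in a real model it would feed a classifier (for sentiment, say) or a softmax layer that predicts the next word.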
To read the complete article on SAS, click here.