In IDF (inverse document frequency) terms, a word that occurs in every document carries little discriminative value. IDF measures how unique a word is across the entire corpus.
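The idea can be sketched in a few lines. This is a minimal illustration, not a library implementation; the corpus and the unsmoothed log formula are assumptions for the example.

```python
import math

def idf(term, corpus):
    # Document frequency: how many documents contain the term
    df = sum(1 for doc in corpus if term in doc)
    # A term found in every document gets IDF = log(1) = 0,
    # i.e. it tells us nothing about any particular document
    return math.log(len(corpus) / df) if df else 0.0

corpus = [
    {"the", "cat", "sat"},
    {"the", "dog", "ran"},
    {"the", "bird", "flew"},
]

print(idf("the", corpus))  # 0.0 -- appears everywhere, not useful
print(idf("cat", corpus))  # log(3) ~ 1.10 -- appears once, distinctive
```

Production libraries usually add smoothing (e.g. `log((1 + N) / (1 + df)) + 1`) to avoid zeros and division by zero.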

Contents

Voice assistants, NLP and neural networks
Magical Marketing
Voice assistants, NLP and neural networks

Voice assistants

Much has been written about "conversational" artificial intelligence (AI). Most development effort focuses on vertical chatbots, messenger platforms, and opportunities for startups (Amazon Alexa, Apple Siri, Facebook M, Google Assistant, Microsoft Cortana, Yandex Alice). AI's ability to understand natural language is still limited, so building a full-fledged conversational assistant remains an open problem. Still, the work described below is a useful starting point for anyone aiming to make a breakthrough in voice assistants.

Researchers from Montreal, Georgia Tech, Microsoft and Facebook have built a neural network capable of generating context-sensitive responses in conversation. The system can be trained on large volumes of unstructured Twitter conversations. A recurrent neural network architecture addresses the sparsity issues that arise when integrating contextual information into the classical statistical model, which allows the system to take into account what was said earlier. The model shows a clear improvement over both context-sensitive and context-insensitive baselines from machine translation and information retrieval.
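The core trick of conditioning generation on earlier turns can be sketched as mixing a context vector and a message vector into the generator's initial hidden state. All dimensions, weights, and vectors below are random placeholders for illustration; in the actual model they are learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding / hidden size (toy value)

# Hypothetical fixed-size representations of the earlier turns
# (context) and the most recent utterance (message)
context_vec = rng.standard_normal(d)
message_vec = rng.standard_normal(d)

# Illustrative parameter matrices; learned in the real model
W_c = rng.standard_normal((d, d))
W_m = rng.standard_normal((d, d))

# The response generator starts from a hidden state that blends
# context and message, so it can "take into account what was said earlier"
h0 = np.tanh(W_c @ context_vec + W_m @ message_vec)
print(h0.shape)  # (8,)
```

From `h0`, a recurrent decoder would then emit the response token by token.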

NLP

Developed in Hong Kong, the Neural Responding Machine (NRM) is a response generator for short-text conversation. NRM uses a general encoder-decoder framework: response generation is formalized as a decoding process driven by a latent representation of the input text, with both encoding and decoding implemented as recurrent neural networks. NRM is trained on a large dataset of single-turn dialogues collected from microblogs. It was established empirically that NRM generates grammatically correct and contextually relevant responses for 75% of input texts, outperforming state-of-the-art models in the same setting.
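The encoder-decoder pattern itself is simple to sketch: an RNN encoder compresses the input tokens into a latent vector, and an RNN decoder expands that vector back into output tokens. Everything here is a toy with random weights, shown only to make the data flow concrete; NRM's actual architecture adds attention and learned parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
V, d = 20, 8  # toy vocabulary size and hidden size

E = rng.standard_normal((V, d)) * 0.1      # token embeddings
W_enc = rng.standard_normal((d, d)) * 0.1  # encoder recurrence
U_enc = rng.standard_normal((d, d)) * 0.1  # encoder input weights
W_dec = rng.standard_normal((d, d)) * 0.1  # decoder recurrence
U_dec = rng.standard_normal((d, d)) * 0.1  # decoder input weights
W_out = rng.standard_normal((d, V)) * 0.1  # hidden -> vocabulary logits

def encode(tokens):
    # Run a plain RNN over the input; the final hidden state
    # serves as the latent representation of the whole post
    h = np.zeros(d)
    for t in tokens:
        h = np.tanh(W_enc @ h + U_enc @ E[t])
    return h

def decode(latent, max_len=5, bos=0):
    # Greedily emit tokens, conditioned on the latent representation
    h, tok, out = latent, bos, []
    for _ in range(max_len):
        h = np.tanh(W_dec @ h + U_dec @ E[tok])
        tok = int(np.argmax(h @ W_out))
        out.append(tok)
    return out

reply = decode(encode([3, 7, 7, 1]))
print(reply)  # five token ids; meaningless until the weights are trained
```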

The latest model, Google's Neural Conversational Model, offers a simple approach to dialogue modeling using the sequence-to-sequence framework. The model keeps a conversation going by predicting the next sentence from the previous sentences of the dialogue. Its strength is that it can be trained end-to-end, requiring far fewer hand-crafted rules.
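The "predict the next sentence from the previous sentences" framing amounts to turning a transcript into (source, target) training pairs. The dialogue below is an invented example; the paper itself trains on IT help-desk and movie-subtitle corpora.

```python
# A toy transcript; each element is one turn of the dialogue
dialogue = [
    "hi",
    "hello how can i help",
    "my vpn is broken",
    "have you tried restarting it",
]

# Each training pair maps everything said so far to the next sentence,
# which is exactly what a sequence-to-sequence model learns to predict
pairs = [
    (" ".join(dialogue[:i]), dialogue[i])
    for i in range(1, len(dialogue))
]

for src, tgt in pairs:
    print(repr(src), "->", repr(tgt))
```

A seq2seq model trained on millions of such pairs learns to continue conversations without any hand-written dialogue rules.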