So how does an AI platform actually go about answering questions or creating content?

To answer a question, an application first performs a document search, looking for documents that contain the terms in the query, since those documents are more likely than not to contain a relevant answer. This step depends on having correctly labelled data.
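As a rough illustration, the sketch below does this kind of term matching over a tiny in-memory collection: a document only becomes a candidate if it contains every term in the query. The documents, the tokenize helper and the search function are invented for illustration, not taken from any particular platform.

```python
# A minimal sketch of term-based document search over a tiny, illustrative
# collection. A document is a candidate answer only if it contains every
# term in the query.

documents = {
    "doc1": "The Eiffel Tower is in Paris and was completed in 1889.",
    "doc2": "Paris is the capital of France.",
    "doc3": "The Colosseum is a landmark in Rome.",
}

def tokenize(text):
    """Lowercase the text and split it into simple word tokens."""
    return set(text.lower().replace(".", "").replace(",", "").split())

def search(query):
    """Return the documents containing every term in the query."""
    query_terms = tokenize(query)
    return [
        doc_id
        for doc_id, text in documents.items()
        if query_terms <= tokenize(text)   # all query terms appear in the document
    ]

print(search("Eiffel Tower Paris"))   # ['doc1']
```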

It then uses the weighting, or distance, assigned to the documents deemed most relevant to rank them, and picks out the specific passages within those documents that relate to its analysis of the question, in order to provide an answer.
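One common way to compute such a weighting is TF-IDF scoring with cosine similarity as the distance. The sketch below ranks the candidate documents against the question and then picks out the single sentence that best matches it; the scoring scheme, helper names and example texts are assumptions made purely for illustration.

```python
# A minimal sketch of document ranking and passage selection, assuming
# TF-IDF weights and cosine similarity as the "distance" between the
# question and each candidate document.
import math
from collections import Counter

documents = [
    "The Eiffel Tower was completed in 1889. It stands in Paris.",
    "Paris is the capital of France. It is known for its museums.",
]

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def tfidf_vector(tokens, doc_freq, n_docs):
    """Weight each term by its frequency, discounted by how common it is overall."""
    counts = Counter(tokens)
    return {
        term: count * math.log((1 + n_docs) / (1 + doc_freq[term]))
        for term, count in counts.items()
    }

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Document frequency of each term across the collection.
doc_tokens = [tokenize(d) for d in documents]
doc_freq = Counter(term for tokens in doc_tokens for term in set(tokens))

question = "When was the Eiffel Tower completed?"
q_vec = tfidf_vector(tokenize(question), doc_freq, len(documents))

# Rank the documents by similarity to the question.
scores = [(cosine(q_vec, tfidf_vector(tokens, doc_freq, len(documents))), doc)
          for tokens, doc in zip(doc_tokens, documents)]
best_doc = max(scores)[1]

# Pick out the sentence within the best document that best matches the question.
sentences = [s.strip() for s in best_doc.split(".") if s.strip()]
best_sentence = max(
    sentences,
    key=lambda s: cosine(q_vec, tfidf_vector(tokenize(s), doc_freq, len(documents))),
)
print(best_sentence)   # -> "The Eiffel Tower was completed in 1889"
```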

Sentiment analysis also has to work out when similar, or even identical, words carry different meanings, and this is crucial for language processing. It relies on matching words against the other words they frequently appear alongside in order to determine meaning, and on word embeddings, which take into account not only a word's frequency but also its context.
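The sketch below shows the simplest version of that idea: counting which words appear near one another in a toy corpus. The same surface form “bank” turns up next to both “river” and “money”, and it is this surrounding context, rather than raw frequency alone, that embedding models such as word2vec learn from. The corpus and window size here are purely illustrative.

```python
# A minimal sketch of context counting: each word is described by the words
# that co-occur with it inside a small window, using an invented toy corpus.
from collections import Counter, defaultdict

corpus = [
    "we walked along the river bank at dusk",
    "the bank lent money to the company",
    "reeds grow on the river bank",
    "the bank charges money for transfers",
]

WINDOW = 2  # how many words either side count as context

# Build co-occurrence counts: word -> Counter of its neighbouring words.
cooccurrence = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if j != i:
                cooccurrence[word][words[j]] += 1

# The single surface form "bank" is seen next to both "river" and "money",
# so its context vector records both usages rather than just its frequency.
print(cooccurrence["bank"].most_common())
# e.g. [('the', 4), ('river', 2), ('money', 2), ...]
```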

Sentiment analysis therefore needs to contextualise not only the question but also the retrieved documents, so that the answers it offers match the sentiment of the question. The application then performs information extraction over the relevant document collection, applying extraction patterns to pull out the answer.
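As a rough sketch of what an extraction pattern might look like, the example below applies a regular expression of the form “&lt;subject&gt; was &lt;event&gt; in &lt;year&gt;” to a handful of retrieved sentences. The pattern, the sentences and the choice of regular expressions as the extraction mechanism are all illustrative assumptions.

```python
# A minimal sketch of pattern-based information extraction: a simple
# regular expression is applied to retrieved sentences to pull out
# (subject, event, year) facts.
import re

retrieved = [
    "The Eiffel Tower was completed in 1889.",
    "Gustave Eiffel was born in 1832.",
    "The tower is repainted every seven years.",
]

# Pattern: "<subject> was <event> in <year>"
pattern = re.compile(
    r"(?P<subject>[A-Z][\w ]+?) was (?P<event>completed|born|built) in (?P<year>\d{4})"
)

for sentence in retrieved:
    match = pattern.search(sentence)
    if match:
        print(match.group("subject"), "|", match.group("event"), "|", match.group("year"))

# Output:
# The Eiffel Tower | completed | 1889
# Gustave Eiffel | born | 1832
```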

Text generation applications and chatbots take this to another level, using a language model referred to as “generative”. It generates words drawn from different “bags” in order to create a narrative or a conversation flow, and these bags can be labelled according to the topics they each represent.

In these instances, the generated text is drawn from the word bags in proportion to the topical distribution of the narrative or conversation.
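A toy version of this idea might look like the sketch below, which samples words from topic-labelled bags in proportion to an assumed topic distribution. The topics, words and weights are invented for illustration, and real chatbots rely on far richer language models than this.

```python
# A minimal sketch of topic-proportional word sampling: words are drawn
# from labelled "bags" according to how strongly each topic features in
# the conversation. All values here are illustrative.
import random

random.seed(42)  # make the sketch reproducible

word_bags = {
    "weather": ["sunny", "rain", "cloud", "forecast", "wind"],
    "travel":  ["flight", "hotel", "beach", "ticket", "luggage"],
}

# How strongly each topic features in the conversation so far.
topic_distribution = {"weather": 0.7, "travel": 0.3}

def generate(n_words):
    """Draw words from the bags in proportion to the topic distribution."""
    topics = list(topic_distribution)
    weights = [topic_distribution[t] for t in topics]
    output = []
    for _ in range(n_words):
        topic = random.choices(topics, weights=weights)[0]  # pick a bag
        output.append(random.choice(word_bags[topic]))      # pick a word from it
    return " ".join(output)

print(generate(8))
```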

See the article below on how this is applied in the real world.