Question answering, in this context, means question answering over your own document data: the relevant documents are fetched first, and the question is then asked of them. The LLM's response contains the answer to your question, grounded in the content of those documents.
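The fetch-then-ask flow can be sketched in a few lines of plain JavaScript. This is a toy illustration, not the actual `searchpinecone.js`: the hand-written 3-dimensional vectors, the sample documents, and the `cosine`/`retrieve`/`buildPrompt` helpers are all made up for the example. A real pipeline would get its vectors from an embedding model and its nearest-neighbor search from a vector store such as Pinecone.

```javascript
// Toy retrieval-based question answering, assuming documents are
// already embedded. Vectors here are hand-picked 3-D placeholders.
const documents = [
  { text: "Alcohol impairs pilot judgment and reaction time.", vector: [0.9, 0.1, 0.0] },
  { text: "Preflight checklists reduce procedural errors.",    vector: [0.1, 0.8, 0.1] },
];

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the topK documents most similar to the query vector.
function retrieve(queryVector, docs, topK = 1) {
  return [...docs]
    .sort((x, y) => cosine(queryVector, y.vector) - cosine(queryVector, x.vector))
    .slice(0, topK);
}

// Build the prompt the LLM would receive: fetched context first, question last.
function buildPrompt(question, queryVector) {
  const context = retrieve(queryVector, documents).map(d => d.text).join("\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}
```

The point is the shape of the prompt: the retrieved passages are pasted in as context, so the model answers from your documents rather than from its general training data.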
```
C:\Users\user\source\repos\DocumentAIBot-template> node .\searchpinecone.js
🧑‍🦱 Query: what is this document about?
🤖 Response: This document is about the effects of alcohol on pilot performance, including tolerance, impairment, and hangovers.

C:\Users\user\source\repos\DocumentAIBot-template> node .\searchpinecone.js
🧑‍🦱 Query: List me the effects alcohol has on a pilot
🤖 Response: Alcohol impairs the pilot’s ability to perform the required tasks during the operation of an aircraft. It can cause drowsiness, slowed reflexes, impaired judgment, and distorted vision and hearing. It also increases the risk of accidents and other dangerous situations.

C:\Users\user\source\repos\DocumentAIBot-template> node .\searchpinecone.js
🧑‍🦱 Query: Which type of beverage has the highest alcohol content?
🤖 Response: Vodka, as it contains .50 ounces of pure alcohol per 1 ounce serving.
```
❗ Notice how, in query 3, the AI bot was smart enough to digest the chart data as well! Pretty neat. 🧙‍♂️