Today, we dive into Retrieval Augmented Generation (RAG): a way to augment LLMs with additional data pulled from an external data store. The data is first encoded into vectors, which are stored in a vector database for fast retrieval. We are going to cover the following points, with rough code sketches after the list:
Indexing the data in a local index and augmenting an LLM with it
Indexing the data in a Pinecone index and augmenting an LLM with it
Providing the data source when answering questions with LLMs
Indexing a website and using its data to augment an LLM
Indexing a GitHub repo to ask questions about the code base
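To make the overall flow concrete, here is a minimal, framework-free sketch of the local-index case. It also shows how keeping a source field next to each chunk lets the answer cite where the information came from. The embedding model, the toy chunks, and the `call_llm` stub are assumptions for illustration, not necessarily what the video uses.

```python
# Minimal RAG loop: embed text chunks, keep them in a local in-memory "index",
# retrieve the closest chunks for a question, and stuff them into the prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works

# Toy corpus: each chunk keeps track of where it came from so we can cite sources.
chunks = [
    {"text": "RAG augments an LLM prompt with retrieved documents.", "source": "notes/rag.md"},
    {"text": "Vector databases store embeddings for fast similarity search.", "source": "notes/vectordb.md"},
]
vectors = embedder.encode([c["text"] for c in chunks], normalize_embeddings=True)

def retrieve(question: str, k: int = 2):
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = vectors @ q  # normalized vectors -> dot product == cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def call_llm(prompt: str) -> str:
    # Hypothetical stub: swap in your LLM client of choice here.
    return f"(LLM answer based on a prompt of {len(prompt)} characters)"

def answer(question: str) -> str:
    retrieved = retrieve(question)
    context = "\n".join(f"[{c['source']}] {c['text']}" for c in retrieved)
    prompt = (
        "Answer the question using only the context below and cite the sources.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("What does a vector database do?"))
```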
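For the Pinecone variant, the same chunks and embeddings can be pushed to a managed index instead of local memory. A rough sketch, assuming an existing index named `rag-demo` whose dimension matches the embedder (384 here) and the v3+ Python client; the exact client API has changed across versions, so adjust to yours:

```python
# Same idea, but the vectors live in a managed Pinecone index instead of local memory.
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("rag-demo")  # assumes an existing index matching the embedding dimension

# Upsert chunks with their text and source stored as metadata.
chunks = [
    {"id": "chunk-0", "text": "RAG augments an LLM prompt with retrieved documents.", "source": "notes/rag.md"},
    {"id": "chunk-1", "text": "Vector databases store embeddings for fast similarity search.", "source": "notes/vectordb.md"},
]
index.upsert(vectors=[
    {
        "id": c["id"],
        "values": embedder.encode(c["text"]).tolist(),
        "metadata": {"text": c["text"], "source": c["source"]},
    }
    for c in chunks
])

# Retrieval: embed the question and ask Pinecone for the nearest chunks.
question = "What does a vector database do?"
result = index.query(
    vector=embedder.encode(question).tolist(),
    top_k=2,
    include_metadata=True,
)
for match in result["matches"]:
    print(match["score"], match["metadata"]["source"], match["metadata"]["text"])
```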
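Indexing a website or a GitHub repository mostly changes the loading step: fetch the pages or walk a local clone, strip them down to plain text, and chunk before embedding exactly as above. A naive sketch, where the URL, repo path, and chunking parameters are placeholders:

```python
# Turning a web page and a code repository into text chunks that can be embedded
# just like the local documents above.
from pathlib import Path
import requests
from bs4 import BeautifulSoup

def chunk_text(text: str, size: int = 800, overlap: int = 100):
    """Naive fixed-size character chunking with a little overlap between chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

def load_website(url: str):
    """Fetch one page and strip the HTML down to visible text."""
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)
    return [{"text": c, "source": url} for c in chunk_text(text)]

def load_repo(repo_dir: str, extensions=(".py", ".md")):
    """Walk a locally cloned repository and chunk every source/doc file."""
    out = []
    for path in Path(repo_dir).rglob("*"):
        if path.is_file() and path.suffix in extensions:
            text = path.read_text(encoding="utf-8", errors="ignore")
            out += [{"text": c, "source": str(path)} for c in chunk_text(text)]
    return out

web_chunks = load_website("https://example.com/some-article")  # hypothetical URL
repo_chunks = load_repo("path/to/cloned-repo")                 # clone the repo first, e.g. with git clone
print(len(web_chunks), len(repo_chunks))
```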
The code used in the video is included in the full post.