GenAI Stack, developed by AI Planet, is a no-code, end-to-end developer platform for building production-grade LLM applications. Creating an LLM application with GenAI Stack is as easy as making a PowerPoint presentation. The platform streamlines the creation of dynamic apps with an intuitive drag-and-drop interface, allowing developers, data scientists, and anyone who understands how LLMs work to take an application from prototype to production with minimal effort.

Try Now: https://app.aiplanet.com
Quick Guide
This guide introduces the use case: Chat with your own PDF, which will allow you to ask questions in plain language and get relevant answers from your documents, making your information more accessible and useful than ever before.
Once we're done, we will essentially have a chatbot that can answer any of your queries about any page of AI Planet's GenAI Stack GitBook! Let's begin!
Note: Use the circular icon on a component to connect it to the next component.
Document Loader
The first step, of course, is to load your documents. The platform offers multiple loaders to suit your data needs. A PDF loader, for instance, requires you to upload your document by clicking the file-upload icon. For this use case, we will use the GitBook loader to load this very documentation! Simply drag and drop the loader onto the canvas and paste the URL.
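If you are curious what this step does behind the scenes, it corresponds roughly to a LangChain-style document-loader call. Here is a minimal sketch using LangChain's GitbookLoader; the URL below is a placeholder for whichever GitBook you want to load, not an official address:

```python
from langchain.document_loaders import GitbookLoader

# Crawl every page of a GitBook site, starting from its root URL
# (load_all_paths=True follows all pages, not just the one given).
loader = GitbookLoader("https://docs.example.gitbook.io", load_all_paths=True)
documents = loader.load()

print(f"Loaded {len(documents)} pages")
```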

Text Splitters
To ingest our data, we use a Text Splitter, which breaks the documents into smaller, well-defined chunks; as you will see later, this makes it easier to retrieve only the relevant content. We can specify the size of these chunks, the overlap between consecutive chunks, and the separator to split on.
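In code terms, this step maps onto a text splitter such as LangChain's CharacterTextSplitter. The chunk size, overlap, and separator below are illustrative values, not the platform's defaults:

```python
from langchain.text_splitter import CharacterTextSplitter

# Break the loaded documents into ~1000-character chunks, with 200
# characters of overlap so text at a boundary appears in both chunks.
splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(documents)
```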

Embeddings and Vectorstore
Now, this chunked data cannot be stored as-is, because raw text is slow to search. Instead, each chunk is converted into a unique numerical representation, like a fingerprint, through a process called embedding. This representation captures the key information and meaning within the chunk.

These vectors are then stored in a special database called a vector store, which is optimised for efficiently searching and retrieving information based on these numerical representations.
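As a sketch, embedding and storage together look like the snippet below. OpenAIEmbeddings paired with the Chroma vector store is just one possible combination, and the query string is only an example:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Turn each chunk into a vector ("fingerprint") and index the vectors.
embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set in the environment
vectorstore = Chroma.from_documents(chunks, embeddings)

# At query time the question is embedded the same way, and the store
# returns the chunks whose vectors sit closest to the question's vector.
relevant_chunks = vectorstore.similarity_search("What is GenAI Stack?", k=4)
```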
Large Language Model
The soul of this entire pipeline is the LLM, or Large Language Model. The LLM receives the retrieved context along with your question and returns an intelligent response. We can use models from OpenAI (GPT), Hugging Face Hub, Anthropic (Claude), Vertex AI, and other providers.
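Instantiating a model is a one-liner in each provider's integration. For example, with OpenAI via LangChain (the model name and temperature here are illustrative choices):

```python
from langchain.chat_models import ChatOpenAI

# temperature=0 keeps answers deterministic and grounded in the retrieved context.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
```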

Memory
We want our chatbot to remember the conversation history so it can answer questions that refer back to earlier queries. For this we have various memory components; for now, we use the ConversationBufferWindowMemory.
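This corresponds to LangChain's ConversationBufferWindowMemory. A minimal sketch, where k (the number of past exchanges kept) and the key names are choices made to line up with the chain example later in this guide:

```python
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last k question/answer turns, so the prompt stays bounded.
memory = ConversationBufferWindowMemory(
    k=3,
    memory_key="history",   # must match the {history} slot in the prompt
    input_key="question",   # the variable that carries the user's query
)
```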

Chain
Now it's time to integrate all our components!

Chains are sequences of calls to an LLM, a tool, or an external data-processing step. They are used to craft multi-step workflows and orchestrate intricate interactions with language models. We will use the RetrievalQA chain in this guide, which lets us pass in a prompt and add memory to our chatbot as well.
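Putting the pieces together in code: the sketch below wires the vector store, LLM, and memory from the earlier snippets into a RetrievalQA chain. The prompt template, variable names, and chain_type="stuff" are illustrative choices, not the platform's exact internals:

```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

# A prompt with slots for retrieved context, chat history, and the question.
template = """Answer the question using only the context below.

Context:
{context}

{history}
Question: {question}
Answer:"""
prompt = PromptTemplate(
    input_variables=["context", "history", "question"], template=template
)

# Wire retriever, LLM, prompt, and memory into one question-answering chain.
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",                    # pack all retrieved chunks into one prompt
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={"prompt": prompt, "memory": memory},
)

print(qa_chain.run("How do I connect components in GenAI Stack?"))
```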
Build Stack
Now we just click the run icon in the bottom-right corner to build the stack. This validates the entire stack and tells us whether all the components are working.

Great! Now that everything is built, the chat icon is activated, meaning the chatbot is ready to answer your queries! Click the icon to open the chat interface!

The chatbot can now answer queries about this documentation! That's great! This brings us to the end of this guide. Note that the quality of the responses depends on every component used, so feel free to try out different splitters, LLMs, prompts, and so on. Only when you explore can you build something great!