LLM Challenges Assessment

Task 1: I developed a text-generation pipeline around the pre-trained "codellama/CodeLlama-7b-hf" model with a fixed set of generation parameters. The code generates text from the prompt "Write a Python function to add two numbers?" and prints the result to the terminal. The generated text is the model's attempt to complete the supplied prompt under constraints such as maximum length, temperature, and top-k sampling; a minimal sketch of the pipeline is given below.

Task 2: I developed a document chatbot that remembers previous turns of the conversation. It uses chromadb as the vector database, Hugging Face's "all-mpnet-base-v2" model for embeddings, Hugging Face's "t5-small" model for summarizing the chat history, and Databricks' Dolly 2.0 (3B parameters) LLM for text generation. The document corpus was a PDF of material on the Indian Constitution; see the second sketch below. Hopefully, this post helps remove some of the obscurity surrounding embeddings, vector stores, and parameter tuning on chains and vector store retrievers.
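Here is a minimal sketch of the Task 1 pipeline, assuming the transformers library (plus torch and accelerate). The exact parameter values used in the assessment are not stated, so the `max_length`, `temperature`, and `top_k` values below are illustrative:

```python
# Task 1 sketch: text generation with CodeLlama-7b-hf via transformers.
# Parameter values are illustrative, not the assessment's exact settings.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory
    device_map="auto",          # let accelerate place the model
)

prompt = "Write a Python function to add two numbers?"
outputs = generator(
    prompt,
    max_length=200,     # cap the total sequence length
    do_sample=True,     # sample instead of greedy decoding
    temperature=0.7,    # soften the token distribution
    top_k=50,           # sample only from the 50 most likely tokens
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```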
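And here is a condensed sketch of the Task 2 chatbot, assuming chromadb, sentence-transformers, and transformers are installed. The original build wires these pieces together through chains and vector store retrievers; this version connects them directly, and the chunking, prompt format, and sample text are only illustrative:

```python
# Task 2 sketch: a document chatbot with summarized conversation memory.
# Chunks, prompt format, and generation settings are illustrative.
import chromadb
from sentence_transformers import SentenceTransformer
from transformers import pipeline

# Embedding model for indexing the Constitution PDF text.
embedder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Vector store: one Chroma collection holding document chunks.
client = chromadb.Client()
collection = client.create_collection("constitution")

# Hypothetical pre-extracted chunks; real code would parse the PDF first.
chunks = ["Article 14: The State shall not deny to any person equality ..."]
collection.add(
    ids=[str(i) for i in range(len(chunks))],
    documents=chunks,
    embeddings=embedder.encode(chunks).tolist(),
)

# t5-small condenses the running chat history so it fits in the prompt.
summarizer = pipeline("summarization", model="t5-small")
# Dolly 2.0 (3B) answers questions grounded in the retrieved chunks.
generator = pipeline(
    model="databricks/dolly-v2-3b", trust_remote_code=True, device_map="auto"
)

history = []

def ask(question: str) -> str:
    # Retrieve the chunks closest to the question embedding.
    hits = collection.query(
        query_embeddings=embedder.encode([question]).tolist(), n_results=2
    )
    context = "\n".join(hits["documents"][0])
    # Summarize prior turns so the chatbot "remembers" the conversation.
    memory = ""
    if history:
        memory = summarizer(" ".join(history), max_length=60)[0]["summary_text"]
    prompt = (
        f"Context:\n{context}\n\nConversation so far: {memory}\n\n"
        f"Question: {question}\nAnswer:"
    )
    answer = generator(prompt, max_new_tokens=128)[0]["generated_text"]
    history.append(f"Q: {question} A: {answer}")
    return answer

print(ask("What does the Constitution say about equality?"))
```

The summarization step is what gives the chatbot its memory on a small context window: instead of replaying the full transcript, each new prompt carries only a t5-small digest of the earlier turns.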

10/26/2023

Tags: #python #deep-learning