Assessment

Task-1

For Task 1, I created a text-generation pipeline around the pre-trained "codellama/CodeLlama-7b-hf" model with specific generation parameters. The pipeline completes the prompt "Write a Python function to add two numbers?" and prints the generated text to the console, applying constraints such as maximum length, temperature, and top-k sampling (see the first sketch below).

Task-2

We built a document chatbot that remembers the chat history. It uses ChromaDB as the vector database, the "all-mpnet-base-v2" model from Hugging Face for embeddings, the "t5-small" model from Hugging Face for text summarization, and the Dolly 2.0 (3B-parameter) LLM for text generation. As source data, I used a PDF containing the Constitution of India (see the second sketch below). Hopefully, this article takes some of the mystery out of embeddings, vector stores, and parameter tuning on chains and vector-store retrievers.
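Here is a minimal sketch of how the Task-1 pipeline can be wired up, assuming the Hugging Face transformers library; the concrete parameter values (max_new_tokens, temperature, top_k) are illustrative stand-ins for the constraints described above, not the exact ones used.

```python
# Sketch of the Task-1 generation pipeline (transformers assumed);
# parameter values below are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-7b-hf",
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",          # place layers automatically across available devices
)

prompt = "Write a Python function to add two numbers?"
outputs = generator(
    prompt,
    max_new_tokens=128,  # length constraint on the completion
    do_sample=True,
    temperature=0.7,     # soften the next-token distribution
    top_k=50,            # sample only from the 50 most likely tokens
)
print(outputs[0]["generated_text"])
```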

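The Task-2 wiring can be sketched as follows. The mention of chains and vector-store retrievers suggests LangChain, but the source names no framework, so treat that as an assumption; the PDF file name, chunk sizes, and retriever settings are likewise illustrative. The t5-small model is used here to summarize the running chat history, which is one plausible reading of "text summarization" in a chatbot that remembers its conversation.

```python
# Sketch of the Task-2 document chatbot. LangChain is an assumption;
# the file name and chunking values are illustrative.
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import HuggingFacePipeline
from langchain.memory import ConversationSummaryMemory
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Load the Indian constitution PDF and split it into overlapping chunks
docs = PyPDFLoader("constitution.pdf").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks with all-mpnet-base-v2 and index them in ChromaDB
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2"
)
vectordb = Chroma.from_documents(chunks, embeddings)

# Dolly 2.0 (3B parameters) handles answer generation
llm = HuggingFacePipeline.from_model_id(
    model_id="databricks/dolly-v2-3b", task="text-generation"
)

# t5-small condenses the chat history into a running summary, which is
# how the bot "remembers" earlier turns of the conversation
summarizer = HuggingFacePipeline.from_model_id(
    model_id="t5-small", task="text2text-generation"
)
memory = ConversationSummaryMemory(
    llm=summarizer, memory_key="chat_history", return_messages=True
)

chatbot = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectordb.as_retriever(search_kwargs={"k": 3}),
    memory=memory,
)

print(chatbot({"question": "What does Article 21 of the constitution say?"})["answer"])
```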
10/26/2023

Tags:  

#deep-learning