Assessment of LLM Bootcamp

This notebook contains answers to two questions.

1. Code generation: We load the dataset using the `datasets` library from Hugging Face and define a function `generate_code_from_instructions` that takes an instruction, tokenizes it, generates code with the model, and decodes the result. We then iterate over the dataset, extracting the instruction and the reference code output for each example, generate code for each instruction with `generate_code_from_instructions`, and compare it with the reference output.
2. ChatGPT on your own data: A custom PDF chatbot built on top of the Llama 2 13B model from Hugging Face using LangChain.
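The generate-and-compare loop from question 1 can be sketched without the model itself. This is a minimal, dependency-free illustration: the stub generator stands in for the real tokenize → `model.generate` → decode call, and the dataset, field names (`instruction`, `output`), and exact-match scoring are assumptions for the sketch, not details from the notebook.

```python
# Sketch of the evaluation loop: iterate over (instruction, reference) pairs,
# generate code for each instruction, and score exact matches after
# normalizing whitespace.

def normalize(code: str) -> str:
    """Whitespace-insensitive key so formatting differences don't count as mismatches."""
    return " ".join(code.split())

def exact_match_rate(examples, generate_fn):
    """examples: iterable of dicts with 'instruction' and 'output' keys (assumed schema)."""
    hits = 0
    total = 0
    for ex in examples:
        generated = generate_fn(ex["instruction"])
        hits += normalize(generated) == normalize(ex["output"])
        total += 1
    return hits / total if total else 0.0

# Stub standing in for generate_code_from_instructions; a real run would
# tokenize the instruction, call model.generate, and decode the output ids.
def stub_generate(instruction):
    return "def add(a, b):\n    return a + b"

examples = [
    {"instruction": "Write a function that adds two numbers.",
     "output": "def add(a, b): return a + b"},
]
print(exact_match_rate(examples, stub_generate))  # 1.0 after normalization
```

Exact match is a crude metric for code; the same structure works with any pairwise scorer (BLEU, CodeBLEU, or unit-test execution) swapped into the comparison.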
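The PDF chatbot in question 2 follows the usual retrieval-augmented flow: split the document text into chunks, retrieve the chunks most relevant to a question, and pass them to the model as context. A dependency-free sketch of that flow follows; the real notebook uses LangChain components (a document loader, text splitter, vector store, and the Llama 2 13B model), while here word-overlap scoring stands in for embedding similarity and the sample text is invented for illustration.

```python
# Minimal retrieval-augmented QA sketch. Stand-ins for the LangChain pieces:
# chunking ~ a text splitter, overlap scoring ~ embedding similarity,
# and prompt assembly ~ a retrieval QA chain feeding the LLM.

def split_into_chunks(text, chunk_size=200, overlap=50):
    """Fixed-size character chunks with overlap, like a character text splitter."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def overlap_score(question, chunk):
    """Word-overlap similarity; a real pipeline would use embeddings."""
    q = set(question.lower().split())
    c = set(chunk.lower().split())
    return len(q & c)

def retrieve(question, chunks, k=2):
    """Return the k chunks that share the most words with the question."""
    return sorted(chunks, key=lambda c: overlap_score(question, c), reverse=True)[:k]

def build_prompt(question, chunks):
    """Assemble retrieved context plus the question into one LLM prompt."""
    context = "\n---\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

doc = "Llama 2 is a family of open models. The 13B variant balances quality and cost."
prompt = build_prompt("Which Llama 2 variant balances quality and cost?",
                      split_into_chunks(doc, chunk_size=60, overlap=10))
print(prompt.splitlines()[0])
```

The final prompt would be sent to the Llama 2 13B model; only the retrieval and prompt-building steps are shown here.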

10/9/2023

Tags:  

#deep-learning