BeyondLLM 0.2.2 Release

Aug 5, 2024

We are excited to announce the release of Beyond LLM version 0.2.2. This update introduces several significant enhancements designed to improve the functionality and usability of our framework. Below, we delve deeper into the key features of this release, providing insights and code examples to help you get started.

Key Highlights:

  • Memory Integration: The memory feature allows Beyond LLM to retain context from previous interactions, enabling the model to generate more personalized and contextually relevant responses. This is particularly beneficial for applications like chatbots, where maintaining continuity in conversations is essential.

  • Python 3.12 Support: Beyond LLM now supports Python 3.12, ensuring compatibility with the latest features and improvements in the Python ecosystem.

  • Langchain RAG Evaluation using BeyondLLM: Users can now evaluate Retrieval-Augmented Generation (RAG) pipelines built with Langchain using BeyondLLM's evaluation metrics, assessing how well responses are supported by the retrieved information.

  • User-Configurable Ollama Server URL: Users can now configure the Ollama server URL, offering greater flexibility for deployment and integration (see the short sketch after this list).

  • WeaviateDB Support with Updated Examples: Beyond LLM now includes support for WeaviateDB, a powerful vector database for storing and querying embeddings. The integration is accompanied by updated examples to help users get started quickly.
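
For the Ollama change, the short sketch below shows the intent. The OllamaModel wrapper comes from BeyondLLM's LLM integrations, but the base_url parameter name is an assumption here, so verify it against the documentation.

# Hedged sketch: pointing BeyondLLM's Ollama wrapper at a remote server.
# The base_url parameter name is an assumption; previously the server
# address was not user-configurable.
from beyondllm.llms import OllamaModel

llm = OllamaModel(
    model="llama2",  # any model pulled on your Ollama server
    base_url="http://192.168.1.10:11434",  # now configurable by the user
)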

Memory Integration

The memory feature allows Beyond LLM to retain context from previous interactions, enabling the model to generate more personalized and contextually relevant responses. This is particularly beneficial for applications like chatbots, where maintaining continuity in conversations is essential.

This feature enhances the overall user experience and makes the interactions more natural and meaningful. For instance, if a user has previously asked about a specific topic, the model can recall that context in future interactions, leading to a more fluid conversation. This capability is crucial for applications that require a deeper understanding of user intent and history.

Why We Added It

Without memory, every query is handled in isolation. By retaining a buffer of previous inputs, Beyond LLM can build on earlier turns, which makes multi-turn interactions more engaging, coherent, and natural.

Code Implementation:
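
Below is a minimal sketch of how the memory feature can be wired into a pipeline. It assumes the ChatBufferMemory class from beyondllm.memory along with an example document path, questions, and window size; check the documentation for the exact parameters.

# Minimal sketch: RAG pipeline with conversational memory.
# ChatBufferMemory and window_size follow the docs; the file path
# and questions are placeholders.
from beyondllm import source, retrieve, generator
from beyondllm.memory import ChatBufferMemory

memory = ChatBufferMemory(window_size=3)  # retain the last 3 interactions

data = source.fit("sample.pdf", dtype="pdf", chunk_size=512, chunk_overlap=50)
retriever = retrieve.auto_retriever(data, type="normal", top_k=3)

pipeline = generator.Generate(
    question="What is the document about?",
    retriever=retriever,
    memory=memory,
)
print(pipeline.call())

# A follow-up question can now draw on the buffered conversation:
follow_up = generator.Generate(
    question="Can you summarize that in one line?",
    retriever=retriever,
    memory=memory,
)
print(follow_up.call())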


Weaviate Integration

Beyond LLM now includes support for WeaviateDB, a versatile and scalable vector database designed for high-performance similarity search and efficient management of vector embeddings. This integration enables users to store and query embeddings effectively, making it ideal for applications that require efficient data handling and retrieval.

By leveraging WeaviateDB, users can take advantage of its powerful indexing and querying capabilities. This support enhances the overall performance and scalability of Beyond LLM, making it a more robust and reliable framework for building language model applications.

What It Can Do

The Weaviate integration lets users store their embeddings in a dedicated, production-grade vector database and query them efficiently, bringing Weaviate's powerful indexing and querying capabilities to their language model applications.

Code Implementation:
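
The sketch below illustrates the idea. The class name WeaviateVectorDb and its parameters (url, index_name, api_key) are assumptions based on this release's notes; refer to the updated examples in the repository for the exact signature.

# Hedged sketch: backing the retriever with WeaviateDB.
# The WeaviateVectorDb name and its parameters are assumptions;
# see the updated examples for the exact API.
from beyondllm import source, retrieve
from beyondllm.vectordb import WeaviateVectorDb

vectordb = WeaviateVectorDb(
    url="https://my-cluster.weaviate.network",  # your Weaviate instance
    index_name="BeyondLLMDemo",
    api_key="YOUR_WEAVIATE_API_KEY",
)

data = source.fit("sample.pdf", dtype="pdf", chunk_size=512, chunk_overlap=50)
retriever = retrieve.auto_retriever(data, type="normal", top_k=3, vectordb=vectordb)

print(retriever.retrieve("What is the document about?"))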


Langchain Evaluation with BeyondLLM

Another significant addition in version 0.2.2 is the integration of Langchain’s Retrieval-Augmented Generation (RAG) capabilities. This feature allows users to evaluate their language models by leveraging Langchain’s powerful document retrieval and processing tools.

By combining the strengths of Beyond LLM and Langchain, users can create and assess sophisticated question-answering systems. The RAG evaluation process focuses on three key aspects: context relevancy, answer relevancy, and groundedness. This comprehensive assessment ensures that the generated responses are relevant, accurate, and well-supported by the retrieved documents.

Once you have built your RAG pipeline with LangChain, you can evaluate it using BeyondLLM as follows. The snippet below includes a minimal scoring prompt and a number-extraction helper so it runs standalone; the linked notebook contains the full versions.

# Evaluate a LangChain RAG pipeline using BeyondLLM-style evals.
# Assumes `llm` (a chat model) and `retriever` come from the LangChain
# pipeline built earlier.
import re

# CONTEXT_RELEVENCE is the scoring prompt template; a minimal example is
# shown here, and the cookbook notebook defines the full version.
CONTEXT_RELEVENCE = (
    "On a scale of 0 to 10, rate how relevant the following context is to the question.\n"
    "Question: {question}\nContext: {context}\n"
    "Respond with the number only."
)

def extract_number(text):
    # Pull the first numeric score out of the model's response
    match = re.search(r"\d+(?:\.\d+)?", text)
    return match.group() if match else "0"

## Get Context Relevancy
def get_context_relevancy(llm, query, context):
    total_score = 0
    score_count = 0

    for content in context:
        score_response = llm.invoke(CONTEXT_RELEVENCE.format(question=query, context=content))

        # Chat models return a message object; read its content attribute
        score_str = score_response.content

        # Accumulate the score
        total_score += float(extract_number(score_str))
        score_count += 1

    average_score = total_score / score_count if score_count > 0 else 0
    return f"Context Relevancy Score: {round(average_score, 1)}"

# Example query
query = "what causes heart diseases?"

# Retrieve relevant documents based on the user query
retrieved_docs = retriever.invoke(query)

# Prepare the context from the retrieved documents
context = [doc.page_content for doc in retrieved_docs]

print(get_context_relevancy(llm, query, context))
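
Analogous helpers with their own prompt templates score answer relevancy and groundedness; the complete versions of all three metrics are in the notebook linked below.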

The complete notebook is available in the BeyondLLM Cookbook:

https://github.com/aiplanethub/beyondllm/blob/main/cookbook/evaluate_langchain_rag_pipeline_beyondllm.ipynb

Conclusion

With version 0.2.2, Beyond LLM continues to evolve, providing powerful new features such as memory integration, Weaviate support, and Langchain RAG evaluation. We encourage you to explore these enhancements to create more interactive and intelligent applications.

Call for Community

Beyond LLM is completely open source. Feel free to raise any questions by opening issues on our GitHub repository. We value your feedback and look forward to connecting with you there.

Open your PR here: GitHub Repository

To get started, check out the documentation for more details: Documentation

Don’t forget to ⭐️ and fork the repository!