insAIde #51 : LLMs Can Implement RAG + 1st edition of the AI Governance Lab
A quick dive into Retrieval Augmented Generation
Before delving into the valuable contribution on LLMs by our guest authors, Dr. Silvia Cascianelli and Davide Morelli (see below), we are pleased to announce that registration is now open for our new ‘AI Governance Lab’.
🎓 The educational programme consists of 17 modules for a total duration of 50 hours (🗣️ 🇮🇹 taught in Italian), delivered fully remotely, and is designed to train experts in artificial intelligence governance, a field undergoing rapid development.
The AI Governance Lab benefits from a faculty of the highest level, recognised nationally and internationally, composed of professionals and academics with extensive experience in AI and its regulation.
Thanks to the collaboration between insAIde and DEXAI, we guarantee a state-of-the-art training course with a practical and theoretical focus that responds to contemporary challenges.
📅 Course: from 05/12/24 to 14/02/25, with a -30% Early Bird rate until 12/11/2024
For further details and to register, please visit our website: https://lnkd.in/dni3nppu
We are excited to accompany you on this journey towards excellence in the field of AI Governance! We look forward to seeing you in the classroom.
LLMs Can Implement RAG
🦄 Imagine you’re shopping online on your favorite horse riding gear website and you wonder whether Jessica von Bredow-Werndl was using a Baucher bit during the Paris 2024 Dressage Individual Grand Prix Freestyle, so you can decide whether it’s time for you to learn how to use one. You try asking the website’s customer service chatbot, just as you would ask the clerk at your favorite physical tack shop. Most likely, the chatbot is a Large Language Model (LLM), and even more likely, it will struggle to provide an accurate and helpful answer to such a complex and domain-specific question.
But fear not, fellow equestrian, RAG comes to the rescue!
💊 What’s RAG - Retrieval Augmented Generation?
The concept of RAG has emerged from research in both information retrieval and natural language processing. RAG is a technique to combine traditional information retrieval systems with the capabilities of LLMs. It entails retrieving relevant information from a vast knowledge base and then using that information to enhance the quality and relevance of the LLM's answers.
💊 How does RAG work?
When a user provides a query to an LLM that integrates RAG…
1. The LLM processes the query to extract its semantic meaning;
2. Relevant information is Retrieved from a knowledge base (e.g., via keyword search, semantic search, or dense passage retrieval);
3. The retrieved information is provided to the LLM as additional (Augmented) context;
4. The LLM Generates a response based on the user’s query and the retrieval-enriched context.
Easy-peasy 🤓
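The four steps above can be sketched end-to-end in a few lines of Python. Everything here is illustrative: the knowledge base is three toy passages, the “embeddings” are plain bag-of-words vectors, and the final LLM call is left as a stub; a production system would use a real embedding model, a vector database, and an actual LLM API.

```python
# Minimal, self-contained sketch of the RAG loop (illustrative only).
import math
import re
from collections import Counter

# Toy knowledge base standing in for equestrian websites, the shop
# catalog, and newsletters mentioned in the article.
KNOWLEDGE_BASE = [
    "The Baucher bit has a fixed cheek that offers a steady contact.",
    "Dressage freestyle tests are ridden to music chosen by the rider.",
    "Our shop catalog lists snaffle, Baucher, and pelham bits.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Step 2: rank knowledge-base passages by similarity to the query."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: cosine(q, embed(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Step 3: augment the user's query with the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

query = "Is a Baucher bit in the shop catalog?"
prompt = build_prompt(query, retrieve(query))
# Step 4 would pass `prompt` to an LLM (e.g., via an API call); omitted here.
print(prompt)
```

Swapping the bag-of-words vectors for dense neural embeddings turns this into the semantic/dense passage retrieval mentioned above; the surrounding retrieve-augment-generate structure stays the same.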
💊 Why is RAG Useful?
Integrating RAG into an LLM
Helps the LLM generate more accurate responses, thanks to the retrieval of factual information from external sources
Makes the LLM's output more relevant to the query
Gives the LLM access to a much broader and more up-to-date range of information than what is stored in its internal parameters
Enables the LLM to provide expert-level responses in specific domains
Improves reliability by ensuring the LLM answers from controlled information sources
🦄 So, to recap: by incorporating RAG, your favorite horse riding gear website’s chatbot can access a vast knowledge base of information (equestrian websites, the shop catalog, horse riding eventing newsletters) and tell you that yes, Jessica von Bredow-Werndl was using a Baucher bit in Paris, but this shouldn’t influence your decision on whether to start using one (better to ask your trainer… and your horse!).
Thanks for the contribution:
Dr. Silvia Cascianelli, PhD
Researcher, "Enzo Ferrari" Department of Engineering
Università degli Studi di Modena e Reggio Emilia
Davide Morelli
PhD Student - National Doctorate in Artificial Intelligence
Università degli Studi di Pisa
Host University: Università degli Studi di Modena e Reggio Emilia
"Enzo Ferrari" Department of Engineering
⏰ That's all for us, see you insAIde, next time!
Rocco Panetta , Federico Sartore , Vincenzo Tiani, LL.M. , Davide Montanaro , Gabriele Franco