Workshop

Agentic LLMs in Action: A Deep Dive from LLMs to Multi-Agent Systems

Thursday, 24 April 2025, 9:00–17:00, Gurten Pavillon

Description

Large Language Models (LLMs) have become the backbone of modern AI systems, powering applications from natural language understanding to complex, multi-agent systems. This Masterclass explores advanced techniques for designing, optimizing, and deploying LLM-based systems, with a particular focus on agentic architectures, retrieval-augmented generation (RAG), advanced reasoning strategies, and hands-on implementations. Participants will gain practical knowledge of cutting-edge tools and frameworks, including LangGraph Studio, and will work with Jupyter notebooks provided for live coding demonstrations and experimentation.

While the hands-on sessions will focus on RAG, agentic setups, and deployment, attendees will receive notebooks covering all topics, including fine-tuning, optimization, and more, enabling them to continue exploring independently after the workshop.

Topics

  • Introduction to LLM Architectures
    ◦ Overview of transformer models, including attention mechanisms and embeddings.
    ◦ Key distinctions between Encoder-Only, Decoder-Only, and Encoder-Decoder transformers.
  • Optimization Techniques
    ◦ Fine-tuning strategies, including PEFT, LoRA, and QLoRA.
    ◦ Quantization and sharding to optimize performance and reduce resource consumption (see the quantization sketch after this list).
    ◦ Sampling and decoding methods (see the sampling sketch after this list).
  • Prompt Engineering
  • Retrieval-Augmented Generation (RAG)
    ◦ Fundamentals of RAG: how it combines retrieval mechanisms with generative models (see the RAG sketch after this list).
    ◦ Five levels of text splitting: effective strategies to improve the performance of your language model applications.
    ◦ Advanced RAG architectures, including Corrective RAG, Self-RAG, and Fusion RAG.
    ◦ How to generate an evaluation dataset and use it to improve and monitor the RAG system.
    ◦ RAG pain points and how to solve them.
  • Introduction to Agents
    ◦ Core principles of agentic LLMs and their applications.
    ◦ One-step agent architectures and tool calling (see the LangGraph sketch after this list).
    ◦ Introduction to LangGraph Studio for debugging and refining agent workflows.
  • Advanced Multi-Agent Architectures and Systems
    ◦ Advanced Agent Architectures:
      ▪ Language Agent Tree Search (LATS): Use reflection and reward-driven Monte Carlo tree searches to explore agent actions.
      ▪ Planning and Execution Agents: Implement basic planning agents capable of executing a series of tasks.
      ▪ Advanced Reflection: Prompt agents to reflect on and revise outputs for improved reasoning.
      ▪ Reflection: Guide agents to critique missing or superfluous details in their responses.
      ▪ Self-Discovering Agents: Analyze and design agents capable of learning about and optimizing their own capabilities.
    ◦ Building Multi-Agent Systems:
      ▪ Specialized agents for retrieval, query transformation, hallucination checking, helpfulness assessment, and complex multi-step tasks.
      ▪ Visualizing data with code-execution agents in secure sandbox environments like E2B.
  • Deploying and Monitoring Agentic Systems
    ◦ Hands-on implementation of agentic systems using:
      ▪ Streamlit for creating interactive front-end interfaces.
      ▪ DigitalOcean for secure deployment and real-time monitoring.
      ▪ LangGraph Studio for debugging, visualization, and refinement of agent architectures.
      ▪ Secure sandbox environments (e.g., E2B) for coding and code-execution agent architectures.
    ◦ Using LangSmith and LangFuse to track agents and LLMs.
  • Practical Applications
    ◦ Building intelligent systems for summarizing, retrieving, and generating context-sensitive responses.
    ◦ Deploying agents for decision-making, task planning, and multi-step problem-solving.
    ◦ Implementing LLMs for sentiment analysis, entity recognition, and more.
  • Responsible AI Practices
    ◦ Strategies for maintaining AI transparency, explainability, and fairness.
    ◦ Safeguards for ethical deployment of agent-based systems.
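
To give a flavour of the hands-on notebooks, here is a minimal sketch of the quantization idea listed above, using a symmetric absmax int8 scheme in plain NumPy. It is an illustration only; the function names and the toy weight matrix are placeholders, not the workshop's actual code.

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric absmax quantization: one float scale per tensor, weights stored as int8."""
    scale = max(float(np.abs(w).max()), 1e-12) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs quantization error:", np.abs(w - dequantize_int8(q, scale)).max())
```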
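
The sampling and decoding item can likewise be illustrated with a small temperature-plus-top-k sampler over a toy logit vector (framework-free; the logit values are made up):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 0.8, top_k: int = 3,
                      rng: np.random.Generator | None = None) -> int:
    """Temperature-scale the logits, keep the top_k candidates, sample from their softmax."""
    rng = rng or np.random.default_rng()
    scaled = logits / max(temperature, 1e-6)
    top = np.argsort(scaled)[::-1][:top_k]            # indices of the k largest logits
    probs = np.exp(scaled[top] - scaled[top].max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

toy_logits = np.array([2.1, 1.7, 0.2, -0.5, 1.9])     # one logit per token in a toy vocabulary
print(sample_next_token(toy_logits, temperature=0.7, top_k=3))
```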
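
For the RAG fundamentals, the retrieve-then-generate pattern compresses to roughly the sketch below. The embedding function is a random stub (so the retrieval scores are not meaningful) and the final LLM call is omitted; the document texts, names, and dimensions are placeholders rather than the workshop's implementation.

```python
import numpy as np

documents = [
    "RAG augments an LLM prompt with retrieved context passages.",
    "LangGraph expresses agent workflows as graphs of nodes and edges.",
    "Quantization shrinks model weights to lower-precision integers.",
]

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model; real systems use e.g. a sentence-transformer."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

index = np.stack([embed(d) for d in documents])        # toy in-memory vector index

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)                      # cosine similarity (unit vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_rag_prompt(query: str) -> str:
    context = "\n".join(f"- {chunk}" for chunk in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_rag_prompt("What does RAG add to a prompt?"))  # this prompt would be sent to the LLM
```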
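
Finally, a one-step agent with a single tool can be wired up as a tiny LangGraph graph. This sketch assumes the langgraph package's StateGraph/START/END API; the LLM decision and the tool are stubbed so the example stays self-contained and runnable without an API key.

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class AgentState(TypedDict):
    question: str
    answer: str

def calculator_tool(expression: str) -> str:
    """Toy tool; a real agent would expose tools to the LLM via tool/function calling."""
    return str(eval(expression, {"__builtins__": {}}))  # demo only; never eval untrusted input

def agent_node(state: AgentState) -> dict:
    # A real node would let the LLM decide whether to call the tool and with what arguments;
    # here the call is hard-coded to keep the sketch self-contained.
    result = calculator_tool("6 * 7")
    return {"answer": f"Tool result {result} for: {state['question']}"}

graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_edge(START, "agent")
graph.add_edge("agent", END)
app = graph.compile()

print(app.invoke({"question": "What is 6 times 7?"}))
```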

Requirements

  • Intermediate-level Python programming experience.
  • Familiarity with basic machine learning concepts, including neural networks and transformers, is helpful but not required; the basics of transformers will be covered.

Target Audience

This workshop is designed for AI practitioners, data scientists, and developers seeking to build advanced agent-based systems powered by LLMs. Participants will learn how to design, implement, and optimize intelligent agents for real-world applications, leveraging state-of-the-art tools and frameworks.

Speaker

Nicole Königstein
Chief AI Officer & Head of AI at quantmate, Author