Description
This workshop will introduce you to the concepts of building Large Language Model-powered applications in Java using Quarkus and the LangChain4j library.
In the first part of the workshop, we take a guided tour through the possibilities of the LangChain4j framework. LangChain4j makes it a piece of cake to chat with virtually any LLM provider (OpenAI, Gemini, Hugging Face, Azure, AWS, ...), generate AI images straight from your Java application with DALL·E and Gemini, have LLMs return POJOs, and interact with local models running on your machine. We will explain the fundamental building blocks of LLM-powered applications, show you how to chain them together into AI services, and how to interact with your knowledge base using advanced RAG.

Then, we take a deeper dive into the Quarkus LangChain4j integration. We'll show how little code is needed when using Quarkus, how live reload makes experimenting with prompts a breeze, and finally we'll look at its native image generation capabilities, aiming to get your AI-powered app deployment-ready in no time. We will also look at interoperating via the Anthropic Model Context Protocol (MCP), as well as doing in-process inferencing (for fun!). By the end of this session, you will have all the technical knowledge you need, along with plenty of inspiration for designing the apps of the future.

In the second part, we get our hands dirty and develop an enterprise application together that integrates generative AI. Starting from the basics, we will add increasingly complex features, showcasing the different aspects of working with LLMs.
By the end of this day, you should have a good understanding of prompt engineering, model parameters, AI security aspects, and RAG, as well as the foundations for building agentic applications.
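To give a taste of the "AI services" and "LLMs returning POJOs" ideas mentioned above, here is a minimal sketch of a Quarkus LangChain4j AI service. It assumes a quarkus-langchain4j chat-model extension (e.g. OpenAI) is on the classpath and configured; the interface, method, and enum names are illustrative, not part of the workshop material:

```java
package org.acme;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Quarkus generates an implementation of this interface at build time
// and wires it to the configured chat model.
@RegisterAiService
public interface TriageService {

    enum Sentiment { POSITIVE, NEUTRAL, NEGATIVE }

    @SystemMessage("You are a customer-support triage assistant.")
    @UserMessage("Classify the sentiment of this review: {review}")
    Sentiment triage(String review); // the LLM's reply is mapped to a typed value, not a raw string
}
```

Injecting `TriageService` into any CDI bean and calling `triage(...)` is all the application code needed; prompt templating, model invocation, and output parsing are handled by the framework.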
Topics
- How to use the Quarkus DevUI to try out AI models even before writing any code.
- How to integrate LLMs (Large Language Models) in your Quarkus application.
- How to build a chatbot using Quarkus.
- How to configure the LLM and pass prompts to it.
- How to build agentic systems that respond to function calling from the LLM.
- How to build simple and advanced RAG (Retrieval-Augmented Generation) patterns.
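The function-calling topic above can be sketched with LangChain4j's tool mechanism: you expose plain Java methods that the LLM may decide to invoke. This is a hedged sketch assuming a Quarkus LangChain4j setup; the class, method, and return value are illustrative:

```java
package org.acme;

import dev.langchain4j.agent.tool.Tool;
import jakarta.enterprise.context.ApplicationScoped;

// A CDI bean whose annotated methods are offered to the LLM as callable tools.
@ApplicationScoped
public class WeatherTools {

    @Tool("Returns the current temperature in Celsius for a given city")
    public double currentTemperature(String city) {
        // Illustrative stub; a real implementation would call a weather API.
        return 21.5;
    }
}
```

Tools are attached to an AI service via `@RegisterAiService(tools = WeatherTools.class)`; when the model emits a function call, the framework invokes the matching method and feeds the result back into the conversation.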
Requirements
- JDK 21 or later - Download it from Adoptium
- Model Endpoint (provided by the workshop organizer)
- Podman or Docker - See Podman installation or Docker installation
- If you use Podman, Podman Desktop provides a great user experience to manage your containers: Podman Desktop
- Git (optional) - See Git installation
- An IDE with Java support (IntelliJ, Eclipse, VSCode with the Java extension, etc.)
- A terminal