Join us for a one-day workshop on Large Language Model Operations (LLMOps). This event aims to give participants in-depth knowledge of the latest advancements in developing and productionizing Generative AI systems.
During the workshop you will develop and deploy an LLM-powered application that follows LLMOps best practices for data ingestion, observability, prompt management, cost control, and LLM evaluation.
Upon completing the workshop, attendees will gain a comprehensive understanding of the importance and applicability of specific LLMOps building blocks.
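To give a flavor of what "cost control" and "observability" can mean in practice, here is a minimal sketch in Python. It assumes the official `openai` client package; the model name and the logging destination are illustrative placeholders, not the workshop's actual stack.

```python
import time
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def tracked_completion(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Call the model and log latency and token usage for cost tracking."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    latency = time.perf_counter() - start
    usage = response.usage
    # In production this would feed a metrics backend, not stdout
    print(f"model={model} latency={latency:.2f}s "
          f"prompt_tokens={usage.prompt_tokens} "
          f"completion_tokens={usage.completion_tokens}")
    return response.choices[0].message.content


print(tracked_completion("Summarize LLMOps in one sentence."))
```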
Grzybowska 56, 00-844 Warszawa
This is a one-day event (9:00-16:00) with breaks between sessions.
All participants will receive training materials as PDF files: slides covering the theory and an exercise manual with detailed descriptions of all exercises.
Session #1 - Large Language Models (LLM) and LLMOps
Session #2 - Building an LLM-powered application
Session #3 - LLM systems deployment, observability, management and evaluation
Session #4 - Advanced topics
Last tickets at this price:
STANDARD PRICE
Participation in additional on-site workshops (extra fee, on April 8th)
1 290* PLN NET
1 690* PLN NET
In this workshop, we’ll introduce the key components of a multitier architecture designed to scale and streamline LLM productization at Team Internet—a global leader in online presence and advertising, serving millions of customers worldwide. For us, scalability and speed are critical to delivering high-performance services, including LLM applications.
Through hands-on coding exercises and real-world use cases from the domain name industry, we'll demonstrate how standardization enhances flexibility and accelerates development.
By the end of the session, you’ll know the key building blocks that will help you efficiently build and scale LLM applications in production.
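As an illustration of what one such building block might look like, here is a minimal sketch in Python: a provider-agnostic chat-model interface plus a retry wrapper with exponential backoff. The names (`ChatModel`, `with_retries`, `EchoModel`) are hypothetical; the session's actual architecture is not detailed on this page.

```python
import random
import time
from typing import Protocol


class ChatModel(Protocol):
    """Provider-agnostic interface: swap vendors without touching callers."""
    def complete(self, prompt: str) -> str: ...


def with_retries(model: ChatModel, prompt: str,
                 attempts: int = 3, base_delay: float = 0.5) -> str:
    """Retry transient provider failures with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return model.complete(prompt)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
    raise RuntimeError("unreachable")


class EchoModel:
    """Stand-in implementation so the sketch runs without any API key."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


print(with_retries(EchoModel(), "hello"))
```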
Basic familiarity with LLMs, API integrations, and software engineering principles is recommended.
This presentation delves into creating a real-time analytics platform by leveraging cost-effective Change Data Capture (CDC) tools like Debezium for seamless data ingestion from sources such as Oracle into Kafka. We’ll explore how to build a resilient data lake and data mesh architecture using Apache Flink, ensuring data loss prevention, point-in-time recovery, and robust schema evolution to support agile data integration. Participants will learn best practices for establishing a scalable, real-time data pipeline that balances performance, reliability, and flexibility, enabling efficient analytics and decision-making.
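For orientation, here is a minimal sketch of the consuming side of such a pipeline: reading Debezium change events from a Kafka topic in Python. It assumes the `kafka-python` package and Debezium's JSON envelope; the topic name and broker address are placeholders (Debezium names topics `<prefix>.<schema>.<table>` by default).

```python
import json
from kafka import KafkaConsumer  # assumes the kafka-python package

consumer = KafkaConsumer(
    "oracle-server.inventory.orders",       # placeholder topic name
    bootstrap_servers="localhost:9092",     # placeholder broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    if message.value is None:
        continue  # tombstone record emitted after a delete
    # With the JSON converter and schemas enabled, the change event
    # sits under "payload"; adjust if schemas are disabled.
    payload = message.value.get("payload", message.value)
    op = payload["op"]  # 'c' create, 'u' update, 'd' delete, 'r' snapshot read
    print(op, payload.get("before"), payload.get("after"))
```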
I want to show you the principles behind the most popular DBMS, covering non-trivial use cases like self-contained dynamic reports, WebAssembly support and pure serverless DB hosting. While SQLite works well with aggregated datasets, DuckDB, its younger cousin, focuses on full OLAP support, allowing processing of gigabytes of data in no time on low-end boxes (or even laptops). We will browse various useful features and interfaces in DuckDB, emphasizing scenarios that anyone can implement in their daily work.
Data engineers, analysts and architects. Knowledge of SQL is required. If you'd like to take part in all the exercises, make sure to have DuckDB installed (https://duckdb.org/#quickinstall)
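If you want to warm up before the session, here is a minimal sketch of DuckDB's Python API querying a CSV file in place, with no load step; the file and column names are placeholders.

```python
import duckdb  # pip install duckdb

# In-memory connection; DuckDB can also persist to a single database file
con = duckdb.connect()

# Query the CSV directly; DuckDB infers the schema from the file
con.sql("""
    SELECT product, SUM(amount) AS revenue
    FROM 'sales.csv'
    GROUP BY product
    ORDER BY revenue DESC
    LIMIT 5
""").show()
```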
Join us for a hands-on workshop where we’ll break down the core concepts behind Steep—the metric-first BI tool. We’ll cover the semantic layer, metric catalog, entities, and what true self-service analytics can look like. Then, we’ll put it all into practice by building a Steep workspace from scratch and inviting workshop participants to join. In just 60 minutes, you’ll go from connecting a database to defining metrics and creating reports.
In just 60 minutes, you'll learn how to shift from traditional, dashboard-heavy BI to a modern, metric-first approach. You'll see how Steep simplifies data workflows, improves collaboration between data and business teams, and empowers anyone to explore and trust metrics. Whether you're a data analyst or a business stakeholder, you'll walk away with a new perspective on how BI should work.
This workshop is perfect for data professionals who want a faster, scalable way to deliver trusted data across the business and anyone looking to explore a new, better approach to BI. No specific technical knowledge is required, but basic familiarity with the modern data stack will help.
Join us for an interactive workshop exploring how Apache Flink powers real-time data platforms. Through a real-world use case, we’ll demonstrate how Flink enables seamless data ingestion (including CDC), real-time analytics (including SQL), and AI-driven applications. Expect a mix of technical capabilities showcased in practical examples and strategic insights, making this session valuable for both engineers and decision-makers. We’ll discuss the business impact of real-time processing, showcase Flink’s core capabilities in achieving it, and share success stories that highlight its role in modern data architectures. Don’t miss this chance to see Flink in action and get inspired to embrace data streaming!
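As a taste of the SQL side of Flink, here is a minimal PyFlink sketch (assuming the `apache-flink` package): a synthetic `datagen` source aggregated over ten-second processing-time windows. The table and field names are illustrative and not taken from the session itself.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming Table API environment
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Synthetic source using Flink's built-in 'datagen' connector
t_env.execute_sql("""
    CREATE TABLE clicks (
        user_id INT,
        url STRING,
        ts AS PROCTIME()
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5',
        'fields.user_id.min' = '1',
        'fields.user_id.max' = '10'
    )
""")

# Real-time aggregation: clicks per user in 10-second tumbling windows
result = t_env.sql_query("""
    SELECT user_id,
           TUMBLE_START(ts, INTERVAL '10' SECOND) AS window_start,
           COUNT(*) AS clicks
    FROM clicks
    GROUP BY user_id, TUMBLE(ts, INTERVAL '10' SECOND)
""")

result.execute().print()  # streams window results to stdout until stopped
```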