<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE FL_Course SYSTEM "https://www.flane.de/dtd/fl_course095.dtd"><?xml-stylesheet type="text/xsl" href="https://portal.flane.ch/css/xml-course.xsl"?><course productid="35181" language="en" source="https://portal.flane.ch/swisscom/en/xml-course/nvidia-blape" lastchanged="2026-04-16T13:58:26+02:00" parent="https://portal.flane.ch/swisscom/en/xml-courses"><title>Building LLM Applications with Prompt Engineering</title><productcode>BLAPE</productcode><vendorcode>NV</vendorcode><vendorname>Nvidia</vendorname><fullproductcode>NV-BLAPE</fullproductcode><version>1.0</version><objective>&lt;p&gt;By the end of the workshop, you will:
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understand how to apply iterative prompt engineering best practices to create LLM-based applications for various language-related tasks.&lt;/li&gt;&lt;li&gt;Be proficient in using LangChain to organize and compose LLM workflows.&lt;/li&gt;&lt;li&gt;Write application code to harness LLMs for generative tasks, document analysis, chatbot applications, and more.&lt;/li&gt;&lt;/ul&gt;</objective><essentials>&lt;p&gt;This course is primarily intended for Python developers at an intermediate level or above with a solid understanding of LLM fundamentals.&lt;/p&gt;</essentials><outline>&lt;h4&gt;Course Introduction&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Orient to the main workshop topics, schedule, and prerequisites.&lt;/li&gt;&lt;li&gt;Learn why prompt engineering is core to interacting with Large Language Models (LLMs).&lt;/li&gt;&lt;li&gt;Discuss how prompt engineering can be used to develop many classes of LLM-based applications.&lt;/li&gt;&lt;li&gt;Learn about NVIDIA LLM NIM, which is used to deploy the workshop&amp;#039;s Llama 3.1 LLM.&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;Introduction to Prompting&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Get familiar with the workshop environment.&lt;/li&gt;&lt;li&gt;Create and view responses from your first prompts using the OpenAI API and LangChain.&lt;/li&gt;&lt;li&gt;Learn how to stream LLM responses and send LLMs prompts in batches, comparing differences in performance.&lt;/li&gt;&lt;li&gt;Begin practicing the process of iterative prompt development.&lt;/li&gt;&lt;li&gt;Create and use your first prompt templates.&lt;/li&gt;&lt;li&gt;Do a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs.&lt;/li&gt;&lt;/ul&gt;
&lt;h4&gt;LangChain Expression Language (LCEL), Runnables, and Chains&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Learn about LangChain runnables, and the ability to compose them into chains using LangChain Expression Language (LCEL).&lt;/li&gt;&lt;li&gt;Write custom functions and convert them into runnables that can be included in LangChain chains.&lt;/li&gt;&lt;li&gt;Compose multiple LCEL chains into a single larger application chain.&lt;/li&gt;&lt;li&gt;Exploit opportunities for parallel work by composing parallel LCEL chains.&lt;/li&gt;&lt;li&gt;Do a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs using LCEL and parallel execution.&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;Prompting With Messages&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Learn about two of the core chat message types, human and AI messages, and how to use them explicitly in application code.&lt;/li&gt;&lt;li&gt;Provide chat models with instructive examples by way of a technique called few-shot prompting.&lt;/li&gt;&lt;li&gt;Work explicitly with the system message, which will allow you to define an overarching persona and role for your chat models.&lt;/li&gt;&lt;li&gt;Use chain-of-thought prompting to augment your LLM&amp;#039;s ability to perform tasks requiring complex reasoning.&lt;/li&gt;&lt;li&gt;Manage messages to retain conversation history and enable chatbot functionality.&lt;/li&gt;&lt;li&gt;Do a mini-project where you build a simple yet flexible chatbot application capable of assuming a variety of roles.&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;Structured Output&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Explore some basic methods for using LLMs to generate structured data in batch for downstream use.&lt;/li&gt;&lt;li&gt;Generate structured output through a combination of Pydantic classes and LangChain&amp;#039;s `JsonOutputParser`.&lt;/li&gt;&lt;li&gt;Learn how to extract and tag data as you specify from long-form text.&lt;/li&gt;&lt;li&gt;Do a mini-project where you use structured data generation techniques to perform data extraction and document tagging on an unstructured text document.&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;Tool Use and Agents&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Create LLM-external functionality called tools, and make your LLM aware of their availability for use.&lt;/li&gt;&lt;li&gt;Create an agent capable of reasoning about when tool use is appropriate, and integrating the result of tool use into its responses.&lt;/li&gt;&lt;li&gt;Do a mini-project where you create an LLM agent capable of utilizing external API calls to augment its responses with real-time data.&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;Final Review&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Review key learnings and answer questions.&lt;/li&gt;&lt;li&gt;Earn a certificate of competency for the workshop.&lt;/li&gt;&lt;li&gt;Complete the workshop survey.&lt;/li&gt;&lt;li&gt;Get recommendations for the next steps to take in your learning journey.&lt;/li&gt;&lt;/ul&gt;</outline><objective_plain>By the end of the workshop, you will:



- Understand how to apply iterative prompt engineering best practices to create LLM-based applications for various language-related tasks.
- Be proficient in using LangChain to organize and compose LLM workflows.
- Write application code to harness LLMs for generative tasks, document analysis, chatbot applications, and more.</objective_plain><essentials_plain>This course is primarily intended for Python developers at an intermediate level or above with a solid understanding of LLM fundamentals.</essentials_plain><outline_plain>Course Introduction



- Orient to the main workshop topics, schedule, and prerequisites.
- Learn why prompt engineering is core to interacting with Large Language Models (LLMs).
- Discuss how prompt engineering can be used to develop many classes of LLM-based applications.
- Learn about NVIDIA LLM NIM, which is used to deploy the workshop's Llama 3.1 LLM.
Introduction to Prompting



- Get familiar with the workshop environment.
- Create and view responses from your first prompts using the OpenAI API and LangChain.
- Learn how to stream LLM responses and send LLMs prompts in batches, comparing differences in performance.
- Begin practicing the process of iterative prompt development.
- Create and use your first prompt templates.
- Do a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs.
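The prompt-template step above can be sketched in plain Python. This is an illustrative stand-in for LangChain's `PromptTemplate`/`ChatPromptTemplate` (which the workshop uses); here `str.format` plays the template role, and the template text and feedback string are made up for the example:

```python
# Stand-in for a LangChain prompt template: a reusable prompt with a
# named placeholder, filled in per input at invocation time.
TEMPLATE = (
    "You are a concise assistant.\n"
    "Summarize the following customer feedback in one sentence:\n\n{feedback}"
)

def render_prompt(feedback: str) -> str:
    """Fill the template's placeholder to produce a concrete prompt string."""
    return TEMPLATE.format(feedback=feedback)

prompt = render_prompt("The checkout flow kept timing out on mobile.")
```

The same template can then be rendered over a whole batch of inputs, which is the pattern the mini-project applies.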

LangChain Expression Language (LCEL), Runnables, and Chains



- Learn about LangChain runnables, and the ability to compose them into chains using LangChain Expression Language (LCEL).
- Write custom functions and convert them into runnables that can be included in LangChain chains.
- Compose multiple LCEL chains into a single larger application chain.
- Exploit opportunities for parallel work by composing parallel LCEL chains.
- Do a mini-project where you perform a combination of analysis and generative tasks on a batch of inputs using LCEL and parallel execution.
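The composition idea behind LCEL can be sketched without LangChain at all. The `Runnable` class below is a toy stand-in for `langchain_core`'s runnables (not the real API): it shows how the `|` operator pipes one step's output into the next, which is the pattern this module practices.

```python
# Toy runnable: wraps a function and supports "|" composition, mimicking
# how LCEL chains are built from runnables.
class Runnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b yields a new runnable that feeds a's output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

strip = Runnable(str.strip)
lower = Runnable(str.lower)
exclaim = Runnable(lambda s: s + "!")

chain = strip | lower | exclaim        # composed left to right
result = chain.invoke("  Hello LCEL  ")  # -> "hello lcel!"
```

In real LCEL the same shape appears as `prompt | llm | parser`, and parallel branches are expressed by composing runnables side by side.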
Prompting With Messages


- Learn about two of the core chat message types, human and AI messages, and how to use them explicitly in application code.
- Provide chat models with instructive examples by way of a technique called few-shot prompting.
- Work explicitly with the system message, which will allow you to define an overarching persona and role for your chat models.
- Use chain-of-thought prompting to augment your LLM's ability to perform tasks requiring complex reasoning.
- Manage messages to retain conversation history and enable chatbot functionality.
- Do a mini-project where you build a simple yet flexible chatbot application capable of assuming a variety of roles.
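The message structure behind few-shot prompting with a system persona can be sketched with plain role/content dicts (the OpenAI-style chat format that LangChain's message classes wrap). The classification task and example reviews are invented for illustration:

```python
# Assemble a chat transcript: system persona, worked examples as
# alternating human/AI messages, then the real query last.
def build_few_shot_messages(examples, user_input):
    messages = [{"role": "system",
                 "content": "You classify product reviews as positive or negative."}]
    for review, label in examples:
        messages.append({"role": "user", "content": review})       # human message
        messages.append({"role": "assistant", "content": label})   # AI message
    messages.append({"role": "user", "content": user_input})
    return messages

examples = [("Loved it, works great.", "positive"),
            ("Broke after two days.", "negative")]
messages = build_few_shot_messages(examples, "Battery life is excellent.")
```

Appending each model reply back onto this list is also how conversation history is retained for chatbot functionality.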
Structured Output


- Explore some basic methods for using LLMs to generate structured data in batch for downstream use.
- Generate structured output through a combination of Pydantic classes and LangChain's `JsonOutputParser`.
- Learn how to extract and tag data as you specify from long-form text.
- Do a mini-project where you use structured data generation techniques to perform data extraction and document tagging on an unstructured text document.
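The workshop pairs Pydantic classes with LangChain's `JsonOutputParser`; this stdlib-only sketch shows the underlying idea: prompt the model to emit JSON matching a schema, then parse and validate the reply. The reply string and field names below are simulated, not real model output:

```python
import json

# Simulated LLM reply to a prompt that requested JSON with "title" and
# "topics" fields; in practice this comes back from the model.
simulated_llm_reply = '{"title": "Q3 earnings call", "topics": ["revenue", "guidance"]}'

def parse_tagged_document(raw: str) -> dict:
    """Parse the model's JSON reply and check the fields we asked for."""
    data = json.loads(raw)
    for field in ("title", "topics"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return data

record = parse_tagged_document(simulated_llm_reply)
```

Validated records like this are what makes batch extraction and document tagging safe to hand to downstream code.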
Tool Use and Agents


- Create LLM-external functionality called tools, and make your LLM aware of their availability for use.
- Create an agent capable of reasoning about when tool use is appropriate, and integrating the result of tool use into its responses.
- Do a mini-project where you create an LLM agent capable of utilizing external API calls to augment its responses with real-time data.
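The tool-use loop can be sketched as: the model emits a structured tool call, application code dispatches it to the matching function, and the observation is fed back for the next turn. The tool, its name, and the hard-coded "decision" below are all hypothetical stand-ins; in the workshop an LLM agent produces the call:

```python
# Hypothetical tool; a real one would call a live weather API.
def get_current_temperature(city: str) -> str:
    return f"21 C in {city}"

# Registry the agent is made aware of, mapping tool names to functions.
TOOLS = {"get_current_temperature": get_current_temperature}

def run_tool_call(call: dict) -> str:
    """Dispatch a model-issued tool call to the registered function."""
    return TOOLS[call["name"]](**call["arguments"])

# Simulated agent decision (an LLM would generate this structure).
simulated_call = {"name": "get_current_temperature",
                  "arguments": {"city": "Zurich"}}
observation = run_tool_call(simulated_call)  # fed back to the model next turn
```

The agent's reasoning step is deciding *whether* and *which* tool to call; the dispatch-and-observe loop shown here is what the application code around it does.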
Final Review


- Review key learnings and answer questions.
- Earn a certificate of competency for the workshop.
- Complete the workshop survey.
- Get recommendations for the next steps to take in your learning journey.</outline_plain><duration unit="d" days="1">1 day</duration><pricelist><price country="IT" currency="EUR">500.00</price><price country="AT" currency="EUR">500.00</price><price country="SE" currency="EUR">500.00</price><price country="SI" currency="EUR">500.00</price><price country="CH" currency="CHF">500.00</price><price country="DE" currency="EUR">500.00</price></pricelist><miles/></course>