{"course":{"productid":34499,"modality":6,"active":true,"language":"en","title":"Rapid Application Development Using Large Language Models","productcode":"RADLLM","vendorcode":"NV","vendorname":"Nvidia","fullproductcode":"NV-RADLLM","courseware":{"has_ekit":false,"has_printkit":true,"language":""},"url":"https:\/\/portal.flane.ch\/course\/nvidia-radllm","objective":"<p>By participating in this workshop, you&rsquo;ll learn how to:\n<\/p>\n<ul>\n<li>Find, pull in, and experiment with the HuggingFace model repository and the associated transformers API<\/li><li>Use encoder models for tasks like semantic analysis, embedding, question-answering, and zero-shot classification<\/li><li>Use decoder models to generate sequences like code, unbounded answers, and conversations<\/li><li>Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation<\/li><\/ul>","essentials":"<ul>\n<li>Introductory deep learning, with comfort with PyTorch and transfer learning preferred. Content covered by DLI&rsquo;s Getting Started with Deep Learning or Fundamentals of Deep Learning courses, or similar experience is sufficient.<\/li><li>Intermediate Python experience, including object-oriented programming and libraries. 
Content covered by Python Tutorial (w3schools.com) or similar experience is sufficient.<\/li><\/ul>","outline":"<p><strong>Introduction<\/strong>\n<\/p>\n<ul>\n<li>Meet the instructor.<\/li><li>Create an account at courses.nvidia.com\/join<\/li><\/ul><p><strong>From Deep Learning to Large Language Models<\/strong>\n<\/p>\n<ul>\n<li>Learn how large language models are structured and how to use them:<ul>\n<li>Review deep learning and classification-based reasoning, and see how language modeling emerges from them.<\/li><li>Discuss transformer architectures, interfaces, and intuitions, as well as how they are scaled up and adapted to create state-of-the-art LLM solutions.<\/li><\/ul><\/li><\/ul><p><strong>Specialized Encoder Models<\/strong>\n<\/p>\n<ul>\n<li>Learn about the different task specifications:<ul>\n<li>Explore cutting-edge HuggingFace encoder models.<\/li><li>Use already-tuned models for interesting tasks such as token classification, sequence classification, range prediction, and zero-shot classification.<\/li><\/ul><\/li><\/ul><p><strong>Encoder-Decoder Models for Seq2Seq<\/strong>\n<\/p>\n<ul>\n<li>Learn how sequence-to-sequence LLMs predict unbounded sequences:<ul>\n<li>Introduce a decoder component for autoregressive text generation.<\/li><li>Discuss cross-attention for sequence-as-context formulations.<\/li><li>Discuss general approaches for multi-task, zero-shot reasoning.<\/li><li>Introduce multimodal formulation for sequences, and explore some examples.<\/li><\/ul><\/li><\/ul><p><strong>Decoder Models for Text Generation<\/strong>\n<\/p>\n<ul>\n<li>Learn about decoder-only GPT-style models and how they can be specified and used:<ul>\n<li>Explore when the decoder-only formulation works well, and discuss its limitations.<\/li><li>Discuss model size, special deployment techniques, and considerations.<\/li><li>Pull in some large text-generation models, and see how they work.<\/li><\/ul><\/li><\/ul><p><strong>Stateful LLMs<\/strong>\n<\/p>\n<ul>\n<li>Learn how to 
elevate language models above stochastic parrots via context injection:<ul>\n<li>Demonstrate modern LLM composition techniques for history and state management.<\/li><li>Discuss retrieval-augmented generation (RAG) for external environment access.<\/li><\/ul><\/li><\/ul><p><strong>Assessment and Q&amp;A<\/strong>\n<\/p>\n<ul>\n<li>Review key learnings.<\/li><li>Take a code-based assessment to earn a certificate.<\/li><\/ul>","summary":"<p>Recent advancements in both the techniques and accessibility of large language models (LLMs) have opened up unprecedented opportunities for businesses to streamline their operations, decrease expenses, and increase productivity at scale. Enterprises can also use LLM-powered apps to provide innovative and improved services to clients or strengthen customer relationships. For example, enterprises could provide customer support via AI virtual assistants or use sentiment analysis apps to extract valuable customer insights.<\/p>\n<p>In this course, you&rsquo;ll gain a strong understanding and practical knowledge of LLM application development by exploring the open-source ecosystem, including pretrained LLMs, so you can quickly start developing LLM-based applications.<\/p>\n<p><em>Please note that once a booking has been confirmed, it is non-refundable. 
This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.<\/em><\/p>","objective_plain":"By participating in this workshop, you\u2019ll learn how to:\n\n\n\n- Find, pull in, and experiment with the HuggingFace model repository and the associated transformers API\n- Use encoder models for tasks like semantic analysis, embedding, question-answering, and zero-shot classification\n- Use decoder models to generate sequences like code, unbounded answers, and conversations\n- Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation","essentials_plain":"- Introductory deep learning experience; comfort with PyTorch and transfer learning is preferred. Content covered by DLI\u2019s Getting Started with Deep Learning or Fundamentals of Deep Learning courses, or similar experience is sufficient.\n- Intermediate Python experience, including object-oriented programming and libraries. 
Content covered by Python Tutorial (w3schools.com) or similar experience is sufficient.","outline_plain":"Introduction\n\n\n\n- Meet the instructor.\n- Create an account at courses.nvidia.com\/join\nFrom Deep Learning to Large Language Models\n\n\n\n- Learn how large language models are structured and how to use them:\n- Review deep learning and classification-based reasoning, and see how language modeling emerges from them.\n- Discuss transformer architectures, interfaces, and intuitions, as well as how they are scaled up and adapted to create state-of-the-art LLM solutions.\nSpecialized Encoder Models\n\n\n\n- Learn about the different task specifications:\n- Explore cutting-edge HuggingFace encoder models.\n- Use already-tuned models for interesting tasks such as token classification, sequence classification, range prediction, and zero-shot classification.\nEncoder-Decoder Models for Seq2Seq\n\n\n\n- Learn how sequence-to-sequence LLMs predict unbounded sequences:\n- Introduce a decoder component for autoregressive text generation.\n- Discuss cross-attention for sequence-as-context formulations.\n- Discuss general approaches for multi-task, zero-shot reasoning.\n- Introduce multimodal formulation for sequences, and explore some examples.\nDecoder Models for Text Generation\n\n\n\n- Learn about decoder-only GPT-style models and how they can be specified and used:\n- Explore when the decoder-only formulation works well, and discuss its limitations.\n- Discuss model size, special deployment techniques, and considerations.\n- Pull in some large text-generation models, and see how they work.\nStateful LLMs\n\n\n\n- Learn how to elevate language models above stochastic parrots via context injection:\n- Demonstrate modern LLM composition techniques for history and state management.\n- Discuss retrieval-augmented generation (RAG) for external environment access.\nAssessment and Q&A\n\n\n\n- Review key learnings.\n- Take a code-based assessment to earn a 
certificate.","summary_plain":"Recent advancements in both the techniques and accessibility of large language models (LLMs) have opened up unprecedented opportunities for businesses to streamline their operations, decrease expenses, and increase productivity at scale. Enterprises can also use LLM-powered apps to provide innovative and improved services to clients or strengthen customer relationships. For example, enterprises could provide customer support via AI virtual assistants or use sentiment analysis apps to extract valuable customer insights.\n\nIn this course, you\u2019ll gain a strong understanding and practical knowledge of LLM application development by exploring the open-sourced ecosystem, including pretrained LLMs, that can help you get started quickly developing LLM-based applications.\n\nPlease note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.","skill_level":"Beginner","version":"1.0","duration":{"unit":"d","value":1,"formatted":"1 day"},"pricelist":{"List Price":{"US":{"country":"US","currency":"USD","taxrate":null,"price":500},"DE":{"country":"DE","currency":"EUR","taxrate":19,"price":500},"AT":{"country":"AT","currency":"EUR","taxrate":20,"price":500},"SE":{"country":"SE","currency":"EUR","taxrate":25,"price":500},"SI":{"country":"SI","currency":"EUR","taxrate":20,"price":500},"GB":{"country":"GB","currency":"GBP","taxrate":20,"price":420},"IT":{"country":"IT","currency":"EUR","taxrate":20,"price":995},"CA":{"country":"CA","currency":"CAD","taxrate":null,"price":690}}},"lastchanged":"2025-07-29T12:18:27+02:00","parenturl":"https:\/\/portal.flane.ch\/swisscom\/en\/json-courses","nexturl_course_schedule":"https:\/\/portal.flane.ch\/swisscom\/en\/json-course-schedule\/34499","source_lang":"en","source":"https:\/\/portal.flane.ch\/swisscom\/en\/json-course\/nvidia-radllm"}}