{"course":{"productid":34498,"modality":6,"active":true,"language":"fr","title":"Efficient Large Language Model (LLM) Customization","productcode":"ELLMC","vendorcode":"NV","vendorname":"Nvidia","fullproductcode":"NV-ELLMC","courseware":{"has_ekit":false,"has_printkit":true,"language":""},"url":"https:\/\/portal.flane.ch\/course\/nvidia-ellmc","objective":"<p>By the time you complete this course, you will be able to:\n<\/p>\n<ul>\n<li>Apply parameter-efficient fine-tuning techniques with limited data to accomplish tasks specific to your use cases.<\/li><li>Use LLMs to create synthetic data in the service of fine-tuning smaller LLMs to perform a desired task.<\/li><li>Drive down model size requirements through a virtuous cycle of combining synthetic data generation and model customization.<\/li><li>Build a generative application composed of multiple customized models you generate data for and create throughout the workshop.<\/li><\/ul>","essentials":"<ul>\n<li>Professional experience with the Python programming language.<\/li><li>Familiarity with fundamental deep learning topics such as model architecture, training, and inference.<\/li><li>Familiarity with a modern Python-based deep learning framework (PyTorch preferred).<\/li><li>Familiarity with out-of-the-box pretrained LLMs.<\/li><\/ul>","summary":"<p>Enterprises need to execute language-related tasks daily, such as text classification, content generation, sentiment analysis, and customer chat support, and they seek to do so in the most cost-effective way. Large language models can automate these tasks, and efficient LLM customization techniques can increase a model&rsquo;s capabilities and reduce the size of models required for use in enterprise applications. 
In this course, you&#039;ll go beyond prompt engineering LLMs and learn a variety of techniques to efficiently customize pretrained LLMs for your specific use cases&mdash;without engaging in the computationally intensive and expensive process of pretraining your own model or fine-tuning a model&#039;s internal weights. Using NVIDIA NeMo&trade; service, you&rsquo;ll learn various parameter-efficient fine-tuning methods to customize LLM behavior for your organization.<\/p>\n<p><em>Please note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.<\/em><\/p>","objective_plain":"By the time you complete this course, you will be able to:\n\n- Apply parameter-efficient fine-tuning techniques with limited data to accomplish tasks specific to your use cases.\n- Use LLMs to create synthetic data in the service of fine-tuning smaller LLMs to perform a desired task.\n- Drive down model size requirements through a virtuous cycle of combining synthetic data generation and model customization.\n- Build a generative application composed of multiple customized models you generate data for and create throughout the workshop.","essentials_plain":"- Professional experience with the Python programming language.\n- Familiarity with fundamental deep learning topics such as model architecture, training, and inference.\n- Familiarity with a modern Python-based deep learning framework (PyTorch preferred).\n- Familiarity with out-of-the-box pretrained LLMs.","summary_plain":"Enterprises need to execute language-related tasks daily, such as text classification, content generation, sentiment analysis, and customer chat support, and they seek to do so in the most cost-effective way. 
Large language models can automate these tasks, and efficient LLM customization techniques can increase a model\u2019s capabilities and reduce the size of models required for use in enterprise applications. In this course, you'll go beyond prompt engineering LLMs and learn a variety of techniques to efficiently customize pretrained LLMs for your specific use cases\u2014without engaging in the computationally intensive and expensive process of pretraining your own model or fine-tuning a model's internal weights. Using NVIDIA NeMo\u2122 service, you\u2019ll learn various parameter-efficient fine-tuning methods to customize LLM behavior for your organization.\n\nPlease note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.","skill_level":"Intermediate","version":"1.0","duration":{"unit":"d","value":1,"formatted":"1 day"},"pricelist":{"List Price":{"US":{"country":"US","currency":"USD","taxrate":null,"price":500},"DE":{"country":"DE","currency":"EUR","taxrate":19,"price":500},"AT":{"country":"AT","currency":"EUR","taxrate":20,"price":500},"SE":{"country":"SE","currency":"EUR","taxrate":25,"price":500},"SI":{"country":"SI","currency":"EUR","taxrate":20,"price":500},"GB":{"country":"GB","currency":"GBP","taxrate":20,"price":420},"IT":{"country":"IT","currency":"EUR","taxrate":20,"price":500},"CA":{"country":"CA","currency":"CAD","taxrate":null,"price":690}}},"lastchanged":"2025-07-29T12:18:27+02:00","parenturl":"https:\/\/portal.flane.ch\/swisscom\/fr\/json-courses","nexturl_course_schedule":"https:\/\/portal.flane.ch\/swisscom\/fr\/json-course-schedule\/34498","source_lang":"fr","source":"https:\/\/portal.flane.ch\/swisscom\/fr\/json-course\/nvidia-ellmc"}}