{"course":{"productid":34464,"modality":1,"active":true,"language":"en","title":"Building Transformer-Based Natural Language Processing Applications","productcode":"BNLPA","vendorcode":"NV","vendorname":"Nvidia","fullproductcode":"NV-BNLPA","courseware":{"has_ekit":false,"has_printkit":true,"language":""},"url":"https:\/\/portal.flane.ch\/course\/nvidia-bnlpa","objective":"<ul>\n<li>How transformers are used as the basic building blocks of modern LLMs for NLP applications<\/li><li>How self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results<\/li><li>How to leverage pretrained, modern LLM models to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question answering<\/li><li>Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER, and question answering<\/li><li>Manage inference challenges and deploy refined models for live applications<\/li><\/ul>","essentials":"<ul>\n<li>Experience with Python coding and use of library functions and parameters<\/li><li>Fundamental understanding of a deep learning framework such as TensorFlow, PyTorch, or Keras<\/li><li>Basic understanding of neural networks<\/li><\/ul>","contents":"<h5>Introduction<\/h5><ul>\n<li>Meet the instructor.<\/li><li>Create an account at courses.nvidia.com\/join<\/li><\/ul><h5>Introduction to Transformers<\/h5><ul>\n<li>Explore how the transformer architecture works in detail:<\/li><li>Build the transformer architecture in PyTorch.<\/li><li>Calculate the self-attention matrix.<\/li><li>Translate English to German with a pretrained transformer model.<\/li><\/ul><h5>Self-Supervision, BERT, and Beyond<\/h5><p>Learn how to apply self-supervised transformer-based models to concrete NLP tasks using NVIDIA NeMo:<\/p>\n<ul>\n<li>Build a text classification project to classify abstracts.<\/li><li>Build a NER project to identify disease names in 
text.<\/li><li>Improve project accuracy with domain-specific models.<\/li><\/ul><h5>Inference and Deployment for NLP\t<\/h5><ul>\n<li>Learn how to deploy an NLP project for live inference on NVIDIA Triton:<\/li><li>Prepare the model for deployment.<\/li><li>Optimize the model with NVIDIA&reg; TensorRT&trade;.<\/li><li>Deploy the model and test it.<\/li><\/ul><h5>Final Review<\/h5><ul>\n<li>Review key learnings and answer questions.<\/li><li>Complete the assessment and earn a certificate.<\/li><li>Take the workshop survey.<\/li><li>Learn how to set up your own environment and discuss additional resources and training.<\/li><\/ul>","summary":"<p>Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks.<\/p>\n<p>In this course, you&#039;ll:<\/p>\n<ul>\n<li>Construct a Transformer neural network in PyTorch<\/li><li>Build a named-entity recognition (NER) application with BERT<\/li><li>Deploy the NER application with ONNX and TensorRT to a Triton inference server<\/li><\/ul><p>Upon completion, you&rsquo;ll be proficient in task-agnostic applications of Transformer-based models.<\/p>\n<p><em>Please note that once a booking has been confirmed, it is non-refundable. 
This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.<\/em><\/p>","objective_plain":"- How transformers are used as the basic building blocks of modern LLMs for NLP applications\n- How self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results\n- How to leverage pretrained modern LLMs to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question answering\n- Manage inference challenges and deploy refined models for live applications","essentials_plain":"- Experience with Python coding and use of library functions and parameters\n- Fundamental understanding of a deep learning framework such as TensorFlow, PyTorch, or Keras\n- Basic understanding of neural networks","contents_plain":"Introduction\n\n\n- Meet the instructor.\n- Create an account at courses.nvidia.com\/join\nIntroduction to Transformers\n\nExplore how the transformer architecture works in detail:\n\n\n- Build the transformer architecture in PyTorch.\n- Calculate the self-attention matrix.\n- Translate English to German with a pretrained transformer model.\nSelf-Supervision, BERT, and Beyond\n\nLearn how to apply self-supervised transformer-based models to concrete NLP tasks using NVIDIA NeMo:\n\n\n- Build a text classification project to classify abstracts.\n- Build a NER project to identify disease names in text.\n- Improve project accuracy with domain-specific models.\nInference and Deployment for NLP\n\nLearn how to deploy an NLP project for live inference on NVIDIA Triton:\n\n\n- Prepare the model for deployment.\n- Optimize the model with NVIDIA\u00ae TensorRT\u2122.\n- Deploy the model and test it.\nFinal Review\n\n\n- Review key learnings and answer questions.\n- Complete the assessment and earn a certificate.\n- Take the workshop survey.\n- Learn how to set up your own environment and discuss additional resources and training.","summary_plain":"Learn how to apply and fine-tune a Transformer-based Deep Learning model to Natural Language Processing (NLP) tasks.\n\nIn this course, you'll:\n\n\n- Construct a Transformer neural network in PyTorch\n- Build a named-entity recognition (NER) application with BERT\n- Deploy the NER application with ONNX and TensorRT to a Triton inference server\nUpon completion, you\u2019ll be proficient in task-agnostic applications of Transformer-based models.\n\nPlease note that once a booking has been confirmed, it is non-refundable. This means that after you have confirmed your seat for an event, it cannot be cancelled and no refund will be issued, regardless of attendance.","skill_level":"Intermediate","version":"1.0","duration":{"unit":"d","value":1,"formatted":"1 day"},"pricelist":{"List Price":{"DE":{"country":"DE","currency":"EUR","taxrate":19,"price":995},"AT":{"country":"AT","currency":"EUR","taxrate":20,"price":995},"US":{"country":"US","currency":"USD","taxrate":null,"price":500},"IT":{"country":"IT","currency":"EUR","taxrate":20,"price":995},"SI":{"country":"SI","currency":"EUR","taxrate":20,"price":995},"GB":{"country":"GB","currency":"GBP","taxrate":20,"price":420},"CA":{"country":"CA","currency":"CAD","taxrate":null,"price":690},"CH":{"country":"CH","currency":"CHF","taxrate":8.1,"price":995}}},"lastchanged":"2025-07-29T12:18:27+02:00","parenturl":"https:\/\/portal.flane.ch\/swisscom\/en\/json-courses","nexturl_course_schedule":"https:\/\/portal.flane.ch\/swisscom\/en\/json-course-schedule\/34464","source_lang":"en","source":"https:\/\/portal.flane.ch\/swisscom\/en\/json-course\/nvidia-bnlpa"}}