{"course":{"productid":24022,"modality":6,"active":true,"language":"fr","title":"Machine Learning Security","productcode":"MLSEC","vendorcode":"CY","vendorname":"Cydrill","fullproductcode":"CY-MLSEC","courseware":{"has_ekit":false,"has_printkit":true,"language":""},"url":"https:\/\/portal.flane.ch\/course\/cydrill-mlsec","objective":"<ul>\n<li>Getting familiar with essential cyber security concepts<\/li><li>Learning about various aspects of machine learning security<\/li><li>Attacks and defense techniques in adversarial machine learning<\/li><li>Identifying vulnerabilities and their consequences<\/li><li>Learning the security best practices in Python<\/li><li>Input validation approaches and principles<\/li><li>Managing vulnerabilities in third-party components<\/li><li>Understanding how cryptography can support application security<\/li><li>Learning how to use cryptographic APIs correctly in Python<\/li><li>Understanding security testing methodology and approaches<\/li><li>Getting familiar with common security testing techniques and tools<\/li><\/ul>","essentials":"<p>General machine learning and Python development<\/p>","audience":"<p>Python developers working on machine learning systems<\/p>","contents":"<ul>\n<li>Cyber security basics<\/li><li>Machine learning security<\/li><li>Input validation<\/li><li>Security features<\/li><li>Time and state<\/li><li>Errors<\/li><li>Using vulnerable components<\/li><li>Cryptography for developers<\/li><li>Security testing<\/li><li>Wrap up<\/li><\/ul>","outline":"<p><strong>DAY 1<\/strong><\/p>\n<p><strong>Cyber security basics<\/strong>\n<\/p>\n<ul>\n<li>What is security?<\/li><li>Threat and risk<\/li><li>Cyber security threat types<\/li><li>Consequences of insecure software\n<ul>\n<li>Constraints and the market<\/li><li>The dark side<\/li><\/ul><\/li><li>Categorization of bugs\n<ul>\n<li>The Seven Pernicious Kingdoms<\/li><li>Common Weakness Enumeration (CWE)<\/li><li>CWE Top 25 Most Dangerous Software 
Errors<\/li><li>Vulnerabilities in the environment and dependencies<\/li><\/ul><\/li><\/ul><p><strong>Machine learning security<\/strong>\n<\/p>\n<ul>\n<li>Cyber security in machine learning\n<ul>\n<li>ML-specific cyber security considerations<\/li><li>What makes machine learning a valuable target?<\/li><li>Possible consequences<\/li><li>Inadvertent AI failures<\/li><li>Some real-world abuse examples<\/li><li>ML threat model\n<ul>\n<li>Creating a threat model for machine learning<\/li><li>Machine learning assets<\/li><li>Security requirements<\/li><li>Attack surface<\/li><li>Attacker model &ndash; resources, capabilities, goals<\/li><li>Confidentiality threats<\/li><li>Integrity threats (model)<\/li><li>Integrity threats (data, software)<\/li><li>Availability threats<\/li><li>Dealing with AI\/ML threats in software security<\/li><li>Lab &ndash; Compromising ML via model editing<\/li><\/ul><\/li><li>Using ML in cybersecurity\n<ul>\n<li>Static code analysis and ML<\/li><li>ML in fuzz testing<\/li><li>ML in anomaly detection and network security<\/li><li>Limitations of ML in security<\/li><\/ul><\/li><li>Malicious use of AI and ML\n<ul>\n<li>Social engineering attacks and media manipulation<\/li><li>Vulnerability exploitation<\/li><li>Malware automation<\/li><li>Endpoint security evasion<\/li><\/ul><\/li><\/ul><\/li><li>Adversarial machine learning\n<ul>\n<li>Threats against machine learning<\/li><li>Attacks against machine learning integrity\n<ul>\n<li>Poisoning attacks<\/li><li>Poisoning attacks against supervised learning<\/li><li>Poisoning attacks against unsupervised and reinforcement learning<\/li><li>Lab &ndash; ML poisoning attack<\/li><li>Case study &ndash; ML poisoning against Warfarin dosage calculations<\/li><li>Evasion attacks<\/li><li>Common white-box evasion attack algorithms<\/li><li>Common black-box evasion attack algorithms<\/li><li>Lab &ndash; ML evasion attack<\/li><li>Case study &ndash; Classification evasion via 3D 
printing<\/li><li>Transferability of poisoning and evasion attacks<\/li><li>Lab &ndash; Transferability of adversarial examples<\/li><\/ul><\/li><li>Some defense techniques against adversarial samples\n<ul>\n<li>Adversarial training<\/li><li>Defensive distillation<\/li><li>Gradient masking<\/li><li>Feature squeezing<\/li><li>Using reformers on adversarial data<\/li><li>Lab &ndash; Adversarial training<\/li><li>Caveats about the efficacy of current adversarial defenses<\/li><li>Simple practical defenses<\/li><\/ul><\/li><li>Attacks against machine learning confidentiality\n<ul>\n<li>Model extraction attacks<\/li><li>Defending against model extraction attacks<\/li><li>Lab &ndash; Model extraction<\/li><li>Model inversion attacks<\/li><li>Defending against model inversion attacks<\/li><li>Lab &ndash; Model inversion<\/li><\/ul><\/li><\/ul><\/li><li>Denial of service\n<ul>\n<li>Denial of Service<\/li><li>Resource exhaustion<\/li><li>Cash overflow<\/li><li>Flooding<\/li><li>Algorithm complexity issues<\/li><li>Denial of service in ML\n<ul>\n<li>Accuracy reduction attacks<\/li><li>Denial-of-information attacks<\/li><li>Catastrophic forgetting in neural networks<\/li><li>Resource exhaustion attacks against ML<\/li><li>Best practices for protecting availability in ML systems<\/li><\/ul><\/li><\/ul><\/li><\/ul><p><strong>DAY 2<\/strong><\/p>\n<p><strong>Input validation<\/strong>\n<\/p>\n<ul>\n<li>Input validation principles\n<ul>\n<li>Blacklists and whitelists<\/li><li>Data validation techniques<\/li><li>Lab &ndash; Input validation<\/li><li>What to validate &ndash; the attack surface<\/li><li>Where to validate &ndash; defense in depth<\/li><li>How to validate &ndash; validation vs transformations<\/li><li>Output sanitization<\/li><li>Encoding challenges<\/li><li>Lab &ndash; Encoding challenges<\/li><li>Validation with regex<\/li><li>Regular expression denial of service (ReDoS)<\/li><li>Lab &ndash; Regular expression denial of service (ReDoS)<\/li><li>Dealing with 
ReDoS<\/li><\/ul><\/li><li>Injection\n<ul>\n<li>Injection principles<\/li><li>Injection attacks<\/li><li>SQL injection\n<ul>\n<li>SQL injection basics<\/li><li>Lab &ndash; SQL injection<\/li><li>Attack techniques<\/li><li>Content-based blind SQL injection<\/li><li>Time-based blind SQL injection<\/li><\/ul><\/li><li>SQL injection best practices\n<ul>\n<li>Input validation<\/li><li>Parameterized queries<\/li><li>Additional considerations<\/li><li>Lab &ndash; SQL injection best practices<\/li><li>Case study &ndash; Hacking Fortnite accounts<\/li><li>SQL injection and ORM<\/li><\/ul><\/li><li>Code injection\n<ul>\n<li>Code injection via input()<\/li><li>OS command injection\n<ul>\n<li>Lab &ndash; Command injection in Python<\/li><li>OS command injection best practices<\/li><li>Avoiding command injection with the right APIs in Python<\/li><li>Lab &ndash; Command injection best practices in Python<\/li><li>Case study &ndash; Shellshock<\/li><li>Lab &ndash; Shellshock<\/li><li>Case study &ndash; Command injection via ping<\/li><li>Python module hijacking<\/li><li>Lab &ndash; Module hijacking<\/li><\/ul><\/li><\/ul><\/li><li>General protection best practices<\/li><\/ul><\/li><li>Integer handling problems\n<ul>\n<li>Representing signed numbers<\/li><li>Integer visualization<\/li><li>Integers in Python<\/li><li>Integer overflow<\/li><li>Integer overflow with ctypes and numpy<\/li><li>Lab &ndash; Integer problems in Python<\/li><li>Other numeric problems\n<ul>\n<li>Division by zero<\/li><li>Other numeric problems in Python<\/li><li>Working with floating-point numbers<\/li><\/ul><\/li><\/ul><\/li><li>Files and streams\n<ul>\n<li>Path traversal<\/li><li>Path traversal-related examples<\/li><li>Lab &ndash; Path traversal<\/li><li>Additional challenges in Windows<\/li><li>Virtual resources<\/li><li>Path traversal best practices<\/li><li>Format string issues<\/li><\/ul><\/li><li>Unsafe native code\n<ul>\n<li>Native code dependence<\/li><li>Lab &ndash; Unsafe native 
code<\/li><li>Best practices for dealing with native code<\/li><\/ul><\/li><li>Input validation in machine learning\n<ul>\n<li>Misleading the machine learning mechanism<\/li><li>Sanitizing data against poisoning and RONI<\/li><li>Code vulnerabilities causing evasion, misprediction, or misclustering<\/li><li>Typical ML input formats and their security<\/li><\/ul><\/li><\/ul><p><strong>DAY 3<\/strong><\/p>\n<p><strong>Security features<\/strong>\n<\/p>\n<ul>\n<li>Authentication\n<ul>\n<li>Authentication basics<\/li><li>Multi-factor authentication<\/li><li>Authentication weaknesses &ndash; spoofing<\/li><li>Case study &ndash; PayPal 2FA bypass<\/li><li>Password management\n<ul>\n<li>Inbound password management\n<ul>\n<li>Storing account passwords<\/li><li>Password in transit<\/li><li>Lab &ndash; Is just hashing passwords enough?<\/li><li>Dictionary attacks and brute forcing<\/li><li>Salting<\/li><li>Adaptive hash functions for password storage<\/li><li>Password policy\n<ul>\n<li>NIST authenticator requirements for memorized secrets<\/li><li>Password length<\/li><li>Password hardening<\/li><li>Using passphrases<\/li><li>Password change<\/li><li>Forgotten passwords<\/li><li>Lab &ndash; Password reset weakness<\/li><\/ul><\/li><li>Case study &ndash; The Ashley Madison data breach\n<ul>\n<li>The dictionary attack<\/li><li>The ultimate crack<\/li><li>Exploitation and the lessons learned<\/li><\/ul><\/li><li>Password database migration<\/li><\/ul><\/li><li>Outbound password management\n<ul>\n<li>Hard coded passwords<\/li><li>Best practices<\/li><li>Lab &ndash; Hardcoded password<\/li><li>Protecting sensitive information in memory\n<ul>\n<li>Challenges in protecting memory<\/li><\/ul><\/li><\/ul><\/li><\/ul><\/li><\/ul><\/li><li>Information exposure\n<ul>\n<li>Exposure through extracted data and aggregation<\/li><li>Case study &ndash; Strava data exposure<\/li><li>Privacy violation\n<ul>\n<li>Privacy essentials<\/li><li>Related standards, regulations and laws in 
brief<\/li><li>Privacy violation and best practices<\/li><li>Privacy in machine learning\n<ul>\n<li>Privacy challenges in classification algorithms<\/li><li>Machine unlearning and its challenges<\/li><\/ul><\/li><\/ul><\/li><li>System information leakage\n<ul>\n<li>Leaking system information<\/li><\/ul><\/li><li>Information exposure best practices<\/li><\/ul><\/li><\/ul><p><strong>Time and state<\/strong>\n<\/p>\n<ul>\n<li>Race conditions\n<ul>\n<li>File race condition\n<ul>\n<li>Time of check to time of usage &ndash; TOCTTOU<\/li><li>Insecure temporary file<\/li><\/ul><\/li><li>Avoiding race conditions in Python\n<ul>\n<li>Thread safety and the Global Interpreter Lock (GIL)<\/li><li>Case study &ndash; TOCTTOU in Calamares<\/li><\/ul><\/li><\/ul><\/li><li>Mutual exclusion and locking\n<ul>\n<li>Deadlocks<\/li><\/ul><\/li><li>Synchronization and thread safety<\/li><\/ul><p><strong>Errors<\/strong>\n<\/p>\n<ul>\n<li>Error and exception handling principles<\/li><li>Error handling\n<ul>\n<li>Returning a misleading status code<\/li><li>Information exposure through error reporting<\/li><\/ul><\/li><li>Exception handling\n<ul>\n<li>In the except\/catch block. 
And now what?<\/li><li>Empty catch block<\/li><li>The danger of assert statements<\/li><li>Lab &ndash; Exception handling mess<\/li><\/ul><\/li><\/ul><p><strong>Using vulnerable components<\/strong>\n<\/p>\n<ul>\n<li>Assessing the environment<\/li><li>Hardening<\/li><li>Malicious packages in Python<\/li><li>Vulnerability management\n<ul>\n<li>Patch management<\/li><li>Bug bounty programs<\/li><li>Vulnerability databases<\/li><li>Vulnerability rating &ndash; CVSS<\/li><li>DevOps, the build process and CI \/ CD<\/li><li>Dependency checking in Python<\/li><li>Lab &ndash; Detecting vulnerable components<\/li><\/ul><\/li><li>ML supply chain risks\n<ul>\n<li>Common ML system architectures<\/li><li>ML system architecture and the attack surface<\/li><li>Case study &ndash; BadNets<\/li><li>Protecting data in transit &ndash; transport layer security<\/li><li>Protecting data in use &ndash; homomorphic encryption<\/li><li>Protecting data in use &ndash; differential privacy<\/li><li>Protecting data in use &ndash; multi-party computation<\/li><\/ul><\/li><li>ML frameworks and security\n<ul>\n<li>General security concerns about ML platforms<\/li><li>TensorFlow security issues and vulnerabilities<\/li><li>Case study &ndash; TensorFlow vulnerability in parsing BMP files (CVE-2018-21233)<\/li><\/ul><\/li><\/ul><p><strong>DAY 4<\/strong><\/p>\n<p><strong>Cryptography for developers<\/strong>\n<\/p>\n<ul>\n<li>Cryptography basics<\/li><li>Cryptography in Python<\/li><li>Elementary algorithms\n<ul>\n<li>Random number generation\n<ul>\n<li>Pseudo random number generators (PRNGs)<\/li><li>Cryptographically strong PRNGs<\/li><li>Seeding<\/li><li>Using virtual random streams<\/li><li>Weak and strong PRNGs in Python<\/li><li>Using random numbers in Python<\/li><li>Case study &ndash; Equifax credit account freeze<\/li><li>True random number generators (TRNG)<\/li><li>Assessing PRNG strength<\/li><li>Lab &ndash; Using random numbers in Python<\/li><\/ul><\/li><li>Hashing\n<ul>\n<li>Hashing 
basics<\/li><li>Common hashing mistakes<\/li><li>Hashing in Python<\/li><li>Lab &ndash; Hashing in Python<\/li><\/ul><\/li><\/ul><\/li><li>Confidentiality protection\n<ul>\n<li>Symmetric encryption\n<ul>\n<li>Block ciphers<\/li><li>Modes of operation<\/li><li>Modes of operation and IV &ndash; best practices<\/li><li>Symmetric encryption in Python<\/li><li>Lab &ndash; Symmetric encryption in Python<\/li><li>Asymmetric encryption\n<ul>\n<li>The RSA algorithm\n<ul>\n<li>Using RSA &ndash; best practices<\/li><li>RSA in Python<\/li><\/ul><\/li><li>Elliptic Curve Cryptography\n<ul>\n<li>The ECC algorithm<\/li><li>Using ECC &ndash; best practices<\/li><li>ECC in Python<\/li><\/ul><\/li><li>Combining symmetric and asymmetric algorithms<\/li><\/ul><\/li><\/ul><\/li><li>Homomorphic encryption\n<ul>\n<li>Basics of homomorphic encryption<\/li><li>Types of homomorphic encryption<\/li><li>FHE in machine learning<\/li><\/ul><\/li><\/ul><\/li><li>Integrity protection\n<ul>\n<li>Message Authentication Code (MAC)\n<ul>\n<li>MAC in Python<\/li><li>Lab &ndash; Calculating MAC in Python<\/li><\/ul><\/li><li>Digital signature\n<ul>\n<li>Digital signature with RSA<\/li><li>Digital signature with ECC<\/li><li>Digital signature in Python<\/li><\/ul><\/li><\/ul><\/li><li>Public Key Infrastructure (PKI)\n<ul>\n<li>Some further key management challenges<\/li><li>Certificates\n<ul>\n<li>Chain of trust<\/li><li>Certificate management &ndash; best practices<\/li><\/ul><\/li><\/ul><\/li><\/ul><p><strong>Security testing<\/strong>\n<\/p>\n<ul>\n<li>Security testing methodology\n<ul>\n<li>Security testing &ndash; goals and methodologies<\/li><li>Overview of security testing processes<\/li><li>Threat modeling\n<ul>\n<li>SDL threat modeling<\/li><li>Mapping STRIDE to DFD<\/li><li>DFD example<\/li><li>Attack trees<\/li><li>Attack tree example<\/li><li>Misuse cases<\/li><li>Misuse case examples<\/li><li>Risk analysis<\/li><\/ul><\/li><\/ul><\/li><li>Security testing techniques and tools\n<ul>\n<li>Code 
analysis\n<ul>\n<li>Security aspects of code review<\/li><li>Static Application Security Testing (SAST)<\/li><li>Lab &ndash; Using static analysis tools<\/li><li>Lab &ndash; Finding vulnerabilities via ML<\/li><\/ul><\/li><li>Dynamic analysis\n<ul>\n<li>Security testing at runtime<\/li><li>Penetration testing<\/li><li>Stress testing<\/li><li>Dynamic analysis tools\n<ul>\n<li>Dynamic Application Security Testing (DAST)<\/li><\/ul><\/li><li>Fuzzing\n<ul>\n<li>Fuzzing techniques<\/li><li>Fuzzing &ndash; Observing the process<\/li><li>ML fuzzing<\/li><\/ul><\/li><\/ul><\/li><\/ul>\n\n<strong>Wrap up<\/strong>\n\n<ul>\n<li>Secure coding principles\n<ul>\n<li>Principles of robust programming by Matt Bishop<\/li><li>Secure design principles of Saltzer and Schroeder<\/li><\/ul><\/li><li>And now what?\n<ul>\n<li>Software security sources and further reading<\/li><li>Python resources<\/li><li>Machine learning security resources<\/li><\/ul><\/li><\/ul><\/li><\/ul>","summary":"<p>Your machine learning application works as intended, so you are done, right? But did you consider somebody poisoning your model by training it with intentionally malicious samples? Or sending specially-crafted input &ndash; indistinguishable from normal input &ndash; to your model that will get completely misclassified? Feeding in oversized samples &ndash; for example, a 16 GB image &ndash; to crash the application? Because that&rsquo;s what the bad guys will do. And the list is far from complete.<\/p>\n<p>As a machine learning practitioner, you need to be paranoid, just like any developer out there. Interest in attacking machine learning solutions is gaining momentum, and therefore protecting against adversarial machine learning is essential. This requires not only awareness, but also specific skills to protect your ML applications. The course helps you gain these skills by introducing cutting-edge attacks and protection techniques from the ML domain.<\/p>\n<p>Machine learning is software after all. 
That&rsquo;s why in this course we also teach common secure coding skills and discuss security pitfalls of the Python programming language. Both adversarial machine learning and core secure coding topics come with lots of hands-on labs and stories from real life, all to provide a strong emotional engagement with security and to substantially improve code hygiene.<\/p>\n<p>So that you are prepared for the forces of the dark side.<\/p>\n<p>So that nothing unexpected happens.<\/p>\n<p>Nothing.<\/p>","objective_plain":"- Getting familiar with essential cyber security concepts\n- Learning about various aspects of machine learning security\n- Attacks and defense techniques in adversarial machine learning\n- Identifying vulnerabilities and their consequences\n- Learning the security best practices in Python\n- Input validation approaches and principles\n- Managing vulnerabilities in third-party components\n- Understanding how cryptography can support application security\n- Learning how to use cryptographic APIs correctly in Python\n- Understanding security testing methodology and approaches\n- Getting familiar with common security testing techniques and tools","essentials_plain":"General machine learning and Python development","audience_plain":"Python developers working on machine learning systems","contents_plain":"- Cyber security basics\n- Machine learning security\n- Input validation\n- Security features\n- Time and state\n- Errors\n- Using vulnerable components\n- Cryptography for developers\n- Security testing\n- Wrap up","outline_plain":"DAY 1\n\nCyber security basics\n\n\n\n- What is security?\n- Threat and risk\n- Cyber security threat types\n- Consequences of insecure software\n\n- Constraints and the market\n- The dark side\n- Categorization of bugs\n\n- The Seven Pernicious Kingdoms\n- Common Weakness Enumeration (CWE)\n- CWE Top 25 Most Dangerous Software Errors\n- Vulnerabilities in the environment and dependencies\nMachine learning security\n\n\n\n- Cyber 
security in machine learning\n\n- ML-specific cyber security considerations\n- What makes machine learning a valuable target?\n- Possible consequences\n- Inadvertent AI failures\n- Some real-world abuse examples\n- ML threat model\n\n- Creating a threat model for machine learning\n- Machine learning assets\n- Security requirements\n- Attack surface\n- Attacker model \u2013 resources, capabilities, goals\n- Confidentiality threats\n- Integrity threats (model)\n- Integrity threats (data, software)\n- Availability threats\n- Dealing with AI\/ML threats in software security\n- Lab \u2013 Compromising ML via model editing\n- Using ML in cybersecurity\n\n- Static code analysis and ML\n- ML in fuzz testing\n- ML in anomaly detection and network security\n- Limitations of ML in security\n- Malicious use of AI and ML\n\n- Social engineering attacks and media manipulation\n- Vulnerability exploitation\n- Malware automation\n- Endpoint security evasion\n- Adversarial machine learning\n\n- Threats against machine learning\n- Attacks against machine learning integrity\n\n- Poisoning attacks\n- Poisoning attacks against supervised learning\n- Poisoning attacks against unsupervised and reinforcement learning\n- Lab \u2013 ML poisoning attack\n- Case study \u2013 ML poisoning against Warfarin dosage calculations\n- Evasion attacks\n- Common white-box evasion attack algorithms\n- Common black-box evasion attack algorithms\n- Lab \u2013 ML evasion attack\n- Case study \u2013 Classification evasion via 3D printing\n- Transferability of poisoning and evasion attacks\n- Lab \u2013 Transferability of adversarial examples\n- Some defense techniques against adversarial samples\n\n- Adversarial training\n- Defensive distillation\n- Gradient masking\n- Feature squeezing\n- Using reformers on adversarial data\n- Lab \u2013 Adversarial training\n- Caveats about the efficacy of current adversarial defenses\n- Simple practical defenses\n- Attacks against machine learning confidentiality\n\n- 
Model extraction attacks\n- Defending against model extraction attacks\n- Lab \u2013 Model extraction\n- Model inversion attacks\n- Defending against model inversion attacks\n- Lab \u2013 Model inversion\n- Denial of service\n\n- Denial of Service\n- Resource exhaustion\n- Cash overflow\n- Flooding\n- Algorithm complexity issues\n- Denial of service in ML\n\n- Accuracy reduction attacks\n- Denial-of-information attacks\n- Catastrophic forgetting in neural networks\n- Resource exhaustion attacks against ML\n- Best practices for protecting availability in ML systems\nDAY 2\n\nInput validation\n\n\n\n- Input validation principles\n\n- Blacklists and whitelists\n- Data validation techniques\n- Lab \u2013 Input validation\n- What to validate \u2013 the attack surface\n- Where to validate \u2013 defense in depth\n- How to validate \u2013 validation vs transformations\n- Output sanitization\n- Encoding challenges\n- Lab \u2013 Encoding challenges\n- Validation with regex\n- Regular expression denial of service (ReDoS)\n- Lab \u2013 Regular expression denial of service (ReDoS)\n- Dealing with ReDoS\n- Injection\n\n- Injection principles\n- Injection attacks\n- SQL injection\n\n- SQL injection basics\n- Lab \u2013 SQL injection\n- Attack techniques\n- Content-based blind SQL injection\n- Time-based blind SQL injection\n- SQL injection best practices\n\n- Input validation\n- Parameterized queries\n- Additional considerations\n- Lab \u2013 SQL injection best practices\n- Case study \u2013 Hacking Fortnite accounts\n- SQL injection and ORM\n- Code injection\n\n- Code injection via input()\n- OS command injection\n\n- Lab \u2013 Command injection in Python\n- OS command injection best practices\n- Avoiding command injection with the right APIs in Python\n- Lab \u2013 Command injection best practices in Python\n- Case study \u2013 Shellshock\n- Lab \u2013 Shellshock\n- Case study \u2013 Command injection via ping\n- Python module hijacking\n- Lab \u2013 Module hijacking\n- 
General protection best practices\n- Integer handling problems\n\n- Representing signed numbers\n- Integer visualization\n- Integers in Python\n- Integer overflow\n- Integer overflow with ctypes and numpy\n- Lab \u2013 Integer problems in Python\n- Other numeric problems\n\n- Division by zero\n- Other numeric problems in Python\n- Working with floating-point numbers\n- Files and streams\n\n- Path traversal\n- Path traversal-related examples\n- Lab \u2013 Path traversal\n- Additional challenges in Windows\n- Virtual resources\n- Path traversal best practices\n- Format string issues\n- Unsafe native code\n\n- Native code dependence\n- Lab \u2013 Unsafe native code\n- Best practices for dealing with native code\n- Input validation in machine learning\n\n- Misleading the machine learning mechanism\n- Sanitizing data against poisoning and RONI\n- Code vulnerabilities causing evasion, misprediction, or misclustering\n- Typical ML input formats and their security\nDAY 3\n\nSecurity features\n\n\n\n- Authentication\n\n- Authentication basics\n- Multi-factor authentication\n- Authentication weaknesses \u2013 spoofing\n- Case study \u2013 PayPal 2FA bypass\n- Password management\n\n- Inbound password management\n\n- Storing account passwords\n- Password in transit\n- Lab \u2013 Is just hashing passwords enough?\n- Dictionary attacks and brute forcing\n- Salting\n- Adaptive hash functions for password storage\n- Password policy\n\n- NIST authenticator requirements for memorized secrets\n- Password length\n- Password hardening\n- Using passphrases\n- Password change\n- Forgotten passwords\n- Lab \u2013 Password reset weakness\n- Case study \u2013 The Ashley Madison data breach\n\n- The dictionary attack\n- The ultimate crack\n- Exploitation and the lessons learned\n- Password database migration\n- Outbound password management\n\n- Hard coded passwords\n- Best practices\n- Lab \u2013 Hardcoded password\n- Protecting sensitive information in memory\n\n- Challenges in protecting 
memory\n- Information exposure\n\n- Exposure through extracted data and aggregation\n- Case study \u2013 Strava data exposure\n- Privacy violation\n\n- Privacy essentials\n- Related standards, regulations and laws in brief\n- Privacy violation and best practices\n- Privacy in machine learning\n\n- Privacy challenges in classification algorithms\n- Machine unlearning and its challenges\n- System information leakage\n\n- Leaking system information\n- Information exposure best practices\nTime and state\n\n\n\n- Race conditions\n\n- File race condition\n\n- Time of check to time of usage \u2013 TOCTTOU\n- Insecure temporary file\n- Avoiding race conditions in Python\n\n- Thread safety and the Global Interpreter Lock (GIL)\n- Case study \u2013 TOCTTOU in Calamares\n- Mutual exclusion and locking\n\n- Deadlocks\n- Synchronization and thread safety\nErrors\n\n\n\n- Error and exception handling principles\n- Error handling\n\n- Returning a misleading status code\n- Information exposure through error reporting\n- Exception handling\n\n- In the except\/catch block. 
And now what?\n- Empty catch block\n- The danger of assert statements\n- Lab \u2013 Exception handling mess\nUsing vulnerable components\n\n\n\n- Assessing the environment\n- Hardening\n- Malicious packages in Python\n- Vulnerability management\n\n- Patch management\n- Bug bounty programs\n- Vulnerability databases\n- Vulnerability rating \u2013 CVSS\n- DevOps, the build process and CI \/ CD\n- Dependency checking in Python\n- Lab \u2013 Detecting vulnerable components\n- ML supply chain risks\n\n- Common ML system architectures\n- ML system architecture and the attack surface\n- Case study \u2013 BadNets\n- Protecting data in transit \u2013 transport layer security\n- Protecting data in use \u2013 homomorphic encryption\n- Protecting data in use \u2013 differential privacy\n- Protecting data in use \u2013 multi-party computation\n- ML frameworks and security\n\n- General security concerns about ML platforms\n- TensorFlow security issues and vulnerabilities\n- Case study \u2013 TensorFlow vulnerability in parsing BMP files (CVE-2018-21233)\nDAY 4\n\nCryptography for developers\n\n\n\n- Cryptography basics\n- Cryptography in Python\n- Elementary algorithms\n\n- Random number generation\n\n- Pseudo random number generators (PRNGs)\n- Cryptographically strong PRNGs\n- Seeding\n- Using virtual random streams\n- Weak and strong PRNGs in Python\n- Using random numbers in Python\n- Case study \u2013 Equifax credit account freeze\n- True random number generators (TRNG)\n- Assessing PRNG strength\n- Lab \u2013 Using random numbers in Python\n- Hashing\n\n- Hashing basics\n- Common hashing mistakes\n- Hashing in Python\n- Lab \u2013 Hashing in Python\n- Confidentiality protection\n\n- Symmetric encryption\n\n- Block ciphers\n- Modes of operation\n- Modes of operation and IV \u2013 best practices\n- Symmetric encryption in Python\n- Lab \u2013 Symmetric encryption in Python\n- Asymmetric encryption\n\n- The RSA algorithm\n\n- Using RSA \u2013 best practices\n- RSA in 
Python\n- Elliptic Curve Cryptography\n\n- The ECC algorithm\n- Using ECC \u2013 best practices\n- ECC in Python\n- Combining symmetric and asymmetric algorithms\n- Homomorphic encryption\n\n- Basics of homomorphic encryption\n- Types of homomorphic encryption\n- FHE in machine learning\n- Integrity protection\n\n- Message Authentication Code (MAC)\n\n- MAC in Python\n- Lab \u2013 Calculating MAC in Python\n- Digital signature\n\n- Digital signature with RSA\n- Digital signature with ECC\n- Digital signature in Python\n- Public Key Infrastructure (PKI)\n\n- Some further key management challenges\n- Certificates\n\n- Chain of trust\n- Certificate management \u2013 best practices\nSecurity testing\n\n\n\n- Security testing methodology\n\n- Security testing \u2013 goals and methodologies\n- Overview of security testing processes\n- Threat modeling\n\n- SDL threat modeling\n- Mapping STRIDE to DFD\n- DFD example\n- Attack trees\n- Attack tree example\n- Misuse cases\n- Misuse case examples\n- Risk analysis\n- Security testing techniques and tools\n\n- Code analysis\n\n- Security aspects of code review\n- Static Application Security Testing (SAST)\n- Lab \u2013 Using static analysis tools\n- Lab \u2013 Finding vulnerabilities via ML\n- Dynamic analysis\n\n- Security testing at runtime\n- Penetration testing\n- Stress testing\n- Dynamic analysis tools\n\n- Dynamic Application Security Testing (DAST)\n- Fuzzing\n\n- Fuzzing techniques\n- Fuzzing \u2013 Observing the process\n- ML fuzzing\n\n\nWrap up\n\n\n- Secure coding principles\n\n- Principles of robust programming by Matt Bishop\n- Secure design principles of Saltzer and Schroeder\n- And now what?\n\n- Software security sources and further reading\n- Python resources\n- Machine learning security resources","summary_plain":"Your machine learning application works as intended, so you are done, right? But did you consider somebody poisoning your model by training it with intentionally malicious samples? 
Or sending specially-crafted input \u2013 indistinguishable from normal input \u2013 to your model that will get completely misclassified? Feeding in oversized samples \u2013 for example, a 16 GB image \u2013 to crash the application? Because that\u2019s what the bad guys will do. And the list is far from complete.\n\nAs a machine learning practitioner, you need to be paranoid, just like any developer out there. Interest in attacking machine learning solutions is gaining momentum, and therefore protecting against adversarial machine learning is essential. This requires not only awareness, but also specific skills to protect your ML applications. The course helps you gain these skills by introducing cutting-edge attacks and protection techniques from the ML domain.\n\nMachine learning is software after all. That\u2019s why in this course we also teach common secure coding skills and discuss security pitfalls of the Python programming language. Both adversarial machine learning and core secure coding topics come with lots of hands-on labs and stories from real life, all to provide a strong emotional engagement with security and to substantially improve code hygiene.\n\nSo that you are prepared for the forces of the dark side.\n\nSo that nothing unexpected happens.\n\nNothing.","version":"1.0","duration":{"unit":"d","value":4,"formatted":"4 jours"},"pricelist":{"List 
Price":{"SI":{"country":"SI","currency":"EUR","taxrate":20,"price":3000},"DE":{"country":"DE","currency":"EUR","taxrate":19,"price":3000},"AT":{"country":"AT","currency":"EUR","taxrate":20,"price":3000},"GB":{"country":"GB","currency":"EUR","taxrate":20,"price":3000},"IT":{"country":"IT","currency":"EUR","taxrate":20,"price":3000},"NL":{"country":"NL","currency":"EUR","taxrate":21,"price":3000},"BE":{"country":"BE","currency":"EUR","taxrate":21,"price":3000},"FR":{"country":"FR","currency":"EUR","taxrate":19.6,"price":3000},"MK":{"country":"MK","currency":"EUR","taxrate":null,"price":3000},"GR":{"country":"GR","currency":"EUR","taxrate":null,"price":3000},"HU":{"country":"HU","currency":"EUR","taxrate":20,"price":3000}}},"lastchanged":"2026-01-12T11:39:11+01:00","parenturl":"https:\/\/portal.flane.ch\/swisscom\/fr\/json-courses","nexturl_course_schedule":"https:\/\/portal.flane.ch\/swisscom\/fr\/json-course-schedule\/24022","source_lang":"fr","source":"https:\/\/portal.flane.ch\/swisscom\/fr\/json-course\/cydrill-mlsec"}}