Learning Objectives
These learning objectives guide individuals and organizations in understanding and navigating the European Union's Artificial Intelligence Act (EU AI Act) and the U.S. National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF). They equip learners with the knowledge and skills to develop, deploy, and govern AI systems in a way that is both compliant with emerging regulations and aligned with principles of trustworthiness and responsible innovation.
Foundational Knowledge of the EU AI Act
Upon completion of this learning module, learners will be able to:
• Explain the rationale and scope of the EU AI Act, including its objectives to ensure a high level of protection for health, safety, and fundamental rights.
• Define and differentiate the risk-based categories established by the Act: unacceptable risk, high-risk, limited risk, and minimal risk.
• Identify and provide examples of AI systems that fall into each risk category.
• Articulate the key legal obligations for providers and deployers of high-risk AI systems, covering aspects such as data quality, technical documentation, transparency, human oversight, and cybersecurity.
• Describe the conformity assessment procedures required for high-risk AI systems before they can be placed on the EU market.
• Outline the enforcement mechanisms and penalties for non-compliance with the EU AI Act.
• Explain the role and responsibilities of various actors within the AI value chain as defined by the Act.
Mastering the NIST AI Risk Management Framework
This section focuses on the practical application of the NIST framework. Learners will be able to:
• Articulate the purpose and voluntary nature of the NIST AI Risk Management Framework (RMF).
• Describe the four core functions of the NIST AI RMF: Govern, Map, Measure, and Manage.
• Explain the key characteristics of trustworthy AI as defined by NIST, including validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with harmful bias managed.
• Detail the activities and outcomes associated with each function of the framework throughout the AI lifecycle.
• Apply the NIST AI RMF to a hypothetical AI system, demonstrating the ability to identify, assess, and manage risks.
• Explain how the NIST AI RMF can be adapted and tailored to different organizational contexts and specific AI use cases.
COURSE FEE:
ISACA Member: PHP 9,975.00
Non-Member: PHP 14,175.00
Fees are subject to 12% VAT.