The graduate certificate in Responsible AI teaches students the fundamentals of artificial intelligence (AI), how AI systems are architected, the principles of systems engineering as they relate to AI systems, theories of AI safety and risk, how to test and evaluate such systems to meet risk thresholds, and how to identify the ethical, legal, and regulatory issues that arise in such systems. Students will be prepared to develop and manage complex systems with embedded AI, including identifying the unique requirements of such systems, testing and certifying them, and defining and maintaining safe levels of performance for deployed AI. Graduates will also be able to develop acquisition plans for complex systems with embedded AI and to establish AI maintenance programs, including auditing. Areas of application include safety-critical physical systems such as self-driving cars, air taxis, and health applications, as well as software-based systems such as financial and banking systems and those that support education and research.
Admissions
In addition to the general admission requirements of the university, applicants must have earned a GPA of 3.00 or better (on a 4.00 scale) in the last 60 credits of their baccalaureate degree. Other application requirements are as follows:
- A one-page statement of educational and career goals
- Current resume
- Internationally educated students must submit English proficiency scores
Certificate Requirements
Total credits: 14
This certificate may be pursued on a full- or part-time basis.
Code | Title | Credits |
---|---|---|
ECE 527 | Learning From Data | 3 |
or CS 580 | Introduction to Artificial Intelligence | |
ME 575 | AI Design and Deployment Risks | 3 |
ME 576 | AI: Ethics, Policy, and Society | 3 |
ME 577 | Emerging AI Robotics Tech Seminar ¹ | 2 |
SYST 578 | Systems Engineering and Artificial Intelligence | 3 |
Total Credits | | 14 |
¹ This is a one-credit course that must be taken for a minimum of two semesters.
Program Outcomes
Students will learn:
- the fundamentals of artificial intelligence,
- how AI systems are architected,
- the principles of systems engineering as they relate to AI systems,
- theories of AI safety and risk,
- how to test and evaluate such systems to meet risk thresholds, and
- how to identify ethical, legal, and regulatory issues that arise in such systems.