The graduate certificate in Responsible AI covers the fundamentals of artificial intelligence (AI), how AI systems are architected, the principles of systems engineering as they relate to AI systems, theories of AI safety and risk, how to test and evaluate such systems to meet risk thresholds, and how to identify ethical, legal and regulatory issues that arise in such systems. Students will be prepared to develop and manage complex systems with embedded AI, including identifying the unique requirements of such systems, testing and certifying them, and defining and maintaining safe levels of performance for deployed AI. Graduates will also be able to develop acquisition plans for complex systems with embedded AI and to establish AI maintenance programs, including auditing. Areas of application include safety-critical physical systems such as self-driving cars, air taxis and health applications, as well as software-based systems such as financial and banking systems and those that support education and research.

Admissions

In addition to the general admission requirements of the university, applicants must have earned a GPA of 3.00 or better on a 4.00 scale in the last 60 credits of their baccalaureate degree. Other application requirements are as follows:

  • A one-page statement of educational and career goals
  • A current resume
  • English proficiency scores (required of internationally educated applicants)

Banner Code: EC-CERG-RSAI

Certificate Requirements

Total credits: 14

This certificate may be pursued on a full- or part-time basis.

Course                                                           Credits
ECE 527    Learning From Data                                    3
   or CS 580    Introduction to Artificial Intelligence
ME 575     AI Design and Deployment Risks                        3
ME 576     AI: Ethics, Policy, and Society                       3
ME 577     Emerging AI Robotics Tech Seminar 1                   2
SYST 578   Systems Engineering and Artificial Intelligence       3
Total Credits                                                    14
1 This is a one-credit course that must be taken for a minimum of two semesters.

Program Outcomes

Students will learn:

  • the fundamentals of artificial intelligence,
  • how AI systems are architected,
  • the principles of systems engineering as they relate to AI systems,
  • theories of AI safety and risk,
  • how to test and evaluate such systems to meet risk thresholds, and
  • how to identify ethical, legal and regulatory issues that arise in such systems.