AI technologies hold the promise of revolutionizing our society and personal lives, with impacts spanning sectors such as business, healthcare, transportation, and cybersecurity, as well as the environment and our planet. They have the potential to foster inclusive economic growth and to propel scientific breakthroughs that improve the state of our world. However, AI technologies also carry risks that could adversely affect individuals, groups, organizations, communities, society, the environment, and the planet. Like other technological risks, those associated with AI can manifest in many forms and can be categorized by their duration (long- or short-term), probability (high or low), scope (systemic or localized), and impact (high or low).
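For illustration, these four dimensions can be captured as entries in a simple risk register. The sketch below is a minimal Python example; the class, field names, enum values, and the sample risk are assumptions made for illustration rather than a formal taxonomy.

```python
from dataclasses import dataclass
from enum import Enum


class Duration(Enum):
    SHORT_TERM = "short-term"
    LONG_TERM = "long-term"


class Level(Enum):
    LOW = "low"
    HIGH = "high"


class Scope(Enum):
    LOCALIZED = "localized"
    SYSTEMIC = "systemic"


@dataclass
class AIRisk:
    """One entry in a simple AI risk register, categorized along the four
    dimensions discussed above: duration, probability, scope, and impact."""
    description: str
    duration: Duration
    probability: Level
    scope: Scope
    impact: Level


# Hypothetical example: a bias risk in a hiring model.
example = AIRisk(
    description="Hiring model systematically disadvantages a protected group",
    duration=Duration.LONG_TERM,
    probability=Level.HIGH,
    scope=Scope.SYSTEMIC,
    impact=Level.HIGH,
)
print(example)
```

With the risk landscape framed along these dimensions, a typical AI project proceeds through the following steps: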

  1. Define the Problem (that AI may solve)
  2. Set Clear Objectives (Standards)
  3. Assemble the Team (Resource Management)
  4. Data Collection and Preparation (internal and external technical considerations)
  5. Model Development (iterate in dev and staging environments before promoting to production)
  6. Model Testing and Validation (verify that code and data work together correctly; see the sketch after this list)
  7. Deployment (Go-Live)
  8. Maintenance and Continuous Improvement (Transfer responsibility to business/technical stakeholders)
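To make steps 4 through 7 concrete, the sketch below walks through them in Python with scikit-learn. The bundled Iris dataset stands in for enterprise data, and the logistic-regression model, the 0.9 accuracy gate, and the model.joblib filename are illustrative assumptions rather than a prescription for any particular project.

```python
# A compressed walk-through of steps 4-7 on a toy dataset.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Step 4: data collection and preparation (load the data, hold out a test set).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Step 5: model development (preprocessing and model bundled in one pipeline).
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Step 6: testing and validation against the objective agreed in step 2
# (the 0.9 threshold here is an arbitrary example of such an objective).
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.3f}")
assert accuracy >= 0.9, "Model does not meet the agreed acceptance threshold"

# Step 7: package the validated model as a deployable artifact.
joblib.dump(model, "model.joblib")
```

In a real project, the packaged artifact would then be promoted through dev and staging environments before go-live, and responsibility would transfer to business and technical stakeholders for maintenance and continuous improvement (step 8).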

Each AI endeavor is different and may require additional steps or considerations depending on the specific business problem, the available resources, and the data already established within the enterprise. It’s also important to consider ethical implications and to ensure that your AI project complies with all relevant laws and regulations. To manage the risks described above, we use the AI Risk Management Framework (AI RMF) from NIST.