APPENDIX I: Model Governance Definitions
The purpose of the model governance question is to better understand a company’s awareness of
specific risk areas tied to the NAIC’s Artificial Intelligence (AI) Principles. In addition, the survey seeks
to determine whether guidelines and/or best practices are documented. Specifically, if the company uses
AI/machine learning (ML) models, does the company have a documented process in place that addresses:
• Fairness and Ethics Considerations: Ensuring responsible adherence to fairness and ethical
considerations. Because the definition of “fairness and ethics” remains a matter of debate, for the
purposes of this survey, and assuming a general understanding of the terms, the response should be
consistent with how the company defines those terms. Generally, this means respecting the rule of law
and implementing trustworthy solutions designed to benefit consumers in a manner that avoids harmful
or unintended consequences, including unfair or proxy discrimination.
• Accountability for Data and Algorithms’ Compliance with Laws as Well as Intended and Unintended
Impacts: Ensuring the data used and the algorithms/models within the scope of the AI/ML system deliver
the intended benefit, and that proactive processes are in place to prevent unacceptable unintended
impacts. Simply put, the company should be responsible for the creation, implementation, and impacts
of any AI system.
• Appropriate Resources and Knowledge Involved to Ensure Compliance with Laws, Including Those
Related to Unfair Discrimination: Ensuring the requisite and appropriate resources, skill sets, and
knowledge needed to ensure compliance with laws, including those related to unfair discrimination, are
actively involved in these programs and in decision-making, including oversight of third parties’
understanding of, and competence related to, compliance with relevant laws and the issue of unfair
discrimination.
• Ensure Transparency With Appropriate Disclosures, Including Notice to Consumers Specific to Data
Being Used and Methods for Appeal and Recourse Related to Inaccurate Data: Ensuring documented
processes and best practices are in place that govern and actively address transparency, providing
adequate, complete, and understandable consumer disclosure regarding the data being used and how it
is used, as well as a way for consumers to appeal or correct inaccurate data. This is intended to apply to
consumer data not already protected by legislation such as the federal Fair Credit Reporting Act (FCRA),
as the assumption is that all companies are compliant with that law.
• AI Systems Are Secure, Safe, and Robust, Including Decision Traceability and Security and Privacy Risk
Protections: Ensuring an appropriate, documented governance process is in place, specific to the
company’s AI/ML activity or program, that protects the security of its data and intellectual property from
potentially compromising interference or risk, and that relevant and necessary privacy protections are in
place. Ensuring the data and the AI/ML models are sufficiently transparent and explainable that they can
be reviewed for compliance with laws and best practices and shown not to be unfairly discriminatory or
used for an unethical purpose.
It is understood that governance models vary in the components and terms used to describe these
risk areas. However, there is a common thread across most governance models, and this language was
used in this survey specifically because it ties directly to the NAIC’s AI Principles. Where there may be concerns