Is Insurtech a High-Risk Application of AI?
While many AI regulations may apply to a company operating in the Insurtech space, these laws are not uniform in their obligations. Many of these regulations concentrate on different regulatory constructs, and a company’s focus will determine which obligations apply to it. For example, certain jurisdictions, such as Colorado and the European Union, have enacted AI laws that specifically address “high-risk AI systems” and place heightened burdens on companies deploying AI models that fall into this category.
Although many deployments that are considered a “high-risk AI system” in one jurisdiction may also meet that categorization in another jurisdiction, each regulation technically defines the term quite differently.
Europe’s Artificial Intelligence Act (EU AI Act) takes a graduated, risk-based approach to compliance obligations for in-scope companies. In other words, the higher the risk associated with an AI deployment, the more stringent the requirements for the company’s AI use. Under Article 6 of the EU AI Act, an AI system is considered “high risk” if it meets both conditions of subsection (1)[1] of that provision or if it falls within the list of AI systems designated high risk in Annex III of the EU AI Act,[2] which includes AI systems that process biometric data, evaluate the eligibility of natural persons for benefits and services, evaluate creditworthiness, or perform risk assessment and pricing in relation to life or health insurance.
The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, adopts a risk-based approach to AI regulation. The CAIA focuses on the deployment of “high-risk” AI systems that could create “algorithmic discrimination.” Under the CAIA, a “high-risk” AI system is one that, when deployed, makes, or is a substantial factor in making, a “consequential decision,” namely a decision that has a material legal or similarly significant effect on, among other things, the provision or cost of insurance.
Notably, even proposed AI bills that have not been enacted have treated insurance-related activity as falling within their proposed regulatory scope. For instance, on March 24, 2025, Virginia’s Governor Glenn Youngkin vetoed the state’s proposed High-Risk Artificial Intelligence Developer and Deployer Act (also known as the Virginia AI Bill), which would have applied to developers and deployers of “high-risk” AI systems doing business in Virginia. Compared to the CAIA, the Virginia AI Bill defined “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions. Even under that failed bill, however, an AI system would have been considered “high-risk” if it was specifically intended to autonomously make, or be a substantial factor in making, a “consequential decision,” defined as a decision that has a “material legal, or similarly significant, effect on the provision or denial to any consumer of,” among other things, insurance.
Both the CAIA and the failed Virginia AI Bill explicitly identify that an AI system making a consequential decision regarding insurance is considered “high-risk,” which certainly creates the impression that there is a trend toward regulating AI use in the Insurtech space as high-risk. However, the inclusion of insurance on the “consequential decision” list of these laws does not definitively mean that all Insurtech leveraging AI will necessarily be considered high-risk under these or future laws. For instance, under the CAIA, an AI system is only high-risk if, when deployed, it “makes or is a substantial factor in making” a consequential decision. Under the failed Virginia AI Bill, the AI system had to be “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”
Thus, the scope of regulated AI use, which varies from one applicable law to another, must be considered together with the business’s proposed application to get a better sense of the appropriate AI governance in a given case. While various insurance use cases that leverage AI, such as those that improve underwriting, fraud detection, and pricing, could result in consequential decisions that affect an insured, other internal uses of AI may not be considered high risk under a given threshold. For example, leveraging AI to assess a strategic approach to marketing insurance, or to make new client onboarding or claims processing more efficient, likely does not trigger the consequential-decision threshold required to be considered high-risk under the CAIA or the failed Virginia AI Bill. Further, even if an AI system is involved in a consequential decision, that involvement alone may not render it high-risk; the CAIA, for instance, requires that the AI system make the consequential decision or be a substantial factor in making it.
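By way of illustration only, the CAIA-style threshold analysis described above can be reduced to a simple triage exercise for an internal AI-governance team. The sketch below uses hypothetical field names and deliberately simplifies the statutory language; it shows the shape of the analysis, not a compliance determination.

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    """Hypothetical internal record describing a proposed AI deployment."""
    name: str
    involves_consequential_decision: bool  # e.g., eligibility, pricing, claim denial
    ai_makes_decision: bool                # the AI output is itself the decision
    ai_substantial_factor: bool            # the AI output materially shapes the decision


def is_high_risk_caia_style(use_case: UseCase) -> bool:
    """Simplified, non-authoritative reading of the CAIA threshold: high-risk only if
    the system makes, or is a substantial factor in making, a consequential decision."""
    if not use_case.involves_consequential_decision:
        return False
    return use_case.ai_makes_decision or use_case.ai_substantial_factor


# Illustrative comparison: underwriting support versus internal marketing analytics.
underwriting = UseCase("underwriting model", True, False, True)
marketing = UseCase("marketing segmentation", False, False, False)
print(is_high_risk_caia_style(underwriting))  # True -> escalate for governance review
print(is_high_risk_caia_style(marketing))     # False -> likely below the threshold
```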
Although the EU AI Act does not expressly label Insurtech as high-risk, a similar analysis is possible because Annex III of the EU AI Act lists certain AI uses that may be implicated by an AI system deployed in the Insurtech space. For example, an AI system that assesses creditworthiness as part of developing a pricing model in the EU likely triggers the law’s high-risk threshold. Similarly, AI modeling used to assess whether an applicant is eligible for coverage may also trigger the high-risk threshold. Under Article 6(3) of the EU AI Act, however, even if an AI system fits within one of the Annex III categories, it will not be considered high-risk if it does not pose a significant risk of harm to individuals’ health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making; a provider relying on this carve-out must document its assessment before placing the system on the market.
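Again purely as an illustration, the EU-side analysis can be thought of as a two-step check: does the use fall within an Annex III category, and, if so, has a documented assessment concluded that it does not pose a significant risk of harm? The sketch below encodes that logic with hypothetical parameter names and simplifies the Act considerably.

```python
def eu_high_risk_status(in_annex_iii_category: bool,
                        documented_no_significant_risk: bool) -> str:
    """Simplified reading of the Annex III route to 'high-risk' and its derogation."""
    if not in_annex_iii_category:
        return "not high-risk via Annex III (Article 6(1) must still be checked)"
    if documented_no_significant_risk:
        return "presumptively not high-risk, supported by a documented assessment"
    return "high-risk: Annex III category and no documented derogation"


# Example: creditworthiness scoring feeding a pricing model, with no assessment on file.
print(eu_high_risk_status(in_annex_iii_category=True,
                          documented_no_significant_risk=False))
```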
Under the CAIA, when dealing with a high-risk AI system, various obligations come into play, and they differ for developers[3] and deployers[4] of the AI system. Developers must publish a disclosure on their website identifying the high-risk AI systems they have developed and explaining how they manage known or reasonably foreseeable risks of algorithmic discrimination. Developers must also notify the Colorado Attorney General and all known deployers of the AI system within 90 days of discovering that the system has caused or is reasonably likely to cause algorithmic discrimination, and they must make significant additional documentation about the high-risk AI system available to deployers.
Deployers have different obligations under the CAIA when leveraging a high-risk AI system. First, they must notify consumers when the high-risk AI system will make, or will be a substantial factor in making, a consequential decision about the consumer. This notice includes (i) a description of the high-risk AI system and its purpose, (ii) the nature of the consequential decision, (iii) contact information for the deployer, (iv) instructions on how to access the required website disclosures, and (v) information regarding the consumer’s right to opt out of the processing of the consumer’s personal data for profiling. Additionally, when use of the high-risk AI system results in a decision adverse to the consumer, the deployer must disclose to the consumer (i) the reason for the consequential decision, (ii) the degree to which the AI system was involved in the adverse decision, and (iii) the type of data used to reach that decision and where that data was obtained, and must give the consumer an opportunity to correct the personal data that was used as well as to appeal the adverse decision through human review. Deployers must also make additional public disclosures regarding the information and risks associated with the AI systems they deploy. Given that the failed Virginia AI Bill proposed similar obligations, it is reasonable to view the CAIA as a roadmap for high-risk AI governance considerations in the United States.
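To make the deployer-side disclosure duties concrete, the following sketch models the pre-decision notice and the adverse-decision disclosure as simple checklists. The field names are illustrative paraphrases of the elements described above, not prescribed statutory terminology, and the helper is a hypothetical convenience for catching incomplete disclosures.

```python
from dataclasses import dataclass, fields


@dataclass
class PreDecisionNotice:
    """Elements to assemble before a consequential decision (illustrative labels)."""
    system_description_and_purpose: str
    nature_of_consequential_decision: str
    deployer_contact_information: str
    website_disclosure_instructions: str
    profiling_opt_out_information: str


@dataclass
class AdverseDecisionDisclosure:
    """Elements to assemble after an adverse decision (illustrative labels)."""
    reason_for_decision: str
    degree_of_ai_involvement: str
    data_types_and_sources: str
    correction_and_appeal_instructions: str


def missing_elements(notice) -> list[str]:
    """Flag any checklist fields left empty so an incomplete disclosure is caught early."""
    return [f.name for f in fields(notice) if not getattr(notice, f.name)]
```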
High-risk AI systems must also satisfy a set of requirements under the EU AI Act (Articles 8 through 15) that are more systemic in nature. These include the implementation, documentation, and maintenance of a risk management system that identifies and analyzes reasonably foreseeable risks the system may pose to health, safety, or fundamental rights, as well as the adoption of appropriate and targeted risk management measures designed to address those identified risks. High-risk AI governance under this law also extends to data and data governance practices, technical documentation, record-keeping, transparency and the provision of information to deployers, human oversight, and appropriate levels of accuracy, robustness, and cybersecurity.
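For teams standing up the risk management process described above, a minimal sketch of a risk-register entry might look like the following; the structure, field names, and ratings are assumptions chosen for illustration rather than terminology drawn from the Act.

```python
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    """One reasonably foreseeable risk identified for a high-risk AI system (illustrative)."""
    description: str         # e.g., "pricing model disadvantages a protected group"
    affected_interest: str   # health, safety, or fundamental rights
    likelihood: str          # qualitative rating chosen by the governance team
    severity: str
    mitigations: list[str] = field(default_factory=list)  # targeted measures adopted


risk_register = [
    RiskEntry(
        description="creditworthiness feature introduces proxy discrimination",
        affected_interest="fundamental rights",
        likelihood="medium",
        severity="high",
        mitigations=["feature review", "disparate-impact testing before release"],
    )
]
```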
When deploying high-risk AI systems, in-scope companies must carve out the resources not only to assess whether they fall within this categorization, but also to ensure that the full range of applicable requirements is adequately considered and implemented before the AI system is deployed.
The Insurtech space is growing in parallel with the expanding patchwork of U.S. AI regulations. Prudent growth in the industry requires awareness of the associated legal dynamics, including emerging regulatory concepts nationwide.
[1] Subsection (1) provides that an AI system is high-risk if it is intended to be used as a safety component of a product (or is itself a product) covered by specific EU harmonization legislation listed in Annex I of the AI Act, and that same harmonization legislation requires the product incorporating the AI system as a safety component, or the AI system itself as a stand-alone product, to undergo a third-party conformity assessment before being placed on the EU market.
[2] Annex III of the EU AI Act can be found at https://artificialintelligenceact.eu/annex/3/
[3] Under the CAIA, a “Developer” is a person doing business in Colorado that develops or intentionally and substantially modifies an AI system.
[4] Under the CAIA, a “Deployer” is a person doing business in Colorado that deploys a High-Risk AI System.