
In the near future, artificial intelligence could revolutionize how we approach medical diagnostics, treatment, disease prevention, drug development, and other healthcare processes. However, wide adoption of this technology will be possible only if industry experts and regulatory bodies establish clear AI regulations in healthcare.
“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people worldwide, but like all technology, it can also be misused and cause harm.”
These are the words of Dr. Tedros Adhanom Ghebreyesus, WHO Director-General, referring to risks like the unethical collection and use of health data and biases encoded in algorithms. The point remains strikingly relevant two years after it was made.
How do legal regulations secure the safety of AI solutions used in healthcare? What does the regulatory landscape look like for AI-focused health tech products? And how can businesses ensure their AI solutions for the healthcare industry are reliable, unbiased, and transparent? Drawing on our own experience in healthcare software development, we explore these questions in this article.
Healthcare industry regulations for AI
In May 2023, Sam Altman, CEO and cofounder of OpenAI, said, "There should be limits on what a deployed model is capable of and what it does." He also recommended that lawmakers pursue new safety requirements for testing tech products before release.
So, why do we mention it in the context of AI technology in healthcare? Well, because even wealthy countries that are actively using artificial intelligence don’t have clear legislative regulations governing the use of AI in the healthcare industry yet. Let’s explore why.
AI regulatory landscape in the UK
As of July 2023, there is no legislation explicitly governing the use of AI in the UK, meaning the use of this technology in healthcare is regulated by general legislation like the Data Protection Act 2018 (DPA 2018) and the General Data Protection Regulation (GDPR).
The DPA 2018 and the GDPR are closely linked in the UK context. The EU implemented the GDPR in 2018, aiming to harmonize data protection laws across member states. And even though the UK has left the EU, the GDPR continues to apply in the country through the European Union (Withdrawal) Act 2018.
The DPA 2018 was enacted as the primary legislation supplementing the GDPR to ensure data protection and privacy in the UK. The act provides additional details and provisions that complement the GDPR, taking into account the UK's legal landscape. Together, the two frameworks establish the data protection rules and rights that organizations and individuals in the UK must comply with.
In the context of AI, the DPA 2018 and the GDPR require organizations to:
- lawfully process personal data used in AI systems
- protect data privacy
- ensure transparency and information provision to individuals
- respect rights related to automated decision-making
- adhere to cross-border data transfer requirements
- establish accountability and governance measures
Other UK regulations AI creators in the health tech sector must comply with include the following:
- Regulations by the Medicines and Healthcare products Regulatory Agency (MHRA), the UK regulatory agency that provides certifications aimed at ensuring the safety, quality, and effectiveness of medical devices and medicines, including AI-powered devices and software applications used in healthcare.
- National Health Service (NHS) Digital Health Technology Standard, which provides guidance and assessment criteria for digital health technologies, including AI applications. Its main focuses include clinical evidence, data protection, interoperability, and user experience.
- UK Medical Devices Regulations 2002 (UK MDR 2002), which govern medical devices, including AI-powered ones, used in healthcare. The regulations derive from EU medical device legislation and set requirements for the quality, performance, safety, and security of medical devices.
Though there isn’t a regulatory framework that governs AI specifically, in September 2021, the UK government presented the National AI Strategy, a ten-year plan for the development of AI in the country. Additionally, in July 2022, the UK government published a policy paper establishing a pro-innovation approach to AI regulation.
Currently, UK legislators are focused on developing a clear and transparent regulatory framework that would “drive growth while also protecting our safety, security and fundamental values.” For instance, the UK’s National Institute for Health and Care Excellence has been working on a multi-agency advisory service (MAAS) for AI and data-driven technologies, funded by the NHS AI Lab.
The service is intended to be a partnership between the Care Quality Commission, the Health Research Authority, and the MHRA, helping developers and technology adopters navigate the regulatory landscape. However, it’s too early to tell how successful all this guidance and legislation will be long-term.
EU regulations on AI in healthcare
As we’ve already established, one of the EU’s core legal regulations for AI in healthcare is the GDPR, a data protection regulation that sets requirements for the collection, use, storage, and processing of personal data.
While the UK is taking a guidance-based approach, the EU is pursuing a comprehensive, detailed legislative framework, a more practical choice for a union of 27 member states.
In April 2021, the European Commission presented a proposal for a regulation laying down harmonized rules on artificial intelligence, also known as the AI Act. The regulation goes hand in hand with the Coordinated Plan on AI. The end goal of the act is to ensure Europeans can trust the AI solutions they use and to contribute to building an ecosystem of AI excellence within the EU.
The AI Act aims to address the risks of specific AI use cases, categorizing them into four levels:
- Unacceptable risk
- High risk
- Limited risk
- Minimal risk
For instance, minimal-risk solutions like secure administrative support automation systems will face no restrictions. Limited-risk solutions will have transparency obligations.
High-risk AI systems will need to undergo conformity assessments. This category will likely include robotic assistants used in surgeries, AI systems that provide diagnoses, clinical decision support systems, and similar tools. Such AI-enabled health tech products must have an established risk management system, comply with data governance requirements, and meet several other conditions.
The AI Act is expected to take effect no earlier than the second half of 2024. Its territorial applicability will encompass all member states and, given the Northern Ireland Protocol, Northern Ireland as well.
AI regulations in healthcare for the US
Just like the UK, the US does not have specific comprehensive regulations solely focused on AI in healthcare, though the country’s regulatory bodies do provide guidance on the use of AI in healthcare settings. At the same time, as the EU is working on passing the AI Act, the US has been relatively slow in developing a relevant regulatory framework at the federal level.
As of July 2023, AI technologies in healthcare in the US fall under other existing laws and regulations, specifically those that govern aspects of data privacy, security, and medical device regulations.
Here are the key regulations and guidance sources relevant to AI in healthcare in the US:
- Health Insurance Portability and Accountability Act (HIPAA) sets standards for protecting patients' sensitive health information, also known as protected health information (PHI). Companies and organizations handling PHI, including those using AI, must comply with HIPAA's privacy and security rules to ensure the confidentiality and integrity of patient data.
- Federal Food, Drug, and Cosmetic Act (FD&C Act) regulates medical devices, including certain AI-based software apps used in healthcare. This means AI software used in healthcare can fall under the medical device category and require FDA clearance.
- FDA's Software as a Medical Device (SaMD) guidance addresses the regulation of software that functions as a medical device, including AI-based software, and outlines principles for determining risk categorization and regulatory requirements for SaMD.
- National Institute of Standards and Technology (NIST) Frameworks, including the AI Risk Management Framework, provide guidance on assessing and managing risks associated with AI systems.
For companies and organizations developing or managing health tech products powered by AI, it’s essential to constantly monitor the regulatory landscape since artificial intelligence is a rapidly evolving area.
If you are also involved in AI-enabled software development in the US, be sure to follow the latest updates from regulatory bodies like the FDA, FTC, OCR, NIST, and other relevant regulators. Sooner or later, the US will likely develop a dedicated strategy for regulating AI in the healthcare industry.
Fostering trust and confidence in AI healthcare systems
Trust is the foundation for the successful widespread adoption of artificial intelligence. Creating a proper regulatory environment for AI in healthcare is the first step toward fostering the acceptance, engagement, and confidence that both healthcare professionals (HCPs) and patients need to work with AI systems.
There is no doubt that AI holds the potential to fundamentally transform nearly every aspect of healthcare: from creating treatment plans to performing surgeries. However, for clinicians to embrace the technology, the solutions must be reliable since HCPs are responsible for their patients’ lives.
Among the primary concerns connected to AI are:
- accuracy of predictions
- use of sensitive medical information and data breaches
- bias of AI algorithms
- lack of transparency and explanation
- accountability for negative outcomes caused by AI systems
Since these concerns haven’t been resolved yet, attitudes toward AI among patients and medical workers remain rather mixed.
For example, according to a Pew Research Center survey, 60% of US adults would feel uncomfortable if their healthcare provider relied on AI for diagnostics and treatment recommendations. At the same time, 40% think AI in health and medicine would reduce the number of mistakes. Moreover, 51% of Americans who see a problem with racial and ethnic bias in healthcare say the issue of unfair treatment would improve with AI.
Another study by Tebra, which surveyed 1,000 US adults and 500 healthcare professionals, showed that only a little over 1 in 10 HCPs use AI technologies. However, almost 50% expressed an intent to adopt these technologies in the future, and 95% of those who reviewed AI’s medical advice shifted toward a more positive perspective on AI.
What’s needed is comprehensive, transparent, and clear AI legislation in healthcare that addresses the issues above and gives users information on the safety, efficiency, and accuracy of AI systems.
Key considerations for developing a compliant AI solution in healthcare
As we’ve already established, developing a compliant AI solution for the health tech sector is a way to gain users’ trust and mitigate legal risks. However, these aren’t the only reasons.
Strict regulations in the industry help protect patient safety and privacy, prevent AI misuse and abuse, improve the accuracy of AI solutions, and encourage creators of these solutions to battle bias and other ethical issues.
Let’s explore what needs to be considered to build a truly secure, compliant AI solution for healthcare purposes.
Regulatory compliance
While current AI regulations in healthcare aren’t designed to address all AI-related issues specifically, there are still several legal requirements that health tech companies building AI solutions must comply with to ensure proper handling of patient data, consent, etc.
Data privacy and security
Establishing solid data privacy and security measures is vital for earning users’ trust. We suggest implementing strong data encryption, access controls, and authentication measures. It’s also advisable to minimize data collection and anonymize data where possible, conduct regular security audits and risk assessments, and train employees on best practices. Secure data storage and transmission and continuous monitoring are also crucial.
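To make the anonymization point concrete, below is a minimal sketch of pseudonymizing patient records before they enter an AI pipeline. The field names, HMAC-based tokens, and in-code key are illustrative assumptions rather than a prescription; in production, the key would come from a dedicated key management service.

```python
# Minimal pseudonymization sketch (illustrative field names and key handling).
import hmac
import hashlib

# Assumption: in a real system this key is fetched from a secure vault/KMS.
SECRET_KEY = b"replace-with-key-from-a-secure-vault"

# Direct identifiers are dropped entirely before data reaches the model.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers and replace the patient ID with a stable token."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # HMAC yields a deterministic pseudonym: the same patient always maps to
    # the same token, so records stay linkable without a lookup table.
    clean["patient_id"] = hmac.new(
        SECRET_KEY, record["patient_id"].encode(), hashlib.sha256
    ).hexdigest()[:16]
    return clean

record = {"patient_id": "MRN-001234", "name": "Jane Doe", "age": 54, "diagnosis": "I10"}
print(pseudonymize(record))  # identifiers removed, ID replaced with a token
```

Deterministic tokens like these let teams link a patient’s records across datasets without ever exposing the underlying identifier.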
Bias mitigation and other ethical considerations
Let's be honest, the traditional healthcare system has seen all kinds of bias: racial, ethnic, gender, socioeconomic, geographic, etc. In 2023, it’s crucial to develop AI solutions with ethical principles in mind. To ensure fairness and mitigate issues like bias, discrimination, and safety violations, we recommend training AI algorithms on diverse and representative datasets and committing to protecting patient autonomy, privacy, and confidentiality.
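As one example of what such a check can look like in practice, here is a minimal sketch of a demographic parity audit that compares a model’s positive-prediction rate across patient groups. The predictions, group labels, and 0.2 threshold are illustrative assumptions; real fairness audits cover additional metrics, such as equalized odds, on held-out clinical data.

```python
# Minimal demographic parity check (illustrative data and threshold).
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

preds = [1, 1, 1, 0, 1, 0, 0, 0]                   # hypothetical model outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical group labels

rates = positive_rate_by_group(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}

# Flag the model for review if the gap between groups exceeds a chosen threshold.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Warning: possible disparate impact; investigate before deployment.")
```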
Validation by healthcare professionals
The solution you develop must fit into the real-life clinical workflow for the HCPs to adopt it. Therefore, we recommend collaborating with healthcare facilities, engaging medical workers early on, and conducting meticulous clinical validation, testing, and refining of the solution based on HCPs’ feedback. You will also need to gather evidence to prove the solution’s safety and accuracy in clinical settings.
Explainability and transparency
Health tech companies should foster explainability and transparency when building AI solutions to promote trust among HCPs and patients, enhance patient safety, detect and mitigate bias, and contribute to education and knowledge sharing. This can be done by utilizing interpretable algorithms, documenting data and model details, employing explainability techniques, addressing ethical considerations, engaging stakeholders, and seeking feedback.
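As an example of one such technique, the sketch below uses permutation importance from scikit-learn to show which input features a model relies on most; the toy dataset and feature names are illustrative assumptions.

```python
# Minimal explainability sketch using permutation importance (toy data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "blood_pressure", "glucose", "bmi"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy; a large drop means
# the model leans heavily on that feature, which is useful context for clinicians.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```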
We also recommend health tech companies establish clear roles and responsibilities for HCPs within AI solutions and promote collaboration between AI algorithms and human experts to enhance patient care and safety.
Documentation and auditability
Maintaining detailed development process documentation helps ensure regulatory compliance, accountability, and transparency. It also enables health tech companies to demonstrate adherence to regulations, build trust with stakeholders, and facilitate audits. Therefore, keeping track of data sources, validation processes, user feedback, and other details is essential.
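As an illustration, here is a minimal sketch of an append-only audit record written for every AI prediction. The schema is an assumption based on the details listed above; a production system would also protect log integrity, for example by hashing or signing entries.

```python
# Minimal audit-trail sketch (illustrative schema).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    model_version: str  # which model produced the output
    input_hash: str     # fingerprint of the (pseudonymized) input
    prediction: str     # what the system returned
    reviewed_by: str    # the clinician who reviewed the result
    timestamp: str      # when the prediction was made

def log_prediction(path: str, record: AuditRecord) -> None:
    """Append one JSON line per prediction for later audits."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_prediction("audit.log", AuditRecord(
    model_version="triage-model-1.4.2",   # hypothetical version tag
    input_hash="9f2c1a7e",                # hypothetical input fingerprint
    prediction="refer to cardiology",
    reviewed_by="dr_smith",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```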
Continuous monitoring and improvement
Healthcare companies working with AI should continuously monitor and improve their solutions to ensure high-quality, equitable, and ethically responsible AI-driven healthcare services. We suggest establishing mechanisms to collect and analyze user feedback, identify and address issues promptly, and provide updates to ensure user satisfaction.
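One concrete form such monitoring can take is data-drift detection. The sketch below compares the production distribution of a model input against its training baseline using the population stability index (PSI); the bin count and the 0.2 alert threshold are common rules of thumb, used here as illustrative assumptions.

```python
# Minimal data-drift monitoring sketch using the population stability index.
import numpy as np

def psi(baseline, current, bins=10):
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero and log(0) in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
training_ages = rng.normal(55, 10, 5000)  # hypothetical training distribution
live_ages = rng.normal(62, 12, 1000)      # production data has drifted older

score = psi(training_ages, live_ages)
print(f"PSI = {score:.3f}")
if score > 0.2:  # a common rule of thumb for significant drift
    print("Drift detected: review model performance and consider retraining.")
```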
While developing a compliant AI solution for healthcare is not easy, all these considerations, as well as collaboration with healthcare professionals, regulatory bodies, legal experts, and software developers with relevant expertise, can help you build a solution that aligns with industry best practices and regulatory requirements.
Conclusion
The healthcare sector is renowned for its strict regulations covering numerous aspects like licensing requirements, data storage, and clinical trials. However, AI-powered solutions have proven to be much harder to regulate.
Existing regulatory frameworks are primarily focused on conventional healthcare systems. But AI is a rapidly evolving and incredibly flexible technology, meaning it needs non-conventional regulatory strategies that are yet to be implemented.
With great power comes great responsibility. Health tech companies that decide to go for the integration of AI systems will likely have a substantial competitive advantage. However, they will also need to constantly monitor the industry’s ever-changing regulatory landscape to mitigate the risks that come with artificial intelligence and ensure their solutions are safe, secure, and efficient.
As experts in healthcare software development and AI solutions integration, Mind Studios is ready to be your guide into the medical tech world. Whether you have an idea for a digital product, want to enhance your current medical system, or need advice on AI and healthcare regulations — feel free to reach out, and we’ll arrange a free consultation for you.