Artificial Intelligence’s Role in the Future of Healthcare — And What That Means for Privacy
As artificial intelligence and machine learning become woven into the fabric of the life sciences, careful consideration must be given to the ethical and legal implications that come with the prevalence of big data, says Babak Forouraghi, PhD, professor and chair of computer science.
Artificial intelligence and machine learning have made a significant impact on the way we live and work — from our smartphones’ facial recognition software and internet search engines, to the medical imaging and genome mapping applications used by medical practitioners and scientists. AI is especially important in the social media applications many of us use on a daily basis. LinkedIn just recommended a job you should apply for? That’s machine learning at work. Facebook protected you from offensive or fraudulent content? That’s AI’s natural language processing tools doing their job.
Recently, AI and its subsets have gained traction in key healthcare areas such as clinical decision support, remote patient monitoring and telehealth — especially in light of COVID-19. The common goal of these applications is to help improve patient outcomes by drawing insight from compiled data. This includes vital medical notes, recordings transmitted from medical devices, laboratory images, and audio/video communications between clinicians and patients.
But as more and more data becomes accessible, what does this mean for patient privacy, liability and cybersecurity?
Healthcare and AI Successes
Several years ago, the medical device company Medtronic, in partnership with IBM, created a smartphone diabetes management app, which processes millions of data points to discover potential links between glucose readings and lifestyle choices.
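To give a concrete, purely illustrative sense of how such an app might surface links between glucose readings and lifestyle choices, the sketch below computes simple correlations over hypothetical daily records. The data and variable names are invented for demonstration; they do not reflect Medtronic's or IBM's actual app or methods.

```python
import statistics

# Hypothetical daily records: (glucose mg/dL, minutes of exercise, hours of sleep)
records = [
    (180, 10, 5.0),
    (150, 25, 6.5),
    (130, 40, 7.0),
    (120, 55, 7.5),
    (110, 70, 8.0),
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

glucose = [r[0] for r in records]
exercise = [r[1] for r in records]
sleep = [r[2] for r in records]

# Strongly negative values suggest more exercise and more sleep
# coincide with lower glucose in this toy dataset
print(round(pearson(glucose, exercise), 2))  # -0.97
print(round(pearson(glucose, sleep), 2))     # -0.99
```

A production system would of course apply far more sophisticated models to millions of data points, but the underlying idea — quantifying relationships between readings and behavior — is the same.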
In another partnership, Philips, a provider of healthcare technology, and Salesforce, one of the leading providers of customer relationship management (CRM) in the cloud, joined forces to develop a digital health care platform. The new platform enables collaboration between professionals and the integration of vast amounts of data: electronic patient records, diagnostic and treatment information from Philips’ imaging and monitoring equipment, and information from personal devices such as Apple’s HealthKit. According to a recent study by Philips in the U.S., the new telehealth delivery platform for hospitals reduced mortality rates by 26% and length of stay by 20%.
In private homes, virtual assistants such as Alexa on the Amazon Echo Dot help people with Alzheimer’s disease plan daily activities such as eating, bathing and taking medication. Other examples of AI technologies include a machine-learning imaging system that uses deep learning algorithms to provide diagnostic information for skin cancer, and a smart sensor device that estimates the probability of a heart attack.
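Devices like the heart-attack sensor typically reduce a handful of physiological measurements to a single probability. The sketch below shows the general shape of such a risk score using a logistic function; the features and weights are made up for illustration and are in no way clinically validated.

```python
import math

def heart_risk_probability(heart_rate, systolic_bp, age):
    """Illustrative logistic risk score. The weights and baseline values
    below are hypothetical, chosen only to demonstrate the technique."""
    # Linear combination of deviations from rough "typical" baselines
    z = (0.03 * (heart_rate - 70)
         + 0.04 * (systolic_bp - 120)
         + 0.05 * (age - 50))
    # Logistic (sigmoid) function squashes the score into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Elevated heart rate and blood pressure in an older patient
# yield a higher probability than baseline readings
print(round(heart_risk_probability(95, 150, 68), 2))  # 0.95
print(round(heart_risk_probability(70, 120, 50), 2))  # 0.50
```

Real devices learn such weights from large labeled datasets rather than hand-picking them, but the output — a calibrated probability a clinician can act on — follows this pattern.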
Cybersecurity Challenges for Telehealth
Cybersecurity tools are of paramount importance for healthcare organizations to safeguard patient data privacy and integrity. According to a report by CI Security, healthcare data breaches rose 36% in the second half of 2020: more than 21.3 million records were compromised, a 177% increase from the nearly 7.7 million records breached in the first half of the year.
Healthcare companies also use SolarWinds software, which exposes them to the same security risks faced by the Fortune 500 companies, U.S. military branches, government agencies and universities affected by the 2020 attack on the company and its clients. The same report notes that the frequency of daily ransomware attacks increased 50% during the third quarter of 2020 compared with the first half of the year.
Future of AI in Healthcare
In the future, AI-driven technologies will continue to evolve and transform medical management and patient care. With this continuous growth also comes a number of ethical and legal challenges such as data protection and privacy, informed consent, liability and cybersecurity.
In January 2021, the FDA released the agency’s first Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD) Action Plan. The steps in the plan include:
- Further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software’s learning over time);
- Supporting the development of good machine learning practices;
- Fostering a patient-centered approach, including device transparency to users;
- Developing methods to evaluate and improve machine learning algorithms; and
- Advancing real-world performance monitoring pilots.
In addition to plans like this, it is of the utmost importance that all major stakeholders — including healthcare organizations, AI makers, medical practitioners and legislators — create and enforce an adequate level of oversight to ensure the safety and effectiveness of AI in practice.
Babak Forouraghi, Ph.D., is a professor in the College of Arts and Sciences, chair of the computer science department, and program director for the computer science and cybersecurity master’s programs. Forouraghi teaches courses primarily in artificial intelligence and machine learning.
Learn more about Saint Joseph’s University’s graduate program in computer science and certificates in artificial intelligence, cybersecurity, and web and database technologies.