In this episode, TMLT’s Tony Passalacqua and Adrian Senyszyn, principal attorney with Germer PLLC, discuss potential sources of liability from current technologies being adopted in health care. Learn more about the threats posed by Artificial Intelligence (AI), ChatGPT, social engineering, and much more.
Also available on Apple and Spotify. A transcript of this podcast is found below.
Additional episodes in this series:
Episode 4: TMLT Risk Management's Greatest Hits
Episode 3: Texas Medical Board defense
Episode 2: Five things that get physicians sued
Episode 1: Lawsuit defense
Transcript:
Tony Passalacqua: Hello and welcome to this edition of TMLT's podcast, TrendsMDs: Answers for health care's Digital Trends. I'm your host, Tony Passalacqua. Today, I have special guest Adrian Senyszyn from Germer PLLC, and we are going to discuss the impacts of technology on your organization. Adrian Senyszyn, JD, has extensive experience in cyber security, privacy and security roles, data loss prevention, and medical malpractice.
He works closely with health care professionals to manage liability and lectures on a variety of risk management topics. Mr. Senyszyn routinely leads practices through the breach notification process and represents practices investigated by the Department of Health and Human Services and the Texas Medical Board.
He has been a member of Germer Law Firm since 2021. Adrian has also been recognized nationally as a Best Lawyer and Super Lawyer. Welcome, Adrian, to the podcast.
Adrian Senyszyn: Thank you so much for having me here today. I'm so excited to be here to talk about this interesting topic.
Tony Passalacqua: All right. Well, I just want to start off with our first question, which is how do you view technology and how is it incorporated into both professional and personal life?
Adrian Senyszyn: The growth of AI technology is exponential, and from what I see, it's quickly being incorporated into our personal and professional lives and into how we interact with one another.
AI is predicted to replace many low-level jobs in the future, and professional jobs like mine are not immune; there's some fear that it could wipe out whole sectors of professional work, like contract attorneys or specialty areas in health care. However, AI also is widely regarded as a tool that can enhance the predictability and precision of how health care practices deliver medical care to patients.
It also can fill a gap by helping treat underserved patients or by addressing health care sectors or regions that are underserved.
Tony Passalacqua: What are some potential dangers of technology?
Adrian Senyszyn: With this technology, and with all technologies, there come dangers, some of which we'll explore in today's podcast. As, uh, I've alluded to, AI automation can cause job loss.
AI also can create socioeconomic inequality if only the most privileged and wealthy sectors of society can take advantage of the technology. AI can be used to exploit people and businesses by threat actors and governments. There's also a danger that self-aware AI might self-automate and go rogue.
AI also has been proven to create market volatility and it is feared it could cause both economic upturns and downturns. Important to us today is the danger that AI bias might adversely affect the delivery of health care. Also, we anticipate that AI will rapidly challenge our security networks and how we protect patient privacy.
Challenges facing our networks are increasing as we digitize information and the government pushes health care practices to provide patients with ready access to that information. The push to give patients quick access to their health care records, and the easily transferable nature of digital information, will be a challenge for practices in the future as they deal with social engineering techniques like deepfakes or pretexting.
Tony Passalacqua: Can AI be challenging?
Adrian Senyszyn: AI is so challenging because there currently exists a lack of transparency or the ability to even understand and predict how the algorithms work and gather information. It's so important to understand that the algorithms are programmed by humans and therefore have an inherent bias built into them.
This could lead to the algorithm unnecessarily ignoring some key health care information due to the bias. And this, in turn, can result in unsafe medical decisions, or worse, harm to the patient.
Tony Passalacqua: It feels as though there's a race going on right now between all these different companies and governments that are developing AI. What do you think about that?
Adrian Senyszyn: There most certainly is. AI is being developed by the largest companies as well as governments throughout the world. It's not small mom and pop businesses that are developing AI. It's these huge companies, and they plan to make money on the product. The AI race certainly is placing the privacy of information second to the ability of the AI machine to quickly analyze, assemble, and share massive amounts of data and information. The danger is the ability of AI to quickly map out and predict all aspects of a single person's life and health care without that person's permission or even their awareness of what is occurring. AI will be able to instantaneously identify a person's location, preferences, habits, and even predictive behaviors.
AI will know or be able to predict a person's health care. Governments will have the ability to target individuals through various means, if they were to so choose.
Tony Passalacqua: Do you feel that there could be any sort of dangers with AI and the loss of human influence over time?
Adrian Senyszyn: Absolutely, that's a serious risk. Um, the loss of human influence in decision-making and a possible over-reliance on the technology are massive dangers. Relying on AI to identify, diagnose, and treat patients may reduce human empathy and reasoning if proper steps are not put in place to balance AI with how health care is delivered by a physician.
AI also can lead to reduced communication between physicians and patients, reduced collaboration among health care professionals, and a decrease in social skills that also can adversely affect the delivery of health care to patients. Other ethical concerns that I think about are how physicians may be balancing AI to increase efficiency and profitability.
And if that balancing act comes at the expense of patients and the delivery of health care.
Tony Passalacqua: So, one of the big questions that I always think of is who owns the data?
Adrian Senyszyn: That's a great question. There are serious questions about ownership of data and the transparency of companies developing these technologies.
I mean, who's responsible if an AI technology accidentally releases health care information? Is it the company that delivered the technology, or is it the programmer who wrote the algorithm, or is it the physician who's responsible for protecting the patient's health care information? Also, how can we control data, which is so transferable, when there is a mad scramble by the largest companies in the world and by governments to gather as much data as possible for their AI software to digest as part of their analysis and targeting of individuals?
Tony Passalacqua: So, is it difficult to control data? I mean, the way that I look at it is that data has to be able to be quantified by these different algorithms and AI machines, and then it has to be, uh, sent back to the user. How does that data work, and how do you think that it would be controlled in those specific environments?
Adrian Senyszyn: Yeah, yes, you're right. The task of controlling and securing data becomes even more difficult once the data is out of the box and it's in the open and unsecure. AI programs can quickly gobble up information and share it among other programs. ChatGPT is one example of this kind of technology. ChatGPT leverages access to billions of data points, which means it's accessing source data about people without their approval or permission.
You have to understand that every time you go on ChatGPT, the information that is inputted, including your login information and searches, may be retained and used in the future. It's captured; it is not secure and certainly no longer private. If one were to accidentally release sensitive information or protected health information into an unsecure AI program, there would be virtually no way to undo the sharing of that data with the AI program, the different chat bots driving the technology, or any person accessing the information.
There is a real danger of health care practices and physicians using programs like ChatGPT and inadvertently exposing patients' information if proper care and security precautions are not used.
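As one illustration of that kind of precaution, here is a minimal sketch, assuming a hypothetical scrub step a practice might run before any text is pasted into an outside AI tool. The patterns are example placeholders only and would not, on their own, satisfy HIPAA de-identification standards.

import re

# Illustrative only: strip a few obvious identifiers before text leaves the
# practice's control. Real de-identification requires far more than this.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # Social Security numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),    # phone numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),     # dates such as birth dates
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
]

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text is shared."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Pt Jane Doe, DOB 03/14/1962, cell 512-555-0187, jane@example.com"
print(scrub(note))  # the date, phone number, and email are replaced; the name is not

Note that the patient's name still passes through, which is exactly why a simple filter like this is a stopgap rather than a substitute for keeping protected health information out of unsecured AI tools altogether.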
Tony Passalacqua: So, we were just talking about exposing patients' information. One of the things that I always think of is deidentification of patient information and then reidentification. Have you ever heard of anything where a program may do something like that?
Adrian Senyszyn: I know recently Stanford did a study of AI programs that were digesting massive amounts of health care information. The researchers reportedly were able to write an algorithm that correctly matched deidentified health care records back to individual patients. That program successfully recombined the health care information with the patient identifiers in, I believe, approximately 60 percent of the instances.
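To make the linkage idea concrete, here is a toy sketch with fabricated data. It is not the researchers' actual method, only an illustration of how records stripped of names can be matched back to people through quasi-identifiers such as ZIP code, birth year, and sex.

# Toy illustration with made-up data: re-linking "deidentified" records
# to named people by matching quasi-identifiers against an outside dataset.
deidentified_records = [
    {"zip": "78701", "birth_year": 1956, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "78745", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# A separate, publicly available-looking dataset (voter rolls, social media, data brokers).
public_records = [
    {"name": "Jane Doe", "zip": "78701", "birth_year": 1956, "sex": "F"},
    {"name": "John Roe", "zip": "78745", "birth_year": 1990, "sex": "M"},
]

def link(record, people):
    """Return the name of the person whose quasi-identifiers match the record."""
    for person in people:
        if all(record[key] == person[key] for key in ("zip", "birth_year", "sex")):
            return person["name"]
    return None

for rec in deidentified_records:
    print(link(rec, public_records), "->", rec["diagnosis"])

The more quasi-identifiers a dataset retains, and the more outside data an AI program can draw on, the more often this kind of match will succeed.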
Tony Passalacqua: So, one of the big catch terms that you always hear around the internet is social engineering. Uh, do we see AI programs being used for social engineering?
Adrian Senyszyn: Yes, absolutely. Understand that one of the massive strengths of these AI programs, uh, is that they can be used for both noble and nefarious purposes. We see examples of this when threat actors and malicious actors use ChatGPT and similar AI programs to help write phishing emails or malicious code used in attacks.
AI's use in social engineering techniques is anticipated to become a widespread problem in the coming years.
Tony Passalacqua: What is social engineering?
Adrian Senyszyn: Well, it's simply an attack technique that relies on leveraging human interaction and emotion. People are manipulated into breaking a normal security procedure or doing something abnormal, which then allows a threat actor, or maybe a government, access to a system or sensitive information.
Threat actors leverage people's willingness to help or their fear of punishment. Social engineering is increasingly complex and targeted. The individuals most often targeted are low level employees, like receptionists or helpdesk people, who may not have the needed security training but have the needed access to allow a threat actor into the system.
Using AI, along with information learned about the practice and the people working within it, threat actors are able to appear legitimate.
Tony Passalacqua: Is social engineering just one type of technique, or are there multiple ways to attack someone?
Adrian Senyszyn: There are so many different social engineering techniques used by threat actors. Phishing emails are the most prevalent, and AI technology will allow threat actors who do not speak English to craft excellent and persuasive emails intended to trick employees into disclosing sensitive information or clicking on a link that has malware. There are also pretexting scams, where people impersonate legitimate people within the organization, or maybe a business partner, and trick employees into disclosing information.
One notable example of the social engineering technique involved the famous Bravo host, Andy Cohen. Threat actors were able to trick him into believing his bank account was compromised. For fear of the compromise, he allowed threat actors who acted like legitimate bank security officials to access his account and phone. Ultimately, he had a sizable amount of money diverted and stolen and for a time, he lost control of his cell phone.
Tony Passalacqua: So, I know another term that we always hear about is deepfakes. Can you tell me a little bit more about what deepfakes are and how they're being used?
Adrian Senyszyn: Deepfakes are AI programs that can fabricate fake news and imitate the voices and images of real people, and they pose a significant danger to all businesses.
This kind of technology will be especially challenging to health care practices in light of the Department of Health and Human Services' drive to allow patients to quickly and easily access their information. Correctly verifying the identity of a patient will become much more complex.
Recently, I actually came across an article about a UK group that lost $25 million after a deepfake clone of the CFO tricked staff members. The article that I read mentioned that a video conference joined by the company's digitally cloned CFO and other fake company employees assisted in the hoax and eventually led to the transfer of the money.
Tony Passalacqua: What is vishing?
Adrian Senyszyn: Vishing is similar to phishing, but it is a social engineering technique used over the phone. In a vishing attack, the threat actor is trying to acquire financial or personal information, or access to a system, by leveraging the emotions of a person. This technique is becoming increasingly common.
It's especially effective for tricking business associates or other downstream businesses that support the practice into allowing access to protected health information. These downstream businesses and their employees are vulnerable to being exploited if proper security protocols and training are not in place.
Tony Passalacqua: Do you have any examples of a data breach?
Adrian Senyszyn: Yes, there's a recent example where Anthem paid the Department of Health and Human Services $16 million in a record settlement. This was one of the largest U.S. health care data breaches in history. Anthem discovered cyber attackers had infiltrated their systems through spear phishing emails sent to an Anthem subsidiary, after at least one employee responded to the malicious email and opened the door for future attacks.
Human error by clicking on a malicious email and deploying a virus is one of the most common techniques used to gain access to a system. I've seen it many times in data breaches involving health care practices.
Tony Passalacqua: So, are there ways to protect yourself against these different kinds of attacks?
Adrian Senyszyn: Yes, uh, one common and recognized security practice is multi-factor authentication or MFA.
Practices should use MFA when employees or business associates access sensitive information on their systems. We're increasingly becoming aware of this technology in our personal lives as we use it for banking, our 401ks, and IRAs. This technology is essential to controlling access to key information. But as with every security technology, it's important to remember that nothing's foolproof.
Multi-factor authentication can be exploited or worked around, especially when a threat actor has access to a user's login credentials. The threat actor may try to bait a person into approving an MFA request by sending another request immediately after the person logs in, while the threat actor monitors the account.
I recently experienced this kind of attack while working one night. I logged on to my firm's network using MFA. And a few minutes after I logged in, I received another MFA request from Carlsbad, California. Since I was already logged in from a different location, I knew to immediately decline the request.
I quickly notified my security department and reset my passwords.
Tony Passalacqua: So, security is a vicious cat and mouse game. That process that you were just describing, does it have a name?
Adrian Senyszyn: Yes, uh, it's called MFA fatigue. Threat actors try to cause MFA fatigue by sending multiple requests in quick succession, or they might send a single request afterwards, attempting to trick you into clicking or approving the MFA request.
Once approved, the threat actor has access to the individual's account and potentially the system.
Tony Passalacqua: Are there any ways that you can protect yourself against that kind of MFA attack?
Adrian Senyszyn: Absolutely, um, there are different ways, like enabling additional context such as geolocation or device fingerprinting, which allows users to identify what location or what device the request originated from. If the device is not recognized or the location does not appear correct, then the user can decline the request. Geolocation is what allowed me to avoid the MFA attack that I was previously referencing. Disabling push notifications that simply allow users to approve an MFA request is another technique to prevent MFA attacks.
Requiring a user to enter a random code at login, rather than mindlessly pressing an approve button, is a really good security measure. In this situation, users must enter the code to gain access to the system. If the user is already in the system, or if they are not at their computer, they can't enter the code, and thereby the attack is prevented.
This concept is called number matching. Other concepts involving MFA include challenge-based responses or one-time passwords.
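To illustrate the idea behind number matching, here is a brief sketch in Python. The function names and the two-digit challenge are hypothetical and are not drawn from any particular MFA product.

import secrets

def create_mfa_challenge() -> str:
    """Generate the short number shown on the login screen. The user must
    type this same number into their authenticator app; a blind 'Approve'
    tap is not enough."""
    return f"{secrets.randbelow(100):02d}"

def verify_number_match(displayed: str, entered: str) -> bool:
    """Approve the sign-in only when the number typed into the authenticator
    matches the number shown at login."""
    return secrets.compare_digest(displayed, entered)

challenge = create_mfa_challenge()
print(f"Login screen displays: {challenge}")
user_entry = challenge  # simulate a legitimate user typing the number they see
print("Access granted" if verify_number_match(challenge, user_entry) else "Denied")

Because an unexpected push request arrives with no matching number on the victim's screen, there is nothing for the victim to type, and the threat actor's request goes nowhere.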
Tony Passalacqua: Do you find it important for organizations to educate staff?
Adrian Senyszyn: It's so important to educate your users about the dangers of relying on MFA technology and incorrectly believing it's foolproof. Geolocation, device fingerprinting, or other security techniques should absolutely be implemented. Train your users with simple language. Also, since a username and password are required for a threat actor to trigger MFA, the use of a strong password with regular rotation really can help prevent MFA from being targeted by threat actors.
Tony Passalacqua: Are there other ways to protect your system?
Adrian Senyszyn: Encryption is another key way to secure sensitive information. The Department of Health and Human Services strongly recommends that data at rest be encrypted when stored in a health care practice's system. If a threat actor gains access to a system, they will not be able to view the encrypted data, and thereby steal it, without additional passwords and access.
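For readers who want to see what encryption at rest can look like, here is a minimal sketch using the Fernet interface of the open-source Python cryptography package. The file name and record contents are made-up examples, and a real deployment also needs secure key management.

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()            # store the key separately from the data
cipher = Fernet(key)

record = b"Patient: Jane Doe, DOB 01/01/1970, Dx: hypertension"  # fabricated example
encrypted = cipher.encrypt(record)     # this ciphertext is what sits on disk

with open("patient_record.bin", "wb") as f:
    f.write(encrypted)

# Without the key, the stored bytes are unreadable to a threat actor.
assert cipher.decrypt(encrypted) == record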
Tony Passalacqua: You were just talking about encrypting data “at rest.” Is there any way to encrypt data while it's “in transit?”
Adrian Senyszyn: Yes, practices should encrypt data at rest and also in transit. The use of a VPN connection is one of the most common and recommended ways to secure data in transit. A VPN, or a Virtual Private Network, creates an encrypted tunnel for sensitive data to travel between two different locations. Because the data is encrypted, it is virtually impossible to exploit when using a properly structured VPN.
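A VPN does its encryption at the network layer, which a short code sample cannot reproduce, but the same "encrypted tunnel" principle can be illustrated at the application layer with TLS. The sketch below uses Python's standard library and the reserved test domain www.example.com; it is an illustration of encryption in transit, not a substitute for a properly configured VPN.

import socket
import ssl

HOST = "www.example.com"                        # placeholder destination
context = ssl.create_default_context()          # verifies the server's certificate

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # Everything written to tls_sock is encrypted while it travels the network.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
        print(tls_sock.version())               # e.g., "TLSv1.3"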
Tony Passalacqua: Are there any other ways to connect your computer? Are there any other ways to connect information in transit that's maybe less secure?
Adrian Senyszyn: Yes, one of the less secure and more exploitable methods to connect computers is with something called an RDP port or an RDP connection. It stands for Remote Desktop Protocol. RDP is a different kind of computer connection that allows the user on one computer to access and manipulate data that is stored on another computer or server. The benefit of the RDP connection is that a user can access and manipulate data stored in a secure location.
Unlike a VPN connection, you're not trying to move data; you're going onto a different device and manipulating the data there. This is especially useful when traveling. However, there are known security flaws with RDP connections, and this kind of technology is easily exploitable by threat actors. Once a threat actor has RDP access, they can access all the sensitive information stored on the other device, manipulate it, and download it. They can also gain access to the network. It's important to understand that RDP connections are not encrypted. Therefore, the best practice is to use a VPN when remotely accessing sensitive information or protected health information via an RDP connection. The VPN should always be used when a user is on public Wi-Fi accessing company information. The downside to the VPN is a slightly slower network connection, but the upside, uh, far outweighs the downside of using the VPN connection.
Keep in mind, VPNs can be exploited if a threat actor obtains the security information to access it.
Tony Passalacqua: So, what are some other risks that practices face as we continue to digitize?
Adrian Senyszyn: As our society becomes increasingly digitized, there has been a push to implement patient-facing software applications, like patient portals and mobile apps. Health care practices need to understand that there can be vulnerabilities in these software programs. Technical errors and programming errors, or even simply human error, can lead to compromise. Also, there are real ethical questions about how practices should secure these applications even though they are not directly under their management.
For example, whose responsibility is it if the patient has their password compromised? Should the applications use multi-factor authentication? How does a health care practice prevent staff and other people who manage these programs from being tricked through some sort of social engineering technique?
Whose responsibility is it to notify the Department of Health and Human Services if there's a breach of protected health information?
Tony Passalacqua: Do you have any suggestions on what may be some best practices for use of patient portals or mobile applications?
Adrian Senyszyn: It's important to always remember that the use of any patient portal or mobile application requires the health care practice to implement appropriate security protocols and measures. Practices should have an opt-in consent form for the patient that clearly and carefully explains the risks of using the technology in plain language. There should be role-based access controls, where only certain users are provided access to certain information. Create a zero-trust framework, where repeated verification of user activity is required from multiple sources, like MFA.
Practices also should implement appropriate security and patch management of the programs. Staff training and awareness training to combat social engineering techniques also should be implemented. And all the data that's stored in these programs should be securely backed up and encrypted so it cannot be exploited.
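As a simple illustration of role-based access control, the sketch below maps a few hypothetical roles to the minimum set of actions each needs and denies everything else. The role names and permissions are examples only, not a recommended configuration.

# Deny by default: a user may perform an action only if it is explicitly
# granted to their role. Roles and permissions here are hypothetical.
ROLE_PERMISSIONS = {
    "front_desk": {"view_schedule", "update_demographics"},
    "nurse":      {"view_schedule", "view_chart", "record_vitals"},
    "physician":  {"view_schedule", "view_chart", "record_vitals", "write_orders"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only when the action is explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("front_desk", "view_chart"))   # False: not granted to that role
print(is_allowed("physician", "write_orders"))  # True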
Tony Passalacqua: So, what else should practices do to manage AI risks?
Adrian Senyszyn: Practices should have policies and procedures in place that address these emerging issues. Staff should be trained on security to include all social engineering techniques. Risk assessments and risk management plans should be updated yearly. And health care practices should consider having their systems independently checked through what's called a penetration test on a yearly basis.
Health care practices need to consider how to manage and review their contracts and agreements with vendors and business associates, at least on a yearly basis. There should be a disaster plan and the practice should work through different scenarios to train their employees how to respond. Access controls, encryption, and multi-factor authentication also should be implemented.
Patch management also is very important to managing the security of a practice's software, and there should be an incident management plan.
Tony Passalacqua: How about backups?
Adrian Senyszyn: Great question. Encrypted backups that are off site and immutable are a best practice. This ensures any data that is stored off site is not affected by a virus or an attack. The Department of Health and Human Services recommends some of these additional steps to protect health care practices: a structured program of regular software updates, holding every department accountable for security, increased physical security, hiring independent security and technology consultants, imposing proper credential tracking, and imposing sensible restrictions.
Tony Passalacqua: Adrian, what's the one thing that you would like our listeners to leave with?
Adrian Senyszyn: I hope your listeners recognize that the emerging use of AI technology in society is exciting, and it's going to be very important for health care in the coming years. Recognizing the risks that AI poses and balancing those risks with the benefits, while implementing appropriate security measures, should allow health care practices to leverage AI in the treatment of their patients.
Continued use of proper security measures and the implementation of new policies, security systems, and training for health care practice employees will be critical in the coming years.
Tony Passalacqua: Thank you for your time, Adrian.
Adrian Senyszyn: Thank you so much.
Tony Passalacqua: Thank you for listening to our podcast. If you are a policyholder, please feel free to contact us with any questions by calling 1-800-580-8658, or check out our resources at tmlt.org by clicking on our Resource Hub.