Bioethics and Healthcare Data – What are the Boundaries?

with Dr. Anita Ho,

Bioethicist and Health Services Researcher

This week on the Abundant Aging Podcast, host Michael Hughes welcomes Dr. Anita Ho, a bioethicist, author, and health services researcher. During the episode, Anita discusses the ethical challenges of using technology in healthcare, especially for older adults. Dr. Ho has an extensive background in the space and explores the impact of health literacy, monitoring devices, and AI on the elderly. The conversation also highlights the importance of relational autonomy and community support in the context of aging. Dr. Ho also shares insights from her book “Live Like Nobody’s Watching” and addresses the need for ethical frameworks in healthcare technology, how to engage with critical bioethical issues, and more.

Notes:

Highlights from this week’s conversation include:

  • Anita’s Background and Research (0:07)
  • Becoming Fascinated with Bioethics (2:03)
  • Evolution of Bioethics and Ethical Issues (4:11)
  • Principles of Bioethics (6:23)
  • Ethical Dilemmas in Research and Data Usage (8:27)
  • Social Determinants of Health and Personalized Care (14:37)
  • Challenges of Hospital at Home and Ethical Considerations (18:24)
  • Digital Literacy and Infrastructure in Healthcare (21:58)
  • Technology and Support for Aging (24:13)
  • Ethical Considerations in Health Monitoring (26:45)
  • Privacy and Ethical Implications of AI in Health Monitoring (30:15)
  • Ethical Guidelines and Regulations for AI (36:39)
  • Personal Reflections on Aging (40:01)
  • Connecting with Dr. Ho and Final Takeaways (43:06)

 

Abundant Aging is a podcast series presented by United Church Homes. These shows offer ideas, information, and inspiration on how to improve our lives as we grow older. To learn more and to subscribe to the show, visit abundantagingpodcast.com.

Transcription:

Michael Hughes 00:07
Hello and welcome to The Art of Aging, which is part of the Abundant Aging podcast series from United Church Homes. On this show, we look at what it means to age in America and in other places around the world, with positive and empowering conversations that challenge, encourage, and inspire everyone everywhere to age with abundance. I am super excited today because we have our first bioethicist on the show. Dr. Anita Ho is a bioethicist and health services researcher with a unique combination of training and experience in philosophy, clinical and organizational ethics, public health, and business. By the way, she’s also a classically trained pianist with a master’s degree in piano performance, so I bet you she’s still trying to decide which she’s going to do when she grows up. Anita is currently an associate professor at the UCSF Bioethics Program and a clinical associate professor at the Centre for Applied Ethics at the University of British Columbia. So go Canadians. She’s also the Vice President of Ethics at the Northern California Division of CommonSpirit Health. An international scholar and author of more than 90 publications, Anita is an elected fellow of the Hastings Center. Her current research focuses on the ethical dimensions of utilizing innovative artificial intelligence technologies in healthcare, research and trial design ethics, supported decision-making, and end-of-life care decisions. She’s particularly interested in systemic and social justice issues arising in health care. Her broader research areas include trust and decision-making in clinical and research medicine, family-centered care, healthcare resource allocation and disparity, organizational and system ethics in healthcare, cross-cultural and global health ethics, professional-patient relationships, ethics education for health professionals, disability and pain experiences, and various concepts of autonomy. Oh, I love this. Her book, Live Like Nobody’s Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring, was published by Oxford University Press in May 2023. Anita, thank you for making time to be on the show. Welcome.

Anita Ho 02:02
Thank you. Great to be here.

Michael Hughes 02:04
And just a reminder that this podcast series is sponsored by United Church Homes’ Ruth Frost Parker Center for Abundant Aging. To learn more about the center, including our annual symposium in October, visit UnitedChurchHomes.org/parker-center. So, Anita, the first question I think I want to ask, and a lot of our guests get it, is this: you could be doing a lot with your talents. You could obviously be playing Carnegie Hall; there’s a lot of things you could be doing, but you’re in this area of study. I mean, how did you become fascinated with bioethics?

Anita Ho 02:39
Well, thank you for that introduction. And you mentioned music earlier. I initially actually wanted to go to music school. I was born and raised in Hong Kong, and my parents were small business owners, and they didn’t have college degrees. And so they thought, you’re not going to be a pianist or a musician, and they wanted me to study business. So I went to Carolina and was studying marketing, and it was actually when I was required to take a non-business elective that I stumbled upon a course in philosophy called Biomedical Ethics. And I just thought it was really interesting. It asked all these questions that I hadn’t really thought much about, like: how do we allocate resources when people get older and possibly get really sick? Can we force people to get treatment if they have mental health issues? If they cannot make decisions for themselves, do we make decisions for them? How much risk can we allow people to take if we don’t feel comfortable with the potential cost to society or to others? So I took that one course and I loved it, and I thought, that’s what I want to do when I grow up. And here I am, still doing it.

Michael Hughes 03:58
Well, I have to ask: was it always called bioethics? I mean, how long has the word bioethics been around? And before there was a term bioethics, how would people look at those issues? How would they be framed?

Anita Ho 04:11
Yeah, great question. The term bioethics really came into more common usage probably in the late 1960s and early 70s in the US, when people started to really think more about ethical issues arising in research and in health care. This is particularly after the Second World War, when we realized that we can’t necessarily just rely on physicians’ goodwill to make good decisions, particularly given some of the Nazi experiments that were done. But even though the term might not have been used until that time, I would say that many of us actually encounter what I would call bioethical issues on a daily basis. Let me mention some examples. I think especially now, with technologies being more advanced and possibly more expensive as well, we often have to make decisions, even individually, in terms of what healthcare treatments we want. Particularly when using many of these technologies, sometimes we have uncertain risks and benefits, especially as we get older and may have more and more different conditions as well. High-tech health care may sound really good in some ways, but it does impose other kinds of burdens. So how do we decide whether we want to take on those burdens, especially at the end of life, and especially for, let’s say, more experimental types of treatments? And I think that because healthcare is getting more and more expensive, especially in the US, there are also justice questions in terms of how we make sure that we can allocate resources responsibly and fairly. We know that there are systemic inequalities; there are some people who have always been left behind. How do we make sure that they are not further disadvantaged because they don’t have access to health care?

Michael Hughes 06:10
Is there general agreement on the principles of bioethics? Is there a quantified sort of boundary of what is ethical and what is not?

Anita Ho 06:23
Yeah, so at least in Western bioethics, we have often talked about these four principles. In places that really focus more on people’s independence and their own rights, we often start with thinking about what would be respecting people’s autonomy, so that I get to make decisions about my own health care, just like I can make decisions about my finances and the kinds of matters that are important to me. But then in health care we also have two correlating principles. One is beneficence, which is to make sure that if we are going to provide health care, we are providing benefits to people. The other is that we do no harm, so non-maleficence when physicians are doing things, which is why the Nazi experiments that I mentioned earlier were so traumatic for many people, and problematic, of course, from a professional ethics perspective as well. Do no harm is often number one: even if we don’t know for sure that we can give people benefits or promote their wellbeing, at least we cannot harm them intentionally, or at least in ways that are foreseeable. And then I mentioned justice earlier as well: how do we make sure that we’re being fair to people and not discriminating against them? So those are the kinds of boundaries that we think about. But of course, sometimes what happens, too, is that they conflict with each other.

Michael Hughes 07:42
And that’s actually really key, to keep things with the World War Two and Nazi references, because let me pose this question that I’ve thought about once or twice. We know that horrible things were done during World War Two, especially, you know, Dr. Mengele’s experiments and things like that. Yet those experiments achieved scientific outcomes, and those outcomes may be useful in supporting positive health care for other people. Is it ethical to use those outcomes? Or do we have to discard them because they were achieved in such a horrific way?

Anita Ho 08:26
That’s a really great question. Sometimes, of course, lessons that we have learned, we cannot really unlearn, so in many ways we hope that the lessons can still stay with us. But I keep thinking that a big part of what we really should learn from the Nazi experiments is how we should not be violating people’s rights. And what counts as people’s rights when we’re thinking about healthcare decisions and health research actually is not as clear as we might think. After the Nazi experiments, we talked a lot about informed consent, whether it is for medical treatment or medical experiments. But in many ways, there are also questions to think about in terms of whether we really should allow people to just opt out of, let’s say, having their healthcare information included in data analysis, if we think that we need everybody’s data to make sure that we have representative data, so that when we try to improve health care, we can improve it for everybody. But I do think that you raised a really important question. Somebody was asking me something very similar at a conference a few days ago, in relation to large language models and other kinds of data we’re collecting from people. It may not be on the Nazi kind of scale, but the data may still have been unethically collected; people didn’t realize that their information had been used. So now that we have gotten some insight from the data, should we just discard it? I think it depends on the scale of the harm that has been done to people and how much of an atrocity it was. So it’s a case-by-case basis, perhaps.

Michael Hughes 10:09
I mean, so many things are going through my head around these kinds of quandaries that we face. Somebody once introduced this philosophical concept of communitarianism to me, and I’m probably going to fail at explaining it, but it’s, you know, the idea that I will pay attention to you as a person until whatever we’re doing affects the wellness of the whole. So thinking about healthcare data research: we want to collect a representative sample from a wide range of people, and yet we don’t have enough from a certain group. And let’s say we find a group of data somewhere, and it’s not opted in, or whatever. That’s a tough decision, weighing the benefits of that data supporting a potentially beneficial set of research against that violation. I mean, these are not black and white issues, right?

Anita Ho 11:05
Yeah, and I think this is really, to me, a system issue as well. You can imagine, and you and I are both Canadian and in the US, right: when I was conducting a research project in Canada over the last few years on research consent and so on and so forth, it was really interesting, maybe partly because the participants were in Canada, which enjoys publicly funded healthcare, where people would not be denied health care coverage. Many of them kept saying, oh, I’m happy to donate my data and my samples or information for research, as long as we know that there will be community benefit, kind of like the communitarian way of thinking. But if we are worried that some people may not actually benefit from health care, then we’re not too sure. And one concept that is really important for us to think about, too, is the learning health system. As we now have the ability to collect more and more information, even when patients are in the hospital or in clinics, in the healthcare setting and not just in research, we have to consistently ask ourselves: should we, for consistent quality improvement, use our data and see how patients are faring? The tricky part, and I think this responds to your comment a little earlier, is not even just whether people are consenting to participate in research; it’s whether they trust the system. There are many people who may be underinsured or uninsured, especially in the US. Or in Canada, many people in Indigenous communities don’t trust the health care system and may not even go and seek care, so their data may not be included in the first place. If we really try to improve care and we don’t have people’s data, or they don’t trust us enough to allow us to use the data, then we have what I consider to be a double-edged sword. We want people to trust us so that they will let us use the data, but they might not want us to use the data if they don’t think they can benefit from the research. So we have to really think about how we actually make the system more accountable, so that people can trust us with their information.

Michael Hughes 13:24
Yeah, and I definitely want to get to some of the scenarios that I know you and I have talked about before. But I’ve got to tell you, what’s really going through my mind right now is a world that I have been looking at very closely for the past six or seven years: this world of social determinants of health, and particularly the impact of social determinants on preventative health and wellness. We have a whole spectrum of non-clinical influencers that impact our health and our wellness. And there are opportunities, at least in the US, in senior living and supportive care for people with functional or other disabilities. We find that if we discover one’s motivation, one’s underlying, I guess, mental health, we can develop more personalized models of care, but we don’t have those understandings yet; we need to build that trust and capture that data in order to deliver that care. I mean, no one is going to say, “I’m lonely,” or, “My son tries to take care of me, but he can’t lift me up.” And you know, one of the most predictive non-clinical pieces of data about your future health and wellness is your credit score, but at least in my experience, that is data that is not touched. We do not pull that data; it is not brought in. But that’s another example of these ethical decisions, right?

Anita Ho 14:57
Yeah. Well, I think I would need another whole podcast to talk about using credit scores. But I love that you mentioned social determinants of health. After I had been working in the healthcare system and in academic bioethics a long time, I actually went back to school, to UC Berkeley, about 10 years ago to get a master’s in public health, partly because I think a lot of even individual bedside ethical issues in the hospital setting are really very much about broader healthcare issues and broader social and environmental issues. And so on the credit score part, I won’t say too much: it really ties to people’s economic situations, very often it ties to their education level, but it may also tie to racism, in terms of how some people are consistently denied employment opportunities or other kinds of opportunities, even if it is inadvertent racism. So some people may consistently have a lower credit score, because, you know, we may be reporting some people more readily as well, due to their background. So how we actually determine what data would be most relevant and most helpful is, by itself, a data engineering question, but I think it also has an ethical dimension. And I do agree with you that very often we do want to have as much data about people as possible to be more, like you said, personalized, more of a precision kind of way of thinking. But we do have to really think about what kind of information actually is valuable, and what kind of information is noise.

Michael Hughes 16:45
Right. Let’s switch gears a little bit and talk about how the healthcare system, at least in the United States, engages with people today. I mean, it’s not really a health care system, in my mind; it’s a sick care system. And it’s a system where there’s not enough workforce. We have an aging society; any demand we see for healthcare services today will probably be ten times that in ten years, with no vision for a workforce increase. And we have all of these wonderful thoughts about how we might thread the needle, or find a silver bullet, or whatever you want to call it, by doing things like moving care into the home as much as possible. So we talk about these concepts like the hospital at home. And I think, as you explain it, nobody wants to be in a hospital, nobody wants to be in a skilled nursing facility; we know that the job of a hospital is really just to stabilize you until you can get to the most appropriate place. And then: oh, we’ll send them home, that’s great. But that, in my mind, just opens up a whole new dynamic, because then we talk about the impact of the needs of the healthcare system not just on the patient, but on their families. I’m wondering if you can talk a little bit about some of the ethical issues we have to unpack around the idea of sending them home, or supporting them there, or just this concept of a hospital at home.

Anita Ho 18:24
And I think that this ties really well to some of the principles that you were asking me about in bioethics. So like you said, hospitals are not good places for people to get better. To stabilize you, yes, but hospital infections and noises and unfamiliar environments are actually often adding more stress to people than anything else. And so people want to be home; they want to get home as soon as possible. So from a respect-for-autonomy and agency perspective, we want to do that, and it promotes wellbeing, so it sounds great. But there is that justice component, and I think we really need to think more about some of the broader social perspectives, because health systems may be saving money but also shifting the cost to individuals. Hospital care is very expensive, and if we can somehow allow people to be cared for at home and be monitored at home, then hopefully they will feel better, and it would also be cheaper care. In some ways we think this can also be really good for people’s privacy, because now, instead of wearing those horrible hospital gowns, you can be in your own environment and determine your own schedule and so on. But we are then shifting costs onto people themselves, depending on how follow-up appointments and medication reminders are all set up. There are a lot of burdens on people, especially if they have some kind of cognitive decline, or if they don’t have family support. And hospital at home often also requires various kinds of technologies, so it may not be the whole quiet, calm, and peaceful environment that you may expect your home to be; there may be this kind of technological intrusion all of a sudden. So how do we really reconcile the privacy matter? And if people do live with their family members, how exactly is this really going to help, versus adding even more burdens on family members, who often, of course, as you may know already, are women who may have paid employment outside the home already and may have children they’re caring for already? So how might we also support family members who may be taking on some of that burden as well?

Michael Hughes 20:49
Let’s talk about some best practices here, because I think you outlined a number of the different factors that should be in the decision making. I imagine a parent or a spouse being sent home, and somebody may have a job where they can’t really take the time off. Then there’s the educational component of things; you have to learn this whole new language, because the medical world has its own language. And health literacy is even a challenge for me; I mean, I don’t always know what 120 over 80 means. So there’s a lot of different factors there. Even just that bridge of understanding one’s clinical condition, understanding the terminology, is such a big first step. Where do you think things are falling short? Or where have you seen, I guess, some best practices, things that are being done right? Who do you think has got it? Where have you seen examples of more gold-standard programs for patient support or family support in the healthcare system?

Anita Ho 21:59
I’m not sure about the gold standard, partly because we’re still relatively early. But I would say a few things about what you were just talking about; some of this I mentioned in my book in relation to home health monitoring as well. You mentioned health literacy, but I think as we use more and more technologies, we also have to think about digital infrastructure and digital literacy, because especially if we’re using them for older adults, many of them may not be very used to the technologies. So how do we use, for example, human-centric design, and really use journey mapping, to see how all the occupants of the place where you are monitoring the person will be using or interacting with the technologies? Because sometimes people may not even know how to turn things on and off. What I generally think, and I don’t know whether this is the gold standard, but at least it is helpful, is to really assess, especially at the beginning, if we are using hospital at home or other kinds of monitoring, what people’s readiness might be. What is their health literacy, like I said? What is their digital literacy? Would they be able to turn machines on and off? And are, for example, the alerts being put out there really relevant to the person’s care? Because one thing that we might find, too, and we have seen this in various studies, is that sometimes older adults or their family members may actually feel very anxious or burdened even by some of the alerts. So the sweet spot may be to really figure out how people interact with the information. What information do they think would be most important for them? What information would really make a difference in terms of what their behavior should be, standing versus sitting and so on, or the medication? What information would really help the clinicians to support patients? Because otherwise, sometimes we may be thinking, oh, the charts and all the numbers will really give us information, but that may, in some cases, be too much information that overwhelms people more than it helps them.

Michael Hughes 24:13
Wow, that is a great statement. Because, you know, I remember when Fitbits were popular, and I was talking to one of my doctors, and he was like, oh my gosh, if I have another patient coming in with, like, eight months of data on a USB drive. But it goes the other way around too. Two things are going through my head right now. In terms of technology and supportive technology for longevity and aging, you get into this world of, first of all, the ability for more and more clinical monitoring, and for you to really see more of yourself or your loved one. I get people all the time saying, oh, can I sell a remote patient monitoring tool to the families of your residents? And I’m like, oh my gosh, I think the only people that would use them are the people that already get freaked out all the time about “is my mom okay?” And this is just layering in yet more things, more data points to feel anxious about, just like you said before, right?

Anita Ho 25:13
Yeah, I think sometimes the more information we have, especially if it’s not really relevant... There are some devices that just keep sending us alerts, and I have to say, combining my marketing hat and my ethics hat, that it is a fantastic business model, right? If you have devices that don’t seem to be alerting people at all, you think it’s either not working or not really doing anything for you. But a lot of alerts can really make people very anxious. And it also depends on whether the target users are really the family members who want to monitor, or the people who are being monitored, because they may have different risk acceptance levels as well. Family members sometimes may be more risk-averse compared to older adults, who may think that the daughters or sons are just too worried, always wondering about what they’re doing, what they’re eating or not eating, and so on and so forth. So we have to think about the workflow, even for your staff members: what happens if all these machines are alerting at the same time? How much is that going to affect their workflow, and affect family members’ days as well? Because you mentioned your doctor talking about all those charts, but I can imagine, especially if people are being monitored all the time, they may also think: my doctor has all my information, why haven’t they called me yet, with all those alarms going off? Right?

Michael Hughes 26:45
Right, right. So there are ways to unpack that. You know, more data can mean more risk, because: why didn’t you see this one, if you saw that one? And it also speaks to the human-centered design element that you spoke to before. In that designed environment, if these things are alerting or cuing or what have you, what is the associated feeling with that? Is it anxiety? There’s a lot you can get into there. I want to unpack maybe two more areas with you, along these lines. And I want to frame this back to that idea of an aging society, a society where more than two thirds of us will need long-term care and support services, either for a short time or for an extended time, at some point in our lives. And, you know, we have a system where we can’t build buildings fast enough, we don’t have enough caregivers, and we do have this thought that technology is going to be able to fill the gaps here. And we’re seeing solutions. Like, let’s say, I know that we are going to be testing Alexa, and we can certainly bring something like an Alexa into resident units or into individual homes. It’s a secure communications hub, we can do text messages on it, it’s easy to use. And generally, at least in my observation with older adults, anxiety about Alexa always listening and things like that does seem to be on the decline. But what is associated with that communication tool also is this idea of behavioral monitoring. And I’ve seen different systems out there: it could be a camera system, or, I think it’s called millimeter wave pulsing, kind of like radar pulses that go through the home, or it could even be an Internet of Things device where you’re turning the lights on and off using an Alexa. But somebody who may be providing supportive care for that individual would be able to see the data collected by those systems, so that if somebody has a baseline set of behaviors and there’s a deviation from that baseline, it may signal somebody to get an alert. The example that I often get presented with is: let’s say there’s a communication hub in the kitchen, or somebody engages with an Alexa “start my day” routine. And Dad, always between the hours of eight and nine for the last three months, goes to Alexa, or comes into the kitchen and there’s some sort of activity in the kitchen. And now, on this particular day, it’s 9:30, and there’s been no activity yet. And that may signal: oh my gosh, I should probably give a call or check in, and it could be an emergency, or it could not be. So I’m wondering where this strikes you, in terms of the benefits of, you know, fall detection or what have you, but with the sense of something that will record your day-to-day behaviors.
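To make the baseline-and-deviation idea concrete, here is a minimal illustrative sketch of the kind of rule such a monitor might apply. This is not any vendor's actual system; the function name, the learned window, and the grace period are hypothetical assumptions for illustration only.

```python
from datetime import time

# Hypothetical baseline learned from prior weeks: Dad's first kitchen
# activity has always fallen between 8:00 and 9:00 a.m.
BASELINE_END = time(9, 0)   # latest usual time of first activity
GRACE_MINUTES = 30          # how long past the usual window to wait (assumed)

def should_alert(first_activity_today: time | None, now: time) -> bool:
    """Suggest a caregiver check-in if no activity has been seen well past baseline."""
    if first_activity_today is not None:
        return False  # activity already observed today; nothing to flag
    # No activity yet: alert once we are GRACE_MINUTES past the usual window.
    deadline_minutes = BASELINE_END.hour * 60 + BASELINE_END.minute + GRACE_MINUTES
    now_minutes = now.hour * 60 + now.minute
    return now_minutes >= deadline_minutes

# The scenario from the conversation: it is 9:30 and no kitchen activity yet.
print(should_alert(first_activity_today=None, now=time(9, 30)))  # True -> check in
```

A real system would learn the window from observed data and tune the grace period to balance false alarms against missed events, which is exactly the anxiety-versus-safety tradeoff discussed in this conversation.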

Anita Ho 29:52
And you mentioned my book earlier, Live Like Nobody’s Watching. Even in deciding on the title, I was thinking: nobody is watching in the sense that we’re not talking about humans watching you, so there’s nobody watching. But at the same time, you have to just pretend that nothing is watching you, too.

Michael Hughes 30:15
We’ve all got phones on ourselves. I mean, for anybody who’s worried about privacy, it’s crazy what your phone sends to marketers and all sorts of people.

Anita Ho 30:23
That’s right. So I think that, especially for older adults, this is a really fascinating question from an ethical perspective. When people are getting more and more vulnerable in terms of physical and cognitive decline, that is the time we most want to monitor them. If people are healthy, generally speaking, if they fall, they can get up, and they know when they’re at risk; we shouldn’t have to worry too much, and people should be able to take care of themselves. It’s when they are having cognitive decline and can’t remember to grab the grab bar, or to call for help, and so on, that we want to monitor them. But those are also the times that people cannot remember, or know how, to consent or refuse consent to being recorded. So as a society, I think we really have to ask ourselves a question. And I’m not saying that we shouldn’t do it, but these are fundamental questions that we should ask ourselves as we get older: between my privacy and my safety, would I value one over the other? Especially if I can’t be sure who would actually have my data and who would get to use it. If it is a particular company’s device, is the company going to own the data, and can they use it for other product promotion, quality improvement, model development, and so on? And going back to what I said earlier: people in Canada, when they talk about specific research projects, are very willing to donate their information, their data, if they know exactly what the purposes are. But as we commercialize a lot of these platforms, which may be really helpful, we have to start asking whether the way we have thought about privacy and boundaries, in terms of how we share data, is still applicable now. Maybe it’s not. I think in many ways I may still be holding on to some of it, but holding on especially because it is so uncertain and unknown how our information may be shared and used in the future, because technologies are moving so quickly, and there are so many things that we can now do that we never thought were possible even 10 years ago.

Michael Hughes 32:45
Right, right. And of course, that also leads into AI, which is the second thing I wanted to cover with you, and this will be the last thing before I get into the three questions that we ask every single one of our guests. So, surprise, surprise. You know, I think that there is obviously an AI component to the monitoring and behavioral patterns and things like that. But then there’s also the interaction, right? We have this scary, growing ability to have these AI-generated avatars serve as someone that can ask day-to-day questions: How are you feeling? How was your sleep last night? So both to check in on health and wellness, but then also this idea of being like a virtual companion. And, to be honest with you, I’m not sure I have a lot of comfort with that; I’m not sure where I stand. What do you think? What are the questions that we should be asking as we’re sort of wowed by the capabilities of AI?

Anita Ho 33:52
Yeah, I think some of what you said earlier applies here as well. Are we using these technologies to supplement and enhance what we are already providing to our loved ones or the people we’re caring for? Or are we using them to replace it? I mentioned I was just at a conference recently, and somebody there mentioned an avatar that can talk to people in skilled nursing to remind them to take medication and so on. They can basically use, so to speak, the voices of these people’s family members to say, “Hey, Mom, you know, it’s time to take your meds,” or “Hey, Mom, how about going for a little walk,” and so on and so forth, and you can use the avatar to say things to make sure that they are eating and so on. Now, maybe that depends very much on that particular person’s condition as well, because you can imagine that it can calm people if they know that this is their daughter’s voice, or they may think this is the daughter’s voice and somehow get freaked out: I’m hearing my daughter, but I don’t see my daughter. And if it is the case that we’re using these avatars to also, in some ways, replace the daughter’s visits, then people may feel even more socially isolated, right? The daughter thinks, oh, my voice is already there, I don’t need to go see Mom. That’s not necessarily going to help with the person’s engagement. So there’s that. But there’s also the ethical question: if somehow this older adult is really confusing things and believes now that this avatar is the real deal, are we also lying to them? Do we have to lie to them and say that this really is your daughter talking to you? So I think there might be benefits in doing something like this short term, but there are ethical questions as well: how do we get consent, if there is consent? And can we deceive people in the use of some of these technologies?

Michael Hughes 36:00
And are you aware of, or should there be... At the top of the podcast, you talked about a kind of framework for bioethics in general. Should there be a general set of rules or ethical boundaries around AI in that manner? Like, it cannot be used for this, or it cannot be used for that. For instance, I know that we wouldn’t use AI to select job candidates. When we’re talking about older people and the use of these avatars, should there be some sort of ethical framework that is generally agreed upon?

Anita Ho 36:40
Many people have been talking about guardrails for AI. Regulations in the US and in Canada are still slow, partly because many governments are competing; they want to be innovators. And companies, especially big tech companies, are really worried, or at least that’s what they’re saying in their lobbying efforts, that if we have too many regulations saying what we cannot do, it will really slow down innovation. I don’t think everything that we invent is innovative, so I will put that out there. But there may certainly still be questions in terms of how we want to regulate tech. In the US and in Canada as well, not every single device that may have a health impact is considered to be a medical device, so not everything goes through, you know, the FDA or Health Canada for approval. So in many ways, one, we don’t even know for sure whether something works, and two, we don’t actually know what some of the safety guardrails might be. Right now, many companies and organizations are voluntarily deciding that they may not use certain devices. But certainly there are regulations in the EU, the European Union, for example, that are trying to say that at least for devices that may have higher risk and higher impact, where it is about people’s access to jobs, or access to healthcare in the medical setting, and where their data security cannot be easily assured, we need to be more careful. Right now, at least in the US, especially for commercial products, there is still more of a voluntary adoption of various kinds of guidelines. So there will be questions, I think, as we collect and use more data, about how we should have fair ways to make sure that all industry contributors will abide. Because if you just have people voluntarily deciding whether they will do certain things or not, then they may think, from a competitive perspective: if we do that, and the other companies are not doing it, then we’ll lose out.

Michael Hughes 38:52
Those are excellent points. And, you know, for anybody who’s listening to this podcast at some point in the future, we are recording this in early May of 2024, and I think it would be fascinating to reconnect with you in a future episode, once more of these things have taken shape, and contrast what we’re talking about now with how it has evolved, because we know this is moving fast. But as I said before, we do like to ask our guests three questions about aging, and I’m wondering, is it okay if I ask these of you? Sure. All right. And just to remind our audience, Anita’s book is Live Like Nobody’s Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring, from Oxford University Press. It was published just about a year ago, so congratulations on that. If people wanted to purchase the book, where could they purchase it?

Anita Ho 39:43
Many online platforms have it, and you can also go to the Oxford University Press website.

Michael Hughes 39:49
Wonderful. Okay, question number one for you. So, Anita, when you think about how you’ve aged, what do you think has changed about you, or grown with you, that you really like about yourself?

Anita Ho 40:01
I think it’s probably more like growing up. I can’t remember if I mentioned at the beginning that I grew up in Hong Kong, so this may be partly a cultural aspect as well, even though I live in the US and Canada, which focus very much on individual autonomy, individual rights, and so on, which I certainly agree with as well. But I think, especially as I get older and have been working with more older adults on what their healthcare issues or ethical issues might be, I’ve become more and more relational; that ties to my book as well. I think, generally speaking, humans really thrive and flourish in relationships. And I keep thinking about how nobody is born completely alone: whatever the context of your birth might be, there was always a birth mother. And so as we think about aging as well, if at some point we become completely isolated, then we don’t feel well. So from a flourishing perspective, I certainly think about how we build community, and we talked about communitarianism earlier. Especially as we get older and may have more physical and mental vulnerabilities, how do we build that village together so that we can really care for each other?

Michael Hughes 41:20
Absolutely. And that’s something that we think about all the time, and I think those are really valuable thoughts. Okay, so question number two is: what has surprised you the most about you as you’ve aged?

Anita Ho 41:33
Well, there is, of course, sexism in terms of aging, in that women in particular often try to see how they can not get older, or not look older, and so on and so forth. And I mean, I still live within that cultural context, which is often very difficult to fight. But I think as I get older, I also realize how being able to get old is really a privilege, and a blessing in many ways. One of my friends is turning 60 this year, and we were talking about a celebration, and he was like, there is no way I’m going to celebrate, I hate getting old. And I really think, especially given some of my family situations, that being able to get old is a blessing. My brother and my high school best friend both died of cancer in the last two years, in their 40s. And as I also see a lot of unrest around the world, there are lots of casualties, and it’s not even just military personnel; many of the people who die or get injured are young people, some of whom are children. So I really think that being able to get old is not a given, and it is a blessing.

Michael Hughes 42:48
Wow, that’s such a tremendously important thought, and thank you for sharing that with us. Our third question is: is there someone that you’ve met or had in your life who has set a good example for you in aging, somebody that has inspired you to, as we like to say, age with abundance?

Anita Ho 43:06
I haven’t told this person that I would mention her. For me, a really good example is Anni Chung, who has been, I think for more than 40 years, the President and CEO of Self-Help for the Elderly, a community organization that provides support for more than 40,000 seniors in the California Bay Area. She also hosts a public affairs television show for Chinese-speaking audiences in the Bay Area. And I would say that she’s probably one of the most humble people I know, and giving, caring, and just generous in terms of really thinking about how to promote healthy communities and really trying to empower older adults. So she’s not just living and aging abundantly; I think she’s really trying to empower other people to also age abundantly. I really think of her as an inspiration, particularly in relation to her mentality of service to the community, for the common good.

Michael Hughes 44:13
That’s amazing. And maybe we can get Anni to be on a future show. And she’s not the only one that’s inspiring; you are inspiring. We could be doing a much longer podcast here. I’m just so happy that you were able to give us this time, because we know that you’re busy. So my sincere thanks to you for being a guest on The Art of Aging. And most especially, I want to thank our listeners: thank you for tuning in and listening to this episode of The Art of Aging, which is part of the Abundant Aging podcast series from United Church Homes. And we want to hear from you. Have you heard of bioethics? Are you working in bioethics? What do you think should be the standards for ethical conduct as we enter into this amazing new world of AI and data analytics? Please tell us your thoughts at abundantagingpodcast.com. You can also give us feedback when you visit the Parker Center at UnitedChurchHomes.org/parker-center. And Dr. Ho, where can people find you?

Anita Ho 45:18
I have a website; it’s my full name plus “ethics,” so it’s AnitaHoEthics.com. You can find some of my current and recent work there, and if you want to contact me, there is also a way to do that there.

Michael Hughes 45:31
Yes, that’s AnitaHoEthics.com. And again, her book, Live Like Nobody’s Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring, is available on the Oxford University Press website. So, Anita, thank you for joining us, and listeners, thank you for joining us as well. We will see you next time.