The Promise of AI in Aging & Elder Care

with Victor Wang,

Founder & CEO, Care.Coach

This week on the Art of Aging, host Michael Hughes chats with Victor Wang, Founder and CEO of Care.Coach. During the episode, Victor discusses the use of AI technology to improve the health and wellness of older adults. He shares his passion for addressing the underserved aging population and explains how Care.Coach uses AI avatars to provide companionship and psychosocial support. The conversation also covers the limitations of current AI chatbots, the training process of AI models, ethical considerations of AI, and more.

Notes:

Highlights from this week’s conversation include:

  • The inspiration behind Care.Coach (1:08)
  • The need for innovation in senior care (2:58)
  • Introduction to Care Coach and its approach (4:46)
  • The evolution of capabilities in generative AI (11:45)
  • The comfort barrier in adopting AI technology (13:07)
  • Big tech companies’ brand protection and responsibility (19:37)
  • The behavior of AI language models (23:50)
  • Reinforcement learning with human feedback (27:30)
  • The development of empathic skills (35:08)
  • Building a geriatrics model (37:25)
  • Ethical considerations and boundaries (40:33)
  • The AI Transcription Software (47:45)
  • Connecting with Victor and Care.Coach (48:30)
  • Final thoughts and takeaways (52:32)

 

Abundant Aging is a podcast series presented by United Church Homes. These shows offer ideas, information, and inspiration on how to improve our lives as we grow older. To learn more and to subscribe to the show, visit abundantagingpodcast.com

Transcription:

Michael Hughes 00:07
Hello and welcome to The Art of Aging, part of the Abundant Aging podcast series from the Ruth Frost Parker Center for Abundant Aging, which is part of United Church Homes. On this show we look at what it means to age in America and other places around the world, with positive and empowering conversations that challenge, encourage and inspire everyone everywhere to age with abundance. Our guest today is Victor Wang, and this is part of our Aging and Innovation series. Today we’re going to be unpacking artificial intelligence — a subject that no podcast has ever tackled before — artificial intelligence in the world of aging, wellness and age tech. To give a little bit of background on Victor: Victor is the founder and CEO of Care.Coach, which is a San Francisco Bay Area generative conversational AI company that is truly on a mission to improve the health and wellness of millions, particularly older adults. Victor is one of the world’s leading experts in generative AI and its applications for health and wellness. And like me, he’s also been a proud resident of Canada. He started his journey into tech at MIT by working on one of the most Canadian things I can think of: telerobotics. And I’m only keen on this picture because, you know, one of the examples I saw was the robotic arm on the space shuttle. Actually, if you know anything about Canada — man, are they proud of the Canadarm as a nation. They put it on the money. But it was his grandmother who really inspired his turn: she was living alone in Taiwan, it was next to impossible to get her on a video call, and the isolation affected her health and wellness. So the “how might we” question then became: how can I apply all my skills and knowledge in AI and robotics to transform the world of senior care, aging and wellness? So again, that’s what we’re talking about today: AI, and how it can help with this transformation. And Victor, welcome.

Victor Wang 02:06
Thanks, Mike. Happy to be here.

Michael Hughes 02:08
So Victor, I’m gonna start this by asking you the question I ask a lot of guests — especially, you know, guests that we have on the Aging and Innovation series — which is: you could be doing a lot of stuff. You could be developing a new sports drink, you could be developing, I don’t know, fill in the blank. But you really chose to put your energy into this world of aging, this world of age tech. Why are you so passionate about this space?

Victor Wang 02:34
It’s where I can make a difference. I think it’s where a lot of technology innovators can make a difference. It’s underserved. You know, most of the technology innovators out there are building technology for people like themselves, because it’s easier to imagine what somebody like yourself would want, and then to build that thing, and then to realize, oh, I don’t like this, let me fix it. And because it’s easier, I think that’s where most of the attention goes as far as software developers, data science, AI applications. But then there’s this huge population of people that we care about — like, you know, people in your family, and just people that have contributed a lot to society. And now maybe they’re in a situation where they could use a lot of help, they could use a lot of support. They could use some predictive AI, they could use some companionship AI, they could use some healthcare robotics, or whatever it is. And there’s really not enough innovation going on to help this population. It’s the kind of work where you feel, day to day, the value that you create, and you feel the impact that you have on people’s lives. You can see it, and, you know, it’s just a rewarding thing to do.

Michael Hughes 03:53
That’s awesome. And you’re putting energy into Care.Coach — care dot coach, everybody, care dot coach. Dot.

Victor Wang 04:01
Yeah, I like to say the “dot,” yeah. Otherwise — we have these Medicare Advantage and other health plan customers, and sometimes they have a job title within their organization called “care coach.” So we have to say care DOT coach. It also makes the contact info super easy, because I’m victor at care dot coach — that’s my email address. You know, hit me up.

Michael Hughes 04:27
Ah, the days when you could only have a .org or .com or .net or whatever. And that was a long time ago — don’t remind me. But tell me more about care dot coach. You know, what does it do? And what was your approach to developing it?

Victor Wang 04:46
Yeah, so Care.Coach is a whole platform at this point. The part of it that we’re most known for, that we’ve been doing the longest, is this avatar product. And, see, the idea was originally to solve senior loneliness, essentially through this idea of teleoperation, telepresence, and leveraging global workforces. Because we talk in the US about the caregiver crisis, and the silver tsunami, and the caregiver ratio is declining, and what are we going to do about it. And it turns out there’s plenty of workforce — there are plenty of empathic, intelligent, caring people to support the aging population here in the United States. They’re just not in the United States. And so it was this idea that if we could leverage that workforce, we could create a lot of good, and not only solve a big problem here in the United States, but also just employ a lot of people around the world, and, you know, give them good careers and give them this opportunity to do something that matters, from the comfort of their own home. And then, the way that we connect them to this population in the US — you might naively think, sure, just put them on a video call, you know, provide a lot of psychosocial support, like companionship conversation. And we wanted to provide this 24/7, so you might staff this 24/7 team around the world and just connect people over video calls whenever they want. It turns out that that would be a really poor user experience.
So I went around and interviewed, for example, homecare agency owners and assisted living owners, and geriatric care managers, and so on. And what you discover is that this fragmentation of the end user experience — the fragmentation of the care experience — is one of the biggest reasons why people who are even fortunate enough to be able to afford 24/7 senior care will refuse it. Because these are very private things going on — like, you’re talking about your incontinence, or you’re losing your memory, or your depression, or some family dysfunction — and you end up having to talk with seven or eight different people in a 24/7 kind of homecare arrangement. And people just don’t want to do that. And so we unify this whole 24/7 team together into a single persona through an avatar. The avatar has the same face and voice and personality and memory 24/7, and it makes it a lot easier to build a continuous relationship, especially if you have cognitive impairment or memory impairment — you know, “do I remember you? what were we talking about?” — it’s much easier if it’s just one consistent persona. But also for, you know, normal aging, cognitively intact kinds of use cases, it’s helpful just to have that continuous experience. Yeah, and then you can choose what the avatar looks like, so we want to make it fun. Usually, people don’t want — your daughter’s worried about you, you’ve got this nurse, you’ve got to talk to your doctor, you’ve got this other specialist — do you really want a virtual nurse that’s supposed to take care of you? You know, generally, you’d rather do something more interesting or fun. And so the avatar actually looks like a little dog or a cat.
And we have a lot of fun conversations with people — about prayers, or their kids, or what they’re watching on TV, if there’s a sports game going on. Or sometimes people just have to vent, or they’re frustrated or worried about something, and we’re kind of actively listening. And then that relationship forms the foundation of all of the coaching that happens with care dot coach. So we have other parts of the platform that will inject into these human driven conversations various evidence based protocols that drive outcomes, like, you know, improved diabetic self management, or reduced risk of falling, and things like that. And that’s why the health plans pay us as our customers.

Michael Hughes 08:42
Right, right. So let me unpack this a little bit. So, you know, an older person — let’s say somebody with cognitive challenges — would receive a device. Might they bring their own device, or would a portable device be used? And they would log into the care dot coach platform, and they would be introduced to the option of an avatar. And, you know, I love the idea of having a dog or a cat or something like that, because a lot of the avatars we see are developed by 28 year old developers, and 28 year old developers, you know, create these really idealistic looking young women, or men, or whatever. And that’s not exactly like me. But the conversational AI that comes to life in the avatar — is that driven by this individual that you’re hiring from overseas? Is it the relationship, or?

Victor Wang 09:43
Yeah, that’s primarily it for the avatar product, and that’s definitely how we got started. And by the way, they don’t have to log into anything, because for this product we ship a device, and it’s a managed device that we prepare in a facility that we call the hospitality hub, in Kansas. That facility ships out this device; you turn it on, and once you turn it on, you never turn it off. There are no buttons to worry about — it connects by itself to cellular data. And you don’t have to log in, because our hospitality hub actually logged in to that device for you, and we completely manage the experience. So, you know, if you’re good with technology, it’s fine — it’s just convenient and easy. And on the other end of the spectrum, our avatars talk with people and help people out all the way up to and including end stage Alzheimer’s, who have really no idea how the technology works. So there is no logging in. We just deliver the whole experience on this magical artifact that shows up at your door. But yeah, as far as what generates the intelligence — the way we started was by employing these global teams, because it was 2012 when we were founded. And back then, you know, despite being at the Massachusetts Institute of Technology, I realized that the technology just wasn’t good enough — like, AI was just not good enough to really be friends with an older loved one and alleviate any of that loneliness in any real way. But then, over the course of the last 12-ish years, a lot has changed. So at this point we have a generative AI engine that’s proprietary. And even before that, we started to build in software automation for all the evidence based health coaching protocols. So it’s not entirely generated by humans.

Michael Hughes 11:36
Well, and, you know, we are recording this podcast in October of 2023, and each year, it just seems like the technology is getting more and more intimately into the use cases. And, you know, I think I read somewhere that last year, generative conversational AI programs had trouble distinguishing verb tense in German, and now everyone’s talking about it taking the bar exam and getting a 90% score on the bar exam. Right? So AI seems to be a real thing. But where do you see the evolution of capabilities going? I mean, are we hitting a wall in terms of anything? Because, as I understand it, anything we’re talking about with generative AI requires training. And that training is based on the available content that’s out there — it’s why Reddit shut down their APIs, because their site was getting scraped, you know, everything was categorized, everything had upvotes and downvotes. I know there’s a move by Microsoft right now, where it promises to use your internal data as kind of its own training base, and things like that. But I think the two ingredients, as I understand it, for generative AI are a base of content, and then human beings interpreting what this content means. Is there still plenty of opportunity there? Or, you know, should we be terrified? Because stories are starting to emerge that we’re hitting a wall, and I don’t know if that’s true.

Victor Wang 13:07
I think the wall is a wall of comfort. The technical limitations are basically unlimited. It’s very similar to what happens with, for example, surgical robotics, or automobile AI — you know, self driving and things like that. In reality, we’re at a point where the machines can do something pretty closely equivalent to the quality of a human. Or, like, piloting an airplane — really, there’s software that does all the work. But at some level, the human is comforted knowing that a doctor or another human is kind of pulling the trigger or doing some important aspect of it. Whether it’s the fact that there’s still a pilot in the plane that could be flying itself, or the train that could be running itself. Or, for example, a surgical robot: even 14-ish years ago, I was working on a surgical robot to insert a needle into a man’s perineum, and drop off some radioactive seeds to kill the prostate cancer. So that’s pretty scary. And I was helping with some of the work on this needle-guiding robot, where the robot would put the guide in a certain place, and then the doctor would, by hand, push the needle through the guide to get the radioactive seeds into the right spot. And I asked my professor at the time: why don’t we just add one or two extra degrees of freedom, to be able to insert and rotate the needle as needed? Like, why do we need the doctor? And it’s like — we don’t. We’re able to do that, you know, just as well as a doctor. But the patients wouldn’t want it. The patients are comforted by the doctor doing the final step. So, you know, to this day, that’s kind of a large part of the bottleneck, I think: human comfort. Not to say that you don’t need to know what you’re doing when you work with some of these generative models, or that there aren’t things you need to understand and take care of — like, for example, having a proper human approval, human-in-the-loop process.

And there’s a lot to it. But I think what I’m noticing is that probably the biggest barrier is people’s comfort levels. And that’s always going to be the case when it comes to new technology.

Michael Hughes 15:34
Well, first of all, the story about the doctor being involved reminds me of an old marketing story about when cake mix was first introduced — instant cake mix, in the 1950s. It didn’t sell well at first, but then they added in a step, which was to add an egg. And when you added an egg, you felt like you were actually cooking something, and sales kind of shot up, you know, because you’ve got that little bit of art added to the science. I guess that’s an interesting psychology — how people personify, how they add art to something that should be very precise. And then the other place my mind’s going right now: at one extreme, when you think about AI, there’s the example of, oh gosh, if you tell AIs to make as many paper clips as possible, then suddenly the world is going to get covered with paperclip factories, because it’s going to turn into the Terminator, and Skynet, and that’s what’s going to happen. And then you have another dynamic: you have these companies that are actually going to Capitol Hill, and they’re saying, hey, you know, we need to think about this, we need to regulate this. On one hand, you look at that and say, oh my gosh, Google and Apple and Microsoft are all going to Capitol Hill saying, hey, we’re not sure about this, so you may want to put some boundaries around it — you can interpret that as, oh my gosh, it’s going to take over the world. But then, the last time I heard a story like that, Victor, was when all the cab companies went to the government and said, oh my gosh, you’ve got to protect taxis, because there’s this new thing called Uber coming. To what degree do you think those companies are scared of the paperclip scenario?

Or are they scared of somebody in their garage, or in their basement, or somewhere, just coming up with a much better search engine? A much better, you know — do you think that they’re legitimately threatened, that someone’s going to eat their lunch?

Victor Wang 17:45
I think that’s an interesting analogy, Mike, with the cab companies — although there’s a big difference, which is that with the cab companies, Uber was a newcomer, at least as the incumbents perceived it. Whereas with these companies that are pro regulation on Capitol Hill, it’s not as if there’s a new company rising to eat their lunch. I think they do believe they’re going to be part of the pack — that there will be many winners. You know, the foundational models at this point are rather commoditized. Like, is Microsoft that much better off than Google? How good is Anthropic? I mean, you could say something now, but three months later it might change. Everybody’s pretty much competing on similar terms. And so, you know, it’s a little bit different, because I don’t think it’s like some newcomer is going to eat their lunch. It’s more like everybody’s dealing with the same technology, the same fundamental architecture — these transformers, as they’re called. And I guess there’s a worry that if the government doesn’t step in, somebody might go out of line and, like, not exercise best practices, whereas most of these large companies seem to be fairly responsible. In fact, the thing that they have to be responsible about is, in some sense, ethics and morality — in some sense, the people behind the company care about civilization. But also, I think another big factor is that these are huge, huge for-profit companies that are public, and they are trying to go after a multibillion dollar market and billions of people served, and they already have huge brands to protect. And so what happens is, just from a pure brand value preservation standpoint, they are in fact incentivized to be very conservative and safe.
Like, you’ll notice in the news — maybe somebody figured out that, even though there are maybe some copyright questions around Mickey Mouse, and you don’t want Mickey Mouse flying into the Twin Towers, because that’s an offensive image — somebody figured out: oh, draw me a picture of a cartoon mouse flying a plane with two tall towers in the background. And because you didn’t trigger anything like that, it’ll actually generate the image. And then the world gets outraged.

Michael Hughes 20:20
It’s just a distortion, because, you know, the most frequently occurring images might be Mickey Mouse and the Twin Towers. Yeah, yeah.

Victor Wang 20:27
Bing — like, an image-creating gen AI — actually produced this, like, Mickey Mouse in an airplane flying towards, obviously, the Twin Towers, and then the world is outraged. And I don’t think the world should have been outraged. I mean, this stuff is a tool. Obviously, if I wanted to draw a picture of Mickey Mouse, you know, I could have done it myself — it was just faster because of the gen AI. But the world likes to get riled up by this type of thing. And then you’ll notice that immediately afterwards, Microsoft basically nerfs that feature, and now it’s just not as useful anymore. So actually what happens is, because these big tech companies have such big brands to protect, they’re really on top of this stuff. They’re constantly responding, and nerfing their own products, and making them actually a lot less useful in the interest of being safe and protecting their own brands — much more so, and much quicker, than you could possibly expect legislation to force them to. It’s a really big deal for them to protect their own brands.

Michael Hughes 21:28
And that’s a great perspective, because, yeah — I mean, the value of brand is just tremendous. We had the windshield replaced on the car the other week, and my son knew the “Safelite repair, Safelite replace” jingle, because it just comes on so often. I mean, that’s a huge amount of investment. And you’ve got to think that if you say to an AI engine, draw me a picture of a cartoon mouse, you know, X percent of the images training these systems are Mickey Mouse, so that’s what it’s going to draw. And — it’s analogy day on the show — because you and I both have Canadian heritage, it almost seems like a hockey game, where you skate too far out and then you have to pass the puck back, you know? Yes, let’s go out and see what this tech can do — oh, wow. People are always going to try the boundaries of anything, right? And then it reveals: oh, there’s a boundary, now we have to reset, and reset. But framing the motivation as brand protection is very revealing.

Victor Wang 22:37
I think the key there, Mike, is that different people should have different boundaries, and different use cases should have different boundaries. But the problem with a lot of this rhetoric around these large language models, and people’s perceptions of them, is that they’ve only tried the ones that big tech companies have released. So, as we just discussed, these big companies set a certain boundary from a brand protection standpoint. And it’ll be useful if you just want to do things that are very conservative — like, if you want a conservative assistant that will answer your questions and summarize documents and write stories or something, then that’s great, use a big tech company solution. But if what you’re trying to do, for example, is solve loneliness, and you’re trying, at scale, to use gen AI to build relationships that people enjoy engaging with — you know, people want to talk with this gen AI the same way they want to talk with you, Mike, because you’re thoughtful, and maybe you challenge them, and you have a good sense of humor — then maybe these big tech companies’ boundary is set suboptimally. It’s set very conservatively, to a point where the behavior you get — you might think of it as butler-y, you know, very subservient. They don’t come —

Michael Hughes 24:04
Across as a friend — yes. They don’t come across as, yeah —

Victor Wang 24:07
Yeah. And all the interesting conversations people have with people they trust — you know, you probably talk about religious beliefs, or politics, or things like that, things real friends talk about. But you’ll find that you can’t talk with, for example, ChatGPT about these things in any real way. Because if you start to say anything that might end up as a screenshot on X or whatever, that might be offensive to the company, suddenly the large language model will just spit out some boilerplate — like, “let’s respect all religions,” or something. And sure, you’re right, let’s respect all religions — but can we just talk like friends, and can we share some opinions? And so that’s kind of the direction now that we need to go in order to actually solve loneliness, right? People want to talk about the things important to them, and a lot of times, the things that are most important to you, you wouldn’t post on social media.

Michael Hughes 25:15
And that’s the thing, too — my mind is spinning in a couple of different directions right now. And I do want to unpack this in the world of age tech and loneliness in particular, because I think there’s a huge opportunity, and I think there is this ethical question. But almost from a pure economic standpoint, and going back to the earlier point about eating lunch: if people are going to seek that type of an experience from AI, and they believe it’s valuable, and established brands will not go there — I’m sort of thinking about the early days of the Internet, before 4chan became 4chan for real, where those sorts of message boards would be where people were trying to get at an edge experience within something new. Do you think that we might start to see some actors in place who will take that risk and put these edgier conversational AI models out there?

Victor Wang 26:15
There are definitely ones out there already; some of them are quite well funded. The difference is that, as far as I know, they’re going for extreme scale right off the bat, and so essentially everything is being done by the AI. And so then you’ve got to go to how these models are trained to begin with. So, GPT-3 has existed for a long time. It’s a large language model that was trained essentially just to predict text: you give it some text, and it’ll predict what is probably going to come next, based on all the text it was trained on. And as a result, it wouldn’t really do what humans always want it to do. For example, if you ask GPT-3, “can you write me some code to calculate pi?” — it might actually, because it learned from forum posts where somebody asked a question and the reply was, “why don’t you do it yourself? There are so many solutions to this already; why are you posting this here?” GPT-3 might very well spit out something like that, because it’s a likely response — but it’s not what you want. And so how did they actually go from GPT-3 to GPT-3.5 Turbo, which powers the free version of ChatGPT?
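The next-word prediction Victor describes can be sketched in a few lines. This is a toy illustration, not a real language model: the probability table is hand-written for the example (a real LLM learns billions of such probabilities from its training corpus, over tokens rather than whole words), but the mechanic — look up a distribution over continuations, pick a likely one, repeat — is the same.

```python
# Toy sketch of next-token prediction. The bigram table below is
# invented for illustration; a real LLM learns these probabilities
# from a huge text corpus and operates on tokens, not whole words.

# P(next word | current word)
bigram_probs = {
    "can":   {"you": 0.6, "we": 0.3, "it": 0.1},
    "you":   {"write": 0.5, "do": 0.3, "help": 0.2},
    "write": {"me": 0.7, "code": 0.3},
}

def predict_next(word):
    """Return the most probable next word, or None if the word is unseen."""
    dist = bigram_probs.get(word)
    if dist is None:
        return None
    return max(dist, key=dist.get)

def continue_text(prompt_words, max_new=3):
    """Greedily extend the prompt one predicted word at a time."""
    words = list(prompt_words)
    for _ in range(max_new):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return words

print(continue_text(["can"]))  # → ['can', 'you', 'write', 'me']
```

The greedy `max` here is the simplest decoding rule; production systems usually sample from the distribution instead, which is why the same prompt can yield different replies.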

Michael Hughes 27:30
I’m so glad the word turbo is in it, by the way, because, you know — I don’t know what it was about the ’80s, with Turbo sunglasses and turbo aftershave. And I’m a car freak.

Victor Wang 27:42
Yeah, presumably there was a non-Turbo version? I don’t know — sorry, I’ve never used the non-Turbo version. But yeah, I think colloquially people just say 3.5, but technically it’s 3.5 Turbo. And what OpenAI did to get there was they hired a team — it wasn’t even that many people. You can look at the research paper that they published, and to their credit, they thank and acknowledge every single human labeler that helped create GPT-3.5 Turbo. What they did was: they took GPT-3, they prompted it and got all these outputs, and then they had these human labelers — around 30 people, not that many. They don’t disclose for how long, but I imagine it would have been a couple of months of work for 30 people, like a little research project. And they would go in and they would rank the results: from this prompt, it generated these versions of responses, and this one’s the best, this one’s the next best, this one’s okay, but this one’s horrible. And from that, they run a cycle called reinforcement learning with human feedback, and they keep doing this. And it progressively learns not just to predict — from an unsupervised training corpus — what words are likely to come next given the prompt, but what words are likely to satisfy the human that gave the prompt. And that was one of the key innovations to create this experience that everybody went crazy about with ChatGPT, where you ask it for something and it basically does what you want. And so the problem is that unless you’re doing that, you’re going to be using kind of a more limited version — you can fine tune your models to do more of what you want.
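The labeling step Victor describes — a human ranking several model outputs from best to worst — is typically converted into pairwise comparisons, the data format commonly used to fit a reward model in RLHF. A minimal sketch of that conversion, with the prompt, responses, and ranking invented for illustration:

```python
# Sketch: turn one human ranking of model outputs into (chosen, rejected)
# preference pairs, the form typically used to train an RLHF reward model.
# The prompt, responses, and ranking below are invented for illustration.
from itertools import combinations

prompt = "Can you write me some code to calculate pi?"

# Human labeler's ranking of the model's candidate responses, best first:
ranked_responses = [
    "Sure! Here's a short Monte Carlo estimate of pi in Python: ...",
    "Pi is approximately 3.14159.",
    "Why don't you do it yourself? This has been answered many times.",
]

def ranking_to_pairs(ranking):
    """Every (better, worse) pair implied by a best-first ranking."""
    return [
        {"prompt": prompt, "chosen": ranking[i], "rejected": ranking[j]}
        for i, j in combinations(range(len(ranking)), 2)
    ]

pairs = ranking_to_pairs(ranked_responses)
print(len(pairs))  # 3 ranked responses imply 3 pairwise comparisons
```

One ranking of n responses yields n·(n−1)/2 comparisons, which is part of why a team of only around 30 labelers can produce a surprisingly large training signal.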
But the thing that seems to have the most powerful effect is reinforcement learning with human feedback. And so what we do at care dot coach is, we have this team of humans staffing our avatar, as we’ve always done. They have a level of skill and empathy and intelligence in forming conversationally supportive relationships with older adults and people with disabilities, and so on, in a Medicare and Medicaid context. They’ve been doing this for years; this is their career. And that is the level of skill and empathy and intelligence that is provided as the human feedback when we do our reinforcement learning with human feedback loop. Whereas some of these other companies — I mean, I’m sure they have their own proprietary, confidential processes, but I’m sure they don’t have the same level of human empathy and expertise, especially in a Medicare and Medicaid healthcare context. That’s actually the limiting factor. I think you alluded to this earlier, Mike: the humans that are training the model are the limiting factor in how good the model can get. And with care dot coach, because we’ve been doing this through this 24/7 human team, we not only have a bunch of historical data to fine tune on, but we have this ongoing, dynamic ability to do the RLHF training. That’s, I think, what sets us apart. We’re in a really special spot to push this technology forward in a way that actually builds real relationships in healthcare supportive roles — and safely, because we also pull in that human team in real time to edit things. The other thing to understand about the technology is, when you have one of these use cases like a mass consumer chatbot, you’re serving a huge enough population that you can’t really afford to have a lot of human review. So you’ll say something to the chatbot and it’ll just reply right back, and you might think that’s just the performance of the AI.
But in fact, before it replied to you, what the model generated was a series of floating-point numbers, you know, zero point something, zero point something. For each token, or bundle of characters, in the sentence it's going to send you, it actually has some sense of the probability of fit. And if you look at that in the aggregate, you can actually make a calculation as to how confident or how certain it is about that response. Now, you don't see any of that when you're just normally using Bing Chat or ChatGPT or any of these friendly chatbots, so you get a certain experience. But behind the scenes, if you're designing the system, you can set it up so that if it's not very confident, or if it's detecting certain situations like safety- or health-related matters, we can in real time pull in one of our health advocates to take a look at that response, and potentially edit it to be safe, or edit it to be more engaging or effective.
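The gating idea Victor describes can be sketched like this, assuming per-token probabilities are available from the model API. All names, thresholds, and the trigger-word list here are invented for illustration, not Care.Coach's actual system:

```python
import math

# Sketch of confidence-based escalation: aggregate the per-token
# probabilities into one confidence score, and hold low-confidence or
# safety-flagged replies for a human advocate to review before sending.

SAFETY_WORDS = {"hospital", "fall", "pain"}   # toy safety trigger list
CONFIDENCE_FLOOR = 0.65                        # made-up threshold

def reply_confidence(token_probs):
    """Geometric mean of per-token probabilities, i.e. the average
    log-probability, exponentiated back to a 0-1 scale."""
    logs = [math.log(p) for p in token_probs]
    return math.exp(sum(logs) / len(logs))

def needs_human_review(reply: str, token_probs) -> bool:
    low_confidence = reply_confidence(token_probs) < CONFIDENCE_FLOOR
    safety_flag = any(w in reply.lower() for w in SAFETY_WORDS)
    return low_confidence or safety_flag

# A confident, benign reply goes straight out...
print(needs_human_review("Good morning! Sleep well?", [0.9, 0.8, 0.85, 0.9]))
# ...while a safety-related one is held for a health advocate to edit.
print(needs_human_review("You should go to the hospital", [0.5, 0.4, 0.6]))
```

A mass-market chatbot can't afford this review step at scale, which is exactly the gap a smaller, staffed deployment can exploit.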

Michael Hughes 32:47
And in the context of serving older adults, the type of persona that you're working with, this is a virtuous circle, where the conversational interactions get better and better, not just to a specific level for that person; you're also building up this specific set of knowledge about these people. And it strikes me, when we first met at the AARP AgeTech Collaborative conference a few months ago, I heard you on a panel, and you said something there that really struck me. I'm paraphrasing you, by the way, but: if you're a cardiologist, you know everything about the heart. If you're in geriatrics, you have to know everything about complexity. And it seems like what these AI models are good at is making some semblance of order out of seemingly complex inputs, right?

Victor Wang 33:55
Yeah. So I can give you a heads-up on some of the stuff that we're doing along those lines. It's actually not so much that we have a model; we have a system by which we can take foundational models and upgrade them to be more empathic and better in a healthcare-supportive role. Because the thing is, every couple of months, the state of the art in what the best foundational model is changes. So we've set ourselves up to basically not care. It's like, do you use Windows or Mac? Doesn't matter. We take whatever is best, and then we put in all of our own training to elevate that foundational model to become best in class at these empathic, healthcare-supportive, conversational, long-term relationships. And you can think of that as a skill. It's as if you took a college-educated person, this foundational model, off the street. They have no context or anything, but you can prompt them: okay, college-educated person I found on the street, what is two plus two, and write me a poem, and they'll be able to do those things. But then you train this person you found off the street, who has a decent start of an education, to be more empathic talking with seniors and people with disabilities, and to be appropriate in a healthcare-supportive role. And it turns out that involves, like, centuries of interaction we can just throw in there, imbuing this college student with lifetimes of talking with seniors.
And then suddenly this person has a skill that, if you think about it, also includes some understanding of geriatrics and senior care. The fact that, oh, you talk to this person and they talk about how they're unsteady, how they're worried that they feel dizzy when they stand up, and soon thereafter they're talking about how they were in the hospital because they had a fall. In many lifetimes of conversationally supporting this population, not only do you learn how to talk with them better, but you experience all of these things, and you get an intuitive sense of the things that matter in aging and how these things are related. And then we take that and, this is something called Project Agar that we presented at the National PACE Association conference a few weeks ago; PACE is a Program of All-Inclusive Care for the Elderly. So we have a new initiative where we're collaborating with one of the leading, most innovative PACE programs in the country, where we're feeding in the EMR and claims data from a very complex older adult population. We're actually layering it on top of this conversational engine that we use, and basically transforming the healthcare information, the claims and the EMR information, into a form that the model understands. The idea is that when you put all this information into the same model, it starts to build on itself. Instead of training a separate model that predicts hospitalizations and a separate model that gets good at talking with seniors, put it all into one model. Just like a human who served as a caregiver for 20 years and then got their medical degree would be a very different type of doctor: they'd probably have really good bedside manner and very strong empathy for a certain type of patient.
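As a purely illustrative sketch of the "put it all into one model" idea, structured claims or EMR rows can be serialized into the natural-language form a conversational model already consumes. The field names and records below are invented, not the actual Project Agar pipeline:

```python
# Illustrative only: turning structured claims/EMR rows into text that can
# sit in the same training stream as the conversational data, so one model
# sees both kinds of experience. All field names and values are invented.

def claim_to_text(claim: dict) -> str:
    return (f"On {claim['date']}, the member had a {claim['setting']} visit "
            f"for {claim['diagnosis']}.")

claims = [
    {"date": "2023-03-02", "setting": "clinic",
     "diagnosis": "dizziness on standing"},
    {"date": "2023-03-19", "setting": "hospital",
     "diagnosis": "a fall with hip injury"},
]

# Joined into one narrative, the records mirror the spoken pattern Victor
# describes: dizziness on standing, then a hospitalization for a fall.
narrative = " ".join(claim_to_text(c) for c in claims)
print(narrative)
```

The point of a serialization like this is that correlations across domains (what members say, what shows up in claims) can reinforce each other inside a single model rather than living in separate predictors.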
Or if you had somebody who worked as a nurse for a few years before they became a doctor, that would be a really interesting background, very different, and probably a better doctor, actually. So by putting all of that experience and training into one of these models, and we're in the process of building basically this overall geriatrics model, it will start to develop emergent properties. That's what it's called: when you have these huge, complex, large models and you throw in training data of various sorts, the model develops what are called emergent skills that it didn't have before. For example, if you trained one of these models to answer questions about accounting, and then you trained the model to produce poetry with a limerick rhyming scheme, you might find that the model itself figured out how to write limericks about accounting. It could be the world's greatest accounting-limerick writer, and you didn't even train it for that.

Michael Hughes 38:45
I'm sure right now the world's greatest accounting-limerick writer, the human, is up there quaking in their boots. That was my thing.

Victor Wang 38:58
So that’s what we’re building. And yeah, it’s really exciting. And

Michael Hughes 39:03
And in a needed way, because, you know, in this world we all understand and see health inequities, we all see the impact of the depersonalization of care, or the fact that, to quote somebody I respect, MP Evans over at peer health care, 90% of people with chronic disease believe it's up to them and them alone to manage that condition. People really do feel marooned a lot in the healthcare system, and this is where the promise is. I want to cover maybe two more questions on the subject before we get to the end of our podcast. The first has to do with ethics, the other with speech interpretation. So in terms of ethics: we're all thinking about the benefits of the scalability of these types of companions, given the workforce issues, and especially if it can get better and better at personalization and actually give somebody a meaningful and entertaining experience, one that's fairly sticky. But just like we talked before about boundaries, you at Care.Coach are likely very involved with the ethical questions because of the audience you deal with, and that must have led you to make some decisions about the boundaries that you will carry forward. Are you able to talk a little bit about that?

Victor Wang 40:33
Yeah, it's definitely an evolving space as these get better and better. We've always had to deal with interesting boundaries, because when you get into geriatrics, you also experience a lot of geropsychiatric challenges, and people's behavior can be pretty eccentric or unexpected if you're not used to it. So we've always dealt with interesting boundary issues. For example, people like to vent to their avatar, but sometimes it gets really extreme, like an abusive kind of thing, and we have to decide how to handle that, because we have these health advocates in the back who are experiencing it, right? So we always have to balance teaching the health advocates how to have a healthy perspective: okay, if they're verbally abusing the avatar, they're not verbally abusing you. They don't actually know you; there's just something going on in their life. Like, a chunk of our population...

Michael Hughes 41:47
If they're at that stage, you know, with the swear words and dementia, that's part of the brain. Yeah,

Victor Wang 41:53
Large portions of our population have diagnosed Alzheimer's disease and related dementias, and a lot of it is undiagnosed as well, plus, you know, people with bipolar disorder and all sorts of mental health challenges. And we're there to help them. So we always have to balance between educating our health advocates and supporting them, having internal pathways for our own team to decompress a little bit if it's tough to handle, versus, at some point, sometimes recommending to the health plan: hey, we should probably disenroll this person, this is not the healthiest relationship. So that's something we've always had. Another example is, like, saying "I love you." It's such a common thing that we developed a policy: when our client tells the avatar "I love you," then we can reciprocate. We're allowed to reciprocate, but we never initiate. So that's another interesting one.

Michael Hughes 43:01
Hold on for a second. Alexa, I love you. Thanks for saying that. Excuse me,

Victor Wang 43:12
It’s a song now.

Michael Hughes 43:13
It's a song. Alexa, stop. Sorry, just testing that. So yeah, sorry, let me pull the entire interview over to the side of the road while I do that. My other question is more of a technical question. We look at different stages of AI, and I guess everyone has different categorizations, and some of them are fantastical at this point, you know, being self-aware and things like that. But there seems to be a level of accomplishment within AI that maybe has yet to come, and the question is: is it going to, and how soon? This idea of intuition: the cadence of a voice, the trembling of a voice, the joy in it. Basically, not so much the interpretation of the words, but the intonation, and how that could be an additional piece of data that could size up someone's wellness?

Victor Wang 44:13
Yeah, that one's an evolving field. And actually, that starts to get into another ethical thing around health equity, because another big thing we do that's related to ethics is make sure that our solutions are well developed for, and equally functional for, the populations that we support, whether it's English or Spanish for Hispanic populations. We do concurrent development in multiple languages, and we make sure it works well in classifying responses uttered by, say, Jewish people, or Hispanic people who like to word things differently, or African Americans who word things differently. And when you add in intonation, I guess that's a pretty interesting part of it too. We haven't really gotten into that ourselves. There are companies that specialize in processing a voice sample and using AI to detect whether that person has depression, or even potentially diagnose neurological conditions like Alzheimer's; there are companies that do that. We haven't really gone down that path yet, because when things get challenging, at least through the avatar, we can rely on our human health advocates to come in and use their human brains to get a sense of how to respond appropriately. And then we have another product, Fara, that's largely generative-AI powered, where there is no intonation because it's text. It's designed to engage mass populations through text messaging, primarily for Medicare Advantage populations. So yeah, we haven't gotten too much into that. Although I can tell you that for the voice production system we use, a lot of our newer devices use one by Microsoft on Azure, which you actually have to pay for depending on how much is used. But we found it rather worthwhile, because it's a lot more natural than what you get built into these Android devices these days.
And then you can make custom voices, which we haven't done, but you can. And either way, they have certain voices where you can put in tags, so you can actually pause, or have the voice produced to sound happier, or say a certain word more seriously, for example.
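The tags Victor mentions are SSML markup. A minimal sketch of what it might look like with an Azure neural voice follows; the voice name, the style value, and the exact tags each voice supports vary, so treat this as illustrative and check Azure's current SSML documentation:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <!-- A cheerful speaking style for the friendly, companionable part -->
    <mstts:express-as style="cheerful">
      Good morning! It's so nice to talk with you.
    </mstts:express-as>
    <!-- A deliberate pause before shifting register -->
    <break time="600ms"/>
    <!-- Slower delivery to land the health-coaching instruction -->
    <prosody rate="-10%">
      Now, about that new medication: remember to take it with food.
    </prosody>
  </voice>
</speak>
```

This is the mechanism behind the coaching shift Michael describes next: same voice, but the markup moves it from friendly chat to a slower, more deliberate instruction.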

Michael Hughes 46:37
That's interesting. Yeah, I mean, that's the coaching element of it, right? Like: I'm being your friend right now, I'm talking to you, but also, here's this new medication you're taking, and now I'm going to talk a little more straightforwardly, all to kind of grab your attention. Yeah. I mean, we could carry this conversation on for a long time, but unfortunately, we have a little bit of a time-out here. Victor, I can't thank you enough for being on the show. We do have another set of questions for you, three questions that we like to ask all guests about their personal experience in aging. Is it okay if I ask these questions? Sure? Okay. Somebody's gonna say no one of these days. But first of all, before we get to that, where can people find you? Tell our listeners where they can reach you.

Victor Wang 47:32
It's literally care.coach. If you type that into your address bar and hit enter, that's the entire URL. But sometimes that's too simple, so I have to say www.care.coach, and then you...

Michael Hughes 47:43
Gotta keep it simple, yeah. People use the www as their cue that it's a website.

Victor Wang 47:50
Yeah, it's URL summarization. And my email address is victor@care.coach. And you know, if you have a loved one who could benefit from companionship, or coaching to better manage their chronic conditions, or support with dementia challenges, anything like that: we are available to consumers, but mostly we sell into health plans. And then we also help younger people with intellectual and developmental disabilities. We're in several states for these IDD services, for people with Down syndrome, cerebral palsy. We're also there for psychosocial support and kind of day-to-day coaching. So feel free to reach out.

Michael Hughes 48:29
That's care.coach, easy as that. And so, question number one, Victor: when you think about how you've aged, what do you think has changed about you, or grown with you, that you really like about yourself?

Victor Wang 48:42
I think I've mellowed out, you know. Like, when you run a company, when you start a startup, there are a lot of problems, and more problems to solve, and things like that. At some point you're like, you've got to pick your battles, pick what you really focus on, and things like that. And so...

Michael Hughes 49:11
I love it. I love it. I love it. Here’s question number two. What has surprised you the most about you as you’ve aged?

Victor Wang 49:21
Yeah, actually, I mean, back in the day I'd have been pretty sure I have Asperger's. Like, very high attention to detail.

Michael Hughes 49:36
Yeah, we’re all neurodivergent around here. So yeah,

Victor Wang 49:39
That attention to detail, and a perfectionist type of character, and I think I'm still a very direct communicator to this day. But yeah, I think when I was a lot younger, I was probably even more offensive than I am today. Like, I wouldn't have thought...

Michael Hughes 50:08
As an ADHD guy, man, I swear, I can look back on my childhood, my poor parents. I know I was just like sandpaper with the way that I was back then. And what have you found? That over time you sort of embraced those parts of you and put them to work in the right ways? I mean, that's kind of neat. Yeah,

Victor Wang 50:26
Like, I've developed this appreciation for soft skills that I think when I was younger I didn't really respect. You know, when I was younger, I would respect engineering skills or technical skills, and the soft skills were just, I guess, if you can't code, you've got to learn some soft skills, right? But...

Michael Hughes 50:53
This guy can't code, he's probably got some soft skills, you know, whatever.

Victor Wang 50:58
I think when you're in that mindset, and I think a lot of young people who are technical or engineering-minded probably have a similar mindset, you probably don't think you'll actually change that much. But yeah, I'm surprised, and positively surprised. I still have a long way to go, but I've developed a respect for, you know, the ability to compliment somebody and make their day better, provide positive feedback, appreciate somebody, make somebody feel good about being on your team. And...

Michael Hughes 51:35
Well, I mean, you're right up front, and you're seeing people's lives change for the better, not only the people that you serve, the end users, but also the people who are in positions to coach. There must be challenges, but there also must be these wonderful moments of insight and connection and grace, and all the rest of it, that kind of make the day special.

Victor Wang 51:56
Yeah, I spent a lot of time staffing our avatars. You know, the team had to start from nothing. At one point I was the one staffing the avatar, and then we had more and more staffing, and eventually I didn't have to staff the avatar anymore. But a lot of the conversations in the early days were me. In fact, what I noticed was, because back then I was still very, I guess, engineering-minded and all about hard skills, one of the things I noticed was that it's easier to engage somebody in a pleasant, supportive, positive, soft-skills kind of way through the avatar. Because I felt like I wasn't myself anymore; I had an opportunity to reset and be like: I am this supportive dog, and I'm here, I am empathic and compassionate and supportive. And it was actually a little bit easier, I think, for me to develop these skills that way than in person. I talk with older people all the time in person, going into the skilled nursing facilities, doing interviews and things like that, and we'd run activities in these communities to, you know, learn about our population. But I had to develop the skills. So I think initially it was like a cognitive aid, to be like: oh, I'm roleplaying now as this ideal companion. So that was interesting.

Michael Hughes 53:43
And that gets into maybe a variation on the third question, because you were obviously inspired to create this company by your grandmother. The third question is really: have you met people who have inspired you to age with abundance? And now that you've also had this experience of working directly with older adults, have you found any traits that they embody where you think, I want to have that when I'm older? Traits that have inspired your own "when I'm that age, I hope I'm like this person"?

Victor Wang 54:18
Yeah, one person who comes to mind is Jennie Chin Hansen. She basically popularized PACE as the CEO of the first PACE program in the country. And I didn't know anything about it. I mean, she's not that old, but she's somebody I look up to. When I'm older, I want to continue to contribute to society, advise young people, and give back to the world as much as she does. Because, you know, I didn't know about any of this stuff. We had these companion avatars we were selling into non-medical senior care, like private-duty home care, things like that. And then somebody made an intro to Jennie, and she's like: hey, Victor, these avatars sound amazing. Have you heard of PACE, these Programs of All-Inclusive Care for the Elderly? This is where this kind of companionship can really make a difference intrinsically, because loneliness is like a chronic condition that affects morbidity and mortality as much as any other chronic condition. But also, maybe you can help support these people in different ways, because these people have an average of eight chronic conditions, and go to the hospital and the ER all the time. Are you interested? I can make a couple of intros. And that's what got us into health insurance, basically, this very special type of health plan. And it's been completely transformative to Care.Coach as a company. To this day, you know, we're in almost 20% of all the PACE programs in the country, and that's kind of when the company actually started to take off. And so, yeah, I want to be like that, you know, at some point.

Michael Hughes 56:09
And PACE is taking off. I mean, you got to meet it, to learn from it, to grow with it, and eventually it became your market. Just amazing. What a great grounding foundation, basically.

Victor Wang 56:25
One day, I want to be able to just have these conversations with young people and, in one little conversation, be completely transformative, not just to that entrepreneur, but to the entire company and to all the organizations they serve. And it's just really interesting that, as you develop your wisdom and experience and connections in the industry, even if you're not on the ground doing all the work anymore, as you reach retirement age and beyond, you know, I want to be able to give back in that way. One conversation can have such an impact, on not just one person, but an entire organization.

Michael Hughes 57:07
Well, Victor, I just have to say thank you. Sincerely, thank you for your time today on this podcast. Lots of food for thought, and I'm glad that we're memorializing this conversation. And most importantly, thanks to you, our listeners, for listening to the Abundant Aging podcast, which is part of the Ruth Frost Parker Center for Abundant Aging, which in itself is part of United Church Homes. And we want to hear from you: what compelled you about this conversation? Where do you think the AI space is going? What drives purpose in your life? Because it's all about what we do for you, and we hope that we are serving you. You can visit us at abundantagingpodcast.com. I also have to say hi to my mom, by the way. I hope she's watching this time; you'll be on YouTube on the United Church Homes channel.

Victor Wang 57:58
Hi Mike’s mom, Mike’s awesome. Do you still criticize him too much?

Michael Hughes 58:05
Victor, I really hope that we have you back on the show as we continue to follow along on the journey you're on at Care.Coach. It's a dynamic industry that's fast-changing, but there's also a very purposeful spirit and a very strong need out there for what you're trying to do. So the plug is care.coach, right? And victor@care.coach. Is there anything else we should know about you before we sign off?

Victor Wang 58:30
I’m certain my mom is going to be watching this. So hi, mom, as well.

Michael Hughes 58:35
You did great. Great interviewee. All right, well, we'll leave it there, guys. Thank you so much for listening. We will see you next time.