Setting the Winning Conditions for AI-Powered Healthcare
Date
July 29, 2025
Runtime
42:44
Subscribe
Digital Health Canada’s Setting the Winning Conditions for AI-powered Healthcare report summarizes actionable insights from interviews with eight leading health care organizations implementing AI. In this episode, we dive into those conditions and their implications – learning what sets winning AI implementations apart.
Guests:
- Muhammad Mamdani, Vice President of Data Science and Advanced Analytics, Unity Health Toronto
- Ted Scott, Vice President Innovation and Partnerships, Hamilton Health Sciences
Learn More:
- Setting the Winning Conditions for AI-Powered Healthcare (report)
- Unity Health Toronto
- Hamilton Health Sciences
Transcript
DHiC 15 – Winning Conditions for AI-Powered Health Care
This transcript was generated by AI and may contain minor errors.
Muhammad Mamdani: Canada has all the ingredients to be not only one of, but the, world leader in AI.
Katie Bryski: Hello and welcome to Digital Health in Canada, the Digital Health Canada podcast. My name is Katie Bryski.
Shelagh Maloney: And I’m Shelagh Maloney.
Katie Bryski: And for Digital Health Canada’s 50th anniversary, we have 50 reasons for you to listen to the show. And reason number 16: we have the hottest takes on the hottest topics.
Digital Health Canada’s Chief Executive Forum recently published Setting the Winning Conditions for AI-Powered Healthcare. This report summarizes results of, and gleans insights from, interviews that were conducted with eight leading healthcare organizations that are implementing AI. And in this episode, recorded live at the eHealth Conference, we’ll describe those conditions and their implications with two leaders who are at the coalface in their respective institutions.
We are very pleased to welcome Muhammad Mamdani, Vice President of Data Science and Advanced Analytics from Unity Health Toronto and Ted Scott, Vice President Innovation and Partnerships, Hamilton Health Sciences. Thank you both so much for joining us here, especially right in the middle of eHealth.
Shelagh Maloney: So we always ask people how they got to where they are, and it’s quite exciting to hear. So Ted, maybe we’ll start with you. Just tell us about your career path and how you got from where you started to where you are now.
Ted Scott: Well, thanks for the opportunity, Shelagh. It’s been quite a journey.
We were just saying earlier, for the podcast, how surprising the twists and turns in your career can be, and certainly that was the case for me. So I started my career on the front lines of medical imaging, pre-digital, with film. And then I pivoted into –
Shelagh Maloney: Okay, Fred Flintstone.
Ted Scott: And uh – nice. Nice.
Muhammad Mamdani: Yabba-dabba-doo!
Ted Scott: Yeah. The Stone Age. And the funny thing about that is I had no idea, because what happened in the early part of my career was a sudden adoption of digital imaging technology and the DICOM imaging standard, which made it possible to share images across all the different platforms in every imaging department.
So I had this sort of unrealistic expectation of what would be possible in healthcare with standards. And then I pivoted into academia, and eventually into a leadership role in applied research, and then ultimately back into the hospital as a leader. But little did I know, back in the mid-nineties, that the rest of health information wouldn’t be just as easily exchangeable as images were.
And that was, for me, quite an insight. Yeah.
Shelagh Maloney: Well, it must have been a very big aha moment for you. It’s like, wait a minute, this is not what I thought. Because digital imaging was so far ahead of everything else at that time.
Ted Scott: Well, and I was shockingly naive about all of the many ways that you could store and retrieve health data, because I thought, well, you know, imaging is kind of complicated, and we have all of these really cool technologies and modalities and ways you can actually construct images. And I thought that was complicated. But then I came into health informatics.
I was like, oh my God.
Shelagh Maloney: That’s true. Muhammad, what about you?
Muhammad Mamdani: So I kind of came in at the clinical front. I’m a pharmacist by training, so I have a clinical background, and I remember going onto the floors as a student and just shaking my head, saying, oh my gosh, we’ve learned all this stuff in the textbooks, and I’m on the floors, and so little of it actually applies to real clinical practice. And there are so many things that we do that are based on cutting-edge evidence that typically excludes the majority of patients that we see, because it’s a statistical game, right? When you’re designing studies, you need to show, or you hope to show, an effect.
So you make the patient population as homogeneous as possible to increase your chances of showing an effect. That means you exclude a lot of patients. And of course this is the best evidence that we’re supposed to apply to the rest of the population. So I was really struggling with that: there’s limited application of the evidence that’s available.
I’m making decisions based on my intuition and some adaptation of evidence, learning from my peers and my supervisors, who are also, to a large extent, winging it. It’s not a perfect world at all. So then you’d actually have to step back and say, well, how do I actually make the best decisions? Because I know we’re not making optimal diagnoses.
I know we’re terrible at prognosis. I know we actually do trial and error for a lot of the treatments that we have, and communication – that’s not ideal either. So as much as we glorify the medical system and medical practice, there are a lot of shortcomings, and it’s hard. It’s really, really hard.
So can we look to best evidence? I started getting into pharmacoepidemiology and health economics, and realized there are all sorts of biases with those sorts of studies. Then I started up a randomized trial center, because that’s the gold standard of medical evidence – and there are all sorts of issues with that as well.
So what can you do to actually tailor data to an individual patient at the time of medical decision-making? Hmm. There’s this thing called AI, and, you know, it takes into consideration a lot of data, but it also tries to take into consideration the individual parameters of the individual patient that you’re seeing at that moment.
And that’s where we thought, you know, maybe there’s an application here. And it’s been an exciting journey so far.
Ted Scott: Can I jump in with a question?
Katie Bryski: Of course. Yeah.
Ted Scott: What I think was quite interesting about your journey was that you got in the game very early, and frankly, a lot of us looking on from the sidelines were like, well, you know – the runway that you were on was a very long one.
So how did you have the confidence to say, okay, I’m gonna really put my chips on the table in this bucket? Because a lot of people are like, eh, listen, we’ve been doing this for a long time, it’s probably a 10-to-20-year journey. But you managed to get a lot accomplished in maybe a five-to-10-year horizon.
So what gave you the confidence to say, we’re gonna go right out in front and take this on?
Shelagh Maloney: And can you, just before that – what was the timeline? Was this five years ago, 10 years ago?
Muhammad Mamdani: I mean, I think, first of all, I think it wasn’t so much confidence. It was probably ignorance.
Shelagh Maloney: Naivete.
Muhammad Mamdani: Exactly. Of course we can do this. No, I think it was more out of frustration than anything else. But it was throughout that journey of getting exposure as a student – and I actually practiced for almost 10 years after I finished; I don’t practice clinically anymore – seeing all those nuances. But I also have this weird kind of combination of degrees.
So I have my PharmD, which is a doctor of pharmacy degree, but I also have a couple of master’s degrees and a fellowship. One of the master’s is in econometric theory, of all things, so that’s all about data and mathematics. Yeah, I know – I wrote mathematical proofs for two years that I’ll never get back. I coded for about a decade.
So you kind of see all the nuances around what goes into these algorithms. But then I have another master’s in statistics and epidemiology, so data was always really close to me, and the question has always been, how can I leverage that data in a meaningful way? And there are all sorts of ways you can leverage it.
So that pathway meandered from clinical epidemiology, with observational studies and randomized trials, to this AI space. Mm-hmm. ’Cause you see all this data that’s just sitting there in healthcare, waiting to be used, and we’re not using it very well. So that’s kind of how I came about it.
But, you know, I look towards people who, like you, Ted, are the innovators: what can we glean from folks at the other end who say, I’ve got these really weird and wonderful ways of doing things, and learn from other experts to actually bring people together? Because I think this is a team sport, right?
We have to get clinicians in with data scientists, with technical folks, with legal folks, with ethicists, with innovation folks. We all play together.
Katie Bryski: So I was having this exact conversation with a patient partner earlier today. They were saying, we are part of the solution – as are the clinicians, as are the administrators, as are the data processors, like in Steve Huesing’s letter that you read a few episodes ago. It’s interesting: for both of you, it’s almost like as your careers have progressed, you kept going a level further down – Ted starting with the DICOM imaging and thinking, well, how hard can the rest of it be? And then the next level. And Muhammad almost building what you needed.
Ted Scott: Yeah. I think the common thread is we both don’t withstand the frustration of the status quo very well, and then we have no other choice but to try to disrupt it, and hence innovation ensues. And you need people like that around to make things change. Otherwise you just end up doing the same old things.
Katie Bryski: That’s like the grit in an oyster, right? That makes a pearl. You need something.
Ted Scott: I guess we’re more sensitive to that. Yes, and it just drives us crazy. And that’s the motivation to like keep on keeping on.
Muhammad Mamdani: So, I mean, you see all the inefficiency in the healthcare system, but you also see patients literally doing poorly, in some cases dying, because of the lack of innovation that’s there.
And you’re just seeing all this information sitting there that could have been used to not only make the healthcare system more efficient, but to literally save lives and provide better patient care. Now, that being said, there’s a high rate of failure for innovation. Yeah. And one of the challenges is, how do you fund these failures?
And that’s how you learn. But there’s also strategies around how you minimize the chance of failure, and that’s what we have to get better at.
Shelagh Maloney: Well, it’s interesting, and being at the eHealth Conference, there’s a lot of discussion around data. We haven’t really talked about data at eHealth that much before, and now it’s come to the forefront, ’cause without data you can’t do anything.
And for both of you, it’s been data driving what you’re doing. And Muhammad, I think you were part of the panel yesterday that talked about, as you said, there’s so much data, and the percentage of the data out there that’s actually being used is minuscule. And the rate at which data is being generated is also incredibly mind-boggling.
And our closing keynote yesterday was around equity, and making sure we have equity in our data – and we don’t. And you sort of hinted at it before: there are a lot of issues with the data that we currently have, the homogeneity of it, et cetera. So with innovation, you need to look at ways to not only innovate and solve those problems, but make sure you have the right foundation and data on which to develop those solutions.
And do you see that as an issue? Like, is that a problem for you when you’re doing these innovations and developing these AI algorithms? How easy or difficult is it, and how consistent are your data sets?
Muhammad Mamdani: Maybe I’ll tell you a quick story. When we first started our journey, it was in 2015, when we received a significant amount of money from the Li Ka Shing Foundation in Hong Kong to fly back to Toronto and actually do the data thing.
The one sentence that I remember from the Li Ka Shing Foundation was Mr. Li saying, I want data to help people. So the purpose wasn’t so much the data; the purpose was to help people. And data is a mechanism, the vehicle, to get there, at least from the proposal that we made to them. We fly back to Toronto and we say, yes, we got a bunch of money from the Li Ka Shing Foundation and we’re gonna help people.
And then we looked at our data environment, and we said, there’s no way we’re gonna help people with this data. So we literally spent about three years trying to fix our data environment. And it was hard, because the first thing that happened was the privacy officers and legal folks said, absolutely not – can’t, won’t do it – because privacy, security. And when we pushed, when we asked, really, can you show us the legislation on which you’re basing your no? They couldn’t. So we actually consulted with an external legal firm to get an opinion on whether we could actually do this. And they said, absolutely you can – here are the parameters around how.
And so we had to take it to our own folks, who are not experts in data and privacy – they’re general legal counsel – to educate them, and to have them talk with lawyers who are experts, to say, actually, this can be done. The funding is there. Yeah. But we need the will of the people, which is typically a bigger roadblock than the dollars are, to enable an environment that would enable AI.
Because without that environment, without that oil of data, you’re not going to have AI. And so the process was: we had legal expertise from the outside saying it’s possible, and somebody at the leadership level would have had to say no to Mr. Li – and Mr. Li was a very big donor at our institution. Nobody was willing to do that.
So the stars aligned, where they said, okay, I guess we’re doing this. So we did.
Shelagh Maloney: You probably have similar challenges.
Ted Scott: Well, it’s very similar. Going back maybe two decades, to all the arguments that had to be made to digitize everything – we would say things like, well, you know, paper can only be in one place at one time, and all these things, right?
But now with AI, there’s this tremendous opportunity to go to the next level of performance. And that is so virtuous, because it forces you to have conversations: well, hang on now, what about equity? Where’s the data from this population? Mm-hmm. We never cared about that very much when we just did our usual things.
And now, I mean, people expect, you know, if you’re in a digital environment, you can audit and say, well, what is your performance? What is your downtime? What’s your uptime? All these things that matter. And so I think the exciting thing for AI is that we can finally start to break through in terms of equity and bias and start to really make that a priority.
Because it’s not acceptable, frankly, to run an AI algorithm that has known bias or known inequity within it. It’s just not acceptable. We accept it everywhere else, but with AI, we have to meet a higher standard of performance. So I think that’s very exciting.
Muhammad Mamdani: Absolutely. Maybe one other thing I’ll add to this that’s important: the data’s one piece, but we as people, as a society – that’s a massive piece that we often don’t think about as much as we should.
So what I mean by that is, as a society, we say, yes, you should look at bias and all these nuances in data, make it equitable and fair and whatnot – society demands it. And yet when we go to society and say, you know, actually there’s not a single hospital in Canada that collects race data well, so in order to make it equitable, can we have your race?
Absolutely not, I’m offended you would even ask. Well, you can’t have it both ways. So as a society, are we willing to actually talk the talk and walk the walk? Or are we just going to claim issues and then not be part of the solution?
Katie Bryski: I have a question about that.
Muhammad Mamdani: Yes.
Katie Bryski: Because I think it’s a really important point, and I just wonder – particularly for populations who may have been harmed by the health system, right? Who have those histories of broken trust, and who may, for those reasons, feel less comfortable giving the data that is needed. What are some of your thoughts around strategies for building that trust, and also around our own cultural competency and capacity building as organizations in the health sector?
Muhammad Mamdani: Absolutely. There’s a lot of stigma out there, and a lot of nuances to take into consideration. So, for example, we have very marginalized groups who feel threatened. I think a good part of this is about education on both fronts: educating marginalized groups and patients around, this is the value that your data would have in order for us to build more robust, equitable algorithms.
But also education around the hospitals and the care providers and the data folks, to say, maybe you should have a special pathway around this data that actually respects the unique needs and desires of marginalized populations. So there must be a compromise to be able to do it well. But I think it’s a give and take on both ends.
We just can’t throw up our hands and say no.
Shelagh Maloney: I was just gonna say, I think we’re our own worst enemies in a lot of those ways, right? Absolutely. And you know, Infoway has done a lot of studies asking Canadians how they feel – ’cause privacy’s the big one, right? Well, we can’t do that, there’s privacy legislation. And they say, you know, if it’s going to help me or help somebody like me, I’m absolutely comfortable with having my data shared. If somebody is using my data for nefarious reasons, or for reasons other than what it should be used for, I want them punished to the full extent of the law. But if it’s for the greater good, I’m all for sharing my data.
And I think we allow ourselves not to hear those things, because it’s easy to say no.
Muhammad Mamdani: Well, we also have to realize, like, we do this consciously or subconsciously every day – we give data away. Our phones track us, right? And how many times have we downloaded an app, skipped the verbiage, and said, I agree?
I’m wearing a smartwatch that senses my body temperature and can estimate my heart rate. Yesterday at the talk – I didn’t bring them today – I was wearing the Meta Ray-Bans that have video and photography built right in. They have speakers and microphones, so they can hear me; they can see things that I see.
I’ve consented to all of it. Who knows where that data goes – and Meta has actually said, we will be using your data for whatever we want. I’m okay with that, for some reason.
Shelagh Maloney: Yeah. It’s interesting. Some things we take for granted and Absolutely. Absolutely.
Ted Scott: Yeah. I would just say that I think the path forward here is really to build that trust through dialogue, and we don’t have a great history of leadership in building community dialogue around health practices. It’s been very institution-centric: you come here for your procedure, we’ll deliver that procedure, and away you go – as opposed to a real, meaningful dialogue where the patient voice is actually heard and might adjust the course of care.
I think that as a society we need to mature a little bit, shift more towards the patient-centric model of care, as classically described, and away from the provider-institutional perspective, because that’s what leads to a sort of dependency, and it’s not healthy. And you hear it in these comments.
Shelagh Maloney: Progress happens at the speed of trust.
And I think healthcare has always been a bit paternalistic. Yeah. We know what’s best, and we won’t bother telling you things. So we’re almost starting a little bit flat-footed, and we need to build that trust, and we need to put the effort in.
Ted Scott: The flip side is – I remember the days when you’d go buy a car, and the dealer held all the cards, because there was no internet.
And so the best you could do was read, I don’t know, ads in newspapers and try to find the best price by going out to a few dealerships. All of a sudden, with the advent of the internet, the entire equation flipped. I could now walk in there knowing far more about their cars than they do, if I so desire. And somehow that didn’t flip into healthcare.
And I think it was that people are afraid – well, you’re gonna self-diagnose, and all these things. It’s like, you don’t have to go that far, but certainly you should exercise your prerogative to engage in a more positive way and expect more from your healthcare system. And I think when you set those expectations at a higher level, the system will have to adjust.
We expect very little, frankly, from our health system, and that’s gonna change.
Shelagh Maloney: We had that conversation at dinner the other night – we talked about how we’re okay to wait six months for an MRI or something like that, and maybe we shouldn’t be. And so I liked the comment that you made earlier: AI might be the way to do this. That democratization of power that we had with the coming of the internet – will AI take it that much further ahead?
Ted Scott: Yeah, and that’s a great choice of words. Democracy is all about the exercise of agency and power. And if you don’t exercise that right, then you’re literally not acting within a democratic society. So it’s upon everybody to raise the bar of performance.
Katie Bryski: I’m also thinking, with AI too – I liked that democratization of knowledge, because I think it gives people a sense of what they should be asking for, right?
I think in Canada we’re often very guilty of comparing ourselves to one other international health system in particular. And it’s like, no, there’s literally a whole world out there of what healthcare could look like – people just may not be aware of what they could be missing.
Ted Scott: Exactly. And the last thing I would add is kind of a pet peeve: we’re pretty good as a country, honestly, compared to our southern partners, but we still way over-test and way over-treat, quite frankly, for the value.
And so it’s time that we started to push back a little bit and say, okay, you wanna do all these things, but really, how’s it gonna change my course of care?
Muhammad Mamdani: Absolutely. And if we look at this, Canada has among the most expensive healthcare systems in the world, and our outcomes are middling at best.
So it begs the question. In the United States, they spend a lot more, and they don’t actually have much better outcomes than anybody else, so that’s a bad comparator, I think. Why don’t we look at countries that excel in this space and aspire to be the best, not just okay?
Shelagh Maloney: So we talked a lot about privacy, and a little bit about trust and access to data. When you are implementing AI, what other challenges are there? I know governance is a big piece – like, how do you decide which projects you’re gonna work on? Who decides, and who controls or governs those initiatives and projects?
Ted Scott: All right, I just wanna jump in, ’cause you have the answer here – oh, it’s a fantastic answer. But I just wanna say that what you’ll hear from Muhammad is really the path forward and, frankly, the gold standard.
The point of this was to spotlight the path forward, right?
So when Muhammad does the heavy lifting for 10 years, why would we then spend another 10 years with a hundred other organizations finding the path when it’s already been found? So I just wanna emphasize: that is the purpose of this whole exercise – to accelerate adoption and show people the most efficacious way to move forward.
Muhammad Mamdani: Oh, that’s very kind. And this is one pathway that worked for us – there may be other pathways for others – but we started off with basically defining value. What is it that we’re trying to go after? Because you could have a hundred ideas, but they may not be valuable to a patient or to a provider.
We all have different perspectives. So we looked at the definitions of what value is in healthcare. The European Commission actually convened a whole committee to decide what value means in healthcare, and the conclusion was that there’s not a single accepted definition of value in healthcare.
Exactly. So then what are you developing?
Katie Bryski: Good start.
Shelagh Maloney: I was gonna say, how long did it take them to figure that out?
Muhammad Mamdani: That’s what I wanna know. So we basically defined value as five metrics. And the way we approached this was, we said, what are patients telling us? Number one, they tell us, I don’t want to die.
Okay, so mortality. Number two, they tell us, no offense, but we’d much rather be at home with our families, so I wanna get outta here as soon as possible. So length of stay is the second one. The third is, they said, your food is terrible, I never want to come back. So readmission. So: mortality, length of stay, readmission.
Then we asked our clinicians, what do you want? And they said, I’m just stressed out, I just need you to help me save time – anything you could do to reduce workload. So human effort was the fourth one. And administrators, what do you want? I’m just trying to survive, I need to manage my budget, so if you could reduce my costs, that would be amazing.
So that was our fifth: cost. Death, readmission, length of stay, human effort, and cost – those are the five things. So we’re very explicit with people: look, you’ve gotta touch at least one of these five things for us to be able to work on this. And the next step was, we’re not going to senior management; we wanna go on the ground.
Because Einstein’s quote is, if I had one hour to solve a problem, I’d spend 55 minutes understanding the problem and five minutes developing the solution. So who understands the problem? It’s not gonna be somebody who sits at a desk. It’s going to be somebody who’s seeing patients every day, who’s making the tough administrative decisions.
Okay, maybe they do sit at a desk. But the ones who are actually doing the day-to-day – those are the ones we engage. And the engagement model says you’ve gotta spend a bit of time with us. A bit means, every two weeks we meet, for the next six months to a year, to develop the solution. You’re at every meeting.
In fact, you’re leading many of the meetings. So if you can’t make that time commitment to co-build with us – and not only co-build, you own it with us – then we don’t do the project. So we have that full engagement. And people come with really hard-pressing problems. Internists will come to us and say, our mortality rate’s unacceptable, we need to decrease it, we need to help people survive. Great – you co-build with us.
Because you’ve built it to your specs, to how you want to see it used in your workflows, deployment is not so much an issue anymore; because you own it, it’s so much easier to deploy. And this is where we see some of our solutions published.
One, in the CMAJ, decreased mortality by 26%, just by AI watching our patients and alerting our teams about when to go and see them. That, I think, has been our secret: define value clearly, engage the people who actually do rather than talk, and have a very disciplined approach around your data, your model building, and your deployment.
Shelagh Maloney: I love that. And that, that sounds very intuitive, right? Mm-hmm. Yeah. Like do something that makes a difference, that adds value. Talk to the people who are doing it and build it with them. And then I’m assuming there’s a very rigorous evaluation or monitoring process in there as well.
Muhammad Mamdani: Yes, absolutely. You have to evaluate, because these things can be expensive.
So if you’re not getting the outcomes you expected to see, you should stop – stop wasting resources. And in fact, right in the initial discussions we have with our teams: which of the five outcomes, or metrics, are you gonna hit, and by how much? You have to give us a metric. You have to come in and say, before we even begin the project, I’m gonna decrease mortality by 10%, because that’s what we’ll hold you to.
So that level of clarity has to be there.
Shelagh Maloney: But how do they know? Are they just making it up?
Muhammad Mamdani: Yeah, actually, in some cases they do. In our intake form – it’s not that lengthy, it’s about three pages – they actually have to specify, what are your interventions, and how effective are they based on the literature?
Or, if there isn’t any literature, it has to be compelling, and you have to make up a number, because we have to have a target. It’s your best guess, if there’s no evidence.
Shelagh Maloney: When you do something like this – very specific to a specific problem, for a specific group – how scalable is it? ’Cause we always know in healthcare, we have great innovations; we don’t scale.
Muhammad Mamdani: We failed at it. And we’ve actually learned to go into private sector partnerships. We literally gave away one of our solutions to about six hospitals – we literally emailed Python code to them, hoping that they would use it – but quickly realized most hospitals are not equipped with AI shops or expertise.
And so they don’t know how to use the algorithms. And that’s why we partner with the private sector. So we actually have a couple of startups at Unity Health that have worked with us to try and scale. For example, the first AI SCR in Canada was developed at Unity Health and turned into a startup.
But that was more out of, okay, we’re clearly not good at this as a public organization; we need to be smarter and engage the private sector. And Ted probably knows more about this than I do.
Ted Scott: Well, I do think it’s a great question, because I think for every organization, the amount of internal upskilling versus reliance on external skills – that balance will be slightly different.
I’m always pro in-house, personally, because in my opinion, the more you outsource – well, first of all, you should never outsource a core competency. So the question is, is AI capability a core competency within a modern healthcare system? I argue it is, and so you should never, in principle, fully outsource it.
That being said, obviously the private sector has a huge role. So it’s figuring out the balance: am I gonna run the MLOps, or am I gonna outsource that somehow? Maybe I’ll outsource that part. But for the other parts – boy, my clinician champion list must be at least this long, otherwise we’re not in the game.
It’s not a trivial exercise to figure out. For every organization, some things they’ll have to take on, like privacy and security – they’re gonna have to upskill there, there’s just no option, you can’t outsource that. But after that, it gets a little more gray, I would say.
Katie Bryski: So, as you said, we found the path. We talked about some of the partnerships that can help enable it.
What would be some of your hopes for, say, maybe five years down the road for AI in Canada? Where would you like to see us?
Ted Scott: Well, I would like to see Canada as a leader in implementing and championing AI that has all the attributes of value that Muhammad outlined. Thereafter, I think the true leadership will probably end up being more on the cost-management side, so auto-coding, all those sorts of background things, which will actually drive efficiency in the healthcare system.
I suspect that will probably be the leading horse in this race, but it would be nice if we could show tremendous clinical benefit in the field, which is also, of course, the hardest to achieve. So we’ll see, I guess.
Muhammad Mamdani: Yeah, Canada has all the ingredients to be not only one of, but the, world leader in AI.
Other countries are really investing in catching up. In fact, I think they’re outpacing us. So I would hope that Canada becomes a bit more organized, to be honest, because while we have all the pieces, I don’t think we’ve done a very good job of coordinating them. We’re behind the eight ball in terms of regulation.
For example, if we look at what the European Union has done, we’re pretty far behind. We’re behind in terms of getting our data infrastructure right. If we look at countries like Estonia, for heaven’s sake, we need to take lessons from there. Denmark, the Scandinavian countries, are incredible. Other countries have had pieces of this and are really leapfrogging everyone else. I think we need to put our pieces together so we can form the most comprehensive AI and health program in the world.
We have it here right in Canada. We just need to be more organized and thoughtful about it.
Shelagh Maloney: I love that answer. And I think there’s a general feeling, certainly the buzz around the conference. I mean, everyone’s talking about AI now, and I think people recognize its full potential. But people are still a little bit nervous about it, there are lots of unanswered questions, and I think the pace is a little bit scary for everybody. Yes. But both of you are at the forefront of this. So if there are listeners in the healthcare space or the non-healthcare space who are wanting to get into AI, what’s the one piece of advice you’d give them? Before you do anything in AI, do this one thing or these two things?
Ted Scott: Yeah. So first of all, the conversation today is massively more mature compared to five years ago. Five years ago, there was a ton of hype and it was just made-up stuff, frankly. But now you’re getting very specific measurements. So from an advice point of view, you really want to understand how much institutional backing you have, because this is a major effort to reimagine how your institution is gonna deliver healthcare.
And if it’s not there, that’s okay if they wanna play around the edges. Just be careful with the hype cycle, because you can get caught out. But if you do have that institutional support, then you’ll want to follow the path that has been trodden, and that will help you adopt in ways that basically de-risk a lot of your initiatives.
And if you think of that classic two-by-two box, where you have high impact and low risk, those are the ideals. Start there, for heaven’s sake. Don’t take on something that’s really edgy. If there’s a Dutch digital mammography algorithm that is awesome at picking up cancers and reducing your error rates,
then do that. Take the easy button, get some wind in your sails, and then you can progressively work into the more challenging areas.
Muhammad Mamdani: Yeah. Just to build on what Ted said, if I was starting off wanting to get into the AI space, not knowing where to go, the first thing I would do is learn about it.
It’s all about AI literacy, and there are a lot of resources out there now for getting that bird’s-eye view of AI. And because there are so many pieces to AI, development, deployment, implementation, what have you, pick a lane. Know the overall picture, for sure, but pick something you’re really good at.
Because I get a lot of people who ask me, well, I wanna do AI. Do you have any coding or technical or math background? I don’t like math. Well, then, okay, you’re not gonna be a coder. You’re not gonna be on the technical side. Well, then what do you like? Well, I like behavior and people and stuff. Okay.
Well, there’s a whole deployment pathway there, around change management with respect to AI, that you could pursue. Other people say, well, I like the social and ethical side of things. AI ethics is a pretty big area. Because AI has had so much prominence in society, there are so many facets to it: the technical side, the deployment and change management side, the project management side, the ethics side, the regulatory side.
Pick something that you’re really excited about. And dig deep into the AI spin on that side of things, because as we get more and more into the AI world, you’re going to have all these subfields and they’re gonna need experts. So have a general understanding and then pick a lane that you’re really passionate about and go deep in it.
Shelagh Maloney: You know, I really like that idea of: be aware, learn as much as you can, choose something that is of interest and that you’re good at. And then the point that you made earlier, Ted, around the winning conditions document. There are people at the forefront who have done this, and we are so bad at this in healthcare.
The team at Unity has laid a foundation. There are lots of people in Canada doing some really good things in AI. Don’t reinvent the wheel: find out what they’re doing, talk to them, do it the same way. And if we’ve learned anything from the health system in the last decade or two,
it’s that we don’t need to reinvent the wheel. We shouldn’t reinvent the wheel, especially when there are cost crunches and workload issues and all those kinds of things. We are in the perfect storm where AI, and the replication of effective AI, can make all the difference in the world.
Ted Scott: Yeah. And that’s not to say there isn’t still tremendous opportunity for invention and creativity, because even though there’s a clear process that’s been defined for the how-to-do-it part, there’s an incredible array of problems that can be solved. Mm-hmm. And there’s so much problem space available.
You should not feel at all stifled or boxed in. There’s an incredible opportunity for young creative people to apply themselves and generate new pathways.
Muhammad Mamdani: The one other thing maybe I’ll say is to be bold. And what I mean by that is that as countries, regions, societies that are risk averse, we will be complacent, because we don’t like change.
We will use excuses of, oh, well, we need to get our data governance and AI governance framework right. Actually, we’ve been talking about that for decades. Enough. There are enough data governance and AI frameworks out there. Pick one, make a move. Just do it. Be bold. Say it’s enough. I’m seeing what other countries are doing.
To Ted’s point, we need to be more innovative and creative, and we need to take some risks.
Katie Bryski: It’s really good advice. And as we have been celebrating digital health today and creating the future of digital health, we have been asking people what digital health means to them, in a single word.
Ted Scott: Well, for me it’s been innovation.
Shelagh Maloney: Good answer. Love it. That’s right. You can’t say the same word.
Muhammad Mamdani: I was gonna say transformation.
Katie Bryski: We just said that you don’t have to be afraid to not reinvent the wheel. Reuse a solution that’s already worked really well.
Muhammad Mamdani: This is true. I think for me, the one word is change.
Shelagh Maloney: Yeah. Cool. That’s good.
Katie Bryski: They’re very complementary. I like that. Well, thank you for this very humanizing look into AI, and for shedding some light on this path toward, hopefully, a better future in health enabled through AI. We really appreciated your time on the show today.
Shelagh Maloney: Thank you. And Setting the Winning Conditions for AI-Powered Healthcare
was a publication from Digital Health Canada, led by Dr. Ted Scott, and Muhammad was a keen participant. It’s a really good document: it’s short, it’s well written, it’s very concise, and there’s some really good advice in there. So check that out as well, and listen to the podcast, and if you like it, tell your friends.
Katie Bryski: And there’s a link in the show notes to that paper. Thanks so much.
Well, we’ve already started being excited about this episode and what we learned before we even hit the record button for this second segment.
Shelagh Maloney: That’s right.
Katie Bryski: What I was saying before we hit record was about the five values Muhammad outlined. What I really loved is how they translated them from what people were telling them.
So from a patient’s perspective, you know, “I don’t wanna die.” And it’s like, okay, what does that translate into in framework speak? That’s mortality. That’s something we can measure. That’s something we can manage. And I thought that was a really cool way to go about it. It is.
Yeah. How do you draw the connection from how you express things as a human to something we can actually action?
Shelagh Maloney: I love that too. And the thing that I liked was that three of the values are patient-driven values, two of them are provider values, and one is an administrator value.
So we ask the patients, we ask the physicians, we ask the administrators. And if you don’t get all three of those in your mix, you’re probably not going to create value for everybody. But he also said, you know, you only have to hit one of those values, but be very specific about what you’re gonna do.
So I really like that process that just, it’s, it makes sense. It’s intuitive, right.
Katie Bryski: Yeah. And I like that being held to a target. It kind of goes hand in hand with that. Be bold. It’s just pick something.
Shelagh Maloney: Yes.
Katie Bryski: Like, just pick something that you’re gonna do and then try your best to do it.
Shelagh Maloney: And set a target.
But then you’ve got a goal, and it’s: meet it or beat it. If you just say, well, we’re gonna do better, better could be 5%. But you can say we’re gonna be 15% better, and that gives you the target to work toward, right?
Katie Bryski: You know, it kind of reminds me totally not a healthcare example.
But it reminds me of when President Kennedy said, we’re gonna have a man on the moon by the end of the decade. Like not just, oh yeah, we’re gonna improve our space flight program. We’re gonna do more. It’s like, no. Like this is, at the time, it seemed a completely audacious target. Like no way that was gonna be possible.
Yeah. Put your stake in the ground. Yeah. Once you put your stake in the ground and kind of burn your ships a little bit. Yeah. They did it. And I feel like it’s a similar principle here.
Shelagh Maloney: It was interesting, one of the comments that Ted made about AI as a core competency for a healthcare facility.
Should it be? And his comment was, it should be. And I think it’s almost like conversations we’ve had over past episodes, right? It’s like virtual care is care, digital health is health, AI is health. It’s part of that whole digital health journey, and your processes shouldn’t and don’t really change. You have to have good governance. You have to understand the privacy implications. You have to do co-design. You have to have ethics. You’re just doing it in a different context.
Katie Bryski: Yeah. It’s almost like everything that you need to do AI. Like, yes, of course there are some particular considerations, but it’s like, well, you should have the competency to be doing that already in how you deliver care.
I would’ve loved to have gotten a little bit more into what public AI literacy might look like, showing my own biases.
Shelagh Maloney: Well, you know what? That’s interesting. So Ted and Muhammad and I were on a panel a couple of weeks ago and it was not a healthcare panel. It was a change leadership conference, and there was a lot more skepticism and bias against AI than I anticipated.
Like, how do I get my teenage son not to use ChatGPT? And I thought, why would you want your teenage son not to use ChatGPT? “We’re taking away creativity.” So there’s still a lot of that, and to some extent it’s true. But AI’s not going away. And I think there was that famous quote: “this is gonna destroy healthcare as we know it, and this is a bad thing.”
And it was about the invention of the stethoscope, you know? So every time there’s something new that comes in, it’s “the internet’s gonna kill everything off.” So it’s kind of an interesting thing. Are we at that low point in the hype cycle, because people are thinking now it’s gonna destroy your brain cells and things like that?
So it’ll be interesting to see.
Katie Bryski: It is interesting that you had that outside of healthcare because I will say healthcare is where I’ve seen by far the most optimism towards AI and the most acceptance of it. Because I’ve been in other professional circles that very much have that resistance to it and it was interesting reading in the report, um, you know, the point about AI literacy helping with some of that change management.
But I dunno. I know a lot of people who do understand AI and how it works, and that’s why they’re resistant to it.
Shelagh Maloney: Oh, yeah, that’s a good point too, right?
Katie Bryski: So it’s like, yeah, how do you... I think some of it is education and trust building, but I think some of it is also: how do we implement it thoughtfully?
You’re right, it’s not going anywhere, so it’s like, okay, so how do we make this the best possible outcome? For all people.
Shelagh Maloney: And, and you know, it harkens back to that very first part of the conversation we had around building trust and educating people and shame on us if we’re not doing it. And that’s the first step.
And perhaps it’s a harder transition in healthcare because we haven’t done very well, especially with some of those marginalized populations, where we’ve actually destroyed trust quite significantly. And so, you know, to go in and say, great news, we’re gonna use your data and AI and we’re gonna solve all your problems,
that might not go over so well. No. But it’s interesting, you are right: being educated and literate around AI doesn’t necessarily take your concerns away. But it’ll be interesting to see five years from now, ten years from now, because that pace of change is so crazy. And I think that’s what scares me the most.
Yeah. And, you know, we just can’t... I think Muhammad mentioned it: if we stop and say, well, let’s figure it out, let’s develop the regulations and the legislation, and then we’ll take it back up again, it’ll be too late. That ship has sailed.
Katie Bryski: So I guess we’ll see where we are at Digital Health Canada’s 60th anniversary.
Shelagh Maloney: Yeah that’ll be fun.
Katie Bryski: Still doing the podcast.
Shelagh Maloney: In the same way we got out the first newsletter article that Steve Huesing wrote, we’ll get out the 50th anniversary podcast edition, and revisit it and say, wow, were we wrong? Or, wow, it was right on.
Katie Bryski: Can we revisit our first podcast ever?
But you won’t have to wait until the 60th anniversary to hear our next podcast episode, which will be coming to you next month. Until then, you can absolutely check out the Winning Conditions for…
Shelagh Maloney: AI-Powered Healthcare.
Katie Bryski: Thank you. And it is linked in the show notes. Otherwise, we will see you next month right here on Digital Health in Canada, the Digital Health Canada podcast.
Thank you for listening to today’s episode. Digital Health Canada members can continue the conversation online in the Community hub. Visit digitalhealthcanada.com to learn more. Be sure to subscribe to the podcast to get new episodes as soon as they’re available, and tell a friend if you like the show.
We’ll see you next month. Stay connected, get inspired, and be empowered.
