From drug discovery to virtual assistants, AI and IoT are transforming healthcare.
AI, applied properly in healthcare, will save lives, said Freenome’s CEO Gabriel Otte.
Freenome, an AI 100 startup working on cancer diagnostics, is backed by Andreessen Horowitz, Data Collective, and Founders Fund. The healthcare AI category has been heating up, with around 50 new companies raising their first equity funding since January 2015.
During the panel discussion on AI’s Transformation of Healthcare at the Innovation Summit, Otte stressed the importance of taking healthcare AI “step by step” to minimize risk to patients’ lives.
“I think we should also keep in mind that all diagnostics are subjective today. It’s up to the doctor [to make the final decision]… The concept of having an AI make that decision is still many, many years away,” Otte said, dismissing the dystopian notion of “machines killing people.”
Otte was joined on the panel by Jack Young of Deutsche Telekom and Matthew Zeiler of Clarifai, a computer vision startup and fellow AI 100 company, in a discussion moderated by journalist Nick Romeo.
Drawing a distinction between people’s skepticism of autonomous vehicles and people’s fear of having machines diagnose them, Young said, “[Healthcare] is not a luxury. It’s a must.”
Noting that we cannot afford to make mistakes when it comes to healthcare, Young predicted that the first areas of adoption for healthcare AI will be chatbots and companion robots for patients, where the risk is low.
Where the panel disagreed was on the issue of data privacy. Data will underpin many of the AI advances in healthcare. However, how that data is handled remains a critical question for healthcare companies and policy makers.
While Zeiler argued that opening up data will benefit everybody, Otte noted that opening up sensitive information shared between a patient and a clinician may result in patients “bending the truth” about their medical conditions.
Transcript:
Nick Romeo, The Atlantic, Moderator: Good morning.
Jack: It’s very bright here, it’s better than outside.
Nick: So am I audible? Sounds like this is okay. Okay, so I was gonna start with just a quick anecdote. If you’ll go back with me to the year 1881: President Garfield was shot in Washington, D.C., and he languished for quite a while before dying. His assassin had this kind of unprecedented defense, which was basically, “I didn’t kill him. The doctors did.” So what happened, apparently, is that there were two bullet wounds, and for one of them the doctors were incapable of finding where the bullet was. So Alexander Graham Bell actually developed a metal detector and used it on Garfield, as if it were a beach and he was searching for treasure. He was not able to find the bullet, and apparently the reason was that the doctors were so convinced it was on one side of the body, when in fact it was on the other, that they simply never scanned the relevant side. I don’t know if it was right or left. But this is sort of a parable of doctor error, and it’s still quite relevant.
If any of you have read the new Michael Lewis book “The Undoing Project,” which is a sort of redux of the work of Kahneman and Tversky, the Israeli psychologists who were, kind of, the founders of behavioral economics: there’s a nice anecdote, one of many, where they talk about the overconfidence of doctors. Basically, they give a bunch of X-rays to doctors who are trained diagnosticians, who you would think would be very good at assessing their own level of competence in diagnosing something and knowing what they don’t know. In fact, they’re not even self-consistent, right? So they might say, “Oh, look, I think this is X,” and then the next time they’re given the exact same data set they’ll say, “Oh, no, this is Y.” So there might be something here; the conclusion might seem like, “Oh, look. Digital salvation. Machine learning can come in and eradicate human bias and error.” I think people on the panel have somewhat different views as to the likelihood of that possibility, so that’s one thing we’re gonna get to.
But I’d just like to close this little intro by saying there are reasons to be skeptical of that narrative. One is that there are maybe more systematic irrationalities in the healthcare system. One of them might just be that we still spend something like 90% of all healthcare spending on the last six months of life. Another would be something you already see in responses to things like self-driving cars: even if fatality rates are dramatically lower, you’re still gonna have the kind of human impulse to be afraid of being killed by a machine. So with that sort of parable to open things off, I’d like to start by asking people to comment on human irrationality, and the potential for AI, digital, whatever kind of buzzwords you want, to overcome those irrationalities. So maybe I’ll start with you, Gabe.
Gabe Otte, co-founder and CEO, Freenome: My comment on human irrationality: I think it exists, I think it’s very real. I think machine learning, deep learning, AI, whatever you call it, applied in the proper way in healthcare, will save lives. I think there’s a lot of debate around how regulated this should be, how thoroughly tested these systems should be before we launch them live. I’m definitely on the side of: we should take this step by step. Because yes, we may get there if we get rid of all these regulations and just launch these learning engines and start treating people based on their results. But how many people are gonna die before we get to that end stage, where we’re saving more lives than we’re saving today? That’s really where I stand on the whole issue. So I think regulations do exist for good reasons. I don’t agree with all the regulations that exist. But I am also not of the opinion that we should just release these machine learning engines without doing the proper testing, without getting the proper regulatory clearance.
Nick: Do you guys wanna comment on that?
Jack: I think AI in healthcare covers a very broad set of use cases, so we have to take a staged approach. I agree that human life is a treasure and we can’t afford mistakes. The way that I see it, the doctor is not going away anytime soon. So what I believe is AI is probably not gonna be the front line; maybe on the fringe. We already see AI applications in certain areas which could be more subjective rather than objective. For example, applications in dermatology: if you have a mole, you can easily do image recognition and say whether this needs immediate attention or not. But in other areas, for example cancer diagnosis, I don’t think you can just read a bunch of data and tell the doctor to decide what this is. So I believe AI is gonna take its time to work into that. One area where I think AI is gonna be very useful immediately is dealing with the shortage of care, especially in the aging population.
So you probably know, most people know, the United States, unlike Japan, still has about 12% of people who are 65 and older, and in about 10, 15 years, when some of the folks in this room are gonna retire, unfortunately one fifth of United States citizens are gonna be 65 and older. And seniors need care, and companionship is a big thing. So if you look at the population expansion, it’s not feasible to provide the direct human contact that seniors enjoy today. So you’ll see AI chatbots and companion robotics step in. I think that’s probably the first area we can look at for adoption, which is very practical and probably lower risk.
Nick: Do you have any thoughts?
Matthew Zeiler, founder and CEO, Clarifai: I think with AI there’s this big misconception that it’s just gonna replace everybody’s job, and it goes from every level of job, from people labeling images on up. We deal with a lot of images and videos, so there’s a lot of outsourced work that is done to label them so we can train on them; people have done this in the past to organize their data. And that’s a lower-level job than, say, a doctor who’s been studying for a dozen years in order to be a specialist. So people assume AI is just gonna replace all these jobs, but really it’s making the same workers better at their jobs: either more accurate, or able to apply their learnings to more data, and in the case of healthcare, more patients. So it’s not about correcting humans; it’s making them better and letting them scale up much more.
Jack: Or reducing their mundane tasks. For example, one thing for a doctor today…I’m sure many of you have gone to see a primary care physician, and typically you have a 10-minute session. Unfortunately, nowadays the doctor oftentimes, instead of looking at you, looks at their screen, because they have to type into the EHR. You probably know there’s a San Francisco company, actually probably the first and only survivor of a real application for Google Glass, a company called Augmedix. I don’t know if some of you have heard of it. What they do is actually pretty clever: to solve that problem, they have the Google Glass livestream back to an outside facility, oftentimes in India or Pakistan, where they have scribes on the back end. Not only are they taking the notes for doctors and putting them in the EMRs, but they also whisper to the doctors, say, “Hey, maybe you should talk about this, maybe you should code it this way.”
So that could be a very easy application that AI can step into: transcribing and also prompting the doctors about coding, to make sure that patients receive the right type of treatment and hospital systems get reimbursed properly.
Nick: So I think that’s a nice segue to the question of privacy. I mean, the idea of having a second set of eyes ghosting in and watching an entire medical interaction might give some people goosebumps. Similar technology was actually a premise of a recent “Black Mirror” episode, if anyone enjoys that show. I know there are some different views on the panel about the value of radical transparency in data. Maybe, Matt, if you wanna talk a little about what you think the benefit of that would be, and then what, if any, downsides there might be to simply allowing all data to be analyzed by companies who presumably would increase the efficiency of medical processes.
Matt: Yeah, I’m of the opinion that opening up data is just gonna benefit everybody: every single patient, the doctors themselves, the whole healthcare industry. We see this happen in all different industries. Think of Wikipedia; I’m sure everybody in the room has used it. It was created by all of us, it’s completely open, and they don’t even make money off of it. And it benefits all of us much more than any one company with a siloed set of data and experts only in that company creating an encyclopedia. That has the potential to happen in healthcare if we can open up the data. And that doesn’t mean revealing people’s names and all their private information. All we need is kind of a unique identifier, just a hash string that we can represent a single person with. So it doesn’t have to expose your information.
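Zeiler’s “hash string” idea corresponds to a standard pseudonymization pattern: derive a stable, opaque identifier from the real patient ID so records can be linked without exposing who the person is. Here is a minimal sketch in Python, assuming a secret key held only by the data custodian; the key, field names, and record below are hypothetical, not anything Clarifai or Freenome has described:

```python
import hashlib
import hmac

# Secret key held only by the data custodian (hypothetical value).
# A keyed hash (HMAC) rather than a bare SHA-256 prevents re-identification
# by brute-forcing the relatively small space of real patient IDs.
SECRET_KEY = b"custodian-only-secret"

def pseudonymize(patient_id: str) -> str:
    """Map a real patient ID to a stable, non-reversible identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-00417", "diagnosis": "otitis media"}
shared = {"pid": pseudonymize(record["patient_id"]),
          "diagnosis": record["diagnosis"]}
print(shared)  # the same patient always maps to the same opaque "pid"
```

Because the mapping is deterministic, researchers can still join a patient’s records across data sets, which is what makes the open data Zeiler describes useful without names attached.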
Gabe: I think there are actually two different things in there. There is sharing data with everyone, everything being open; and then there is facilitating the sharing of data between entities, like companies and academic institutions and hospitals. I definitely think we need to do a better job of being able to share research data and clinical data with each other to facilitate research. I definitely disagree that all the data needs to be open in order to do that. I think that can be very harmful, and not only to the patients. I mean, medical data is a very sensitive set of information for people. There are a lot of things that get discussed between a patient and a clinician that should be kept between those two people. And the concept of having that be completely open, like Wikipedia, is, I think, quite worrisome, not only to individuals but also to the clinicians who are trying to do their best to practice medicine in the most responsible way possible.
Jack: And healthcare is actually very well regulated, at least in the United States; there’s the HIPAA regulation, versus consumer spaces, where there are lots of concerns because they’re not regulated. So here, people relatively respect how to use the data and anonymize it so it can be used for the good of society, for research. And with the permission of patients, they can go as deep and wide as they want. Because imagine the first time a patient gets diagnosed with cancer: all of a sudden, every single piece of data she or he has, they’re much more willing to share, because this is a life-and-death situation. And once people realize these data are crucial, over time society will be more open and people more willing to share. For example, I’ve done the full sequence, so it’s sitting there, and it’s a big trove of data. There are probably now millions of people doing that, but it’s just sitting there. So people are now thinking about how we can donate those data for the good of society, for research. I think over time, as the generations move on, people will be more and more open about that.
Nick: So Gabe, since it seems like you might be somewhat of the skeptical voice on the panel, could you give us a few examples of why you find it worrisome, this idea of radical transparency of data? What are some of the problems, specifically?
Gabe: So sensitive things get discussed between a doctor and their patient. Doctor-patient confidentiality exists for a reason: we need as much truthful information as possible from the patient in order to diagnose patients properly. Patients already lie, or try to bend the truth to make themselves appear in a better light, as it is when they go to the doctor. If we had this understanding that everything we’re saying to the doctor is gonna somehow be released to the public, or be open for other people to access, that would only get worse. And the concept of having sort of radical transparency around that, I think, will actually break diagnostic medicine as it stands today.
Nick: That’s interesting. Do you guys have any thoughts on that?
Matt: That might be true if you’re just considering the data you get from talking to doctors, but hopefully in the future more technology is gonna be collecting it through new devices. The “Black Mirror” reference was a good one because…I love that show, and the technology there is not that far off. It’s about this world of immersive technology that’s always watching you, collecting data. So you can’t lie, in some sense: if you have the ground-truth data, you would actually know exactly what the issue is and everything you’ve done in the past. So yeah, there’s…
Gabe: I don’t think it’s unprecedented for analytical data to be fudged; I’m pretty sure people can fudge that data too. But on top of that, you’re sort of proving my point that these things have to be done stepwise. If those sensors…you’re talking about ambient sensors existing to monitor people’s health. If those exist, and if that’s what’s legal and what everyone has essentially agreed to do, then I agree we can maybe make those kinds of data more accessible in an anonymous way to facilitate research. We’re not there yet. So the concept of imposing radical transparency within the system we currently have today, I think, is gonna be more detrimental to the patient than helpful.
Nick: Also maybe just worth noting that “Black Mirror” is a dystopian show, largely about how terrible it would be to live in such a world. So speaking of terrible dystopian outcomes, maybe we could talk a little about the possibility of being killed by a machine. This comes up a lot with the self-driving car thought experiments, where there’s a car out of control and it can either kill a bunch of schoolchildren or the driver. The utilitarian philosopher would say, “Well, you kill the driver.” But the kind of intrinsic human horror of being killed by a machine is such that a lot of people are now saying, “Well, this thought experiment itself shows that we shouldn’t move forward with self-driving cars.” I think there are lots of analogies in the medical domain, if you guys have thoughts on, I guess, the sort of fear that might be engendered by entrusting very important procedures or diagnoses to AI, and whether that fear is justifiable.
Gabe: So the concept of AI in cancer diagnostics was brought up earlier, and of course I disagree, because I run a company that does cancer diagnostics based on artificial intelligence, on deep learning algorithms. But I think we should also keep in mind that all diagnostics are subjective today. It’s basically up to the doctor, based on the evidence they have, what the final diagnosis is and what the follow-up treatments are. Because no diagnostic test, whether it’s a blood test or even an invasive biopsy, is actually deterministic in terms of whether somebody has cancer or not. So the concept of having an artificial intelligence make that decision is still many, many, many years away, if that’s ever going to happen.
I think machine learning and artificial intelligence can do a much better job of giving clinicians better, more informative data with which they can make a final diagnosis. But it’s still very subjective, so I don’t think machine learning is necessarily going to change how that’s done. I don’t think people are gonna be killed by machines in the medical context. It’s gonna be clinicians who either misinterpret the data, or a learning engine giving less-than-accurate information. But neither of those is really artificial intelligence killing people.
Jack: It’s like autonomous driving: people are still getting used to it. How many people are actually comfortable getting on the highway using auto cruise, show of hands? Anybody has a Tesla? I see one or two hands. I tell you, I drive 50% of my highway time using autonomous driving these days. It’s wonderful, but don’t read your email. In healthcare, believe it or not, this concept of autonomous driving has actually been there for a long time, for people who unfortunately have cardiac conditions: the pacemaker, which has been around 30, 40 years now. Think about that: it basically monitors every single heartbeat of yours, and in the event that your heart has trouble maintaining that rhythm, it kicks in right away. And that technology is so advanced now, you can implant a tiny little device, probably the size of a stack of two quarters, that can last seven, eight years, and it monitors every single heartbeat of yours. So think about that compared to autonomous driving; it’s orders of magnitude different here. It maintains your life, and people are already happy with that. So in the medical field, there are lots of things that people haven’t really talked about, but these forms exist and people have embraced them for many, many years. Because you’re dealing with a life-and-death situation, and when you’re dealing with that, it’s not a luxury. It’s a must.
Nick: Did you wanna jump in?
Matt: Sure, yeah. I agree with the other panelists. I think today AI is not gonna change medicine, but in the future I do think there’s gonna be less subjectivity and more concrete results coming from AI, just because there’s gonna be more data available, and it’s gonna get smarter and smarter over time. And we’re already seeing cases where it’s actually working better than doctors in isolated environments. Not on all the different things that doctors can do, but in certain cases. For example, one company we work with in France has a device that goes on the back of your cell phone to look inside your eardrum and diagnose about 10 different diseases. They have an app that doctors use in hospitals to diagnose it, which created training data that trained the Clarifai platform. And now they’re using our platform to recognize new patients automatically, claiming over 99% accuracy on real-world data. That’s much better than a doctor, who gets tired and is emotional and has a subjective nature. So there’s a lot of promise, but we can’t solve everything.
Jack: People are very skeptical about doing that, right? As simple as it is, imagine people taking a picture of your eardrum to tell whether you have an infection or not. We had an early iteration with a company that basically has this machine learning algorithm: you send the image in and the machine gives you a response. It’s a smartphone application; within 30 seconds the result comes back, yes or no. And the feedback I got is, “Whoa, the doctor is not looking at that. It’s a machine. Can I really believe that?” Believe it or not, the company installed a means to artificially delay that diagnosis for like two hours, for something that takes 30 seconds. That just tells you people still want the tailored care; they don’t wanna talk to a machine. But in reality, in that situation the machine is probably better than the doctor, because it has looked at millions of images, whereas doctors sometimes do miss things.
Matt: Yeah, I think that’s one of the interesting points. A doctor is gonna go to school, they’re gonna have some university, and then they’re gonna…my dad’s a doctor, my brother’s a doctor, so I know this kind of firsthand. We grew up in a small town; my dad was a family practitioner and there were only 2,500 people in the town. So the most patients he could possibly see in his life is somewhere around 2,500, as they turn over and so forth. But a machine could treat all those patients and every other person on the planet, so the 7-plus billion people as training data, in some sense. So there’s a lot more information that goes through an AI platform than any single doctor.
Gabe: I agree with what you’re saying, I think, but the analogy is not exactly fair. If your entire population is 2,500 and those are the people you’re seeing, you’re effectively seeing the entire population, so you’re not really overfitting to the situation. I agree that in general, human beings, just like learning algorithms, can overfit. If you’re in a major metropolitan area and you’re only seeing a few hundred patients a year, but the population is much larger, millions, and there could be a bunch of different scenarios happening, a single doctor is going to overfit to whatever he or she has seen. And a learning engine that’s able to see millions of data points is less likely to do that. But doctors and clinicians in a rural area, who are pretty much seeing the entire population of who they’re treating, can be very effective in what they see.
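Otte’s overfitting point is easy to demonstrate numerically. A minimal sketch in Python, on synthetic data (the population size, features, and model choice here are all hypothetical): a flexible model fit on a few hundred cases drawn from a large, varied population scores near-perfectly on the cases it has seen, and noticeably worse on everyone else.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical "metropolitan" population: 100,000 patients, 20 features.
X = rng.normal(size=(100_000, 20))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=100_000) > 0

# An "urban doctor" sees only a few hundred of them.
X_seen, _, y_seen, _ = train_test_split(X, y, train_size=300, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_seen, y_seen)

print("accuracy on the 300 seen patients:", model.score(X_seen, y_seen))
print("accuracy on the full population:  ", round(model.score(X, y), 3))
# The gap between the two numbers is the overfitting Otte describes;
# a model (or doctor) that sees the whole population closes that gap.
```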
Jack: And it’s…
Nick: We’re gonna try to get to one other topic. Gabe and I were talking a little in the green room about a certain type of recklessness, or irresponsibility, that sometimes comes up when VCs and financial types see a possible investment in a disruptive technology, which AI in medicine certainly may be. And we were talking about the temptation to overfit, or to fudge data, or even just to willfully disregard data. I’m wondering if you could talk a little about some of your experience; I know you went to medical school.
What are some of the dangers of people without a background…you mentioned the three-body problem, I’m thinking of that as well. People without a background in the sort of complex confluence of incentives that characterizes the medical world, coming in and saying, “Oh yeah, we’re gonna make this amount of profit, we’re gonna treat people as statistics. We’re going to ignore data that suggests this is not as effective as it might be, because that could hurt earnings.” So I guess this sort of basic incompatibility of the profit motive with healthcare. I mean, this is a perennial theme in discussions of healthcare, but maybe exacerbated when you get billion-dollar valuations of AI healthcare companies.
Gabe: Sure. There was a lot in that question. On the academic research side, there is very little incentive for you not to fudge data, given how the research world is structured. We talk a lot about peer-reviewed journal articles and how they’re sort of the standard of proper research being done, and of course there’s a lot written about how easy it is to get a scientific publication out. So I don’t necessarily see that as a gold standard. But also, the academic system is structured in a way where if you don’t publish, you don’t get the next grant. So there’s really very little incentive for people not to fudge data, because no tenured professor in the United States has ever been fired for scientific misconduct.
So there’s really sort of every incentive for people to fudge data, and I’ve definitely seen this firsthand; I’ve certainly experienced it in that arena. So there’s that set of incentives. And then on the other side, there’s the business realm and VCs, and I think my worry there is a little bit different. There are a lot of people, especially on the tech side…I run a Silicon Valley company, I see this in the Valley all the time, where a bunch of people who are machine learning experts, who are computer scientists, try to get into the space and try to do something in biotech or life sciences without really understanding how that system works. So it’s not necessarily willfully fudging data or doing bad research, but heading into this space without understanding the system well enough. I think machine learning and AI have their place, and they can be very effective when they’re applied to the right problems. But I think a lot of people are trying to do it in a way that’s less than effective, because they just don’t understand the system.
Nick: Any examples that come to mind?
Gabe: So we were talking about this earlier: the concept of consumers paying for diagnostics is, I think, extremely flawed. This idea that people are going to pay out of pocket to go get what at best is neutral news, right, that you don’t have the disease, is asking people to change their behaviors a little bit too much. There are cultures around the world, and Japan is a great example here, where people do go and get diagnostics done on a regular basis, they get physicals done on a regular basis. They do that as a culture, and I think that’s really great for them, and they’re willing to pay for it. But especially in the United States, consumers aren’t there yet. There are a lot of companies going into that space right now, trying to apply machine learning to a direct-to-consumer diagnostic solution, and they’re finding that the business model just isn’t working out as well as…
Jack: It doesn’t scale, right? You’ll get early adopters, because typically when new things come out, Kickstarter is a good example, you always get…I find it magical, you always get 30, 50,000 people. Maybe that’s the early-adopter percentage for every new app of interest. So you’re gonna get this early adoption, but soon you find out that consumers in the United States are so conditioned that insurance pays for that. So the next question is, if it’s good enough, why is my insurance not paying for it? So that right there. We did talk about healthcare being a big promise; think about it, 18% of GDP is in healthcare. So no matter how you look at it, venture investors are drawn to that. But one of the things they don’t realize is it’s not like Google, where the consumer decides, “I wanna go, I make my own decision.”
In healthcare, the consumer cannot make decisions alone; they’re bounded by at least two other forces: your physician has to prescribe it, and your insurer has to agree to pay for it. So unless these three forces are somewhat aligned, the three Ps (patient, physician, payer), and sometimes people even say four Ps, including the pharmaceutical companies, it’s very hard to push one thing through as an investor. Especially in the early days of digital health, we learned this in a big way: we wanted to bypass the other two Ps and go directly to the patients, and you find out you get the early adopters and then you hit the wall. It doesn’t scale. The answer to that is to say, “We can try societies that are much more willing, or systematically willing, to self-pay, like China or India.” But in the United States it’s super difficult, it’s very complicated. So the number one thing, whether it’s AI or machine learning: if you want adoption, you have to go through the same process to figure out why the doctor wants to do it, why the payer is willing to pay for it, and finally how the patient will accept it.
Nick: Matt, did you wanna jump in with anything?
Matt: I don’t think I have anything.
Nick: I think maybe I’m supposed to go to questions. Or yeah, no.
Jack: No questions.
Nick: One second, so one other thing…we are waiting on questions. I would love to hear each of you maybe just talk briefly about, say, 15 to 20 years from now. It’s necessarily speculative, but if you could sort of give a best case and a worst case scenario, just briefly, about what healthcare and some of these technologies we’ve been discussing, what the confluence might look like 15 to 20 years from now. Again, best case, worst case. Anyone wanna start us?
Jack: I think the situation today is kind of ludicrous, because even the best-behaved people go see a doctor once a year for the annual checkup, and even the checkup is incomplete. So what I see in 10, 15 years is your daily life being monitored by all the sensors and all the technologies we have available today. Just like your car is automatically monitored, our lives will be monitored. Some people are more disciplined than others; I see people wearing Fitbits, people getting up in the morning to do healthy things. My mom has checked out the ECG sensors and whatnot. So what I see is all the IoT and machine learning, big-data learning, AI, is gonna work into our daily life. So what I believe is, in 10, 15 years, we won’t have these episodes where all of a sudden we have to rush into the emergency room, unless you get hit by a car. I see early diagnosis of any symptoms, and preventive measures enacted before we have to go into treatment. I think we are right on that trajectory, and in 10, 15 years, hopefully diabetes…preventable disease, lifestyle disease, will be history for humans.
Gabe: I think in the United States we’re one of the only nations that has for-profit payers. So we have these guys reimbursing our medical bills who need to make money in their own right, and that’s been the biggest struggle for companies and organizations trying to move toward preventive medicine. But preventive medicine is definitely where we need to head. I’ve always found this funny: we go to the dentist pretty much every year, get checkups and get cavities fixed and all that stuff. But then we won’t go to doctors for years, and then we get really, really sick and go to doctors and say, “Make our bodies whole.” We wouldn’t skip the dentist for many, many years and then tell the dentist, “Make my teeth perfect.”
So this need for preventive medicine is very real, and I think sensors will help with that. I think machine learning will make use of that big data in a way that makes preventive medicine very effective, but also very cost-effective for all parties involved, whether that’s the company providing the product or the reimbursement groups actually paying for those preventive medicine functions to happen. So best-case scenario, we will have figured all that out, and I think machine learning is gonna be a facilitator of that, making it cost-effective and incentivizing everyone involved.
Worst-case scenario, I think, is we move more and more toward acute care, and we spend a lot more money on helping people die, which, as you mentioned, is where the vast majority of what we spend on healthcare today goes. And I think if we keep going the route we’re going, and if we can’t figure out a better way to incentivize everyone to move toward preventive medicine, that’s where we’ll be in 10 years: 100% of the cost we have in healthcare will be spent on helping people die.
Nick: Matt.
Matt: I think I agree with the other panelists. New devices, like my Apple Watch, keep track of my heart rate now, which I never had even a month ago. So it’s happening now in small forms, but I think we’ll have much more advanced devices on us, tracking us. Then in the hospital as well, I think robotics is gonna play a huge role in medicine. The fine precision you can get out of a robotic arm, versus somebody’s shaking hand, is very powerful for surgery. And you see this in telerobotic medicine already: doctors are using video game controllers to be better surgeons. So I think that’s gonna happen. And when you think about that system, the doctor is seeing a screen, they’re not actually in the room, they could be across the world doing surgery. I think that’s a perfect opportunity for AI, because that’s the exact same setup that AI needs: training data, where it has some inputs and it has some outputs. So I’m optimistic that could even be automated in the future and be even more accurate than a physician.
Nick: Great. So in the last 38 seconds, I would love to hear from the audience.
Cameron McCurdy, CB Insights, Host: Yeah. We’ll ask…
Nick: Do we have questions?
Cameron: Yeah we’ll ask a few quick questions for the panel. The first one is what are some of the underutilized data sets in healthcare today?
Jack: There are lots of data. In fact, the EHR has been around for a while, so if you go to see a doctor, there is longitudinal data. Unfortunately, those data are locked, sometimes in one healthcare system, and if you were to transfer to another, not all the histories transfer with you. So we have that rich data for most American patients today. For the last couple of years, obviously, you have lab data that’s available. And as I mentioned earlier, a select few people, the fortunate ones, already have genomics data, so those data are available. The other thing is that, in fact, lots of diseases are sort of behavioral issues. So not only can we diagnose a person or provide better care through the medical data we have, we can also look at behavioral data. Marketing companies actually know a lot of your habits. So we find that companies can leverage not only the data sets within the hospital system, but also the consumer data, to deliver better care to you, or preventive measures.
Cameron: Great. So the next question is…a lot of the questions are sort of surrounding the culture of healthcare, and dealing with the bureaucracy when it comes to selling to providers. How do companies approach selling to providers, and deal with the much slower pace of technology adoption in the healthcare industry?
Gabe: Sorry. Selling to who?…I didn’t…
Nick: Yeah. What was that?
Cameron: How do companies deal with the slower technology adoption, and the cultural issues around the bureaucracy in healthcare?
Gabe: So the tests that we’re developing are meant to be sold to clinicians, to provide to their patients for cancer diagnostics. And in our experience, what works best is really focusing on data: showing them what the test does, how well it works, what the sensitivity and specificity numbers are in terms of finding the disease or not, what the false positive rates are. Focus on what’s called clinical validation of whatever you’re providing, and less on how it works. If you start throwing around words like machine learning, deep learning, and things along those lines, you just sort of see clinicians’ eyes glaze over, and that’s not really the important part here. Yes, it’s important to understand how the test works, and it’s important to analytically validate it as much as possible, but what clinicians care more about is really how well this test is going to work, and whether it’s actually going to help them save the patient’s life. As long as you answer those questions with data, you’re gonna be able to sell a test that’s based on machine learning engines, or anything along those lines that’s considered black-box or new technology.
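For readers unfamiliar with the validation metrics Otte lists, they all fall out of a simple confusion-matrix calculation. A minimal sketch in Python, with made-up counts for a hypothetical validation cohort (none of these numbers come from Freenome):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the metrics clinicians ask about from a confusion matrix.

    tp/fn: diseased patients the test caught / missed
    tn/fp: healthy patients correctly cleared / falsely flagged
    """
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "false_positive_rate": fp / (fp + tn),  # 1 - specificity
    }

# Hypothetical cohort: 1,000 patients, 100 of whom have the disease.
print(diagnostic_metrics(tp=90, fp=45, tn=855, fn=10))
# {'sensitivity': 0.9, 'specificity': 0.95, 'false_positive_rate': 0.05}
```

Clinical validation, in the sense Otte describes, is essentially producing these numbers on a prospective patient cohort rather than explaining the model that generates the test result.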
Cameron: And the last question is directed toward Matt. Have you seen any image or video recognition use cases in healthcare where it isn’t being used today, but where it could significantly improve either the process or the delivery of care?
Matt: Yeah. Like I mentioned, that company out of France that we’re working with is diagnosing diseases inside your ear, capturing both image and video, which gives us lots of training data. We’re working with some other companies that have huge databases created by doctors for different skin diseases and lesions, and that kind of stuff. It’s a tough one, because a lot of those different diseases look identical on the skin, so you have to take into account a lot of other factors beyond just the images or the video. So I think the ultimate AI in this space is gonna take into account all the different devices’ data, past medical history, and whatever sensors you have at the moment.