
Sophia Sanchez: "A doctor's compassion after a difficult diagnosis cannot be replaced by a bot"


As a Machine Learning Engineer at Curai, Sophia Sanchez applies her knowledge and skills to scale the world's best healthcare to everyone using AI. While Machine Learning models should learn from their history, data collection and labeling are often the rate-limiting steps of AI research. Curai's AI tools are deployed in a real-world healthcare setting, giving the team an opportunity to learn from their usage. In her talk at Scale By the Bay this November, Sophia will focus on how to build a semi-automated data feedback loop for ML model retraining, highlighting the specific use case at Curai.


In advance of her talk, Sophia shared her thoughts on her current work at Curai, why she considers Machine Learning in healthcare to be another tool in the doctor's toolbox rather than a threat to doctor-patient human interaction, and why building an ML data feedback loop is one of the most exciting areas of ML engineering.


How did you get interested in Machine Learning and what was the turning point when you decided to join Curai?


I'm a machine learning engineer at Curai where our mission is to scale the world's best healthcare to everyone using AI. I first became interested in Machine Learning after studying computational neuroscience, and have since completed my master's in computer science, with a focus on medical applications of AI. After graduating, I worked as a backend software engineer for a number of years before returning to my roots at the intersection of healthcare and tech. I decided to join Curai because I wanted to make a lasting, positive impact and work on the technical challenges associated with using AI models in a real-world patient setting.


What's your current role and what exciting things are you working on at the moment?


I focus on AI techniques and associated infrastructure to automate information gathering. One example of this is a tool that asks patients questions about their medical concerns and, ultimately, helps with diagnosis. I also integrate these models and systems into production, whether they are used by doctors or by patients. It's very exciting to see real medical professionals use this technology and to build tools that help scale the doctor's office.

As part of my Machine Learning engineering work, I also build data feedback loops that collect model usage data from medical professionals and use that data to retrain or fine-tune the next iteration of our models. Data feedback loops are particularly exciting because they let a model get better and better over time, learning from its past usage data.
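The loop she describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not Curai's actual pipeline: production predictions are logged, a reviewer confirms or corrects the label, and the reviewed examples become the dataset for the next retraining run.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a semi-automated data feedback loop:
# log a production prediction, have a human reviewer confirm or
# correct it, and accumulate reviewed examples for retraining.

@dataclass
class FeedbackLoop:
    training_data: list = field(default_factory=list)

    def log_prediction(self, features, predicted_label):
        # In a real medical setting this record would be stored with
        # consent, and PHI would be scrubbed before any labeling step.
        return {"features": features, "predicted": predicted_label}

    def review(self, record, reviewer_label):
        # A medical professional confirms or corrects the model's output.
        record["label"] = reviewer_label
        self.training_data.append(record)

    def retraining_batch(self):
        # Reviewed examples become the dataset for the next fine-tune.
        return list(self.training_data)

loop = FeedbackLoop()
rec = loop.log_prediction({"symptom": "fever"}, "flu")
loop.review(rec, "flu")              # reviewer confirms the prediction
print(len(loop.retraining_batch()))  # 1 example ready for retraining
```

The key property is that the model's own usage generates labeled training data, so each deployment cycle feeds the next one.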


What's the biggest challenge that you face in your work and how are you addressing the challenge?


The number one challenge in building ML-powered medical models is limited access to high-quality data, and the ethical implications these limits entail.


Data scarcity in medicine exists for a number of reasons: hospitals and insurers are disincentivized to share data; heavy regulation around protected health information (PHI) limits access; and information is collected in a decentralized way, meaning that data labeling and quality vary widely between datasets. These regulations are in place for a good reason, namely, to protect patients and prioritize their wellbeing.


Our job is to figure out how to work within these bounds and build high-quality machine learning tools. For example, the engineers and clinical team work together to build data feedback loops, so that we can label and learn from model usage data in production, while keeping privacy and consent a top priority. This means that PHI has been scrubbed from the dataset, that any model used by a doctor and/or patient has passed tests of the highest level of rigor, and that the model can accurately assess its own confidence in a diagnosis (i.e. know that it doesn't know).
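One common way to make a model "know that it doesn't know" is selective prediction: the model abstains and defers to a human whenever its top predicted probability falls below a confidence threshold. A self-contained sketch, with toy labels and an arbitrary threshold of my own choosing:

```python
import math

def softmax(logits):
    # Convert raw model scores into a probability distribution.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_or_abstain(logits, labels, threshold=0.8):
    # Abstain (defer to a doctor) when the model's top probability
    # falls below the confidence threshold.
    probs = softmax(logits)
    top = max(range(len(probs)), key=lambda i: probs[i])
    if probs[top] < threshold:
        return "defer_to_doctor"
    return labels[top]

labels = ["flu", "strep", "allergies"]
print(predict_or_abstain([4.0, 0.5, 0.2], labels))  # → flu
print(predict_or_abstain([1.0, 0.9, 0.8], labels))  # → defer_to_doctor
```

Raw softmax probabilities are a crude confidence signal in practice (calibration techniques improve them), but the abstention pattern itself is the point: low confidence routes the case to a human.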


What's the biggest thing that is misunderstood about ML?


One particularly troubling misconception is that ML will automate away the need for human interaction entirely. Particularly in medicine, there is a fear of a Terminator-style robo-doc that will subsume the traditional doctor-patient interaction. In fact, this couldn't be further from the truth. There are many forms of care that a doctor provides that an AI cannot and should not try to emulate; a doctor's compassion after a difficult diagnosis cannot be replaced by a bot that has "learned" compassion, for example.


Instead, I think of ML in healthcare as another tool in the doctor's toolbox, similar to the fMRI or the CT scan. In this doctor-in-the-loop model, AI refers less to artificial intelligence and more to augmented intelligence, helping the doctor provide the highest-quality care for his or her patients.


What are the three trends that will shape the future of ML?


ML ethics as a top-level requirement

It's no longer enough to have a retrospective think-piece on whether or not a particular ML-based technology is ethical. Especially as ML research advances in fields such as healthcare and self-driving cars, where user safety is paramount, the highest-quality ML work will treat ethical considerations as a prerequisite. These considerations include, but are not limited to, where the data come from and potential sources of bias, the safety of models, and what it means to leverage ML for "social good." For more on this topic, I highly recommend this fantastic collection of articles on ML and ethics, and this Nature Machine Intelligence review of AI ethics.


More pressure to explain how a model is making its predictions (model interpretability)

Along with a drive towards proving safety and efficacy of ML models is a trend towards model interpretability. This is motivated by a number of factors, chief among them trust. In the medical realm, if an ML model is asking me personal health questions, I want to understand why it chose that question. This mirrors my interactions with my doctor, where I might ask why her follow-up question about traveling to a tropical climate has anything to do with my initial complaint of a fever. Having a better grasp on how a model makes predictions can help us to understand why it succeeds and where it fails.
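One widely used, model-agnostic way to probe how a model makes predictions is permutation importance: shuffle one input feature and measure how much accuracy drops. A toy, self-contained sketch (the "model" and data are invented for illustration, not a medical system):

```python
import random

# Toy "model": predicts fever risk from two features; only the first
# feature (temperature) actually influences the prediction.
def model(temp, age):
    return 1 if temp > 37.5 else 0

# Rows of (temperature, age, true_label).
data = [(36.5, 30, 0), (38.2, 45, 1), (39.0, 22, 1), (36.8, 60, 0)]

def accuracy(rows):
    return sum(model(t, a) == y for t, a, y in rows) / len(rows)

def permutation_importance(rows, feature_index, trials=100, seed=0):
    # Importance = average accuracy drop when one feature column is
    # shuffled, breaking its relationship with the label.
    rng = random.Random(seed)
    base = accuracy(rows)
    drops = []
    for _ in range(trials):
        col = [row[feature_index] for row in rows]
        rng.shuffle(col)
        shuffled = [
            tuple(col[i] if j == feature_index else v
                  for j, v in enumerate(row))
            for i, row in enumerate(rows)
        ]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

print(permutation_importance(data, 0))  # temperature: accuracy drops
print(permutation_importance(data, 1))  # age: no drop (feature unused)
```

Shuffling the temperature column hurts accuracy while shuffling age does nothing, which tells us the model's predictions hinge on temperature. The same idea scales to real models, and it mirrors the trust question above: it lets you ask which inputs a question or diagnosis actually depended on.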


Improved Infrastructure and Frameworks for ML Development

As we move towards increased adoption of machine learning in industry, it is important not only to create accurate models, but also to deploy and scale them in a way that provides a quality experience for users and low-friction development for engineers. Building and deploying ML models to production presents its own unique challenges. Collecting, labeling, and versioning datasets, for example, has become increasingly complex with the advent of data-hungry ML models. Amazon SageMaker and Snorkel are just a few tools that have emerged to help solve this problem.
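One simple building block underneath dataset versioning is content hashing: fingerprint a canonical serialization of the data so each training run can record exactly which snapshot it used. A minimal sketch of the idea (real tools layer storage, lineage, and diffing on top of this):

```python
import hashlib
import json

def dataset_version(examples):
    # Hash a canonical (sorted-key) serialization of the dataset, so
    # any change to the data or its labels yields a new version id.
    canonical = json.dumps(examples, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]

v1 = dataset_version([{"text": "fever for 3 days", "label": "flu"}])
v2 = dataset_version([{"text": "fever for 3 days", "label": "cold"}])
print(v1 != v2)  # → True: relabeling an example changes the version
```

Logging this version id alongside each trained model makes experiments reproducible: you can always answer "which data did this model see?"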


Second, developers of ML frameworks have made a concerted effort to ease the transition from ML research to serving models in production. TensorFlow has TensorFlow Serving, and PyTorch has been pushing to simplify the workflow to get models production-ready.


Although the discipline of ML engineering is relatively new, style rules and guidelines around ML in production are already forming. Martin Zinkevich's "Best Practices for ML Engineering" is an excellent read. While we may not yet be at the point of automated model development, improved infrastructure and frameworks for ML development will speed the rate of progress as datasets, architectures, and ML deployment become increasingly complex.


What will you talk about at Scale By the Bay and why did you choose to cover this subject?



My talk is on building your own ML data feedback loop. I'll focus on building a semi-automated data feedback loop for ML model retraining, highlighting the specific use case at Curai. I wanted to cover this topic because I think it's one of the most exciting areas of ML engineering. Closing the data loop for model retraining can improve machine learning models over time, which is pretty revolutionary.


Who should attend your talk and what will they learn?


Learning how to build data feedback loops, where you learn from the data that you're generating from a model in production, is one of the most valuable things you can do to scale your machine learning models. A machine learning model that can learn from its history can become better over time, allowing for rapid iteration and continuous learning.

If you want to know more about scaling machine learning work through automation, building data feedback loops, or how to improve models that are in production, this talk is for you.


Anything else you'd like to add?


One of the things I love about giving conference talks is the sense of contributing to and being part of our tech community. So come say hi if you want to talk machine learning, are interested in volunteering in tech (shoutout to Mission Bit!), or just to chat.


Don't miss Sophia Sanchez and 70+ speakers covering all things Functional Programming, Service Architectures, Data Pipelines, and AI/ML at scale at Scale By the Bay this November. Book your ticket now.




© 2019 Framework Foundation. For questions, contact us at info@bythebay.io
