Machine Learning

The LifeOmic Platform's Patient ML service gives you a simplified way to train, evaluate, and deploy your own models on your own data. The service handles all the work of getting your patient or medical device data to your model for training and inference. The platform currently supports semantic image segmentation and image classification models, and support for additional problem types is in progress.

The LifeOmic Patient ML service lets you deploy a supervised ML model into a production environment without having to write any complex ML infrastructure code related to training dataset curation, data labeling UI integration, hyperparameter optimization, model evaluation, model versioning, model deployment, or model monitoring. All of that is included in the service. Follow the steps in the developer guide below to begin using the Patient ML service.

tip

Our Jupyter Notebooks feature is a great way to securely and quickly iterate on your ML model architecture during the experimentation phase. Once you're ready to move beyond experimentation, you can use the Patient ML service to quickly get a robust version of your model into production.

Patient ML Service Concepts

Patient ML Service uses the concepts of model configs and model runs. A model config describes your model: its problem type, the labels it uses, how to source its training data, its hyperparameter optimization space, and a pointer to a Docker image containing your model code. A model run represents a training run and a trained version of the model defined by your model config. You can think of a model run as an instance of your model config.

A model run has artifacts associated with it: an immutable, model-consumable training dataset snapshot of your data (an input to the run), plus training, validation, and test set metrics and a trained model artifact (outputs of the run).

Accessing the Patient ML Service API

Our Patient ML Service API is public and provides a comprehensive way of interacting with the service. Follow these instructions to authenticate with and call the REST API.

We also offer the phc Python package, which provides an SDK for interacting with the service.
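
For example, a minimal REST call with Python's requests library might look like the sketch below. The base URL, route, and header names here are illustrative placeholders, not the documented API surface; follow the authentication instructions above for the real details.

import requests

API_KEY = "<your-api-key>"      # obtained by authenticating with LifeOmic
ACCOUNT = "<your-account-id>"   # your LifeOmic account ID

# Illustrative request listing your model configs; consult the API docs
# for the actual base URL, route, and required headers.
response = requests.get(
    "https://api.us.lifeomic.com/v1/patient-ml/models",   # hypothetical route
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "LifeOmic-Account": ACCOUNT,                      # hypothetical header
    },
)
response.raise_for_status()
print(response.json())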

Prepare your ML Data

For Patient ML Service to find your training data, the data must exist as FHIR records in LifeOmic's FHIR Service. For computer-vision models, each training image must live as a file in LifeOmic's File Service and have a DocumentReference FHIR record in FHIR Service that references it. To be pulled into your model's training set, each FHIR record must have a CodeableConcept whose coding matches one of the codings present in your model config. For DocumentReference records, that coding must live on the DocumentReference.type field.
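
As an illustration, the sketch below shows the shape of a DocumentReference record for one training image. The coding system and code are hypothetical placeholders; what matters is that the type.coding entry matches a coding listed in your model config, and that the attachment URL points at the image file in LifeOmic's File Service.

# Sketch of a DocumentReference FHIR record for one training image.
# The system/code values are hypothetical placeholders.
document_reference = {
    "resourceType": "DocumentReference",
    "status": "current",
    "type": {
        "coding": [
            {
                "system": "http://example.org/my-model-codes",  # placeholder
                "code": "training-image",                       # placeholder
            }
        ]
    },
    "subject": {"reference": "Patient/<patient-id>"},
    "content": [
        {
            # URL of the image file stored in LifeOmic's File Service
            "attachment": {"url": "<file-service-file-url>", "contentType": "image/png"}
        }
    ],
}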

Defining your Own Model Config

Please see the API documentation of the createModel service endpoint for a description of how to create your model config.
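
The createModel endpoint defines the authoritative schema, but conceptually a model config ties together the pieces described in the concepts section above. The sketch below is only a rough illustration; every field name in it is hypothetical, so treat the API documentation as the source of truth.

# Hypothetical model config sketch; all field names are illustrative.
model_config = {
    "name": "lesion-segmentation",
    "problemType": "imageSegmentation",   # e.g. segmentation vs. classification
    "labels": ["background", "lesion"],
    # Codings used to select FHIR records for the training dataset.
    "trainingDataCodings": [
        {"system": "http://example.org/my-model-codes", "code": "training-image"}
    ],
    # Hyperparameter optimization search space.
    "hyperparameterSpace": {"learningRate": {"min": 1e-4, "max": 1e-2}},
    # Docker image containing your model code (see Training Your Own Model).
    "trainingImage": "<aws-account>.dkr.ecr.us-east-1.amazonaws.com/my-model:latest",
}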

Training Your Own Model

The Patient ML service allows customers to use their own proprietary code to train models within LifeOmic’s secure cloud environment using Docker images. At this time, the service supports training custom models from images hosted in AWS ECR repositories. Setting this up requires only a few simple steps.

  1. Create an AWS ECR repository for your training image in your AWS account.
  2. Edit the permission policy on your ECR repository to allow LifeOmic’s production AWS account to access your training image. You can use the following example policy to get started:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Pull",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::987492647756:root"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:GetDownloadUrlForLayer",
        "ecr:ListImages",
        "ecr:ListTagsForResource"
      ]
    }
  ]
}
  3. Add a repository tag named LO_ACCOUNT_ID to your ECR repository and set it equal to your LifeOmic account ID. See Find Your Account ID. (A scripted sketch of steps 1-3 appears after this list.)
  4. Push your model training image to your AWS ECR repository. Note that Patient ML Service does not currently support manifest-style repositories.
  5. You may now use the URI of your training image within the Patient ML model config to train a custom model.
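
If you prefer to script the repository setup, steps 1-3 above map onto a few boto3 calls, as in the sketch below. The repository name, region, and account ID are placeholders; the policy text is the example policy shown above. Step 4 (pushing the image) is done with the docker CLI as usual.

import json
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")   # your repository's region

# Step 1: create the repository (skip if it already exists).
repo = ecr.create_repository(repositoryName="my-model-training")
repo_arn = repo["repository"]["repositoryArn"]

# Step 2: attach the example pull policy shown above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Pull",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::987492647756:root"},
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:DescribeImages",
                "ecr:GetDownloadUrlForLayer",
                "ecr:ListImages",
                "ecr:ListTagsForResource",
            ],
        }
    ],
}
ecr.set_repository_policy(
    repositoryName="my-model-training",
    policyText=json.dumps(policy),
)

# Step 3: tag the repository with your LifeOmic account ID.
ecr.tag_resource(
    resourceArn=repo_arn,
    tags=[{"Key": "LO_ACCOUNT_ID", "Value": "<your-lifeomic-account-id>"}],
)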

Creating a new Model Run

To begin a new run for an ML model:

  1. Make a POST request to the Patient ML createRun endpoint for that model. The createRun request will return the ID of the new run (see the sketch after these steps).
  2. You can watch the progress of your run in the Machine Learning tab of the LifeOmic Platform.
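
A minimal sketch of the request, with the same caveat as earlier that the route shown is a hypothetical placeholder for the documented createRun endpoint:

import requests

API_KEY = "<your-api-key>"
ACCOUNT = "<your-account-id>"
MODEL_ID = "<your-model-config-id>"

# Hypothetical route; see the createRun API docs for the real path.
resp = requests.post(
    f"https://api.us.lifeomic.com/v1/patient-ml/models/{MODEL_ID}/runs",
    headers={"Authorization": f"Bearer {API_KEY}", "LifeOmic-Account": ACCOUNT},
)
resp.raise_for_status()
run_id = resp.json()["id"]   # assumed response field holding the new run's ID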

When a run is created, its parent model config is used to determine what training data to curate, what model to train, how to optimize its hyperparameters, and how to evaluate it. The Patient ML service handles all the ETL of curating your production FHIR data into immutable training/validation/test dataset splits. The service then runs your image, first downloading the training and validation splits to the running container. After the best model version is found during hyperparameter optimization, it is evaluated on the hold-out test dataset. That new candidate model version is called the challenger. The service also evaluates the current approved (usually the overall best) version of your model running in production (the champion) on that same test set. In this way, after the run is complete, you can see how the new model version compares against your current best model version, which lets you make an objective decision about whether to promote the new version to production and make it the new champion.

There are a few ways to view the progress and results of your model runs. You can query the service's Get Run or Get Runs endpoints to inspect runs programmatically, or you can view them in the Machine Learning tab of the LifeOmic Platform UI using our Runs Table, Metrics, and Run Details views. Visit the Machine Learning section of the user guide for more information on viewing model training data in the LifeOmic Platform and working with the ML Labeler module of the subject viewer.
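
Programmatically, that might look like the polling sketch below, which waits for a run to finish and then reads back its metrics. The route, status values, and response fields are all assumptions; check the Get Run documentation for the real schema.

import time
import requests

API_KEY = "<your-api-key>"
ACCOUNT = "<your-account-id>"
MODEL_ID = "<your-model-config-id>"
RUN_ID = "<your-run-id>"

headers = {"Authorization": f"Bearer {API_KEY}", "LifeOmic-Account": ACCOUNT}

# Hypothetical Get Run route; poll until the run reaches a terminal state.
while True:
    run = requests.get(
        f"https://api.us.lifeomic.com/v1/patient-ml/models/{MODEL_ID}/runs/{RUN_ID}",
        headers=headers,
    ).json()
    if run.get("status") in ("succeeded", "failed"):   # assumed status values
        break
    time.sleep(60)

# Compare challenger vs. champion test-set metrics (assumed field name).
print(run.get("testMetrics"))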

Deploying a Model Run

A model run must be deployed for it to become the champion model version. You deploy a model run by approving it:

  1. Send a POST request to the Patient ML createApprovalDecision endpoint for that run. The POST body should be {"choice": "approved"} (see the sketch after these steps).
  2. If the model deploy type is cloud, this triggers the model being deployed as an endpoint in the LifeOmic Platform. Using a custom inference image requires the same steps as described in Training Your Own Model. If the deploy type is edge, no endpoint is created, but the run will still be referenced on the model config as the new "deployed" champion model version.
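
A sketch of the approval request, again with a hypothetical route standing in for the documented createApprovalDecision endpoint; the POST body is exactly the one given in step 1:

import requests

API_KEY = "<your-api-key>"
ACCOUNT = "<your-account-id>"
MODEL_ID = "<your-model-config-id>"
RUN_ID = "<your-run-id>"

# Hypothetical route; see the createApprovalDecision API docs for the real path.
resp = requests.post(
    f"https://api.us.lifeomic.com/v1/patient-ml/models/{MODEL_ID}/runs/{RUN_ID}/approval",
    headers={"Authorization": f"Bearer {API_KEY}", "LifeOmic-Account": ACCOUNT},
    json={"choice": "approved"},
)
resp.raise_for_status()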