

Happy hump day! Over the past two weeks, we’ve been working on something very exciting: extending NatML beyond Unity Engine. It’s the culmination of a few important themes we’ve picked up on while building the platform:
What We’ve Learnt So Far
First commit on NatML was on September 26th, 2020; first beta release was on May 12, 2021; first commit on NatML Hub was four days after that. NatML originally started out as just another machine learning runtime, one to rival the likes of Google’s TensorFlow Lite and Unity’s Barracuda. But as the beta program started gaining traction, we realized that just having a way to run machine learning models wasn’t enough.
Each ML model has its own quirks, and developers shouldn't have to care about them. So we standardized how models are distributed by introducing Predictors, and then NatML Hub. With these pieces in place, Unity developers gained a central repository where they could discover ML models and use them in their apps in five lines of code or fewer. Fast forward five months, and NatML Hub has now powered over 1 million machine learning predictions across a multitude of devices and ML models. But even with these numbers, we've had reason to step back and think about the bigger picture.
Machine learning in mobile apps is still somewhat of a rarity, for a few main reasons: ML is still a relatively new technology, so it will take time for more developers to catch up; mobile devices have fewer compute resources for running powerful models; and the ecosystem remains highly fragmented, requiring specialized knowledge to deploy models properly. After weeks of discussions with developers and advisors, we've settled on a new vision for the NatML platform.
The Pivot
NatML will be the standard API for developers who want to integrate machine learning models into their applications, irrespective of their development or execution environments.
In many ways, the vision hasn’t changed: we still want to expose machine learning models to developers as simple functions that require absolutely no knowledge of machine learning. But we are expanding what kinds of developers can use our platform, and what kinds of models they can use. Let’s discuss the pieces:
Hub Predictors
We will soon be introducing Hub Predictors: predictors that perform their ML predictions server-side. The API for using a Hub predictor closely mirrors that of a regular (a.k.a. "edge") predictor, which runs predictions on-device:
// Creating an Edge predictor...
var edgePredictor = new UNetPredictor(model);
// vs. creating a Hub predictor...
var hubPredictor = new UNetHubPredictor(model);

// And using an Edge predictor...
var edgeResults = edgePredictor.Predict(...);
// vs. using a Hub predictor...
var hubResults = await hubPredictor.Predict(...);
Hub predictors mean that developers no longer have to care about where their ML models actually run, and are no longer limited by the compute resources of their execution environment. This will be critical for developers building IoT applications, wearables, and certain mobile apps. Running billion-parameter ML models on smart glasses or ancient Android devices will no longer be an insane proposition. And with the advent of 5G, the latency of transmitting data to and from the Hub cloud infrastructure will become increasingly negligible.
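To make that concrete, here is a minimal sketch of requesting a server-side prediction from a Unity script. The SegmentationDemo class and the "@natsuite/unet" model tag are hypothetical, and the NatML using directives are omitted; the MLModelData and UNetHubPredictor calls follow the snippet above:

using UnityEngine;
// (NatML using directives omitted)

public class SegmentationDemo : MonoBehaviour {

    [SerializeField] string accessKey;
    [SerializeField] Texture2D image;

    async void Start () {
        // Fetch and deserialize the model data from Hub
        var modelData = await MLModelData.FromHub("@natsuite/unet", accessKey);
        var model = modelData.Deserialize();
        // Create the Hub predictor and await a server-side prediction
        var predictor = new UNetHubPredictor(model);
        var result = await predictor.Predict(image);
    }
}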
Zero-Config ML Deployment
In addition to aggregating ML models that developers can quickly download and deploy, NatML Hub has provided developers with the ability to bring their own ML models to the platform with incredible ease. With the introduction of Hub predictors, we are extending the platform to allow developers and enterprises to deploy their own server-side ML with zero configuration.
Developers no longer have to manage microservice architectures, server and infrastructure provisioning, cloud monitoring, load balancing, and so on. Just upload your model and predictor code to NatML Hub, then request predictions from anywhere as usual:
// Fetch the model data from Hub
var modelData = await MLModelData.FromHub("@natsuite/resnet152", accessKey);
// Deserialize the model
// The model is actually being created server-side
var model = modelData.Deserialize();
// Create the ResNet classification Hub predictor
var predictor = new ResNetHubPredictor(model);
// Predict
Texture2D image = ...;
var (label, score) = await predictor.Predict(image);
NatML Hub will handle all of the gory details, allowing your team to quickly develop and deploy machine learning-enabled products to your customers.
Python and NodeJS SDKs
The final piece of the pivot brings NatML to other development environments, beginning with Python and NodeJS. Each SDK exposes an API similar to the Unity API, allowing you to write largely the same code whether you're building a web API with JavaScript:
// Fetch the model data from Hub
const modelData = await MLModelData.FromHub("@natsuite/resnet152", accessKey);
// Deserialize the model
// The model is actually being created server-side
const model = modelData.Deserialize();
// Create the ResNet classification Hub predictor
const predictor = new ResNetHubPredictor(model);
// Predict
const image = new MLImageFeature(...);
const [label, score] = await predictor.Predict(image);
Or a containerized microservice with Python:
# Fetch the model data from Hub
model_data = MLModelData.from_hub("@natsuite/resnet152", access_key)
# Deserialize the model
# The model is actually being created server-side
model = model_data.deserialize()
# Create the ResNet classification Hub predictor
predictor = ResNetHubPredictor(model)
# Predict
image = PIL.Image.open(...)
label, score = predictor.predict(image)
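To illustrate that microservice use case, here is a minimal sketch that wraps the predictor in a FastAPI endpoint. FastAPI, the /classify route, and the import paths shown are illustrative rather than prescriptive; the NatML calls follow the snippet above:

import os
import PIL.Image
from fastapi import FastAPI, File, UploadFile
from natml import MLModelData                    # import path illustrative
from natml.predictors import ResNetHubPredictor  # import path illustrative

app = FastAPI()

# Create the predictor once, at startup
access_key = os.environ["NATML_ACCESS_KEY"]      # hypothetical env var
model_data = MLModelData.from_hub("@natsuite/resnet152", access_key)
model = model_data.deserialize()
predictor = ResNetHubPredictor(model)

@app.post("/classify")
async def classify(image: UploadFile = File(...)):
    # Decode the uploaded image and request a server-side prediction
    input_image = PIL.Image.open(image.file)
    label, score = predictor.predict(input_image)
    return { "label": label, "score": score }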
For now, the Python and Node SDKs only support Hub predictors. We will explore adding support for Edge predictors in these environments further down the line.
If you're a current user of NatML, you might be wondering what this means for you. This pivot mostly brings additions to the platform, so the vast majority of Unity developers shouldn't be affected by any breaking changes. The only breaking change worth mentioning is that we have removed the ability to load MLModelData from file. Here's why:
The next main item on our roadmap is focused on enhancing edge prediction performance on mobile. For this, we intend to take fuller advantage of the latest technology from device OEMs, specifically Apple's Core ML in iOS 15 along with Google's embedding of TensorFlow Lite within Android (we're waiting to hear back from Google on joining their early access program). As a result of these developments, we no longer guarantee that the NatML runtime for Unity will be able to run any specific ML model format on a given platform. Instead, we encourage developers to host their ML models on NatML Hub, because Hub will deliver the best ML model format for a given device. Hub will still allow users to upload ONNX models (and soon, PyTorch models) when creating a predictor, but it will automatically perform any conversions and optimizations for devices as they request models at runtime.
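In practice, migrating is a one-line change. A rough sketch (the removed file-loading call is paraphrased here; FromFile is illustrative of the old API, not its exact signature):

// Before: loading model data from a local file (no longer supported)
// var modelData = MLModelData.FromFile("ResNet152.onnx");
// After: fetching from NatML Hub, which serves the best format
// for the requesting device (e.g. CoreML on iOS, TFLite on Android)
var modelData = await MLModelData.FromHub("@natsuite/resnet152", accessKey);
var model = modelData.Deserialize();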
If you’d like to know what’s coming in the next NatML update for Unity, check out the changelog. And if you’d like to become a beta tester for the new Hub Predictor feature, reach out to us! Happy coding.