

Happy Friday! Over the past two weeks, we’ve seen a steady increase in the number of on-device machine learning predictions made by developers, now surpassing a million per week. Usage has grown so much that we’re upgrading our backend infrastructure: we no longer qualify for free usage with some of the services we use to power the platform, a problem that we’re very happy to have.
With all this activity has come incredible clarity. We’re hearing directly from developers about which features they’d like to see, and on which platforms. As such, we’ve updated our strategy for the near future, yet again.
Prioritizing JavaScript and WebGL
You are probably aware of our plans to bring NatML to JavaScript developers with our NatML for NodeJS project. If you’re not familiar, NodeJS is a popular runtime for building server applications with JavaScript (we use it too!). It has an amazing open-source ecosystem, with some of its most popular packages being downloaded over 40 million times a week. There was only one wrinkle in our plans: NodeJS is purely server-side.
We want to bring the power of easy-to-use machine learning to the browser. Imagine having an HTML project or a Squarespace website where you could easily integrate a state-of-the-art ML model with the same five lines of code you currently use in Unity Engine. As such, we are working on creating an isomorphic package for Web and NodeJS, one which runs the exact same code in both environments.
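As a rough sketch of what “isomorphic” means in practice, the same module can detect its environment at runtime and behave accordingly. The names below are hypothetical and purely illustrative, not the actual NatML API:

```typescript
// Hypothetical sketch of an isomorphic module; not the actual NatML API.

// Detect the environment at runtime: browsers define `window`, NodeJS does not.
const isBrowser = typeof window !== "undefined";

// Fetch serialized model data with the environment's own `fetch`.
// Browsers provide it natively, and modern NodeJS ships a global `fetch`,
// so the exact same code runs in both environments.
export async function fetchModelData (url: string): Promise<ArrayBuffer> {
    const response = await fetch(url);
    if (!response.ok)
        throw new Error(`Failed to fetch model data: ${response.status}`);
    return response.arrayBuffer();
}

// Choose an inference backend per environment, e.g. WebGL in the
// browser and native bindings in NodeJS.
export const backend = isBrowser ? "webgl" : "native";
```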
The second part of this plan centers on Unity Engine. We realized that if we’re bringing NatML to browsers anyway, why not also support WebGL in Unity? So, we’ve decided to officially support the WebGL platform. We will initially roll out support for Hub predictors, then introduce support for Edge predictors over the next few releases. We’re currently considering partnering with Google (TensorFlowJS) or with Unity (Barracuda) to power Edge predictions in the browser.
Streamlining AR Development
The most prominent use case of machine learning in Unity Engine is for augmented reality applications. It’s no surprise: AR provides the perfect sandbox for enhancing user interaction with scene and object intelligence. NatML will be improved to streamline AR development with two features:
First, we’ll be bringing support for Apple Silicon on macOS. Currently, NatML is only compiled for the x64 architecture (a.k.a. “Intel”). An important consequence of this is that NatML is currently unable to leverage the Apple Neural Engine (ANE) accelerator when running on macOS. Adding support for the M-series Macs will allow NatML to unlock the full power of the ANE.
Second, we’re working on adding support for the popular ARFoundation Remote package to NatMLX. With this, developers will be able to run ML predictions in the editor, simulating the full AR+ML experience without having to go through lengthy build processes.
Moving Away from ONNX
If you’ve played with NatML since the very early days, you’ll know that we’ve spent some time searching for the ideal ML graph format. Our attempts began with ONNX, a very promising format currently being evangelized by Microsoft, among others. The early versions of NatML for Unity supported deserializing raw .onnx files. But over time, we’ve run into growing limitations with relying on this format:
Different platforms have different ML graph formats: iOS and macOS run on Apple’s proprietary CoreML format; Android will soon run on the TensorFlow Lite format (Google has given us early access to Android’s embedded TFLite runtime); Unity came out with their proprietary, unpublished format for Barracuda; and Windows ML runs on ONNX. When performing Edge predictions, you will always want your model to use the platform’s native format in order to fully leverage hardware acceleration. So NatML has to convert the ONNX graph to the platform’s format at runtime. Doing this conversion on device is incredibly expensive, and it restricts us from using certain optimizations that platforms provide (think 16-bit floating point ops on CoreML, or custom operators on TensorFlow Lite).
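To make the mismatch concrete, here is a minimal sketch of the platform-to-format mapping described above. The type and function names are ours, purely for illustration:

```typescript
// A sketch of the mapping described above; names are illustrative only.
// (The browser target is still being decided: TensorFlowJS or Barracuda.)
type Platform = "ios" | "macos" | "android" | "windows";
type GraphFormat = "coreml" | "tflite" | "onnx";

// Each platform only fully accelerates its own native graph format,
// which is why Edge predictions need a per-platform graph.
function nativeFormat (platform: Platform): GraphFormat {
    switch (platform) {
        case "ios":
        case "macos":   return "coreml"; // Apple Neural Engine, fp16 ops
        case "android": return "tflite"; // embedded TFLite runtime
        case "windows": return "onnx";   // Windows ML
    }
}
```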
We will be shifting this conversion process to happen server-side within NatML Hub. Developers will still be able to upload ONNX model graphs when creating predictors on NatML Hub, but at runtime, Hub will automatically convert and deliver the appropriate format for the device that fetches MLModelData (see the sketch after the list below). This shift has a few exciting implications:
We can drastically reduce NatML’s bundle size because we will be stripping all intermediate graph conversion code. We can instead use platform libraries directly. Apple Silicon support on M1 MacBooks will directly benefit from this.
We can easily support Edge predictions for new execution environments in a centralized way through NatML Hub—with zero effort from the model owner. Browser support will directly benefit from this.
We can support loading and using .tflite and .coreml graphs directly in Unity Engine. This would give Unity devs much more flexibility than using Barracuda or hobby projects like tf-lite-unity-sample.
We can further optimize the runtime performance of the graphs we deliver using ML compilers like Apache TVM from OctoML. These optimizations will happen automatically, again with zero effort from the model owner.
We can support more model graph formats when creating a predictor on NatML Hub. Hub currently supports uploading ONNX and TorchScript graphs, but we plan to add support for TensorFlow and Scikit-learn graphs down the line. All conversions for edge devices will be handled automatically by NatML Hub.
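To make the new flow concrete, here is the sketch promised above: a hypothetical client-side fetch in which the device reports its platform and Hub returns the graph already converted. The endpoint and parameters are illustrative, not the real Hub API:

```typescript
// Hypothetical sketch only: the endpoint, route, and query parameter are
// illustrative, not the real NatML Hub API.
export async function fetchMLModelData (
    tag: string,        // the predictor tag on NatML Hub
    platform: string    // e.g. "ios", "android", "windows"
): Promise<ArrayBuffer> {
    // The client reports its platform; Hub converts the graph server-side.
    const response = await fetch(`https://hub.natml.ai/models/${tag}?platform=${platform}`);
    if (!response.ok)
        throw new Error(`Hub request failed with status ${response.status}`);
    // The payload arrives already in the device's native format (CoreML on iOS,
    // TensorFlow Lite on Android, and so on), so no on-device conversion is needed.
    return response.arrayBuffer();
}
```

Because the conversion happens once on the server rather than on every device, optimizations like Apache TVM compilation can be applied transparently before the graph is ever delivered.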
Work with Us
As our usage accelerates, we’re looking to grow the team to execute on all the developments we have planned. We’re looking for content writers to author tutorials on using different predictors with NatML; developer relations experts to help us with our open-source evangelism efforts; and model scouts to help us find interesting ML models to bring to NatML Hub. If any of this sounds interesting to you, or if you think you’d like to work with us in a different capacity, please drop me a note!
Happy holidays!