

Happy hump day! The past two weeks have been incredibly busy for us. Ethan has been implementing NatML for web; I've been adding CoreML and TensorFlow Lite support to NatML; and we've had to rework our entire billing plan (yes, only two weeks after we announced the original one). Let's try that again:
Open Source ML Runtime
With the launch of CoreML and TensorFlow Lite model support in the next update, Unity developers will be able to run their models with the NatML Unity library completely locally. As we implemented these features, we realized that the library itself was already an incredibly strong free-tier offering for developers. And with its other features—video and audio streaming, ML pre- and post-processing, and more—it offers more value than Barracuda and other hobbyist ML runtime projects for Unity. As such, our open source library for Unity will constitute the "free tier".
Exploring Predictors
For developers who want to supercharge their ML workflows, we'll be making NatML Hub a subscription service. With NatML Hub, developers can quickly find and deploy a predictor from our growing catalog. We'll be investing heavily in expanding the catalog to keep pace with the latest ML models as they come out, freeing developers from having to figure out which models are out there, convert those models to different formats, write predictor code for them, and wire those models into their apps.
For developers who want to deploy their own custom models, NatML Hub will provide automatic format conversions: from PyTorch, TensorFlow, Keras, or scikit-learn to CoreML, ONNX, TensorFlow Lite, and any other runtime format in the future. Furthermore, we'll be introducing a "Fork" button on predictors, so that developers can take an existing predictor and swap in their custom-trained model while keeping everything else the same (devs who fine-tune models will especially benefit from this).
Finally, NatML Hub will provide access to Hub predictors for running some ML models server-side instead of on-device. This will drastically expand the types of models developers have access to, especially state-of-the-art (SOTA) models (StyleGAN3?).
Enterprise Support
We'll be keeping our enterprise tier as-is. Here, we'll work closely with studios to create custom ML solutions that meet their application needs exactly. This includes everything from model discovery and custom, exclusive features to native source code access and hosting NatML Hub on-premises, separate from the outside internet.
What this Means for You
First, we should note that these changes will be implemented when the next version of NatML for Unity is published. This is likely going to happen early-to-mid next week (we’ll announce on Discord). Once that happens, we’ll introduce the new plans to NatML Hub and place all existing accounts on a 14-day free trial (this will be standard for new users).
During the trial, developers can still use all of NatML Hub at no cost. But once the trial ends, developers will need an active subscription to use any predictor from the platform (i.e. fetching models in code or requesting server-side predictions). The subscription will cost around $29/mo (we'll announce the final price on Discord). Developers who do not wish to have a plan can still use NatML by manually finding and placing raw ML models in their project, since NatML will support using CoreML, ONNX, and TFLite models directly.
We are excited to finally launch plans that provide value to everyone. Till next time, happy coding!
— Yusuf.