Introducing NatML Plans
With over 45 Million ML Predictions Powered by NatML
Happy New Year! Yes, it’s already mid-January, but better late than never. We’ve been busy improving NatML’s infrastructure to support our plans for the immediate future. A key part of this has been introducing paid plans for developers who want to extract more value from the platform.
Introducing NatML Plans
From day one, we’ve approached NatML somewhat like a high school science project: start with as few assumptions as possible; run experiments to confirm or refute our hypotheses; review and react to what we learn; repeat. Through this process, we’ve developed a fairly detailed sketch of the developers who use NatML and, more importantly, why. With this knowledge, we’ve come up with usage plans: tiered offerings that empower developers and studios to streamline their development of ML-powered applications with NatML.
Prototyping with the Explorer Plan
First up are explorers: developers who want to rapidly prototype ML-powered features in their mobile apps. For them, the Explorer plan keeps access to public on-device (“Edge”) predictors completely free.
Serverless ML with the Enthusiast Plan
Beyond the explorers are enthusiasts. We consider enthusiasts to be developers who, just like explorers, want to rapidly prototype different ML-powered features. But unlike explorers, enthusiasts aren’t limited to just mobile apps: they can leverage the full power of NatML across both on-device (a.k.a. “Edge”) and server-side (a.k.a. “Hub”) machine learning. For these developers, we’ve come up with the Enthusiast plan, a pay-as-you-go offering with access to server-side ML predictions. Under this plan, access to public Edge predictors will remain free (as in the Explorer plan), but we will charge per 1,000 Hub predictions.
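To make the metering concrete, here is a minimal sketch of how a pay-as-you-go bill charged per 1,000 Hub predictions could be computed. The rate and the round-up-to-the-next-block behavior are hypothetical assumptions for illustration, not NatML’s actual pricing:

```python
import math

def hub_prediction_cost(num_predictions: int, rate_per_1000: float) -> float:
    """Estimate a pay-as-you-go bill for Hub predictions.

    Assumes (hypothetically) that usage is rounded up to the next
    block of 1,000 predictions before the per-block rate applies.
    """
    blocks = math.ceil(num_predictions / 1000)
    return blocks * rate_per_1000

# Example: 45,500 Hub predictions bill as 46 blocks of 1,000,
# at whatever per-block rate the plan specifies.
cost = hub_prediction_cost(45_500, rate_per_1000=0.20)  # hypothetical rate
```

Note that Edge predictions stay free under this plan, so only Hub calls would enter such a calculation.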
Production ML with the Expert Plan
Typically, off-the-shelf machine learning models can only get you so far. You might ship a “1.0” feature with an off-the-shelf model, but as your users interact with that feature and expose areas for improvement, it becomes critical to re-train or fine-tune your own model to meet your needs exactly. Once developers have their own custom models, they will want to keep them private to retain their competitive advantage. For these developers, we’ve designed the Expert plan, which offers the ability to create private predictors on NatML. We charge a flat monthly subscription for this plan, along with any usage accrued from using Hub predictors (same as the Enthusiast plan).
Enhanced Support with the Enterprise Plan
Finally, for larger studios that require fine-grained control over their machine learning infrastructure, we’re offering an Enterprise plan. We’ll work closely with these studios to develop their custom ML models, deploy them with NatML, prioritize their bug fixes, and give them access to NatML’s internal source code. We’ll also provide the ability to host all of NatML on-premises, isolated from the public internet. With these offerings, larger studios can spend all their time building out their product instead of building ML infrastructure.
Updates Coming to Unity
We’ve also been working on a few exciting updates for Unity Engine. First on the list is full support for using CoreML graphs on iOS and macOS. This means you can drop a CoreML .mlmodel file in your Unity project and use it immediately. It also means that NatML will always leverage the Apple Neural Engine machine learning accelerator on Apple Silicon. Oh, and it means that NatML now supports Apple Silicon in addition to the Intel architecture (finally!).
We’re also adding enhanced feature types for working with media files:

- MLVideoFeature, for making predictions on video files; and
- MLAudioFeature, for making predictions on audio files (or audio tracks in video files).

With these feature types, we’re providing critical infrastructure for developers who want to use ML for video analysis, activity detection, background processing, and more.
On a final note, we’re finally wrapping up our internal infrastructure updates, so we’ll be shifting our focus almost exclusively to two things: writing more tutorials on our blog, and bringing more open-source models to NatML for you to experiment with. Till next time, happy coding!