Run:AI, a startup developing orchestration and virtualization software for AI workloads, today announced it has raised $30 million. The company says it will use the investment to fund expansion and recruitment globally.
AI models are at the core of everything from voice assistants to cooling systems. What isn't so great is the time and effort required to train and fine-tune them: the datasets ingested by production algorithms comprise hundreds of thousands (or millions) of samples and can take even powerful machines weeks to process. New techniques promise to expedite model training, but not all of them generalize.
It's this perennial challenge that inspired Omri Geller, Ronen Dar, and Meir Feder to found Run:AI, a software provider developing a platform that autonomously speeds up AI development. Run:AI's software creates an abstraction layer that analyzes the computational characteristics of AI workloads and uses graph-based algorithms to minimize bottlenecks, effectively optimizing the workloads for faster and easier execution. Run:AI also allocates these workloads so that all available resources are fully utilized, taking into account factors like network bandwidth, compute resources, cost, and pipeline and data size.
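Run:AI has not published its graph-based allocation algorithms, but the general idea of cost-aware workload placement can be illustrated with a toy greedy scheduler. Everything below is hypothetical: the node and workload fields, the cost weighting, and the numbers are invented for the example.

```python
# Illustrative sketch only (not Run:AI's algorithm): greedily place each
# workload on the node with the lowest weighted "cost", factoring in free
# GPU memory, network bandwidth, and hourly price.

def place_workloads(workloads, nodes):
    """workloads: dicts with 'name' and 'gpu_mem' needed (GB).
    nodes: dicts with 'name', 'free_mem' (GB), 'bandwidth' (Gbps), 'price' ($/h)."""
    placement = {}
    for w in sorted(workloads, key=lambda w: -w["gpu_mem"]):  # biggest jobs first
        candidates = [n for n in nodes if n["free_mem"] >= w["gpu_mem"]]
        if not candidates:
            placement[w["name"]] = None  # no capacity: the job would be queued
            continue
        # Lower cost = cheaper per unit of bandwidth, with a small penalty
        # for leaving memory stranded (prefer tighter fits).
        best = min(
            candidates,
            key=lambda n: n["price"] / n["bandwidth"]
            + (n["free_mem"] - w["gpu_mem"]) * 0.01,
        )
        best["free_mem"] -= w["gpu_mem"]
        placement[w["name"]] = best["name"]
    return placement

nodes = [
    {"name": "a", "free_mem": 16, "bandwidth": 10, "price": 3.0},
    {"name": "b", "free_mem": 32, "bandwidth": 25, "price": 4.0},
]
workloads = [{"name": "train", "gpu_mem": 24}, {"name": "infer", "gpu_mem": 8}]
placement = place_workloads(workloads, nodes)
print(placement)  # → {'train': 'b', 'infer': 'b'}
```

A real scheduler would also model data locality and preemption; the point of the sketch is only that placement is an optimization over several resource dimensions at once, not a simple first-fit.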
Under the hood, Run:AI mathematically “breaks up” AI models into multiple fragments that run in parallel, an approach that has the added benefit of cutting down on hardware memory usage. This in turn enables models that would otherwise be constrained by hardware limitations (chiefly GPU memory) to run unimpeded. Run:AI claims its customers see GPU utilization increase from 25% to 75% on average.
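The "breaking up" described above is a form of model parallelism: each device holds only a fragment of the model, so no single device needs memory for the whole thing. A minimal, purely illustrative Python sketch (not Run:AI's implementation; the toy layers and the even split are invented for the example):

```python
# Conceptual sketch: partition a model's ordered layers into contiguous
# fragments, one per device, so each device stores only its own slice.

def split_into_fragments(layers, num_devices):
    """Evenly partition `layers` into `num_devices` contiguous fragments."""
    k, r = divmod(len(layers), num_devices)
    fragments, start = [], 0
    for i in range(num_devices):
        end = start + k + (1 if i < r else 0)
        fragments.append(layers[start:end])
        start = end
    return fragments

# Toy "layers": each is just a function applied in sequence.
layers = [lambda x, i=i: x + i for i in range(8)]
fragments = split_into_fragments(layers, num_devices=4)

def run_pipeline(x, fragments):
    # In real model parallelism each fragment runs on its own device,
    # passing activations along; here we simply run them in order.
    for frag in fragments:
        for layer in frag:
            x = layer(x)
    return x

print(run_pipeline(0, fragments))  # → 28, same as running all layers on one device
```

Splitting this way trades extra inter-device communication (activations crossing fragment boundaries) for a roughly proportional reduction in per-device memory, which is what lets models exceed a single GPU's capacity.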
Dar and Geller founded Run:AI in 2018 after studying together at Tel Aviv University under Feder, who specializes in information theory and previously led two startups to exits. Dar was a postdoctoral researcher at Bell Labs and an R&D and algorithms engineer at Apple, Anobit, and Intel. Geller served in an elite unit of the Israeli military, where he led large-scale projects and deployments.
They aren't the first to market with tech that optimizes algorithms on the fly: Ontario startup DarwinAI taps a technique called generative synthesis to ingest AI models and spit out highly optimized, compact versions of them. And Run:AI saw delays in Q2 2020 as businesses reorganized in response to the pandemic. But as of Q3 2020, Run:AI claims a customer base of "dozens" of businesses spanning the automotive, finance, defense, manufacturing, and health care industries, with "hundreds" of users.
In March, the 30-employee Run:AI announced the availability of a Kubernetes-based product that lets customers virtualize GPU resources specifically. And in July, the startup partnered with the London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare to provide technology for managing AI resources, with elastic resource allocation, visibility, and control. Run:AI says the Centre saw experiment speeds increase by 3,000% after installing the platform.
"The pace of AI development is incredible, but we're actually only scratching the surface when it comes to unleashing the full potential of this technology," Geller told VentureBeat via email. "In the same way better roads make mobility easier, we at Run:AI believe that building the best compute infrastructure will enable the smartest people on earth to do things they never thought possible and help humanity solve the unsolved."
Run:AI's Series B round, announced today, was led by Insight Partners with participation from existing investors TLV Partners and S-Capital. It brings the Tel Aviv-based company's total venture capital raised to $43 million, following a $13 million round in April 2019.