Developing efficient and reliable applications that involve complicated components such as artificial intelligence and machine learning is difficult even for experienced software engineers. So let’s face it: any prebuilt model or template that smooths the way to a useful solution in such a development project is a welcome shortcut.
Deci has positioned itself as a deep-learning software provider that uses AI to build AI-powered apps. It has now introduced classification models, dubbed DeciNets, designed specifically for servers using Intel Cascade Lake central processing units (CPUs).
While graphics processing units (GPUs) have conventionally been the hardware of choice for running power-intensive convolutional neural networks, CPUs, far more commonly used for general computing, are a much less expensive alternative for the many enterprises that already have them on hand. Although it is possible to run deep-learning inference on CPUs, they are significantly less powerful than GPUs; deep learning models typically run three to 10 times slower on a CPU than on a GPU.
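Gaps like the three-to-10-times figure above are typically established by timing the same model's forward pass on each device: a few warm-up calls, then the mean over several timed runs. A minimal, library-free sketch of that measurement pattern (the `workload` here is a hypothetical stand-in; a real benchmark would call the model's inference function):

```python
import time

def mean_latency_ms(fn, warmup=3, runs=10):
    """Measure a callable the way inference latency is usually reported:
    warm-up calls first (so one-time setup costs don't skew the result),
    then the mean wall-clock time over several runs, in milliseconds."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000

# Stand-in workload; in practice fn would be a model forward pass
# executed once on the CPU and once on the GPU for comparison.
workload = lambda: sum(i * i for i in range(100_000))
print(f"mean latency: {mean_latency_ms(workload):.2f} ms")
```

Comparing the number this prints for a CPU run against a GPU run of the same model gives the speed ratio the article refers to.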
Closing the CPU performance gap
Deci, a three-year-old startup that recognized this issue, has a solution for closing the speed gap between GPU and CPU performance for convolutional neural networks. Its proprietary Automated Neural Architecture Construction (AutoNAC) technology automatically generates new DeciNets that significantly improve on all published models, delivering a more than twofold runtime improvement. This is coupled with improved accuracy, the company claimed, compared to the most powerful publicly available models, Google's EfficientNets.
Using these DeciNets models, tasks that previously could not be executed on a CPU because they were too resource-intensive are now possible, Yonatan Geifman, cofounder and CEO of Deci, said in a media advisory.
“Additionally, these tasks will see a marked performance improvement by leveraging DeciNets; the gap between a model’s inference performance on a GPU versus a CPU is cut in half, without sacrificing the model’s accuracy,” Geifman said.
Deci aims not only to find the most accurate models, but also to uncover the most resource-efficient models that work seamlessly in production.
“This combination of effectiveness and accuracy constitutes the ‘holy grail’ of deep learning,” Geifman said. “AutoNAC creates the best computer vision models to date, and now, the new class of DeciNets can be applied and effectively run AI applications on CPUs.”
In March 2021, Deci and Intel announced a strategic collaboration to optimize deep-learning inference on Intel Architecture CPUs. Prior to this, Deci and Intel worked together on MLPerf, where Deci’s AutoNAC technology accelerated the inference speed of the well-known ResNet50 neural network on several popular Intel CPUs, reducing the submitted models’ latency by a factor of up to 11.8 and increasing throughput by up to 11 times, the company said.

Deci has a major competitor in OctoML, a startup that similarly purports to automate machine learning optimization with proprietary tools and processes. Others in the market include DeepCube, Neural Magic, and DarwinAI, which uses what it calls “generative synthesis” to ingest models and spit out highly optimized versions.
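To make those MLPerf figures concrete, here is what an 11.8x latency reduction and an 11x throughput increase mean in absolute terms. The baseline numbers below are invented for illustration; only the two speedup factors come from the company's claim:

```python
# Hypothetical baseline figures (not from the article) for an
# unoptimized ResNet50 running on a CPU.
baseline_latency_ms = 118.0   # assumed time per image, milliseconds
baseline_throughput = 100.0   # assumed images processed per second

# Speedup factors reported for Deci's MLPerf submissions.
latency_factor = 11.8         # latency reduced by up to 11.8x
throughput_factor = 11        # throughput increased by up to 11x

optimized_latency_ms = baseline_latency_ms / latency_factor
optimized_throughput = baseline_throughput * throughput_factor

print(f"latency: {optimized_latency_ms:.1f} ms per image")   # 10.0 ms
print(f"throughput: {optimized_throughput:.0f} images/sec")  # 1100 images/sec
```

Note that latency and throughput need not improve by the same factor: batching lets a system trade per-request latency for aggregate throughput, which is why the two MLPerf numbers differ.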
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.