Over the past decade or so, the ability of graphics processing units to handle many non-gaming workloads more efficiently than central processing units has helped Nvidia (NVDA) significantly grow its addressable market.
And in recent years, the company has taken things a step further by creating end-to-end solutions for non-gaming workloads that pair its GPUs with proprietary software for developers and others.
In that context, the new offerings and partnerships that Nvidia disclosed at the Mobile World Congress Americas conference extend its broader strategic push to have its GPUs power everything from AI/deep learning workloads to content-creation apps to autonomous driving systems.
During an MWC Americas keynote, Nvidia CEO Jensen Huang unveiled EGX, a platform for running so-called edge-computing workloads for manufacturers, retailers, municipal governments and other organizations that want or need to process large amounts of data close to where the data was created.
EGX effectively ties together a variety of GPUs and software solutions Nvidia has been providing for edge-computing deployments. In that sense, it has a bit in common with Nvidia's Drive platform for autonomous and semi-autonomous cars, as well as the recently launched Nvidia Studio platform for notebooks aimed at content creators.
Nvidia also unveiled Aerial, a software developer kit that allows off-the-shelf servers packing Nvidia GPUs to handle baseband processing and packet processing for 5G radio access networks (RANs) -- tasks traditionally handled by other types of chips that are placed inside of base stations. Aerial will be part of EGX.
In tandem with these announcements, Nvidia disclosed an expansion of its existing partnership with Microsoft's (MSFT) Azure cloud unit, through which Azure will have its IoT edge-computing and machine-learning services run on top of EGX. The company also said it was partnering with mobile infrastructure giant Ericsson (ERIC) and IBM-owned (IBM) Red Hat on Nvidia-powered solutions for 5G RANs.
Nvidia, which has already been working with Azure, Amazon (AMZN) Web Services and some server OEMs on solutions for the burgeoning edge-computing space, notes that existing EGX users include Walmart, BMW, Procter & Gamble, Samsung's semiconductor unit and the cities of Las Vegas and San Francisco.
Regarding Walmart, which is also a big Azure client, an Nvidia spokesperson mentioned that the retailer is using EGX in tandem with cameras and sensors to help it do everything from restocking shelves to opening up checkout lanes to guaranteeing the freshness of meat and produce. He added that a single Walmart store could use EGX to process more than 1.6 terabytes of data per second.
Nvidia is by no means the only chip developer that's going after the edge-computing market. Xilinx (XLNX) has been promoting the use of its FPGAs (hardware-programmable chips) for certain edge workloads, and Intel (INTC), whose Internet of Things Group has been seeing healthy growth, has been pitching both its FPGAs and Xeon server CPUs as edge-computing options.
This won't be a winner-take-all space. But GPUs are proving to be a good fit for video analytics and certain other AI/deep-learning workloads, and the launch of EGX should help Nvidia add to its recent momentum.
Meanwhile, in the 5G RAN space, Nvidia is looking to displace both Xilinx and Intel's FPGAs, as well as specialized base-station processors developed by independent chip suppliers such as Marvell Technology (MRVL) and by base-station suppliers such as Nokia (NOK), Ericsson and Samsung (SSNLF). And it's also competing against Intel's efforts to have Xeon CPUs handle RAN processing functions via off-the-shelf servers.
Much like Intel, Nvidia argues that centralizing RAN processing functions on commodity servers (i.e., creating virtual RANs) enables more efficient use of resources than tying resources to individual base stations. Also like Intel, the company argues that the software programmability of its chips lets them handle a wider set of workloads than traditional processing solutions, and support new workloads more quickly.
In the latter respect, GPUs aren't quite as versatile as CPUs. But thanks to their ability to handle large numbers of jobs in parallel, they can often be more efficient than CPUs at handling signal-processing workloads on 5G networks.
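To give a rough sense of why this kind of work suits GPUs: much of RAN baseband processing boils down to running the same transform over many independent antenna streams at once. The sketch below is purely illustrative (it is not Nvidia's Aerial code), using NumPy on the CPU as a stand-in; the stream count and symbol size are made-up values, and on a GPU the same batched call would typically run through a NumPy-compatible library such as CuPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical workload: 64 independent antenna streams, each carrying
# a 1024-sample time-domain symbol (complex baseband samples).
num_streams, fft_size = 64, 1024
samples = (rng.standard_normal((num_streams, fft_size))
           + 1j * rng.standard_normal((num_streams, fft_size)))

# One batched FFT demodulates every stream in a single call -- the same
# operation applied independently across rows, which is exactly the kind
# of embarrassingly parallel job a GPU spreads across thousands of threads.
subcarriers = np.fft.fft(samples, axis=1)

print(subcarriers.shape)  # (64, 1024): one row of subcarriers per stream
```

Because each row is processed independently, throughput scales with how many of these transforms the hardware can run side by side, which is where a GPU's parallelism pays off relative to a CPU working through the streams a few at a time.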
Time will tell just how much traction Nvidia sees for its virtual RAN solution. Certainly, the strong design-win activity that Xilinx and Marvell continue to see for their 5G infrastructure offerings makes it clear that more traditional RAN solutions aren't by any means going away. But some telcos could be drawn to the versatility and centralized nature of Nvidia's platform, and getting Ericsson's backing definitely doesn't hurt.
And more broadly speaking, Nvidia has shown a knack over the past decade for driving strong adoption of its GPUs for workloads they were once never considered for.