Intel has lifted the lid on new edge and AI technologies ahead of the Mobile World Congress conference, including new Xeon D chips with integrated acceleration features and an updated OpenVINO toolkit for AI inferencing.
The chipmaker said the world is moving towards software-defined network infrastructure, with computation increasingly happening at the edge. Modern networks call for programmable hardware and open software, Intel claimed, and this is what it is aiming for with its new and updated products.
Nick McKeown, VP and general manager for the Network and Edge Group at Intel, told us that the firm had made large investments in silicon and other technology over the past several years. He talked up Intel’s FlexRAN implementation of OpenRAN, which delivers the radio access network (RAN) functions in software, and claimed that “today, nearly all commercial vRAN deployments are running on Intel.”
Intel disclosed that the forthcoming (and delayed) Sapphire Rapids generation of its Xeon Scalable processors will feature 5G-specific signal processing instruction enhancements in the CPU cores to support RAN processing, and claimed these will double capacity and support the high cell densities needed for Massive MIMO.
Vodafone recently switched on the UK’s first 5G OpenRAN site, which according to Intel is built around Xeon processors and makes use of Intel workload acceleration and connectivity technologies.
The new Xeon D-1700 and D-2700 processors, codenamed Ice Lake D, are aimed at edge and 5G deployments. These feature a built-in 100Gbit Ethernet controller, plus support for Time Coordinated Computing (TCC) and Time Sensitive Networking (TSN). TCC provides time synchronisation and capabilities to support real-time applications, while TSN is a set of features added to Ethernet to enable quality of service (QoS) for greater robustness, reliability, and determinism in industrial settings.
Like other Ice Lake Xeons, the new chips feature Intel’s QuickAssist Technology (QAT) for cryptographic and compression acceleration, plus the AVX-512 extensions to boost AI processing. A full technical discussion of the new Xeon D processors can be found on The Next Platform.
For AI inferencing at the edge, Intel has also released a new version of its OpenVINO developer toolkit, which it claims offers a broader selection of deep learning models, more device portability choices, and higher inferencing performance.
“OpenVINO 2022.1 is easier to use with frameworks, it has broader model coverage for popular emerging models, and it automates optimisation,” said Adam Burns, Intel VP for OpenVINO Developer Tools.
According to Intel, the new version requires fewer code changes when transitioning from frameworks such as TensorFlow and PyTorch. It also features improved optimisation capabilities such as the ability to discover and automatically use available system features, Burns said.
“Customers use a wide variety of platforms at the edge,” he explained. “Systems can have many different combinations of CPU cores, integrated GPUs, or even discrete accelerators. 2022.1 automatically understands the hardware and optimises the application. As a developer, you don’t need to hand tune and understand the hardware nuances anymore.” ®
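The automatic hardware selection Burns describes is exposed in OpenVINO 2022.1's Python API as the "AUTO" virtual device. As a rough sketch, and assuming the 2022.1 `openvino.runtime` interface, compiling a model for whatever silicon is present looks something like this; the model path is a hypothetical placeholder, not something from the announcement:

```python
# Hedged sketch of OpenVINO 2022.1 device auto-selection, as described
# by Burns. "model.xml" is a hypothetical Intermediate Representation
# file supplied by the developer, not anything named in the article.
def compile_for_auto(model_path: str):
    """Compile a model for the "AUTO" virtual device, which probes the
    available CPU cores, integrated GPU, or discrete accelerators and
    picks an execution target without hand tuning by the developer."""
    from openvino.runtime import Core  # 2022.1 API (replaces the older IECore)
    core = Core()
    model = core.read_model(model_path)       # reads IR, ONNX, and other formats
    return core.compile_model(model, "AUTO")  # device chosen by the runtime
```

In earlier releases the application typically named an explicit device such as "CPU" or "GPU" when loading a network; the AUTO plugin moves that decision out of application code, which is the hand-tuning step Burns says developers no longer need.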