Server consolidation is becoming an increasingly important part of major semiconductor manufacturer Advanced Micro Devices Inc.’s business, according to Kumaran Siva (pictured), corporate vice president of strategic business development at AMD.
He believes that companies upgrading aging servers not only cut costs substantially thanks to new-found efficiencies, but also free up capacity to run new enterprise workloads, such as artificial intelligence. That’s particularly relevant for firms that can’t move to the cloud for regulatory reasons.
Siva pointed to a “calculation where we showed 27, I think, five-year-old servers can be consolidated down into five AMD servers,” he said. New kinds of network cards, discussed below, are also part of the picture.
Siva spoke with theCUBE industry analysts Lisa Martin and David Nicholson at the recent VMware Explore event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how AMD remained bullish on hardware, despite an obvious shift by many digital-oriented enterprises to the cloud. (* Disclosure below.)
Shift from legacy servers saves money
Fully depreciated hardware, where the costs have already been written off, shouldn’t be reflexively retained just to save money, according to Siva. He points out that running costs are considerably lower with new servers and that the investment pays for itself.
“The power and the administration, the [operating] costs that are associated with it are greater than the cost of acquiring a new set, a smaller set of AMD servers,” he said.
That also ties well into a modern company’s sustainability goals: “TCO [total cost of ownership] involves energy consumption,” Siva added.
There are also hidden savings in shifting from legacy servers to new ones rather than transitioning to a cloud-based operation: “People have depreciated their data centers [already],” he said. “So, the cost for them to just go put in new AMD servers is actually very low compared to the cost of having to go buy a public cloud service.”
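As a rough illustration of the kind of consolidation math Siva describes, a back-of-envelope comparison might look like the sketch below. All figures are hypothetical placeholders chosen for the example, not AMD or VMware numbers, and the 27-to-5 ratio is simply taken from his quote.

```python
# Hypothetical back-of-envelope server-consolidation comparison.
# All prices, wattages and admin costs are illustrative assumptions.

LEGACY_SERVERS = 27          # aging five-year-old servers (from Siva's example)
NEW_SERVERS = 5              # modern replacements after consolidation

LEGACY_POWER_W = 450         # assumed average draw per legacy server (watts)
NEW_POWER_W = 350            # assumed average draw per new server (watts)
ENERGY_COST_KWH = 0.12       # assumed electricity price ($/kWh)
ADMIN_COST_PER_SERVER = 600  # assumed yearly admin cost per server ($)
NEW_SERVER_PRICE = 12_000    # assumed acquisition cost per new server ($)

HOURS_PER_YEAR = 24 * 365


def yearly_cost(count: int, power_w: int) -> float:
    """Yearly power plus administration cost for a fleet of servers."""
    energy = count * power_w / 1000 * HOURS_PER_YEAR * ENERGY_COST_KWH
    admin = count * ADMIN_COST_PER_SERVER
    return energy + admin


legacy_yearly = yearly_cost(LEGACY_SERVERS, LEGACY_POWER_W)
new_yearly = yearly_cost(NEW_SERVERS, NEW_POWER_W)
savings_per_year = legacy_yearly - new_yearly
payback_years = (NEW_SERVERS * NEW_SERVER_PRICE) / savings_per_year

print(f"Legacy fleet running cost:  ${legacy_yearly:,.0f}/year")
print(f"Consolidated running cost:  ${new_yearly:,.0f}/year")
print(f"Operational savings:        ${savings_per_year:,.0f}/year")
print(f"Hardware payback period:    {payback_years:.1f} years")
```

With these placeholder figures, the smaller fleet's lower power and administration bill pays back the new hardware in a few years, which is the general shape of the argument Siva makes, even though the actual numbers will vary by data center.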
The role of the data processing unit
Data processing units are set to play a growing part in AMD’s semiconductor offerings.
“What the DPU allows you to do is actually offload a bunch of functions directly onto the card,” Siva stated. He’s referring to work traditionally done on the CPU, such as the packet processing that a “dumb” network card simply hands across the PCI bus to the host, as well as virtual switching.
“You free up some of the CPU cores,” he said, explaining where the DPU fits in and how it differs from the CPU. “Makes your infrastructure run more efficiently, but probably even more importantly, it provides you with greater security, greater separation between the networking side and the CPU side.”
Point-to-point encryption within the network is also performed by the DPU card. Separating the network from the server amounts to “kind of a better network for enterprise,” Siva stated. “Every connection there is actually encrypted and managing those policies and orchestrating all of that — that’s done through the DPU card.”
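To make the “freed cores” argument concrete, a simple estimate along the following lines can help; the core counts and overhead percentages below are illustrative assumptions for the sketch, not measurements from AMD or VMware.

```python
# Hypothetical illustration of CPU cores reclaimed by offloading
# virtual switching and encryption to a DPU. Figures are assumptions.

CORES_PER_HOST = 64        # assumed cores in a modern dual-socket host
VSWITCH_OVERHEAD = 0.08    # assumed share of cores spent on virtual switching
CRYPTO_OVERHEAD = 0.05     # assumed share spent on point-to-point encryption
HOSTS_IN_CLUSTER = 100     # assumed cluster size

overhead_cores_per_host = CORES_PER_HOST * (VSWITCH_OVERHEAD + CRYPTO_OVERHEAD)
freed_cores_cluster = overhead_cores_per_host * HOSTS_IN_CLUSTER

print(f"Cores per host spent on networking/crypto: {overhead_cores_per_host:.1f}")
print(f"Cores returned to workloads across cluster: {freed_cores_cluster:.0f}")
```

Under these assumptions, offloading networking and crypto returns the equivalent of several whole servers’ worth of compute to application workloads, alongside the security benefit of keeping that processing physically separate from the host CPU.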
Additionally, the card, called the Pensando SmartNIC, handles compression. Support for VMware’s Project Monterey, along with vSphere security functionality, is also included.
“Cloud-like techniques [are now incorporated] into the mainstream on-premises enterprise,” he added.
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of VMware Explore:
(* Disclosure: Advanced Micro Devices Inc. sponsored this segment of theCUBE. Neither AMD nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE