Cisco Systems Inc. is expanding its hardware portfolio with two data center appliance lineups optimized to run artificial intelligence models.

The systems debuted today at a partner event the company is hosting in Los Angeles.

The first new product line, the UCS C885A M8 series, comprises servers that can each accommodate up to eight graphics processing units. Cisco offers three GPU options: the H100 and H200, which are both supplied by Nvidia Corp., and Advanced Micro Devices Inc.'s rival MI300X chip.

Each graphics card in a UCS C885A M8 machine has its own network interface controller, or NIC. This is a specialized chip that acts as an intermediary between a server and the network to which it's attached. Cisco offers a choice between two Nvidia NICs: the ConnectX-7 or the BlueField-3, a so-called SuperNIC with additional components that speed up tasks such as encrypting data traffic.

Cisco also ships its new servers with BlueField-3 chips. These are so-called data processing units, or DPUs, likewise made by Nvidia. They speed up some of the tasks involved in managing the network and storage infrastructure attached to a server.

A pair of AMD central processing units perform the computations not relegated to the server's more specialized chips. Customers can choose between the chipmaker's latest fifth-generation CPUs or its 2022 server processor lineup.

Cisco debuted the server series alongside four so-called AI PODs. According to TechTarget, these are large data center appliances that combine up to 16 Nvidia graphics cards with networking gear and other supporting components. Customers can optionally add more hardware, notably storage equipment from NetApp Inc. or Pure Storage Inc.

On the software side, the AI PODs include a license to Nvidia AI Enterprise. This is a collection of prepackaged AI models and tools that companies can use to train their own neural networks. There are also more specialized components, such as the Nvidia Morpheus framework for building AI-powered cybersecurity software.

The suite is complemented by two other software products: HPC-X and Red Hat OpenShift. The former offering is an Nvidia-developed toolkit that helps customers optimize the networks that power their AI clusters. OpenShift, in turn, is a platform that eases the task of building and deploying container applications.
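For readers unfamiliar with that last point, the sketch below shows, in broad strokes, what programmatically deploying a containerized application to an OpenShift-style (Kubernetes-compatible) cluster can look like. It is a minimal illustration using the standard Kubernetes Python client; the image name, namespace, and replica count are hypothetical placeholders, not part of Cisco's AI POD configuration.

```python
# Minimal sketch: deploying a container to an OpenShift/Kubernetes-compatible
# cluster with the standard Kubernetes Python client. The image, namespace and
# replica count below are hypothetical placeholders for illustration only.
from kubernetes import client, config


def deploy_inference_app() -> None:
    # Load credentials from the local kubeconfig (e.g. created by `oc login`).
    config.load_kube_config()

    container = client.V1Container(
        name="inference-server",
        image="registry.example.com/inference-server:latest",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "inference-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference-server"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="inference-server"),
        spec=spec,
    )

    # Create the Deployment object in the target namespace.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )


if __name__ == "__main__":
    deploy_inference_app()
```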

"Enterprise customers are under pressure to deploy AI workloads, especially as we move toward agentic workflows and AI begins solving problems on its own," said Cisco Chief Product Officer Jeetu Patel. "Cisco innovations like AI PODs and the GPU server strengthen the security, compliance, and processing power of those workloads."

Cisco will make the AI PODs available for order next month. The UCS C885A M8 server series, in turn, is orderable now and will start shipping to customers by the end of the year.

Photo: Cisco
