{"id":102224,"date":"2025-11-14T17:06:16","date_gmt":"2025-11-14T17:06:16","guid":{"rendered":"https:\/\/mailinvest.blog\/index.php\/2025\/11\/14\/gpu-monsters-eat-supercomputing-legacy-storage-starves-the-register\/"},"modified":"2025-11-14T17:07:41","modified_gmt":"2025-11-14T17:07:41","slug":"gpu-monsters-eat-supercomputing-legacy-storage-starves-the-register","status":"publish","type":"post","link":"https:\/\/mailinvest.blog\/index.php\/2025\/11\/14\/gpu-monsters-eat-supercomputing-legacy-storage-starves-the-register\/","title":{"rendered":"GPU monsters eat supercomputing, legacy storage starves \u2022 The Register"},"content":{"rendered":"<p> <a href=\"https:\/\/go.fiverr.com\/visit\/?bta=1052423&nci=17043\" Target=\"_Top\"><img loading=\"lazy\" decoding=\"async\" border=\"0\" src=\"https:\/\/mailinvest.blog\/wp-content\/themes\/breek\/assets\/images\/transparent.gif\" data-lazy=\"true\" data-src=\"https:\/\/fiverr.ck-cdn.com\/tn\/serve\/?cid=40081059\"  width=\"601\" height=\"201\"><\/a>\n<\/p>\n<div id=\"body\">\n<p>The supercomputing panorama is fracturing. What as soon as was a comparatively unified world of large multi-processor x86 methods has splintered into competing architectures, every racing to serve radically totally different masters: conventional educational workloads, extreme-scale physics simulations, and the voracious urge for food of AI coaching runs.<\/p>\n<p>On the heart of this upheaval stands Nvidia, whose GPU revolution has not simply made inroads, and it has detonated the previous order totally.<\/p>\n<p>The results are stark. Legacy storage methods that powered many years of scientific breakthroughs now buckle beneath AI&#8217;s relentless, random I\/O storms. Services designed for sequential throughput face a brand new actuality the place metadata can eat 20 p.c of all I\/O operations. 
And as GPU clusters scale into the hundreds, a brutal financial reality emerges: each second of GPU idle time bleeds cash, reworking storage from a help operate right into a make-or-break aggressive benefit.<\/p>\n<div aria-hidden=\"true\" class=\"adun\" data-pos=\"top\" data-raptor=\"condor\" data-xsm=\",fluid,mpu,dmpu,\" data-sm=\",fluid,mpu,dmpu,\" data-md=\",fluid,mpu,dmpu,\">\n        <noscript><br \/>\n            <a href=\"https:\/\/pubads.g.doubleclick.net\/gampad\/jump?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=2&amp;c=2aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D2%26raptor%3Dcondor%26pos%3Dtop%26test%3D0\" target=\"_blank\"><br \/>\n                <img decoding=\"async\" src=\"https:\/\/mailinvest.blog\/wp-content\/themes\/breek\/assets\/images\/transparent.gif\" data-lazy=\"true\" data-src=\"https:\/\/pubads.g.doubleclick.net\/gampad\/ad?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=2&amp;c=2aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D2%26raptor%3Dcondor%26pos%3Dtop%26test%3D0\" alt=\"\"\/><br \/>\n            <\/a><br \/>\n        <\/noscript>\n    <\/div>\n<p>We sat down with Ken Claffey, CEO of VDURA, to know how this seismic shift is forcing an entire rethink of supercomputing infrastructure, from {hardware} to software program, from structure to economics.<\/p>\n<div aria-hidden=\"true\" class=\"adun\" data-pos=\"top\" data-raptor=\"falcon\" data-xmd=\",fluid,mpu,leaderboard,\" data-lg=\",fluid,mpu,leaderboard,\" data-xlg=\",fluid,billboard,superleaderboard,mpu,leaderboard,\" data-xxlg=\",fluid,billboard,superleaderboard,brandwidth,brandimpact,leaderboard,mpu,\">\n            <noscript><br \/>\n                <a 
href=\"https:\/\/pubads.g.doubleclick.net\/gampad\/jump?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=4&amp;c=44aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0\" target=\"_blank\"><br \/>\n                    <img decoding=\"async\" src=\"https:\/\/mailinvest.blog\/wp-content\/themes\/breek\/assets\/images\/transparent.gif\" data-lazy=\"true\" data-src=\"https:\/\/pubads.g.doubleclick.net\/gampad\/ad?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=4&amp;c=44aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D426raptor%3Dfalcon%26pos%3Dmid%26test%3D0\" alt=\"\"\/><br \/>\n                <\/a><br \/>\n            <\/noscript>\n        <\/div>\n<div class=\"adun_eagle_desktop_story_wrapper\">\n<div aria-hidden=\"true\" class=\"adun\" data-pos=\"mid\" data-raptor=\"eagle\" data-xxlg=\",mpu,dmpu,\">\n                <noscript><br \/>\n                    <a href=\"https:\/\/pubads.g.doubleclick.net\/gampad\/jump?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=3&amp;c=33aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0\" target=\"_blank\"><br \/>\n                        <img decoding=\"async\" src=\"https:\/\/mailinvest.blog\/wp-content\/themes\/breek\/assets\/images\/transparent.gif\" data-lazy=\"true\" data-src=\"https:\/\/pubads.g.doubleclick.net\/gampad\/ad?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=3&amp;c=33aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0\" alt=\"\"\/><br \/>\n                    <\/a><br \/>\n     
           <\/noscript>\n            <\/div>\n<\/p><\/div>\n<p><strong>Blocks &amp; Information:<\/strong> How do you outline a supercomputer and an HPC system? What are the variations between them?<\/p>\n<p><strong>Ken Claffey:<\/strong> The strains are undoubtedly gray and more and more blurred. Traditionally the delineation has actually been concerning the dimension (variety of nodes) of the system, as Linux clusters of commodity servicers grew to become the defacto constructing block (vs beforehand customized supercomputers just like the early Cray methods or NEC vector supercomputers). Right now the normal segmentation of Workgroup, Division, Divisional and Supercomputer most likely wants extra updating, as a small GPU cluster&#8217;s greenback worth is now such that it might be categorized by the analysts as a supercomputer sale.<\/p>\n<div aria-hidden=\"true\" class=\"adun\" data-pos=\"top\" data-raptor=\"falcon\" data-xsm=\",fluid,mpu,dmpu,\" data-sm=\",fluid,mpu,dmpu,\" data-md=\",fluid,mpu,dmpu,\">\n            <noscript><br \/>\n                <a href=\"https:\/\/pubads.g.doubleclick.net\/gampad\/jump?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=4&amp;c=44aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0\" target=\"_blank\"><br \/>\n                    <img decoding=\"async\" src=\"https:\/\/mailinvest.blog\/wp-content\/themes\/breek\/assets\/images\/transparent.gif\" data-lazy=\"true\" data-src=\"https:\/\/pubads.g.doubleclick.net\/gampad\/ad?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=4&amp;c=44aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D426raptor%3Dfalcon%26pos%3Dmid%26test%3D0\" alt=\"\"\/><br \/>\n                <\/a><br \/>\n            <\/noscript>\n        
<\/div>\n<p><strong>Blocks &amp; Information:<\/strong> What totally different sorts of supercomputer are there, and do they differ by workload and processors?<\/p>\n<p><strong>Ken Claffey:<\/strong> Not all supercomputers are the identical. There are Linux Cluster supercomputers.\u00a0These dominate at present\u2019s Top500 record. They&#8217;re constructed from hundreds of commodity servers linked through InfiniBand or Ethernet or proprietary interconnects. Variants embrace:<\/p>\n<ul>\n<li>Massively parallel clusters\u00a0with distributed reminiscence (e.g., the DOE&#8217;s Frontier). Every node runs its personal OS and communicates through message passing.<\/li>\n<li>Commodity clusters constructed from off-the-shelf x86\/GPU servers; hyperscale AI clusters fall right here.<\/li>\n<\/ul>\n<p>Completely different workloads favor totally different architectures; CPU-heavy or GPU-heavy, or memory-centric. Climate and physics simulations profit from vector or massively parallel clusters with low latency interconnects.<\/p>\n<p>Fashionable AI coaching typically makes use of GPU heavy commodity clusters.<\/p>\n<p>Particular function methods serve slim domains like cryptography or sample matching, however are gaining traction once more in AI-related use instances, particularly for Inference, Grok, SambaNova and so on.<\/p>\n<div aria-hidden=\"true\" class=\"adun\" id=\"story_eagle_xsm_sm_md_xmd_lg_xlg\" data-pos=\"mid\" data-raptor=\"eagle\" data-xsm=\",mpu,dmpu,\" data-sm=\",mpu,dmpu,\" data-md=\",mpu,dmpu,\" data-xmd=\",mpu,dmpu,\" data-lg=\",mpu,dmpu,\" data-xlg=\",mpu,dmpu,\">\n            <noscript><br \/>\n                <a href=\"https:\/\/pubads.g.doubleclick.net\/gampad\/jump?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=3&amp;c=33aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0\" target=\"_blank\"><br \/>\n    
                <img decoding=\"async\" src=\"https:\/\/mailinvest.blog\/wp-content\/themes\/breek\/assets\/images\/transparent.gif\" data-lazy=\"true\" data-src=\"https:\/\/pubads.g.doubleclick.net\/gampad\/ad?co=1&amp;iu=\/6978\/reg_specialfeatures\/202511supercomputingmonth&amp;sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&amp;tile=3&amp;c=33aRdhiKnkjdKtgQOODnQK1AAAAUc&amp;t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0\" alt=\"\"\/><br \/>\n                <\/a><br \/>\n            <\/noscript>\n        <\/div>\n<p><strong>Blocks &amp; Information:<\/strong> Is an Nvidia NVL72 rack-scale GPU server a supercomputer?<\/p>\n<p><strong>Ken Claffey:<\/strong> Nvidia describes its GB200 NVL72 as an \u201cexascale AI supercomputer in a rack.\u201d Every NVL72 encloses 18 compute trays (72\u00a0Blackwell GPUs coupled with Grace CPUs) tied collectively by fifth era NVLink switches delivering 130\u00a0TBps of interconnect bandwidth. The NVLink cloth creates a single unified reminiscence area with\u00a0over 1\u00a0petabyte per second\u00a0combination bandwidth, and one NVL72 rack can ship 80\u00a0petaflops of AI efficiency with 1.7\u00a0TB of unified HBM reminiscence.<\/p>\n<p>From a purist HPC perspective, a single NVL72 is extra precisely a\u00a0rackscale constructing block\u00a0than a full supercomputer, it lacks the exterior storage and cluster administration layers wanted for full blown HPC. However when tens or a whole bunch of NVL72 racks are interconnected with high-performance storage (for instance, VDURA V5000), the ensuing system completely qualifies as a supercomputer. So NVL72 sits on the boundary: a particularly dense GPU cluster that may be half of a bigger HPC system.<\/p>\n<p><strong>Blocks &amp; Information:<\/strong> Do you suppose the Nvidia GPU <a href=\"https:\/\/blocksandfiles.com\/2022\/04\/30\/hbm-2\/\">HBM<\/a> will or can switch to different varieties of supercomputer? 
Why did Nvidia get HBM developed and not other supercomputer types?</p>
<p><strong>Ken Claffey:</strong> High Bandwidth Memory (HBM) stacks DRAM dies using through-silicon vias to provide 1,024-bit-wide interfaces; HBM3e can deliver up to 1.8 TB/s per GPU. HBM isn't unique to Nvidia: AMD's MI300A/MI300X, Intel's Ponte Vecchio, and many AI accelerators use HBM, because streaming data at terabyte-per-second speeds is essential for feeding hungry cores. HBM adoption depends on economics and package design: GPUs can justify the cost because they deliver very high flops per watt, while general-purpose CPUs typically rely on DDR/LPDDR memory with lower bandwidth.</p>
<p>Nvidia's leadership in GPU HBM has been driven by AI's insatiable demand for memory bandwidth. GPU vendors co-design the silicon with HBM suppliers (Samsung, Micron, SK Hynix) to maximize bandwidth. Traditional supercomputer vendors often focus on CPU-centric workloads where large DDR memory footprints matter more than raw bandwidth. We expect HBM to proliferate in GPU-based AI systems and some CPU architectures, but commodity servers will continue to balance cost and capacity with DDR memory. Ultimately, memory technology will spread where the economics make sense.</p>
<p><strong>Blocks &amp; Files:</strong> How is the world of supercomputing reacting to AI workloads such as training and inference?</p>
<p><strong>Ken Claffey:</strong> The AI revolution has turned HPC facilities into AI factories. It is clear from customers that their application landscape is changing as their users deploy more and more AI-based applications, which creates new challenges for the HPC infrastructure as they increase the number of GPUs in their clusters. 
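The pressure that growing GPU counts put on storage can be made concrete with a little arithmetic. The per-GPU feed rates below are the figures commonly cited from Nvidia's DGX sizing guidance (0.5 GBps reads, 0.25 GBps writes per GPU); treat them as illustrative assumptions, not requirements for any particular system:

```python
# Hedged sketch: aggregate storage bandwidth needed to keep a GPU cluster fed.
# Per-GPU rates follow commonly cited Nvidia DGX guidance and are assumptions,
# not measured requirements for a specific deployment.

READ_GBPS_PER_GPU = 0.5
WRITE_GBPS_PER_GPU = 0.25

def required_bandwidth(num_gpus: int) -> tuple[float, float]:
    """Return (read, write) aggregate bandwidth in TBps for a cluster."""
    read_tbps = num_gpus * READ_GBPS_PER_GPU / 1000
    write_tbps = num_gpus * WRITE_GBPS_PER_GPU / 1000
    return read_tbps, write_tbps

for n in (1_000, 10_000, 100_000):
    r, w = required_bandwidth(n)
    print(f"{n:>7} GPUs: {r:.1f} TBps read, {w:.2f} TBps write")
```

The point of the sketch is that the requirement scales linearly with GPU count, so every order-of-magnitude jump in cluster size pushes the storage layer into a different class of system.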
This in turn affects storage, as AI applications are GPU-centric and create spiky, random I/O patterns, causing metadata to become 10–20 percent of I/O. Both training and inference require sustained throughput: Nvidia recommends 0.5 GBps of reads and 0.25 GBps of writes per GPU for DGX B200 servers, and up to 4 GBps per GPU for vision workloads. That means a 10,000-GPU cluster needs 5 TBps of read and 2.5 TBps of write bandwidth.</p>
<p>To meet this demand, HPC centers are embracing parallel file systems and NVMe-first architectures. AI training still relies on high-throughput parallel file systems to feed GPUs and handle massive checkpointing, while inference workloads shift toward object stores and key-value semantics, requiring strong metadata performance and multi-tenancy. The rise of GPU accelerators has shifted I/O patterns from large sequential writes to highly random, small-file operations. Consequently:</p>
<ul>
<li>HPC facilities are upgrading networks to InfiniBand NDR and 400 Gb/s Ethernet, and deploying NVMe-based storage servers to keep GPUs saturated.</li>
<li>Vendors are adding <a href="https://blocksandfiles.com/2022/04/30/gpudirect/">GPUDirect</a> and RDMA-based I/O paths to bypass CPU bottlenecks and reduce latency.</li>
<li>AI and HPC teams increasingly treat data pipelines as production lines, emphasizing resilience and automation. VDURA's white paper highlights how GPU idle time and slow checkpointing waste money, prompting new storage architectures that minimize stalls.</li>
</ul>
<p><strong>Blocks &amp; Files:</strong> How has supercomputing and HPC storage evolved? 
What are the main threads?</p>
<p><strong>Ken Claffey:</strong> HPC storage has evolved from proprietary, hardware-bound architectures to software-defined, scale-out systems designed for AI and GPU-driven workloads. Moreover, while HPC storage was very much designed around temporary, performant /scratch file systems, AI is more focused on sustained performance and a broader SLA that cares much more about operational reliability.</p>
<ul>
<li>From proprietary to software-defined: Early HPC relied on closed systems with HA pairs and dedicated RAID controllers. Modern platforms have shifted to SDS models aligned with hyperscaler designs: shared-nothing architectures that scale horizontally across commodity hardware with NVMe nodes and open supply chains.</li>
<li>Flash and HDDs, not flash-only: The move from HDD to NVMe flash brought massive performance gains, but efficiency at scale now depends on using the full spectrum of media (SLC, TLC, and QLC flash, plus CMR/<a href="https://blocksandfiles.com/2022/05/06/smr/">SMR</a> HDDs) to balance throughput, IOPS, endurance, and cost.</li>
<li>Metadata and automation: AI's billions of small files make metadata an increasingly likely performance bottleneck and a growing share of the data stored, say 10–20 percent. VDURA's VeLO distributed metadata engine eliminates this bottleneck, supporting billions of operations with ultra-low latency.</li>
<li>Operational reliability and resilience at scale: Legacy node-local RAID has been replaced by network-level erasure coding for greater resiliency to failures, increasing durability and availability. 
VDURA goes even further with multi-level erasure coding (MLEC), which achieves higher availability and up to 12 nines of durability, ensuring continuous operation.</li>
</ul>
<p>HPC storage has evolved into AI-ready, software-defined infrastructure: flash-first, media-aware, metadata-accelerated, and operationally resilient enough to keep pace with the fastest GPUs 24x7x365.</p>
<p><strong>Blocks &amp; Files:</strong> What are the main supercomputer storage systems, and how do they differ?</p>
<p><strong>Ken Claffey:</strong> Supercomputing storage has diverged along a clear line between legacy, hardware-bound systems and modern, software-defined architectures built for AI and data-intensive workloads.</p>
<div class="CaptionedImage Border width_85"><a href="https://regmedia.co.uk/2025/11/12/vdura.jpg" target="_blank"><img loading="lazy" decoding="async" src="https://regmedia.co.uk/2025/11/12/vdura.jpg?x=648&amp;y=597&amp;infer_y=1" alt="VDURA vs other file systems" title="VDURA vs other file systems" height="597" width="648"/></a>
<p class="text_center">VDURA vs other file systems. Click to enlarge</p>
</div>
<p>The industry is moving on from hardware-defined "systems" (controller pairs, proprietary arrays) to software-defined storage (SDS) "platforms" that run on commodity NVMe and HDD media. SDS enables faster innovation, mixed-media tiering (SLC, TLC, and QLC flash plus CMR/SMR HDD), metadata acceleration, and cloud-like scalability: the foundation of VDURA's architecture.</p>
<p><strong>Blocks &amp; Files:</strong> Why are there so many of them? 
Are they suited to different supercomputing workloads?</p>
<p><strong>Ken Claffey:</strong> While the HPC ecosystem appears diverse, only a small group of file systems have been proven at production scale across thousands of environments. Many others remain research projects or niche deployments.</p>
<ul>
<li>Legacy systems vs. software-defined platforms: Legacy HPC file systems like Lustre or GPFS are hardware-tied and manually scaled. Modern parallel file systems such as VDURA's PanFS represent software-defined platforms that separate the control and data planes, align with hyperscaler-style shared-nothing architectures, and run on commodity NVMe and HDD supply chains.</li>
<li>Projects vs. products: Open-source efforts (e.g., <a href="https://blocksandfiles.com/2025/04/15/daos-post-optane-resurrection/">DAOS</a>) push innovation but often remain project-grade, while commercial SDS platforms evolve, through long-term investment and continuous development, into hardened products that balance performance, manageability, and long-term support.</li>
<li>Workload alignment: AI and HPC workloads vary widely; some stream multi-terabyte sequential data, others read billions of tiny files randomly. No single file system can optimize for all cases, so purpose-built storage is replacing general-purpose designs like NAS- and SAN-based systems. 
Hybrid SDS platforms like VDURA's integrate flash and HDD tiers, handle metadata acceleration, offer nearly unlimited linear performance scalability, and deliver the availability and durability that today's AI factories demand.</li>
</ul>
<p>There may be many names in HPC storage, but only a few truly operate at scale in production environments, and the clear direction is away from legacy hardware systems toward flexible, software-defined, purpose-built data platforms.</p>
<p><strong>Blocks &amp; Files:</strong> Why has DAOS not become more popular?</p>
<p><strong>Ken Claffey:</strong> DAOS is an open-source project. At this point, it's viewed more as a set of technologies than a finished product. It's now housed at HPE, and I expect they'll invest to make it a true product, much like I did with Lustre at ClusterStor. That will take several years of heavy investment, large-scale deployments, and operational maturity to take it from "project" to "product."</p>
<p><strong>Blocks &amp; Files:</strong> How might VDURA use DAOS? Could PanFS evolve to use DAOS concepts?</p>
<p><strong>Ken Claffey:</strong> We see the key-value store (KVS) metadata approach as directionally correct, much like how PanFS has long operated with its own built-in KVS. This same concept is now reflected in the VDURA Data Platform, where we have further advanced and scaled our metadata engine to meet the demands of modern AI and HPC workloads.</p>
<p><strong>Blocks &amp; Files:</strong> There are IOPS and throughput. Tell me why throughput matters for AI workloads.</p>
<p><strong>Ken Claffey:</strong> IOPS (input/output operations per second) measures how many small 4 KiB operations a storage system can perform. 
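The relationship between IOPS and delivered bandwidth is just multiplication by block size, and working it through shows why a large small-block IOPS figure can still mean modest streaming performance. This is a generic illustration, not tied to any particular product:

```python
# Generic illustration: delivered bandwidth = IOPS x block size.
# A seemingly huge small-block IOPS number translates into modest
# streaming bandwidth once the tiny block size is accounted for.

KIB = 1024

def bandwidth_gbps(iops: float, block_bytes: int) -> float:
    """Bandwidth in GB/s (decimal GB) for a given IOPS rate and block size."""
    return iops * block_bytes / 1e9

# One million 4 KiB operations per second moves only about 4.1 GB/s,
# while the same byte rate at 1 MiB sequential reads needs only ~3,900 IOPS.
small_block = bandwidth_gbps(1_000_000, 4 * KIB)
print(f"1M x 4 KiB ops/s = {small_block:.3f} GB/s")
```

In other words, a million 4 KiB IOPS is a rounding error next to the multi-TBps aggregate read rates discussed above, which is why the two metrics answer different questions.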
It's a fine metric for transactional databases and VMs, but AI and HPC workloads stream large datasets and checkpoints. Focusing on IOPS can mislead: AI workloads are throughput-driven, measured in GBps or TBps, because they move large, sequential datasets. High bandwidth ensures that GPUs stay busy and that checkpointing doesn't stall training. Parallel file systems distribute data across many nodes to deliver this aggregate bandwidth. Without sufficient throughput, GPUs are starved and expensive compute cycles are wasted.</p>
<p>VDURA's V5000 system delivers more than 60 GBps per node and more than 2 TBps per rack. This ensures that AI pipelines are limited by model complexity, not storage. VDURA also provides up to 100 million IOPS per rack, so it handles metadata-heavy inference workloads as well. The lesson: throughput and IOPS both matter, but for AI training, throughput is king.</p>
<p><strong>Blocks &amp; Files:</strong> Do parallel storage systems bring particular advantages to supercomputers that non-parallel (serial?) storage systems can't provide?</p>
<p><strong>Ken Claffey:</strong> Absolutely. Non-parallel NAS systems like NetApp ONTAP rely on a small number of controllers handling I/O. As I previously pointed out, general-purpose NAS can't deliver the throughput or resiliency required for AI. NetApp's AFX is their attempt at a parallel file system. Mainstream storage systems were designed for general-purpose computing.</p>
<p>In a clear acknowledgement of advanced computing in AI, NetApp has recognized that it needs a new type of product that is a parallel file system. 
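The economics behind "checkpointing doesn't stall training" can be sketched with simple arithmetic: while a synchronous checkpoint drains to storage, every GPU in the job waits. The checkpoint size, write bandwidths, and GPU-hour price below are illustrative assumptions, not vendor figures:

```python
# Hedged sketch: cost of GPUs idling while a synchronous checkpoint is written.
# All inputs are illustrative assumptions, not vendor figures.

def stall_cost(checkpoint_tb: float, write_tbps: float,
               num_gpus: int, gpu_hour_usd: float) -> tuple[float, float]:
    """Return (stall_seconds, wasted_usd) for one synchronous checkpoint."""
    stall_s = checkpoint_tb / write_tbps              # time the job waits on storage
    wasted = stall_s / 3600 * num_gpus * gpu_hour_usd  # idle GPU spend during the stall
    return stall_s, wasted

# A hypothetical 10 TB checkpoint on 10,000 GPUs at an assumed $2/GPU-hour:
slow = stall_cost(10, 0.1, 10_000, 2.0)   # 0.1 TBps write bandwidth
fast = stall_cost(10, 2.5, 10_000, 2.0)   # 2.5 TBps write bandwidth
print(f"slow storage: {slow[0]:.0f} s stall, ${slow[1]:.0f} wasted per checkpoint")
print(f"fast storage: {fast[0]:.0f} s stall, ${fast[1]:.0f} wasted per checkpoint")
```

Multiplied across checkpoints taken every few minutes for weeks, the gap between the two lines is what turns storage bandwidth into a line item on the training bill.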
They weren't ready for the future, and now they're trying to catch up.</p>
<p><strong>Blocks &amp; Files:</strong> Is GPUDirect a way of making non-parallel storage systems, like NetApp's, effectively parallel?</p>
<p><strong>Ken Claffey:</strong> No. If you're not parallel, you're limited to how fast the one path can go. Sure, GPUDirect can make that one path go faster, but that's not as scalable as a parallel file system that can go down many paths simultaneously, especially when those parallel paths are GPUDirect-enabled.</p>
<p><strong>Blocks &amp; Files:</strong> Now that VDURA's PanFS supports GPUDirect, how else might VDURA adapt it to serve Nvidia GPU servers better? For example, KV cache offload.</p>
<p><strong>Ken Claffey:</strong> We are working on things in this area; stay tuned. ®</p>
</div>
<p><a href="https://go.theregister.com/feed/www.theregister.com/2025/11/14/evolving_supercomputers_hpc_ai_and/">Source link</a></p>
blog\\\/index.php\\\/author\\\/adminmailinvest-blog\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"GPU monsters eat supercomputing, legacy storage starves \u2022 The Register - mailinvest.blog","description":"Technology is forever changing, and there are always new pieces of technology to replace obsolete ones. Tons of people enjoy reading tech blogs on a daily basis.mailinvest.blog tracks all the latest consumer technology breakthroughs and shows you what's new, what matters and how technology can enrich your life. mailinvest.blog also provides the information, tools, and advice that helps when deciding what to buy.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/mailinvest.blog\/index.php\/2025\/11\/14\/gpu-monsters-eat-supercomputing-legacy-storage-starves-the-register\/","og_locale":"en_US","og_type":"article","og_title":"GPU monsters eat supercomputing, legacy storage starves \u2022 The Register - mailinvest.blog","og_description":"Technology is forever changing, and there are always new pieces of technology to replace obsolete ones. Tons of people enjoy reading tech blogs on a daily basis.mailinvest.blog tracks all the latest consumer technology breakthroughs and shows you what's new, what matters and how technology can enrich your life. 
mailinvest.blog also provides the information, tools, and advice that helps when deciding what to buy.","og_url":"https:\/\/mailinvest.blog\/index.php\/2025\/11\/14\/gpu-monsters-eat-supercomputing-legacy-storage-starves-the-register\/","og_site_name":"mailinvest.blog","article_publisher":"https:\/\/www.facebook.com\/freelanceracademic\/","article_published_time":"2025-11-14T17:06:16+00:00","article_modified_time":"2025-11-14T17:07:41+00:00","og_image":[{"width":2000,"height":1000,"url":"https:\/\/mailinvest.blog\/wp-content\/uploads\/2025\/11\/shutterstock_1915654084.jpg","type":"image\/jpeg"}],"author":"admin@mailinvest.blog","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin@mailinvest.blog","Est. reading time":"12 minutes"}},"_links":{"self":[{"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/posts\/102224","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/posts"}],"ab
out":[{"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/comments?post=102224"}],"version-history":[{"count":1,"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/posts\/102224\/revisions"}],"predecessor-version":[{"id":102226,"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/posts\/102224\/revisions\/102226"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/media\/102225"}],"wp:attachment":[{"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/media?parent=102224"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/categories?post=102224"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mailinvest.blog\/index.php\/wp-json\/wp\/v2\/tags?post=102224"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}