Spanish AI firm Multiverse Computing has launched HyperNova 60B 2602, a compressed version of OpenAI's gpt-oss-120B, and released it for free on Hugging Face.

The new model cuts the original model's memory requirements from 61GB to 32GB, and Multiverse says it retains near-parity tool-calling performance despite the roughly 50% reduction in size.
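A quick sanity check of the quoted figures (61GB down to 32GB) shows the reduction is just under 50%:

```python
# Figures quoted in the article.
original_gb = 61    # memory footprint of gpt-oss-120B
compressed_gb = 32  # memory footprint of HyperNova 60B 2602

# Fractional reduction in memory requirements.
reduction = (original_gb - compressed_gb) / original_gb
print(f"Memory reduction: {reduction:.1%}")
```

This prints a reduction of about 47.5%, consistent with the "roughly 50%" claim.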

In theory, this means a model that once required heavy infrastructure can now run on far more modest hardware. For developers with tighter budgets or power constraints, that's a potentially huge advantage.

Multiverse Computing HyperNova 60B 2602 performance

(Image credit: Multiverse Computing)
