Nvidia's new texture compression tech slashes VRAM usage by up to 95%

TBF, VRAM is probably the most expensive component on modern GPUs, so in theory (assuming decent market pressure) this would reduce the need for more VRAM leading to lower prices.

The actual market price is only loosely related to the production and distribution cost of the cards. The actual cost of a card is a lot lower than what Nvidia asks. They ask what they can get the consumer to pay. Thus a lower BOM would only increase Nvidia's profits. No way would they pass that advantage on to the consumer.
 
Video and image compression has been worked on for decades by some brilliant people, and now Nvidia has achieved a 95% compression ratio for video game assets? This tech is groundbreaking for all walks of computing and it’s only for games?
The press release states that the GPU handles compression & decompression of the textures, so my guess is this will only work on Nvidia GPUs. Which sucks, because now devs would have to package two sets of every texture in the game: one set for Nvidia cards, one for everybody else.
 
The actual market price is only loosely related to the production and distribution cost of the cards. The actual cost of a card is a lot lower than what Nvidia asks. They ask what they can get the consumer to pay. Thus a lower BOM would only increase Nvidia's profits. No way would they pass that advantage on to the consumer.
Hence the "assuming decent market pressure" qualifier I put in. It *always* falls to competition to prevent gouging the margins.
 
My only question: which generation will have the exclusive ability to use the technology? I'd love it if it were backported to the 3xxx or 4xxx generation. But something tells me that if I want to actually use it, it'll be exclusive to the 7th gen at 2 grand a pop.
 
Video and image compression has been worked on for decades by some brilliant people, and now Nvidia has achieved a 95% compression ratio for video game assets? This tech is groundbreaking for all walks of computing and it’s only for games?
It'll get to other areas fast. Gaming will help the tech establish itself.
 
The blame lies with the game devs. They’re the ones who insist on making games so VRAM-dependent.
If you want high-quality textures and bigger worlds, then VRAM is needed. And VRAM is not just for gaming. The real problem is that even with such great-looking textures, they just blur the whole image with bad TAA implementations or upscaling.

Video and image compression has been worked on for decades by some brilliant people, and now Nvidia has achieved a 95% compression ratio for video game assets? This tech is groundbreaking for all walks of computing and it’s only for games?
From what I can see, it's essentially just taking a super-compressed texture and upscaling it with AI to get close to how it looked originally. This kind of on-the-fly "decompression" is obviously going to cut the FPS a lot.
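For what it's worth, here is a toy sketch of that general idea: store a small grid of latent features plus a tiny decoder network, and reconstruct texels on demand. Every size and the decoder shape below are made-up assumptions for illustration, not Nvidia's actual pipeline.

```python
# Toy "neural texture" sketch: a low-resolution latent grid + a tiny MLP decoder.
# All dimensions are illustrative assumptions, not Nvidia's format.
import numpy as np

LATENT_DIM = 4      # features stored per latent texel (assumption)
LATENT_RES = 256    # latent grid resolution (assumption)
TARGET_RES = 2048   # resolution of the texture being reconstructed

rng = np.random.default_rng(0)

# The "compressed" asset: latent grid + small decoder weights (untrained here).
latents = rng.standard_normal((LATENT_RES, LATENT_RES, LATENT_DIM)).astype(np.float16)
w1 = rng.standard_normal((LATENT_DIM, 16)).astype(np.float16)
w2 = rng.standard_normal((16, 3)).astype(np.float16)

def decode_texel(u, v):
    """Reconstruct one RGB texel from the nearest latent entry
    (nearest-neighbour fetch for brevity; a real decoder would filter)."""
    lu = int(u * (LATENT_RES - 1))
    lv = int(v * (LATENT_RES - 1))
    h = np.maximum(latents[lv, lu].astype(np.float32) @ w1.astype(np.float32), 0.0)  # ReLU
    return h @ w2.astype(np.float32)

# Rough storage comparison against an uncompressed RGBA8 texture.
compressed = latents.nbytes + w1.nbytes + w2.nbytes
raw = TARGET_RES * TARGET_RES * 4
print(f"latents + weights: {compressed / 2**20:.2f} MiB vs raw RGBA8: {raw / 2**20:.1f} MiB")
print("sample texel:", decode_texel(0.5, 0.5))
```

Even this toy version cuts the footprint by over 95%, but every texel fetch now costs a small matrix multiply instead of a fixed-function block decode, which is exactly where the FPS concern comes from.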
 
If you want high-quality textures and bigger worlds, then VRAM is needed. And VRAM is not just for gaming. The real problem is that even with such great-looking textures, they just blur the whole image with bad TAA implementations or upscaling.

Personally, I couldn't care less about fidelity. I'm from an era where you could count the polygons on the 3d models. Today's most popular and actively played games are on average nearly a decade old (hint: nVidia knows this too, which is why x60 has always been a great deal for 85-90th percentile performance in esports). AAA titles are impressive from a technical standpoint, but the kind of people looking for high levels of visual fidelity are also not going to be interested in 2025's AAA titles in 2035.


The blame lies with the game devs. They’re the ones who insist on making games so VRAM-dependent.

After some thought, I would say it is the consoles giving those devs 16GB of VRAM to play with in the first place that enabled the push towards higher VRAM requirements.
 
This could be important. It should have been done a long time ago. Now we just need to apply similar tech to game installs, because 100GB+ installs are damned irritating and need to be reduced back down to 20GB to 40GB.
 
Man, this is great! Your frame rate can tank because you don't have enough VRAM, or it can tank because you're using a compression algorithm to make the current scene fit into your available VRAM. Maybe they can find a way to get the frame rate back up while still doing the compression, but I don't really see the point of this as-is.
 
The press release states that the GPU handles compression & decompression of the textures, so my guess is this will only work on Nvidia GPUs. Which sucks, because now devs would have to package two sets of every texture in the game: one set for Nvidia cards, one for everybody else.
No, that's not how it works.
 
This could be important. It should have been done a long time ago. Now we just need to apply similar tech to game installs, because 100GB+ installs are damned irritating and need to be reduced back down to 20GB to 40GB.
The more complex the compression algorithm, the bigger the performance hit you'll take from on-the-fly decompression. Devs could have done it decades ago with existing algorithms, but you'd be losing both performance and fidelity. At that point you're just better off using a smaller texture to get a performance boost instead.
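To put rough numbers on that trade-off, here's a back-of-the-envelope size comparison. The figures are illustrative, not measured; the only firm fact is that BC7 is a fixed 4:1 ratio over RGBA8 and decodes for free in the texture units, while any heavier scheme has to pay at sample time.

```python
# Back-of-the-envelope texture sizes: classic block compression vs. just
# shipping a smaller texture vs. a hypothetical ~95%-reduction scheme.
def texture_mib(res, bytes_per_texel):
    return res * res * bytes_per_texel / 2**20

full_raw  = texture_mib(4096, 4)       # 4K RGBA8, uncompressed
full_bc7  = full_raw / 4               # BC7 is 1 byte/texel, hardware-decoded
half_bc7  = texture_mib(2048, 4) / 4   # drop to 2K instead: another 4x smaller
heavy_20x = full_raw / 20              # hypothetical ~20:1 (~95% reduction) scheme

for name, mib in [("4K raw RGBA8", full_raw), ("4K BC7", full_bc7),
                  ("2K BC7", half_bc7), ("4K heavy ~20:1", heavy_20x)]:
    print(f"{name:>14}: {mib:5.1f} MiB")
```

Which is basically the point above: a quarter-resolution BC7 texture lands in the same ballpark as the heavy scheme, with zero extra decode cost, just lower fidelity.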
 
The upside is that more and more anti-consumer companies such as EA and Ubisoft are going bankrupt:
"Everything is Live Service! Get used to not owning games! We will report you to the police for swearing in-game!" to quote but a few, not to mention THE MESSAGE and game-altering choices (uglifying women in Dead Space and SW Outlaws, and adding anti-capitalist messages to game remakes that didn't have them initially). I am many times more patient than their budgets.

After the inevitable Assassin's Creed: Shadows f-up (I'm eagerly waiting to see what comes next, hopefully the Bethesda bankruptcy and Naughty Dog getting buried for forcing the "strong female acting like a violent man" trend upon us), we'll have some nice indie games which go back to what gaming means: ESCAPISM.

I'm playing OLD games currently, and by OLD I mean Half-Life 2, StarCraft 1 or Civilization IV level old.
I'm laughing at the expense of AAA companies which just hope we'll blindly follow the "throw everything at the wall and see what sticks" approach.
Except for Cyberpunk 2077, I have no need for a new GPU; my trusty old 3080 will happily keep its place in my rig, and since I'm playing less and less nowadays, Nvidia and their pricing are not on the table. Should I need to change my GPU, I'll go AMD all the way.

Oh, I'm positive GTA VI will be a resounding success, but I'm not going to play as a "female gangsta". Yeah, not for me. I'm all in for a StarCraft: Ghost release with a female main character, but DEFINITELY not another "strong diverse latina acting as a man". I'm fed up with the woke ****; I've had enough of the political discourse, so unless companies revert to good character creation and storytelling, sorry not sorry, they can go f themselves.

The more of us who ignore the DEI- and woke-infused products, the fewer options companies will have but to create good games.
 
TBF, VRAM is probably the most expensive component on modern GPUs, so in theory (assuming decent market pressure) this would reduce the need for more VRAM leading to lower prices.
It's not. The 4060 Ti 16GB was $100 more than the 4060 Ti 8GB, and AMD's 7600 increased the price by only $30 for the extra 8GB. The most expensive single component is, and will remain, the GPU itself. Then VRAM. Then everything else.
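Putting rough numbers on it (these are the retail deltas quoted above, which include margin, so they're an upper bound on what the memory itself costs the vendor):

```python
# Implied retail price per GB of extra VRAM, using the deltas quoted above.
# Retail deltas include margin, so the actual GDDR BOM cost per GB is lower.
deltas_usd_per_extra_8gb = {
    "RTX 4060 Ti 8GB -> 16GB": 100,
    "RX 7600 8GB -> 16GB":     30,
}
for card, delta in deltas_usd_per_extra_8gb.items():
    print(f"{card}: ${delta} for 8 GB extra = ${delta / 8:.2f}/GB at retail")
```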
 
Why don't graphics cards have memory slots? There must be some good reason I'm missing...
One reason mainly: Speed.

There is no socketable memory standard fast enough.
GPUs use incredibly fast memory. On gaming cards the speeds can reach nearly 2 TB/s, while the current fastest DDR5 kits only offer up to about 153 GB/s in a dual-channel configuration (mainstream boards) or 307 GB/s in quad-channel (workstation boards).
Even 307 GB/s is below the memory bandwidth of most entry-level GPUs.

Another reason is bus width. GPU VRAM runs on a wide 128-512-bit bus (or even 4096-8192-bit in the case of HBM), whereas DDR5 is only 64-bit per module.
Socketable VRAM would also need a fast interface, but even something as new as PCIe 6.0 with an x16 link would only offer about 128 GB/s, and it would be massive in size on the card.

There is CXL memory that plugs straight into PCIe slots, but it's currently limited to 64 GB/s and is meant for servers that need capacity expansion (up to 2TB per module, versus roughly 6TB of DDR5 per socket); it does not offer a speed advantage over DDR5.
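To make the gap concrete, peak bandwidth is roughly data rate × bus width / 8. The specific parts and clocks below are illustrative picks chosen to match the figures above, not a spec sheet:

```python
# Rough peak-bandwidth comparison: GDDR on a wide bus vs. socketed DDR5 vs. PCIe.
# Data rates and widths are illustrative examples matching the numbers above.
def bandwidth_gbs(data_rate_gtps, bus_width_bits):
    return data_rate_gtps * bus_width_bits / 8

configs = {
    "GDDR7 @ 28 GT/s, 512-bit (flagship card)": (28,  512),
    "GDDR6 @ 18 GT/s, 128-bit (entry card)":    (18,  128),
    "DDR5-9600, dual channel (2 x 64-bit)":     (9.6, 128),
    "PCIe 6.0 x16 (raw, per direction)":        (64,  16),
}
for name, (rate, width) in configs.items():
    print(f"{name:>42}: {bandwidth_gbs(rate, width):7.1f} GB/s")
```

Even the fastest socketed DDR5 sits an order of magnitude below a flagship card's GDDR, and a full PCIe 6.0 x16 link is slower still, which is why soldered, wide-bus VRAM keeps winning.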
 
The more complex the compression algorithm, the bigger the performance hit you'll take from on-the-fly decompression. Devs could have done it decades ago with existing algorithms, but you'd be losing both performance and fidelity. At that point you're just better off using a smaller texture to get a performance boost instead.
Compression was impactful in the '90s and early 2000s. As CPU, GPU and memory speeds increased, that resource overhead became much less important. Today it's almost a free feature, as it really doesn't take much power to utilize.
 
Compression was impactful in the '90s and early 2000s. As CPU, GPU and memory speeds increased, that resource overhead became much less important. Today it's almost a free feature, as it really doesn't take much power to utilize.
When talking about very high compression levels for high-resolution textures, things change. Which is why the efficiency of the texture compression algorithm is so important.
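A rough way to see why: the decode cost scales with how many texels the frame actually touches. Everything below (fetches per pixel, FLOPs per decoded texel) is a made-up illustrative assumption, not a measurement:

```python
# Illustrative decode-cost estimate: how much extra shader work per second a
# heavier, non-fixed-function decompressor might add. Assumed numbers only.
frame_w, frame_h, fps = 3840, 2160, 60   # 4K at 60 FPS
texel_fetches_per_pixel = 8              # assumption: several material layers per pixel
texels_per_second = frame_w * frame_h * texel_fetches_per_pixel * fps

schemes = {
    "block compression (fixed-function decode)": 0,     # effectively free in texture units
    "heavier decode (assumed ~100 FLOPs/texel)": 100,   # made-up per-texel cost
}
for name, flops_per_texel in schemes.items():
    gflops = texels_per_second * flops_per_texel / 1e9
    print(f"{name}: ~{gflops:,.0f} GFLOP/s of extra shader work")
```

Against a GPU with tens of TFLOPs that's only a few percent, but it's no longer the near-free feature that classic block compression is, and the bill grows with resolution and with how aggressive the compression gets.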
 