Average Manatee
Arcane
- Joined: Jan 7, 2012
- Messages: 14,360
If video cards had never wasted silicon on RT we'd probably have 120 fps as the expected minimum for new games on midrange hardware. In native resolution
yeah, fuck nvidia for pushing technology forward. let's just all take it up the ass from amd instead where they refuse to implement any new features unless nvidia goes first and they scramble to get something out of the door to have parity.
also, amd couldn't compete because rdna1 was a shit buggy unfinished architecture and also had no RT hardware built in to begin with. amd's problem was that they didn't just follow directx or vulkan development to see what nvidia was up to.
I listen to their podcast occasionally and John once came right out and said he would have been happier if graphics stopped progressing after the PS2 era. I get the sense he is burned out on the typical DF style analysis and just wants to focus on the retro content. His end of year lists usually feature a lot of interesting smaller titles compared to the triple-A stuff (and his 2023 GOTY was Armored Core VI anyway). Yeah I still have issues with the guy and DF in general, but as someone with interest in the technical side of games, there is nobody on their level.
But when it comes to games themselves, if anything the main DF members (aside from Rich) do tend to prefer older and non-AAA games (hell, almost every other DF Direct he is in, John tends to have some rant against AAA games :-P despite all the flashy technical stuff they mention being mainly available in AAA games).
That's not really my problem. If they refuse to put serious effort into their GPUs, which means they totally lack features compared to the competition, then I have no incentive to buy them either. If their upscaling tech isn't at least on Intel's level next upgrade cycle then I just won't bother. Also why the fuck did they buy ATI if they didn't give a single shit at all about HPC, media, GPGPU? It was clear by 08-09 that this was going to take off with PS3 clusters and later CUDA. Jesus Christ. I guess they've made low effort bux off of consoles anyway.
GPUs aren't AMD's primary business. It is for NVidia, though.
I just prefer a less desaturated look and I like the bloom and shadows. Go yell at the staff who worked on both the original and the remaster if you think the artistic sense was ruined and that they don't understand anything.
The resident retard saying that the remastered version of Wind Waker looks better than the original is a perfect example of that.
Nvidia is "pushing the technology forward" in the same way that Creative "pushed the technology forward" in terms of sound in games. You think sound in games today is different compared to 20 years ago? The only different thing you might find is some games use some custom software sound engine that does the things the hardware would be doing if there actually was any progress done in that time.yeah, fuck nvidia for pushing technology forward.
Many relatively new games - like Baldur's Gate 3 - do not use RT at all.
If video cards had never wasted silicon on RT we'd probably have 120 fps as the expected minimum for new games on midrange hardware. In native resolution
Many relatively new games - like Baldur's Gate 3 - do not use RT at all.
What if you combine such a game with a graphics card that is a true rasterization beast - like the Radeon 7900 GRE?
If all the silicon dedicated to matrix-multiply-accumulate acceleration were spent on 'rasterization' instead, you still probably wouldn't reach 120 fps at 4k.
- 140 fps at 1080p
- 130 fps at 1440p
- 75 fps at 4k
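Rough back-of-envelope on those numbers (a sketch only; the die-area share and linear scaling below are assumptions, not measurements for any specific GPU): even reclaiming all the RT/tensor silicon for shading wouldn't move 75 fps anywhere near 120 fps at 4k.

```python
# Back-of-envelope only: the RT/tensor die-area share and the linear fps scaling
# are illustrative assumptions, not measured figures.
baseline_fps_4k = 75            # raster-only BG3-style result at 4k, from the list above
assumed_rt_tensor_area = 0.15   # assumed fraction of the die spent on RT/tensor blocks

# Optimistic case: every reclaimed transistor becomes extra shader throughput,
# and frame rate scales linearly with that throughput.
best_case_fps = baseline_fps_4k * (1 + assumed_rt_tensor_area / (1 - assumed_rt_tensor_area))
print(f"best case: ~{best_case_fps:.0f} fps at 4k")   # ~88 fps, still far from 120
```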
I don't know much about Radeons - I've just assumed that a $549 card from 2024 will be better than a $999 card from 2022. Apparently that's not how Radeons work.
Why would you go for that model? https://tpucdn.com/review/baldur-s-...nce-analysis/images/performance-3840-2160.png
Many relatively new games - like Baldur's Gate 3 - do not use RT at all.
What if you combine such a game with a graphics card that is a true rasterization beast - like the Radeon 7900 GRE?
If all the silicon dedicated to matrix-multiply-accumulate acceleration were spent on 'rasterization' instead, you still probably wouldn't reach 120 fps at 4k.
- 140 fps at 1080p
- 130 fps at 1440p
- 75 fps at 4k
Never mind that BG3 isn't exactly a well optimized game. It's barely good enough for a type of game that doesn't require good fps.
A hardware upscaler is not magic.
Even deciding to go for the "upscaling is required" route would have meant a good hardware upscaler, not generic AI bullshit.
'Pushing graphics forward' is too vague.
Neah, pushing graphics forward was not what made them push RT and the tensor cores.
Ray tracing is the future and ever will be.
Creative infamously monopolized many of the features (later open sourcing them as the widely used OpenAL) by suing and then purchasing the competition. It wasn't until AMD TrueAudio that there started to be some hints toward better 3D positional audio again, which kind of fizzled out, and the same idea was later repackaged as RTX Audio or whatever it was called, which fizzled out worse than TrueAudio. Besides, it's a dumb comparison because it was clear by the Xbox 360 that audio was going to be done in software in the future since CPUs were really fast at that point. Creative just put the bullet in the corpse with their shitty drivers on Windows Vista.
Nvidia is "pushing the technology forward" in the same way that Creative "pushed the technology forward" in terms of sound in games. You think sound in games today is different compared to 20 years ago? The only different thing you might find is some games use some custom software sound engine that does the things the hardware would be doing if there actually was any progress done in that time.
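For what it's worth, the "audio moved to software because CPUs got fast enough" point is easy to see: the core per-voice positional math is a handful of multiplies. A toy sketch (purely illustrative; not how OpenAL or any shipping engine is actually structured):

```python
import math

def position_voice(listener_pos, listener_forward, source_pos, ref_dist=2.0):
    """Toy software 3D audio: inverse-distance attenuation plus equal-power stereo pan."""
    dx = [s - l for s, l in zip(source_pos, listener_pos)]
    dist = math.sqrt(sum(c * c for c in dx)) or 1e-6
    gain = min(1.0, ref_dist / dist)                           # farther away = quieter
    right = (listener_forward[2], 0.0, -listener_forward[0])   # listener's right, assuming y-up
    pan = sum(a * b for a, b in zip(right, dx)) / dist          # -1 = hard left, +1 = hard right
    return gain * math.sqrt((1.0 - pan) / 2.0), gain * math.sqrt((1.0 + pan) / 2.0)

# A source 5 units away, ahead and to the right of a listener facing +z.
print(position_voice((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (3.0, 0.0, 4.0)))
```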
What does "hardware upscaler" mean to you? It's as hardware as it can get considering it's running on Tensor cores which could otherwise go totally unused for games. PS4 Pro had some form of hardware acceleration for checkerboard rendering, and that hardware was wasted because temporal upscalers quickly became better. Generalization > specialization as far as upscaling goes because this field is moving too fast build hardware around.Even deciding to go for "upscale is required route" would have meant a good hardware upscaler not generic AI bullshit. Neah, pushing graphics forward was not what made them push RT and the tensor cores.
4x MSAA wasn't even free in 2006. MSAA is great for geometry (so is FXAA/MLAA/SMAA...) but it can't do anything at all for specular or subpixel aliasing, which is going to be like 90% of aliasing and by far the most bothersome kind of aliasing in a modern game.
Hence the need for TAA. Reality is that texture space precompute doesn't need it and was just fine with MSAA (nearly free for what it does).
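On the "MSAA can't touch specular or subpixel aliasing" point, a toy numerical sketch (the shading function is invented purely for illustration): MSAA resolves coverage with several samples per pixel but still shades once per pixel, so adding samples does nothing for high-frequency shading.

```python
import math

def specular(x):
    """Stand-in shading term with detail much finer than a pixel (made up for illustration)."""
    return max(0.0, math.sin(40.0 * x)) ** 32

def msaa_pixel(center, samples=4):
    # Real MSAA tests triangle coverage at `samples` positions but still runs the
    # pixel shader once, at the pixel centre - so `samples` never touches shading.
    return specular(center)

def ssaa_pixel(center, samples=4, width=0.04):
    # Supersampling shades every sample across the pixel footprint and averages.
    xs = [center + width * ((i + 0.5) / samples - 0.5) for i in range(samples)]
    return sum(specular(x) for x in xs) / samples

for n in (1, 2, 4, 8):  # raising the sample count helps SSAA, does nothing for MSAA shading
    print(n, round(msaa_pixel(0.05, n), 3), round(ssaa_pixel(0.05, n), 3))
```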
Yeah no shit, that's because it's pre-rendered Hollywood CGI, not real time graphics. They have to show those effects on massive >8k screens, often extremely close to the camera view. The point of TAA is to temporally imitate the effect of supersampling with similar ms cost to analytical AA, which is as "free" as any AA can get, instead of actually running the game at 4x4 SGSSAA or whatever.
They don't use temporal in Hollywood CGI and there's no shimmer (16x SSAA is usual).
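To make the "temporally imitate supersampling" point concrete, here is a deliberately stripped-down sketch of the accumulation idea: jitter the sample position every frame and fold the result into an exponentially weighted history. Real TAA adds motion reprojection and history clamping, which are omitted here.

```python
import random

def shade(x):
    """Stand-in for the renderer: scene value at a subpixel position, with subpixel detail."""
    return 1.0 if (x * 10.0) % 1.0 < 0.5 else 0.0

def taa_pixel(center, frames=64, alpha=0.1, footprint=0.1):
    history = shade(center)
    for _ in range(frames):
        jitter = (random.random() - 0.5) * footprint    # sub-pixel jitter, one new sample per frame
        current = shade(center + jitter)
        history += alpha * (current - history)          # exponential blend into the history buffer
    return history

random.seed(1)
print(round(taa_pixel(0.42), 3))   # drifts toward the supersampled average at 1 sample/frame cost
```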
yeah, fuck nvidia for pushing technology forward. let's just all take it up the ass from amd instead where they refuse to implement any new features unless nvidia goes first and they scramble to get something out of the door to have parity.
I guess I just don't like chasing realism, so I'm looking at all this from that angle. Many of my favourite games just need the ability to blit rectangles, so...
Btw, the most recent 3D game I played was ELEX, and my view is you don't quite need anything more advanced than that to create a good game.
The problem with RT is that you just know eventually developers aren't going to bother painting proper shadows 'cause, just turn on RT bro.
Rage and Mass Effect are graphically unimpressive forward-rendered games.
They used to try a lot harder at this stuff in the early 2010s, pushing TressFX, TrueAudio, Mantle (another AMD innovation I admittedly forgot), etc. on their sponsored titles. On the hardware side, they also had a large say in developing HBM, which is foundational for AI obviously.
The problem with AMD here is that their market share is too small and they do not seem to have the developer resources Nvidia has to court game developers, so it is much harder for AMD to introduce new features and have those features adopted - Nvidia can introduce some new feature and both send their army of coders to big name game developers to help them integrate that new feature and rely on their big market share for the rest of the developers to adopt it.
Chiplets were just an implementation of TSMC's CoWoS; they don't really have a hand in manufacturing. There's definitely serious architectural brains that had to have been put into it to make it work with a consumer gaming GPU though, no question there. Even if it came with the bugginess and caveats that first generation AMD products often do.
It'd be nice if AMD tried to push things but TBH i personally do not expect them to do that, the most i expect from them is to come up with new things to push the current featureset (like chiplet-based manufacturing, which they managed to do before Nvidia/Intel).
Yeah I was admittedly doing Rage dirty there. I know it was technically impressive but at the same time, it came out right around Crysis 2 which still holds up way better than Rage imo.
IMO RAGE was graphically impressive for its megatexture approach which influenced virtualized resources in GPUs - something that several games use these days and AFAIK is a cornerstone feature of Unreal Engine 5's Nanite (as well as its inspiration).
Aesthetically the base game didn't look that great overall, though the DLC had some very good looking scenes (probably the best looking scenes in any id Tech 5 game).
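For anyone curious what the "virtualized resources" bit means in practice, a heavily simplified sketch of the megatexture / virtual texturing idea (page size, cache layout, and the streaming hook are illustrative assumptions, not id's actual code):

```python
PAGE = 128  # texels per page side; the exact size is an assumption, engines vary

class VirtualTexture:
    """Heavily simplified megatexture-style lookup: virtual pages mapped into a small cache."""
    def __init__(self):
        self.page_table = {}   # (virtual page x, y) -> slot in the physical texture cache
        self.pending = []      # page requests for the streaming system to fill from disk

    def sample(self, u_texel, v_texel):
        page = (u_texel // PAGE, v_texel // PAGE)
        slot = self.page_table.get(page)
        if slot is None:
            self.pending.append(page)   # miss: queue the page, fall back to a low-res mip
            return "fallback mip"
        return f"cache slot {slot}, texel ({u_texel % PAGE}, {v_texel % PAGE})"

vt = VirtualTexture()
print(vt.sample(1000, 300))        # first touch: miss, page (7, 2) queued for streaming
vt.page_table[(7, 2)] = 0          # pretend the streamer loaded that page into slot 0
print(vt.sample(1000, 300))        # now resolves into the resident physical cache
```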
I can certainly sympathise. Related: I had a period when I was heavily into creating 3D art, and I was on my way to becoming a "technical" 3D artist. Extreme photorealism, that kind of stuff, you know. It's kinda fun to learn those 3D tools just for the learning's sake.
Well, i mentioned elsewhere recently that my favorite RPG is New Vegas and my favorite FPS is Quake, both being very light in terms of GPU resources (or no GPU at all in the case of Quake :-P). In terms of what i'm fine with, as long as something gives the impression of what it is supposed to be i am fine with it - e.g. i don't need realistic reflections, a spherical environment map (let alone a cube map) to give the impression that something is supposed to be "shiny" works as well as raytraced reflections for me - the latter are just bonus, but i wouldn't ignore a game for that.
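For reference, the env-map trick described above really is that cheap: reflect the view direction about the surface normal and use the result to pick a texel from a prebaked cube map. A toy sketch (face selection only, no filtering or seam handling):

```python
def reflect(incident, normal):
    """R = I - 2 (I . N) N, the direction used to index a prebaked environment map."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def cubemap_face(direction):
    """Pick which of the six prebaked faces the reflection vector points into."""
    axis = max(range(3), key=lambda i: abs(direction[i]))
    return ("+" if direction[axis] >= 0 else "-") + "xyz"[axis]

# Looking straight down -z at a surface tilted 45 degrees toward the viewer:
r = reflect((0.0, 0.0, -1.0), (0.0, 0.7071, 0.7071))
print(r, cubemap_face(r))   # reflection points up, so the +y face of the cube map is sampled
```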
This is just literally false.
Let's remind everyone one clear thing: Digital Foundry doesn't play games, they measure them. And they measure with fucking framemeters to whine and complain about how this game plays at 60 FPS moooooost of the time, but when you enter a big city while it loads it switches to 57 or sometimes even 55 FPS for a couple of seconds. Then they do a DLSS vs FSR analysis where they zoom in on the edge of 3D models and compare stills of them being blurred, and while a difference is noticeable, it's never apparent to a normal player who, you know, plays the game. As a result of this kind of excessive scrutiny, people are complaining about horrible ports when they are, in fact, not. Jedi Survivor played nice on my GTX 980. I didn't fucking spend every minute of gameplay looking at the framerate, I just played it. The Last of Us was mostly 30 FPS on PS3. GTA 5 could hardly reach 30 FPS. And people had fun.