04dcarraher's forum posts

#1  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@ionusX:

I would try resetting the CMOS and then updating the motherboard BIOS. It's possible the newer GPU isn't being recognized. You could also try disabling any secure boot options in the BIOS, which can cause issues with GPUs.

Note: I just saw in your other thread that you're still using an OCZ ZS 650W. I think it's time to retire that power supply; it may not be supplying clean power to the system anymore, and the GTX 1060 and RX 6650 XT are probably more sensitive to that than the old 6950. You can drive to a Best Buy and grab a new Corsair 650/750W, or hit a Microcenter if there's one nearby, which will have a wider selection of good PSUs.
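Before buying anything, it might also be worth checking whether Windows even enumerates the card. A minimal sketch, assuming a Windows install where the old wmic tool is still present:

```python
# List the display adapters Windows has enumerated, plus their driver versions.
# If the new GPU never shows up here, it points at BIOS/seating/power rather than drivers.
import subprocess

out = subprocess.run(
    ["wmic", "path", "win32_VideoController", "get", "Name,DriverVersion"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())
```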

#2  Edited By 04dcarraher
Member since 2004 • 23832 Posts
@Bond007uk said:

Isn't Starfield gonna be 130GB+? I couldn't imagine running a modern, huge open-world game such as this on 'spinning rust'. It might 'run', but it will be an absolute stutterfest. Horrid, horrid performance.

This isn't really a problem anyway. SSD storage is cheap. Absolutely no reason to run such a game on such archaic technology.

I've got 5TB of SSD storage on my PC with about 2TB free.

I just recently replaced my old 2TB FireCuda HDD (it had an 8GB SSD buffer for the most commonly used data) with a 4TB 2.5" Samsung SSD for $170. I have a total of 11TB of SSD storage now. I really only used that 2TB HDD for storage and older titles, though that hybrid drive was pretty quick.

#3 04dcarraher
Member since 2004 • 23832 Posts

@osan0 said:

@pc_rocks: I'm now remembering having a similar discussion before...was that you?

Anyway, we're going to have to disagree on this. You're taking offense at comments, using them in isolation and running them to the nines, like the media did, and treating them as PR. Did you even watch the video? It's a thought exercise for level designers and nothing more. Seriously, look at it if you haven't (just stop at the ears). His conclusion and yours are in alignment, believe it or not. He uses an extreme example to get people thinking. What he suggests is just the holy grail... the target. That doesn't mean he hit it, and the numbers he shows demonstrate that they clearly didn't.

On the PC comment: look. Find an SSD (that's one SSD, not a RAID setup or some high-end data server stuff that costs $50,000) that people could buy on or before Sept 2019 (when he said it) that can transfer data faster than 5.5GB/s, and I'll certainly concede that he got it wrong on the hardware front (the PC was sorted on the GPU front: Turing was out by then). I had a look, and the earliest actual PCIe 4.0 SSD product I could find with >5.5GB/s was from Oct 2020.

Hell, even if the PC just had a 3.5GB/s SSD available, maybe he is still wrong and GPU decompression would be enough to push the overall data transfer rate over the PS5's... but we currently have limited data on that, so we can't prove it either way (I could only find some synthetic benchmarks for a 4080). Maybe a Gen 3 SSD and a 2080 Ti will clobber a PS5 in data transfer speed from the SSD to main memory. We currently don't know.

The rest of it... yeah, I'm not even going to bother. What are you actually complaining about? Is it the fact that Cerny didn't explicitly say that the PC would catch up to the PS5 by the time the PS5 released... is that what has you so annoyed?

There were 5GB/s SSDs showcased all the way back in May of 2019; the controllers used at that time were limited to 5GB/s. Back in the summer of 2019 you could grab an ASUS Hyper M.2 expansion card (and RAID drives on it if you actually needed the speed). But really, the 5.5GB/s vs 5GB/s argument as a talking point was shortsighted, because even before the PS5 officially released there were SSDs available that were faster. And the kicker is that the PS5's M.2 slot was disabled for nearly a year after launch.
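For anyone who wants to sanity-check those numbers, the interface ceilings alone tell most of the story. A rough back-of-the-envelope sketch (these are just theoretical link limits; real drives land a bit under them):

```python
# Rough theoretical throughput for an NVMe x4 link.
# PCIe 3.0 runs at 8 GT/s per lane, PCIe 4.0 at 16 GT/s, both with 128b/130b encoding.
def pcie_x4_gbps(gt_per_s: float, lanes: int = 4) -> float:
    encoding = 128 / 130                     # 128b/130b line encoding overhead
    return gt_per_s * lanes * encoding / 8   # gigabits -> gigabytes per second

print(f"PCIe 3.0 x4: ~{pcie_x4_gbps(8):.1f} GB/s")   # ~3.9 GB/s ceiling, drives hit ~3.5
print(f"PCIe 4.0 x4: ~{pcie_x4_gbps(16):.1f} GB/s")  # ~7.9 GB/s ceiling, drives hit ~5-7
```

Gen 3 drives topped out around 3.5GB/s because of that ~3.9GB/s ceiling; Gen 4 is what made 5.5GB/s+ consumer drives practical.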

#4  Edited By 04dcarraher
Member since 2004 • 23832 Posts

There were a lot of games in the 90s into the early 2000s that had FMV cutscenes, some with choices during them, and then there were games that were nothing but FMV.

Games that come to mind are Silent Steel, Toonstruck, Star Trek: Starfleet Academy, the Tex Murphy games, Command & Conquer, and Star Wars Jedi Knight: Dark Forces II.

Silent Steel was really cool to me, because it put you in the shoes of a modern sub captain trying to prevent a conflict while figuring out who was sabotaging the mission, all while being hunted by an enemy sub.

#5  Edited By 04dcarraher
Member since 2004 • 23832 Posts
@gifford38 said:

Um, it took over a year after the PS5 launched to get a PS5-compatible SSD in the PC market. It has to be a 5GB-per-second SSD.

Actually, no. Western Digital SN850s (over 7GB/s read) were available to pre-order on the 8th of October 2020, and it took Sony until September of 2021 to update the PS5 to actually let users use the M.2 slot... on top of the fact that 5GB/s M.2 SSDs were available back in 2019. It was Sony who fumbled, not SSD manufacturers.

#6 04dcarraher
Member since 2004 • 23832 Posts

@hardwenzen said:

Let's say that the Cell wasn't actually helping; I still want some custom tech in my new systems. Unless they put in the absolute best off-the-shelf specs, which is obviously impossible, you kinda know what to expect from your system, and that's boring.

The Cell did help by augmenting the RSX, but how well that worked was entirely down to developers being able to work with it. Custom tech can be hit or miss, either a benefit or a hindrance: it adds unwanted complexity and can affect compatibility with future consoles.

#7 04dcarraher
Member since 2004 • 23832 Posts

@hardwenzen said:
@Juub1990 said:
@hardwenzen said:

I mean, shit, the Cell did deliver games that trash anything on the 360. You ain't seeing any KZ2/KZ3, UC2, GoW3, etc. on the 360. I just want to see some custom stuff made for the system so I can damage-control poor performance of my PS5 with "let the developers learn the toolz before criticizing the system".

Sony managed to deliver those games in spite of the Cell, not thanks to it. Don't conflate devs' talent with hardware. This gap continued well into the PS4 era, where Sony still produced one looker after another despite having a very similar architecture to the Xbox One.

The best-looking game on the X1 is probably Ryse, and it's a launch title. Why is this? Because it's Crytek, who were some of the best in the business at producing great visuals.

The Cell was a piece of trash.

I don't believe that. They had a shoddy GPU and RAM, and were still able to deliver games that are far ahead of what was available on the 360.

The 360 was more advanced than the PS3 and was easier to work with too. Developers and Sony had to figure out how to use the Cell processor's SPE cores to augment the RSX, because the PS3's GPU alone would have been thrashed by the 360 at every turn. Sony originally designed the PS3 with just the Cell handling both CPU and GPU tasks, but quickly found it was lacking, so Sony turned to Nvidia for a trimmed-down G70 core, aka the GeForce 7800.

The Xbox 360 typically did better than the PS3 in multiplat titles. The PS3 added extra complexity in coding for the Cell processor, having to juggle CPU and GPU work across all the SPE cores since it only had a single "normal" CPU core, while the 360 had a triple-core CPU able to handle up to 6 threads. The PS3 also had to fight its two separate 256MB memory pools, one for the system and one for VRAM, while the 360 had a unified 512MB memory pool that allowed more flexible allocation. Then there's the PS3's main GPU: a cut-down GeForce 7800 with a fixed pixel/vertex shader count, whereas the Xbox 360 had a unified shader architecture that let developers use any combination of shader processors to hit the performance they needed, rather than a limited, fixed split of pixel and vertex shaders.

While first-party or dedicated, experienced developers were able to squeeze all they could out of the PS3, it was a nightmare for everyone else, which led to worse-looking and/or underperforming multiplatform games.

#8  Edited By 04dcarraher
Member since 2004 • 23832 Posts

Did you do a fresh install of Windows?

Also, you can force "Prefer maximum performance" with game profiles in the Nvidia Control Panel.
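If you want to check whether the card is actually boosting in-game after setting that, here's a quick sketch using nvidia-smi (assumes it's installed with the driver and on your PATH, which it normally is):

```python
# Poll the GPU's performance state, clocks, power draw, and load while the game is running.
import subprocess

query = "pstate,clocks.sm,clocks.mem,power.draw,utilization.gpu"
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={query}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "P0, 1897 MHz, 7300 MHz, 145.20 W, 98 %"
```

If it sits in a low performance state (P8 or so) with low clocks while the game is running, the profile isn't being applied.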

#9  Edited By 04dcarraher
Member since 2004 • 23832 Posts
@lamprey263 said:

Guess AMD has to play dirty; it was just a few short years ago that Nvidia was working with devs to create custom APIs to tank performance on AMD hardware. People were okay with that, though: as long as their games performed better on Nvidia's hardware, they were happy to buy Nvidia GPUs. Nvidia used the money to improve their graphics card performance. Now that they've managed to put themselves in an advantageous market position, they want to play nice all of a sudden. If people are going to be as consistent as they were the last decade or more, they really shouldn't care what AMD resorts to in order to gain an edge, since they seemed fine when Nvidia played nasty.

Actually, it was AMD who created a custom graphics API that catered to their products first... It was called Mantle, and the only way Nvidia could have used it was by disclosing their GPU information, which they were not going to do. Mantle was a slick but shady move for AMD because it shifted optimization and hardware-level coding onto the developer, shifting the blame and letting AMD avoid spending resources patching drivers for every new title that needed tweaks. At that time, AMD's DirectX 11 optimizations were all over the place and performance always lagged behind Nvidia.

If you're referring to Nvidia's software APIs, AMD uses their software APIs the same way... look at TressFX, which performed poorly on Nvidia GPUs.

The thing is that those proprietary features, or "gimmicks" as some would call them, push innovation into future standards. You may complain about Nvidia oversaturating games with tessellation, killing AMD performance, but it forced AMD to design GPUs that handle high levels of tessellation, which is now commonplace. Or look at PhysX, which put GPU real-time physics in the forefront and led to more open-source physics engines that could use any GPU, which in turn pushed Nvidia to forgo that proprietary feature.

I think the problem is AMD touting how "open source" it is when the other guy, known for its "gimmickry", is the one promoting more openness for devs; that's not a great look.

#10  Edited By 04dcarraher
Member since 2004 • 23832 Posts

Also, I would like to suggest that the recent rise in games pushing past the 8GB VRAM limit has one common thread... newer AMD-sponsored titles (Star Wars Jedi: Survivor, Resident Evil 4, The Last of Us, etc.). Is AMD purposely influencing developers to inflate VRAM requirements to make their GPUs look better? Is AMD holding back RT effects in sponsored games to cater to their own shortcomings? AMD goaded Nvidia over VRAM ahead of the RTX 4070 launching... All of this seems a bit fishy, and hypocritical, to tout supporting "open source" while cutting or limiting features.