https://news.ycombinator.com/item?id=33373729 has details about the problematic 12VHPWR adapter shipped by nVidia, with extremely questionable engineering (14AWG wires soldered to thin tabs); it should absolutely be recalled as a fire hazard.
I believe it is entirely possible to implement a solid and safe 12VHPWR connector, and its reputation is being destroyed by a shoddy implementation... but that's very much nVidia shooting themselves in the foot.
NVIDIA is a huge company; if they cannot manufacture a safe (and durable) 12VHPWR connector, then the PCI SIG design is bad and should be recalled. The design should be such that it would be difficult even for a shoddy manufacturer to create a fire hazard.
600W from a tiny connector is questionable, but the issue isn't caused by the 12VHPWR connector itself. The fault happens in the part that converts 4x older PCIe connectors into the new header. So this is an adapter issue and not something related to the PCI SIG design.
According to Igor's Lab investigation, what happens is that one or two of the welds in the adapter crack, and the adapter turns into a 2x PCIe to 12VHPWR adapter, which is not what it's specced out for.
Yes, if "the right materials" are used and the cables are treated "properly" (not bent out of spec, not too many connection cycles) then it's "OK", but those shouldn't be requirements designed into something that cost-conscious consumers are going to be assembling.
I think the 50A is a bigger issue than the 600W. Consider that the latest spec for USB-C allows up to 240W in a much smaller cable, but does so at 48V, meaning the current is only 5A.
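Back-of-the-envelope (a quick Python sketch; the wattages are just the spec figures quoted above):

    # I = P / V
    def amps(power_w, volts):
        return power_w / volts

    print(amps(600, 12))  # 12VHPWR at its full 600W spec: 50.0 A
    print(amps(450, 12))  # a stock 4090 power limit:      37.5 A
    print(amps(240, 48))  # USB-C PD EPR at 48V:            5.0 A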
To be clear - the typical temperature rating you'll find on mains power cabling in a house is 60°C.
The highest you'll typically be able to find is 75°C.
70°C is NUTS for that usage, not 'oh, that's ok then'
It's easier to spec out heat resistive materials for a 5cm cable than for the entirety of a house's mains power cabling. Sure, 70°C is a lot, but I think the point is that it would work if it had to.
It's a sign of a lot of resistive heat losses, and requires specialized insulation and connectors at that level (same for mains). Typically insulation starts to weaken or even melt at that point.
It’s well outside normal expected operating temperatures for wire.
Crazy high resistive heating when driven at multiple factors above the rated power isn't cause for alarm though. If you put 50A through your wall outlet you wouldn't be surprised when it got hot. The only application where pulling 1500W through a 12VHPWR connector is even close to possible is when using cryogenic cooling (when the conductive cooling will keep the power cable below ambient despite how many amps are crammed through them).
This data point isn't exhaustive, but it does indicate that the actual safety margin is not as tight as is assumed from this news cycle.
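Of course it runs hot up there - the heat dissipated in a fixed cable/connector resistance scales with the square of the current, so 2-2.5x the rated power means 4-6x the losses. A rough sketch (the resistance value is purely illustrative, not a measurement):

    # I^2 * R losses for a fixed, assumed round-trip resistance
    R = 0.01  # ohms, illustrative cable + connector resistance

    for draw_w in (600, 1200, 1500):
        i = draw_w / 12.0
        print(draw_w, "W ->", i, "A ->", round(i * i * R, 1), "W of heat")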
You and I have a very different concept of safe? I don’t think it’s saying what you think it’s saying.
That brand new, custom-specced connectors and wire wouldn't catch fire (but came close!) at only 2x the power draw means it's very likely that any damage to a connector, wire, etc. - especially with normal wire and connectors - would cause damage and a fire at much less than that power draw.
Which is what we’ve been getting reports on.
And that is just from minor physical damage it seems, ignoring corrosion or fraying wires over time, which is usually the bigger problem.
That is not an issue with the connector though. That is the adapter. That is my entire point. The connector is safe. Shoddy adapters are the problem. Putting multiple supplies in parallel is always very shady, doubly so when it's high current through a small piece of metal. That's what's dangerous, not the 12VHPWR connector.
That has yet to be shown in the field. We know of one incredibly dodgy adapter, but the issues I’m calling out take time to show up, and lead to issues as well.
The truth is the plug is more demanding and expensive than lower density options and it's dangerous when corners are cut. This is always true in power electronics, but the risk surface area is higher with a new and demanding connector.
I'm not quite sure what your point was in the first sentence - that is always better, yes. We don't have access to analysis done by the EEs in charge of making the plug decisions, but we're already seeing high profile failures, which is unusual.
Even with connectors and cables without those early problems, there are often longer term problems that show up over time - cables fraying at the connectors due to movement, plugs and connectors building up corrosion (and hence having higher resistance), connections loosening or getting bent, etc.
The link you posted, and your later statement, seems to support the same view. I'm just repeating it for the folks dismissing this as just an issue with the badly designed adapters Nvidia distributed (in some cases?). Overall it's a connector and spec that's on the edge of the performance envelope, with some 'obvious' types of field failure modes that don't seem to be properly addressed.
Honestly, I'm a bit shocked they went with 'mountains of parallel power feeds' solution instead of... I don't know, doing 6 gauge stranded, which seems like the obvious choice to me? Parallel power feeds like this are always the source of endless fussing and headaches due to exactly the problems we're seeing. Trying to save a couple cents by using more of the same materials they have on hand I'm guessing?
I think some of the other comments miss an important point. 600W of power draw means that you need 50A of current through those connectors. Or a bunch of transformers. You can design it around the old 12V standard but why? For reference, the amperage leading into the house is typically around 100A; you only need around 24A for an electric range / oven. 50A at 12V is probably not optimal.
Does the old way of building PCIe cards still make sense when the GPU now draws more power than all of the other system components combined? GPU sag (i.e. the card weighs so damn much) is such an issue that one manufacturer actually put a level in their card.[0] The 4090 probably needs three slots just to keep it attached to the motherboard.
When someone buys a desktop these days, it's very often because they need a discrete GPU. You buy a dedicated desktop GPU because the thermal envelope of a laptop can't support a card that would draw 600W - even if it were possible to shrink the card into a laptop (or make the laptop big enough to fit the 600W card). The top of the line Intel processors draw in the neighborhood of 150W but that doesn't necessitate going to a different form factor. Future motherboard and south bridge designs should mount the GPU independently of the other components.
> You can design it around the old 12V standard but why? For reference, the amperage leading into the house is typically around 100A; you only need around 24A for an electric range / oven. 50A at 12V is probably not optimal.
In your opinion, if you could force power supply makers to add another voltage rail, which voltage would you pick?
48V is kinda industry standard, and just below "low voltage" thresholds in most jurisdictions (often 50 or 60V). The cards have switching stages anyway, so it's not that much change (but probably means different power transistors).
There's not exactly a right choice of voltage per se; it's more about which tradeoffs you are willing to accept. Nothing is free, and higher voltage means noisier, more complex, and more expensive SMPS power supplies.
Solving the problems of the higher-current, lower-voltage DC we use in ATX-format supplies today requires heavier-gauge copper, but the cable run between the GPU and PSU is so short that the cost of extra copper could very well be less than the increased cost of a higher-voltage supply.
200A is common for newer construction in the US. Many small post-war homes are 100A. It is common to upgrade these homes when it’s time for a new panel.
In the US, generally, the transformer sizing and the line from the pole to your house are the utility company's problem. When an electrician upgrades a service from 100A to 200A, basically they just ensure the parts between the service line and the panel are capable of 200A - just the home's wiring.
Often, those transformers on the pole have some wiggle room in what they can deliver, and just because you upgraded from 100a to 200a service doesn’t mean you’ll actually see a real world change in usage. You usually don’t. Utility companies can over provision that capacity because not everyone on a street switches on their microwave at the same time, etc.
There are power supplies from reputable manufacturers which ship this cable, and they do this connector correctly.
Unfortunately NVidia has really screwed the pooch with this, as this dangerous adaptor is the first encounter most people will have with the 12VHPWR form factor.
Perhaps so, but these power supplies are few and far between, so we'll have to see about how these actually perform in the long run.
However, you're missing the point. The spec should be designed such that even "non-reputable" manufacturers with sloppier manufacturing methods wouldn't create a fire hazard. These connectors are extra-delicate even though they're supposed to carry higher loads. They will be less robust in practice, for very little gain. That's a poor tradeoff, which makes it poor design.
I am not here to defend Nvidia, nor the quality of the design; I'm not in any way capable of judging it on its technical merits. I think this debacle is a farcical mess.
These new adaptors are included with products Nvidia themselves ship, with themselves as OEM. They know this standard is not common, and they know they need to ship adaptors with them. If they can't guarantee the quality of its manufacturing, then it suggests there are possibly more worrying things going on behind the scenes.
I'm not saying it's not NVIDIA's fault, but that the spec has been setting companies up for failure, and it's not just NVIDIA who had these cables fail. Rather, NVIDIA is the only company actually shipping these cables at scale into the real world to see the extent of the damage.
Maybe nothing, maybe EVGA, with their reputation for quality and sound engineering, would have looked at the garbage adapter cable Nvidia shipped to board partners and said "Hell no, we're not shipping that to customers" and sounded the alarm on it before thousands of these got into people's homes and offices where they may yet kill someone.
I don't personally like EVGA, had a bad experience trying (and failing) to get warranty replacement for a defective PSU, but I can certainly follow GP's thinking about how EVGA's involvement with this launch might have made a difference.
Seems like a bit of an insult to the colleagues that manage to successfully design a 50+ billion transistor GPU that their product gets messed up by something like this. Talk about the last mile.
You are right of course, but that 50+ billion transistor design is mostly copy-pasted from a small number of blocks.
For example, the RTX 4090 has 16,384 cores, so you have to divide 50+ billion by that number. Then inside every core you have lots of similar repetition, etc.
The biggest challenge is to create a die that size with little to no defects, which is of course the foundry's achievement.
I get the point but that's also a little like saying Windows is quite simple software because it is simply full of 'if' and 'for' and 'while'.
A modern great CPU or GPU chip is not that simple to design. It is not just an ALU copy pasted thousands of times. Otherwise there would be more competition.
> I get the point but that's also a little like saying Windows is quite simple software because it is simply full of 'if' and 'for' and 'while'.
Not really, the cores are really similar units.
> A modern great CPU or GPU chip is not that simple to design.
Of course not, for starters there's not just the shader cores (a 4090 also has 512 TMUs, 176 ROPs, 128 RTX, and 512 tensor cores), you need to design all of these before you can replicate them, then you need to design the common components.
But the point was that the 50 billion transistors were not designed for individually, there is a large amount of block reuse. Your analogy would hold if windows inlined everything.
"Extremely sophisticated" has many axes. An "extremely sophisticated" car might mean the engine is sophisticated, or the dashboard electronics are. The two are rarely correlated.
NVidia clearly has spectacular software and digital hardware designers.
Mechanical, analog, and power engineering are entirely different disciplines.
Not really because nVidia already had previous designs as a starting point. They seem to be getting mostly bigger (more replication), and the most important thing to consider when growing like that is power distribution.
This is not a good way to look at complex systems. Both designing a GPU chip and making it are too complex to use reductive reasoning, especially when comparing one thing to the other. It’s foolish.
It doesn’t add anything to OP’s point that it took so much effort to engineer a GPU only to be messed up by a stupid connector.
How about building a space shuttle carrying 7 humans to space but it fails because of a <$1 o-ring? Then knowing that engineers had sounded specific warnings prior that were ignored by management.
We've seen these failure stories time and again to various repercussions, but they seem to all come down to some form of greed whether it be reputational or monetary.
This is what failure is like, there’s a lot of surface area and sometimes it’s things you never thought of worrying about.
I would bet the cause comes down to minor manufacturing defects, and testing that was “too good” at plugging things in, not accounting for sloppy end-user assembly.
Also, power supply design (including power cables) is a really foundational piece for any electronic device, and the one that typically causes complete failure/letting the blue smoke out if not done carefully. And of course, safety/fire issues if screwed up particularly badly.
It is weird that in their push to all time high TDP they didn’t spend more time thinking about it. That said, cutting edge often means bleeding edge, something something.
Self-insult for the company overall. But yeah must feel bad for those not responsible. Those responsible probably don’t care, which is exactly the problem.
When the BlueGene/Q came online for testing and early science in Livermore, some of my colleagues burned out a number of power supplies. IBM wanted to make sure they weren't doing some "trivial" task; I guess while(true){x+=1} or something? No, they were really running lattice QCD calculations. Anyway they were not prepared for kernels in hand-optimized assembly, and IBM had to beef up the power supplies.
I don't think the problem here is that users are running heavier calcs on their cards than nvidia intended - there's been a teardown showing that some of the plugs on those power cables have weak connections
For context, the RTX 4090's power budget is 450W, and if all 4 plugs are connected to the cable adapter, the system can allow up to 600W (at 12V) to the GPU.
That's 6A to 8A per pin (there's 6 pins for current, 6 for GND plus 4 signaling pins), so pretty sensitive to resistive heating in case of a bad contact.
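Rough numbers for a single pin (the contact resistances below are assumed round figures, not measurements, just to show how quickly a bad contact changes the picture):

    # 600 W at 12 V split across the 6 current-carrying pins
    total_a = 600 / 12        # 50 A
    per_pin = total_a / 6     # ~8.3 A per pin, assuming even sharing

    good, bad = 0.002, 0.020  # ohms: assumed good vs degraded contact
    print(round(per_pin ** 2 * good, 2), "W dissipated in a good contact")
    print(round(per_pin ** 2 * bad, 2), "W dissipated in a degraded contact")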
There isn't an issue with resistive heating. The safety margin is down a few factors from the 8-pin PCIe adapter, but there is still a healthy safety factor.
This isn't like running kA of current where the threat of thermal runaway is always present, even with active cooling. I worked on a stellarator that had a coil feed blow (years before I started). Pushing even 10% more current meant a complete overhaul of the feeds. That's what thin margins look like.
It’s only a healthy safety factor if absolutely nothing goes wrong. Connectors are normally a weak point and internal PC components aren’t supposed to be plugged and unplugged regularly. Simply testing the multiple hardware configurations can quickly result in failures.
More interestingly, modern ASICs run their logic at something like 1V (I don't know the exact voltage of this particular one of course), so the current going into the actual chip (after voltage downconversion on the board) is hundreds of amperes.
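Rough math, assuming ~1V core voltage and ignoring conversion losses (both assumptions):

    board_power_w = 450
    core_v = 1.0                   # assumed; the real rail varies with load
    print(board_power_w / core_v)  # -> 450.0 A on the chip side of the VRMs
    print(board_power_w / 12)      # -> 37.5 A on the 12 V input side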
AFAIK the top-end for modern Ryzen chips is in the 1.5-1.6V range, usual operating levels are lower. My 3xxx Threadripper is at 1.3V right now. In comparison, my 3xxx-series RTX card is at 1.06V (but only operating at 40% of max power). The EVGA software only allows me to set a +100mV voltage offset, but the default voltage curve looks like it peaks at 1.100V for ~1950mhz.
Yeah, sending 48V would be more appropriate, but unfortunately that's not really backwards-compatible. (Well, the way to do that is typically to allow the PSU to be farther from the use site, like having a datacenter with tall, adult-height cabinets (racks; they have 4 posts for holding the equipment, which is standardized as occupying one or more slots of 1¾ inch height and free to occupy the entire length) and putting 4 AC/DC converter blocks into each such cabinet, to power a few layers of servers/equipment both above and below the PSU layer.)
They do this so that the PSU can be made up of multiple redundant modules, and to integrate some batteries that can cover a generator starting up. This way they don't need to pay for a dedicated UPS battery with chargers and inverters just to convert it back to lower-voltage DC near the server.
They can make it so that for example 5-8 PSUs spread over different electrical circuits can share the load of the servers, even if 2-3 of them fail
(so they, for example, only need 2 extra on top of the 5 that can power the servers and the 1 that recharges the battery (62.5% load if everything works and the servers are running full-power), or even relying on the servers occasionally running below full power to spread the full rated server power over 6 modules with 2 spares (75% load if everything works and the servers are running full-power)).
A big issue in this situation, though, is that Nvidia skimps heavily on board space for the 4090 Founders Edition cards, seemingly to use the back for heat-sink fins / airflow cross-section.
It would take up some board area, but in theory, they could add a 48V power option that makes it all sleek and pretty, possibly with a 12V->48V adapter for the people who don't already have a 48V supply.
Maybe its own power supply? Although this would require electrical decoupling from the PC because of the switching power supplies. I see here an opportunity for Jensen and his marketing people to come up with a fancy name for it.
In countries like Australia and China, 240V is the standard. You need step-down transformers for 110V devices. China has both voltages/plugs running in its cities - not sure how they manage that.
A common misconception about the US is that the US is only 110V. 220V service in homes in America is standard, it's just that standard power plugs around the house get 110V because of split phase breaker boxes. This lets 220V be available for large appliances like Washers, Dryers etc using the full phases, and 110-120V to the rest of the house using only half phase.
In my next house, I think I'll request a 220V outlet in my kitchen with a UK style plug, so I can have my fast tea kettle.
Doesn't seem like a misconception. It seems like y'all have 110v power outlets. Having 220-240v somewhere else is kind of moot... The power socket IS the interface, so if the interface is 110v, then the switcher box, the power line, the high voltage power line, the power plants voltage aren't really relevant.
You plug the AC cable into the PC's power supply, right? That converts to 5V and 12V DC and feeds it to the GPU internally. There's no way to plug external power directly into the GPU.
Often, anything above 48 or 50 volts is considered high voltage and in some places might be subject to regulation. Not to mention just being more dangerous to handle.
The problem is not external; 500 or 1000W is not that much in the grand scheme of things (a Europlug allows up to 625W, unearthed).
The issue is internal: because the top PSU rail is only 12V, high-power devices (mostly GPUs) need to push very high amperage totals. And because of PC internal layouts, they can't do that through large wires with huge connectors.
Exactly. This is why high wattage USB-C cables rely on the power supply negotiating a higher voltage in order to work. Sending 100W over 5V would be ridiculous (20A). Instead the device tells the power supply to switch to 20V and then the cable only needs to be rated for 5A. Something like that for GPUs would make sense.
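Rough illustration of why (the cable resistance is an assumed round figure, not a measured one):

    # Same 100 W, negotiated at different voltages; cable loss is I^2 * R
    R_cable = 0.1  # ohms, assumed round-trip resistance of a thin USB cable

    for volts in (5, 20):
        i = 100 / volts
        print(volts, "V ->", i, "A, ~", round(i * i * R_cable, 1), "W lost in the cable")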
It is KINDA external. 1000+w is starting to get into “trip the breaker” pretty easily if you have anything else on that circuit with any kind of transient load - like say a minifridge.
> 1000+w is starting to get into “trip the breaker” pretty easily
I was wondering where you'd get that from, and then I remembered that you're probably american so your baseline is only 1800W, in which case... yeah that's true. If you're installing a really powerful PC in the US you probably want either a dedicated circuit or a 20A circuit (or both).
A gaming PC uses maybe 20-30% of the power of an average kitchen appliance like a kettle or a blender. Those are plugged into standard wall sockets just fine.
Depends on how many GPUs are in the PC. Circa 2011 I had quad-Radeon 6970 machines pulling ~1000w each from 1200 watt power supplies, measured with a kill-a-watt.
Those machines ran 24/7 for at least 36 months before I had to start servicing stuff like GPU fans and heat sinks.
This stuff can be designed NOT to melt the dang 12v rail
Ha ha, only serious. There have been cases on local news where homes burned down because a PC caught fire. Not only a circuit breaker - please give me something (an IR sensor?) that will trip and cut power if anything in the room or in the PC enclosure exceeds, say, 120°C.
In short: They couldn't replicate it, no conclusion yet, further testing needed. What's weird is that all their adapter cables are clearly different (better) than the one from "Igor's Lab".
For anyone who doesn't want to watch the video: the key differences that made the Gamers Nexus cables better than what Igor showed were that GN's cables were rated 300V vs 150V, and the solder joints where the cables meet the connector bus bar were higher quality.
Imagine trying to run a LAN party at a typical residence with a room full of modern, high-end gaming PCs... I thought we were saved on power consumption when we went from CRT to LCD, but nothing can compensate for the madness we are working with now.
Back in the early 00's, I had 20A breakers trip with ~8 high-end PCs on the same circuit. Today, I think we would struggle to handle half that number of PCs. My entire panel is only good for 150A of service, so I am wondering if I could even fill up a Battlefield match before my main trips.
The high end these days is insane. Huge diminishing returns in performance/watt and performance/cost. My current mid-range gaming PC (Ryzen 5600X, Radeon 6700XT) uses about the same amount of power as my 2008 mid-range PC (Core2 Q6600, GeForce 8800 GTS 512) while of course running laps around it in performance. Unfortunately this current generation across the board seems to be throwing GHz at the problem as if we didn't learn our lesson from Pentium 4.
I seem to recall things getting worse before the industry swung back to sane specs. The Bulldozer-era CPUs were toasters that could do 5-6GHz, and both AMD and Nvidia made high-power GPUs with 2 chips on a single board.
>My entire panel is only good for 150A of service, so I am wondering if I could even fill up a battlefield match before my main trips.
Keep in mind, while the US uses 120V at the outlet, virtually all residential service is 240V (typically two phases of 120V), so in theory your home has 240V@150A = 36 kilowatts of peak capacity. With some heavy-gauge extension cords to distribute the load to different circuits in your house, you'll likely run out of space or friends before you run out of power.
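Rough math (the per-PC draw is an assumed figure for a high-end rig, and the 80% continuous-load derating is the usual rule of thumb):

    service_w = 240 * 150        # ~36,000 W of theoretical panel capacity
    circuit_w = 120 * 20 * 0.8   # ~1,920 W continuous on one 20 A circuit
    pc_w = 800                   # assumed high-end gaming PC + monitor

    print(service_w // pc_w)       # ~45 PCs on the whole panel, ignoring everything else
    print(int(circuit_w // pc_w))  # 2 PCs per 20 A circuit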
It's still single phase, one AC sine wave. The transformer supplies power using three current carrying conductors, two hots, and a neutral. The voltage, or potential difference, between the two hots is 240V. The neutral is a center tap on the transformer, and the voltage between each of the hots and the neutral is 120V. This allows circuits to use either or both 120/240V.
You are correct, however, in your math for total capacity.
Back then components didn't tend to be as power bursty so I think it matters whether you compare peak power consumption or average power consumption under "typical" load.
A wall load from a gaming PC with a 4090 and 13900K is definitely going to burst past 700W, but the average while playing a game is probably lower than that.
I know it's not a "problem", but I wonder why they didn't just go with an actual high-power connector instead of just adding pins over and over again. The XT60 has been well tested in the RC industry and it's rated for 720W sustained and 3x that in peak. Just add a sense connector and you're done.
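For comparison, under the commonly cited ratings (treat the exact numbers as approximate):

    # XT60 is commonly rated around 60 A continuous; at 12 V:
    print(60 * 12)       # 720 W sustained
    print(3 * 60 * 12)   # ~2160 W for short peaks (the 3x figure above)
    # versus 12VHPWR: 600 W over 6 supply pins at ~8.3 A each
    print(600 / 12 / 6)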
Computer power supplies have (usually) multiple rails, which are multiple voltage regulator circuits outputting the same voltage. You wouldn't be able to do that with two wires. Also, I think the connector needs to be larger to distribute load, and the ATX style pin connectors are already used and no one wants to add a completely new connector type on their assembly line.
The current 12VHPWR connector already joins all the wires together on the GPU end of the connector. So multiple PSU power rails aren't an argument for why they couldn't have used two beefier wires and a connector.
Why even sense connector?
Like, what's that aspect needed for, here? I understand it for the lower-voltage rails, but the 12V GPU bulk power supply shouldn't need remote sensing, just mandate appropriate limits on voltage drop and rate cable resistance in a way that DIY builders can test their parts (not like we need high accuracy here, just need to 4-wire measure resistance to like +-10%).
But to answer your question on "why adding pins": electrical regulations regarding fire risks eschew PSUs that don't trip overcurrent protection when you hit roughly 20A on a single output, due to the risk of starting a fire if you were to dump that much power into a small place and happen to have enough resistance in that "short" to not trip the overcurrent protection.
That's why GPUs feed multiple power rails from the PSU and control themselves to balance their draw across their input rails to not overload any single input.
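To put numbers on that constraint (a sketch using the ~20A-per-output figure mentioned above; the exact regulatory threshold varies):

    import math

    rail_limit_w = 12 * 20                   # ~240 W before a single output should trip
    card_w = 600
    print(rail_limit_w)
    print(math.ceil(card_w / rail_limit_w))  # -> at least 3 separate inputs for a 600 W card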
And IMO as long as we let people build computers in cases that won't protect the room/building from e.g. a broken capacitor on the GPU's 12V side drawing 600W until the GPU is burnt to a crisp and the case fan has blown the flames out the back against the e.g. wooden shelf on/near the desk, we maybe should keep up the regulations that such fire-non-containing cases can't use PSUs where individual power rails willingly deliver enough power for an electrical fire to escape the case.
The only fire I have ever had in a local box was exactly this connector type - cheap, from China. I had no warning it was a hazard at the time, busy with thoughts of my (data science) project at hand. The event was of course at the wrong time, and could have been very, very serious. Please know your connectors!
Just looking at the article photos of the burnt connectors had tricked my brain into smelling the melted/burning plastic. And if you ever worked on your own rig, it's the worst smell ever.
For a little more than the price of a 4090 you can get double the memory with two 3090s. I’m still struggling to understand which is the better deal but the lower power draw of 3090 is nice as well. And I mean just 3090, not 3090 Ti.
AMD has said they won’t use the new power cable, and since the existing approach of using 2x or 3x 8pin works fine, I imagine they’ll continue doing that :)
They are also rumored to be more power efficient which would help things
The 4090's perf/$ is much better than the 4080's, so if you are using the card for anything except gaming it makes a lot of sense. For gaming, yeah, you'd need triple 4K screens on the latest AAA game to actually need that kind of performance.
The 4090 is a gaming card, and thus optimized for games, not compute.
Buying a card that costs that much and draws that much power to do CUDA is not a choice somebody would make within the realm of sanity.
>don't care about AMD.
AMD has HiP, which is supposedly an open CUDA.
I hear that it's enough to replace calls to "cuda" functions with the same function names using "hip" instead and then you're set, and it has swappable backends, working with both AMD and NVIDIA hardware.
Note that I can't vouch for it, as I haven't tried it, but with the swappable backends I'd switch immediately even if I kept using NVIDIA hardware, just to have assurance of not being under vendor lock-in.
That sucks, but the situation is improving. RDNA2 support was added early this year.
Now that AMD isn't constrained by financial dire straits anymore, they seem to be pushing the HiP ecosystem very hard, adding HiP support into relevant software and frameworks.
They were focused on CDNA cards first (aimed at datacenter/compute), but expanding it to cover RDNA2 (the gaming cards) is a key step forward; that's what potential developers already have for gaming, and thus important for HiP adoption.
Otherwise, the widely deployed Vega have worked with HiP for a long time. I agree the RDNA1 hiccup was ugly and hurt adoption.
AMD has Ray Tracing, too. In RDNA3, it might even perform really well. Nov 3rd is close, and we'll know by then.
>dlss
Current FSR (open, works on AMD, NVIDIA and Intel) is considered equivalent by reviewers. Differences exist, but neither is better than the other. These differences are now up to individual taste.
>rtx voice
AMD Noise Suppression.
>shadowplay
AMD ReLive. But you should be using OBS: it's better than either, and open source. That's what most streamers use, pro and otherwise.
I've never seen a single reviewer say FSR is equal to or better than DLSS, especially now with 3.0. I would like to see a source for that. I do like that it's open.
AMD's ray tracing isn't good yet and is years behind NVIDIA
Even if you use OBS nvenc is way better than AMD's offering. You'd just use nvenc through OBS, unless you want to dedicate half of your CPU to recording. Even if you do that you'll still take a performance impact in anything else you're doing.
AMD's drivers are so consistently awful that I no longer bother looking at their cards. And if you keep waiting for the next announcement in the pipeline then you'll never buy anything.
Is it unusual to have that severe an issue with a power cable?
I thought that was the whole point of going direct to the manufacturer, or to someplace less shady than Amazon, like Monoprice.
(I haven't ordered from them in a while, but while they don't ship as fast as Amazon I never had a failure, versus Amazon constantly selling me shit that breaks, then demanding I go print a shipping label and send them their less than twenty dollar hunk of whatever -- it felt like a very purposeful strategy to exhaust folks who were more than willing to do some research.
Then again, I have an uneasy alliance with "the gamers" since I switched to being a film snob in 2009 -- that's why they took out all the payphones -- it's a long story, but apparently someone formally complained to the FBI that I said they should ring the Vatican with Italian tanks and start shelling if they don't join NATO and pay the victims, so they're putting off legalizing Jenkem and taxing it for revenue for another 69 or so years.)
My landlord recently told me about his other tenants having $600 power bills before swapping to a locked rate. I wonder if the lad with the epic gaming PC is running a 4090.
Right. So when the house somehow burns down and it turns out the fella's super rig had one of the first 4090's; am I supposed to go "Oops. Sorry landlord. I knew this GPU had issues, and you asked me how much power these sorts of things use, but I decided to ignore the possible consequences; just to keep my nose out of others business?"
How about the families of the 5 people in that house? What should they be told?
"Sorry folks. I knew this GPU has issues and could possibly cause a fire, but people on the internet decided I was a bad person for doing the right thing, so I ignored the potential issue and didn't let the landlord know."
Yeah, no. Every gamer with a nosy neighbor does not need to be “protected” by people like you telling the landlord they’re going to burn the house down.
Except it was made my business when the landlord asked my opinion on why their house was using so much power, and he told me they had a high end gaming rig, multiple fish tanks, and Air conditioners going.
Nope, what your neighbors do in the privacy of their own home is none of your business even if the landlord asks. I don’t know why that isn’t obvious to you.
Except they aren't my neighbor. It's a completely different household. As I stated pretty clearly... "My landlord recently told me about his other tenants"
So, I have been involved via the landlord asking my opinion due to my knowledge of technology being a bit better than his.
But here people are, acting like they are somehow 'teaching me a lesson' while acting holier than thou, when they aren't. lol
And for the record. I did offer to go do a quick check with him to help rectify the situation so that everyone is being treated fairly, since he isn't that great at calculating things like power usage.
But instead he said no, cause he figures he can do the job well enough on his own. But now that this news has come up, I thought about the situation again, and figured I would leave my comment.
Clearly a mistake. I clearly should have just done what I thought was right, and not ask a bunch of people on the internet their opinion. That was dumb of me.
I'm proposing the difference between the house full of 5 people potentially burning down, over the possibility of saving them all that trouble by letting the landlord know that the newest GPU's have a problem with their cabling if the owner of said GPU is not careful.
One of these things is called 'being a responsible person'
But what I do know is the landlord said the fella is having a hard time not tripping the breaker with his new rig, and that their power consumption went up ever since he got the rig. So I figure that maaaybe he got the newest Nvidia GPU.
You all who are reacting the way you are, are over reacting. I haven't done anything yet. I asked a simple question, and you're all losing your mind.
If you read correctly the first time, you would notice that I said my landlord had asked me about the situation once already, but in regards to power usage.
So I am already involved. I was not being nosey. That's just your attitude being put out of place, and your ego showing.
And at this point, I don't care if I am breaking the rules of this site, because if it harbors people like you, then its trash.