Attacking the Windows Nvidia Driver (googleprojectzero.blogspot.com)
205 points by spaceboy on Feb 16, 2017 | 74 comments



A lot of people hated the decision, but back when Microsoft refused to support WebGL, one of the reasons was that the GPU drivers were awful and exposing them to the internet was dangerous.

The proper solution, of course, is for GPU drivers not to suck, but it was still a legitimate point, and this article seems to validate that.


They were right that it was dangerous, but it became pretty clear that IE was using them to provide cover for not implementing a feature when they changed their mind later.

Whereas Chrome implemented a WebGL compiler that reduced the amount of attack surface WebGL could reach and audited a bunch of popular drivers to fix the exposed bits.

I think the fact that this blog post shows these vulnerabilities are not reachable from WebGL is a validation of Chrome's approach there, though it also clearly shows the attack surface these drivers present for escaping Chrome's sandbox.


I think there's probably still a fair bit of attack surface behind glReadPixels() and the like. All it takes is a single backbuffer/texture/surface/etc. not being memzero'd properly and you can start looking at parts of the system's memory.
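
A concrete sketch of what such a probe could look like, assuming a current desktop GL 3.0+ context with entry points loaded (e.g. via GLEW); the helper name is hypothetical and this is illustrative, not the exploit from the article:

    /* Allocate a texture but never write to it; if the driver hands back
       unscrubbed video memory, the readback may contain stale data left
       over from other processes. */
    #include <GL/glew.h>

    void read_uninitialized_vram(int w, int h, unsigned char *out)
    {
        GLuint tex, fbo;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* NULL data pointer: storage is allocated but never initialized. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        /* Wrap it in a framebuffer and read it back without ever drawing. */
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, out);

        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &tex);
    }

(WebGL, notably, requires fresh resources to read back as zeros precisely to block this, so the scrubbing burden falls on the native drivers.)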

GPUs are both really complex and highly secretive about their implementations. The incentive for GPU vendors is to write fast drivers. Security is pretty far down on the list, esp when it competes directly against performance.


>fair bit of attack surface behind glReadPixels() and the like...

>...you can start looking at parts of the system's memory

I thought the whole point of using glReadPixels(), as opposed to just dereferencing a pointer in the system's address space, was that the framebuffer memory it accesses (whether backed by a texture or a surface or whatever) is GPU memory, and not system memory?


That's true on split-memory architectures (desktop), but most mobile GPUs (and a couple of consoles) use a unified memory model.

Also most browsers use the GPU to speed up rendering so you can pick out things from there too potentially.


I wonder, do you really have to write zero to all the memory cells? Or can you just stop the refresh cycle in hardware and let squares of memory drain and die in one or two cycles?


Even without refreshing, the cells actually retain their contents quite a lot longer than you would expect. Here is an old research paper on the topic, but unfortunately it looks like all of the images and videos are broken, which is a shame because they gave you a great mental picture of the "half-life", so to speak, of RAM.

https://citp.princeton.edu/research/memory/

The bottom line is that you would need to stop refreshing for minutes, if not longer, to be sure there wasn't an information leak. If the memory is cooled down, it'll last a great deal longer without being refreshed, and can even keep the majority of its contents after hours of being removed from a running system if cooled with liquid nitrogen. Either way, one or two cycles isn't going to matter at all.


The latter would take far longer, even if it were reliable. So not really; it'd need hardware support to be reliable, and if chipmakers are willing to do that, then they could implement hardware zeroing rather than require the OS to wait several seconds for charges to fully dissipate.


Hence this case where someone's porn was exposed from a previously-closed incognito mode session:

https://www.extremetech.com/computing/221208-nvidia-blames-a...


There's a nonzero risk there, but this seems like a relatively narrow problem that would be much trickier to exploit than any memory corruption bug.


It's also worthwhile pointing out that at the time MS were pushing Silverlight heavily, which provided GPU access through XNA and made itself accessible through a browser plugin by default.

MS's security side may have been against WebGL and the IE team either agreeing or using it as an excuse (and really, from this point of view, it doesn't matter!), but other parts of the company were exposing the web to the exact same problems.


The post specifically mentions that these things are not vulnerable to attack from WebGL, because you can't reach the APIs in question just by drawing.


The parts now exposed to WebGL are pretty well fuzzed at this point; much of the low-hanging fruit has now been picked.


Microsoft changed their minds and this article validates that the fears were mostly unfounded. Browsers can do enough validation of their own to mitigate most driver issues.


The same with sound.


I can't prove this, but my computer blue screened after an Nvidia driver update. It took me several hours to get everything working again because it wouldn't even launch in safe mode. Very frustrating. I wish they had a little more quality control on their drivers.


I ran into this a while back; there was a BSOD in the GTX 980Ti drivers that only got triggered by multi-monitor setups. Hilariously, the solution was to use a newer beta driver that hadn't passed WHQL yet. (Rolling back would've worked as well, but I was updating specifically because there was a bug in the previous driver: WebMs in Chrome would start out mostly black w/ fuzzy blocks, and after a loop of the content they'd work fine.)

They make cards w/ 4x DisplayPort connectors, but apparently multiple monitors isn't part of their quality assurance process? That seems a mite silly to me.


I've got 4x GTX 690s: 3 dedicated to GFX/VFX, 1 to physics. I've seen a number of GSODs with bugcheck VIDEO_MEMORY_MANAGEMENT_INTERNAL. It always happens when I've got one monitor streaming news and the primary monitor playing Civ VI.


Interesting. I have an AMD card and experience that too.


Oh, trust me, they do tons of QA. If you can demonstrably replicate the problem you have, with steps that QA can follow, and you submit it to the Nvforums or whatever, someone on the QA team will eventually try it out.


I once interviewed for a Software Engineering role with an Nvidia QA team in Austin 5 or 6 years ago (would have been working on maintaining the test software that validated new drivers on huge banks of test hardware, if memory serves).

Wasn't too terribly impressed with the team (and a couple of them were definitely giving off that "I hate my job/life" vibe, one disgruntled fellow was even trying to drop little thinly-veiled "run away!" hints at me). I figure either the cream of the crop at Nvidia doesn't work in QA, or they don't get proper support from upper management.

P.S. Didn't get an offer anyway, probably for the best - I was desperate for work at the time and would have taken it :)


After the "Tom" incident I'm not surprised to hear about anything that happens inside NVIDIA. That presentation, months of bad drivers (yes, I was using DDU) and the GFExperience login thing made me switch to another brand.


I didn't know what the "Tom" incident was, but assume it is this, which is deplorable:

https://www.youtube.com/watch?v=bGrvahez1H8&feature=youtu.be


There is one with better sound

https://youtu.be/iAueZ_VWBUU



Yeah, Nvidia rather sucks at cleaning up after themselves, for whatever reason.

The main reason I'm not buying another card from them is their stance towards Linux in general and their stance towards PCI virtualization in particular.

I'm currently not upgrading drivers due to the latter; they're trying to disable the capabilities (dedicating a card to a VM) that are the reason I bought a second card in the first place.

I can usually deal with either lazy or greedy, but both in conjunction is infuriating. Screw those guys.


If you're talking about GPU pass-through, they're just trying to make it hard, but they're not making it impossible.

If you're talking about virtual GPUs, where one card is split across multiple VMs, however, unfortunately that's Tesla only. That said, I worked (as the maintainer of KVM) with the nVidia driver people working on vGPU, and I was very impressed. They were very knowledgeable and professional, and they managed to contribute a generic Linux framework for virtualizing PCI devices rather than a one-off hack specific to nVidia. Intel is using the same framework now, in fact.


I know it is not currently impossible, but I don't trust that the current "policy" will continue.

Personally, not talking about virtual GPUs. (I mean, that's cool stuff, but that's not my use case.) I'm glad you found the driver engineers to be solid. I doubt, however, the engineers are driving the decisions on what passthrough functionality Nvidia feels like allowing consumers to have this week.

Or driving decisions like this: https://devtalk.nvidia.com/default/topic/579449/linux/basemo...

I personally consider removing functionality after I purchase something to be a form of fraud. And Nvidia doesn't seem very shy about doing it. Thus, I don't trust them, and don't do business with untrustworthy vendors.


I think they want to limit it to Quadros to be more precise.


Yes, exactly. But they don't try _that_ hard.


I had to revert the Windows 10 Anniversary Update to work around that.


why?


The crash-on-full-screen-video bug started on my laptop after installing the Anniversary Update (v. 1607). I have a hybrid Intel 4000 / NVIDIA NVS 5400M video card, similar to the setups described in realharo's links. There was no driver fix available (and apparently there still isn't), so I had to revert to Windows 10 v. 1511 to work around the bug.


Yeah - that sounds like fun - recreating a driver install issue after you finally have it working on your system. Heck, even if you set up a system -just- to recreate this problem, it would still be one of those "nightmare inducing" trials of will. I'm not saying it shouldn't be done, I just can't imagine doing it (especially on my personal workstation).


Plus, you paid for the computer, graphics card, and OS. Why in hell would you spend hours of your free time on this? Do people try to find defects in their car and then send a report to Ferrari?


> Do people try to find defects in their car and then send a report to Ferrari?

Yes.

-- Ferruccio Lamborghini.


The problem is not with the car but with the driver!

-- Enzo Ferrari

[Closed. Will not fix.]


> Why in hell would you spend hours of your free time on this?

Sure, you can either return the items, or report the bug, or hope that you've managed to find a workaround.

There's no need to discourage people who want to spend their own time on getting the bug fixed so that others benefit. Not everybody thinks the same way you do.


Oh, I report bugs all the time, for open source projects. But when I pay hundreds of euros for a product, I expect it to work. Strangely, that's something we don't expect for anything related to computing.

We have such a double standard.

If you buy a washing machine and it doesn't work, the brand sucks.

If you buy a graphics card that makes your OS crash, it's just that nvidia needs a little help. Replace nvidia with any gadget or software.

Windows used to crash all the time, and people found that normal. The same problem with a microwave would have triggered a massive recall, but microsoft got away with it.

Well, no, sorry. You just sold me a non-working product, wasting my money and my time, and preventing me from doing the task I was going to do with my computer.

I'm a dev, I understand perfectly WHY it happens. Complexity VS expectations VS cost. I get it. But the consumer is cheated here.


I know what you're saying, and I do agree, but as you know, not all problems will be Nvidia's fault.

I guess it depends on how obscure/serious the bug is, but the reason we can't have everything working always is because people want to be able to put whatever random hardware and software of their choice together and have it all work. It's not that simple.

There's a lot of stuff that has to go right for things to not have problems. PCs have become insanely complicated and there's almost unlimited chances for a particular combination of hardware/software to have bugs out there.

If you paid (a lot) more for a certified system, then maybe you could expect it to work flawlessly. Are there vendors that do that? I guess it would have to be for only a particular software setup as well.


I see your point. But most people don't want to put whatever and have it work. Most people don't even know what a graphic card is.

It's the vendors that want to be everywhere to make more money. But they can't provide the quality with the increasing complexity.

That's why Apple's setups are often considered good quality. They solved the problem by limiting the hardware and software combinations.

I dislike Apple but I can at least see that their strategy worked.

The problem I have with this is that proprietary software has the same order of magnitude of problems that free software has. But I paid for the first one, and the people writing it are paid.

I can accept bugs in FOSS because of its very nature. But if you can't give me a guarantee that my product will work, what did I pay for?


I understand your view, I just don't see why others should also adopt it.


That's the Next Big Thing in software engineering: lay off your QA department and shove the burden onto the users, who work for less than minimum wage. Actually, they pay you to find your bugs.


I mean, if you do this, you're expediting the process, not "doing their work for them".


You're being downvoted, but I had the exact same behavior last week. In the end I gave up trying to fix it and reinstalled Windows to a new drive; at some point I'll wipe my old 'C' and move my new 'C' back to the fastest boot drive.


That is pretty weird. Safe mode boot should still work even if the installation was botched.

You can try to remove all the NV kernelmode driver files manually, at which point it should just fall back on the default VGA driver. Then you can use the DDU[1] tool to clean up any remaining files, and do a clean install with a driver from nvidia.com.

Do you perhaps remember what the bugcheck code was, and which driver was listed as the offender? If you have a kernel minidump still available, that'd be helpful as well.

[1]: http://www.guru3d.com/files-details/display-driver-uninstall...


Have you tried using Display Driver Uninstaller? This is something I do every time I update the driver. Run the program, tell it to start in safe mode, run the program again, tell it to uninstall the driver. Reboot. Install new driver package, run DDU again to re-enable automatic driver install (because you need it for other devices) and done. Works like a champ and always fixes any odd driver behaviors.

Link to DDU: http://www.guru3d.com/files-details/display-driver-uninstall...


So, yeah, that sucks for sure and I'm with you, but imagine you're a driver dev and every mistake you make can now potentially blue screen the OS. Life in user land is very different.


I often have an issue with their "clean install" option where it uninstalls the current driver, reboots, fails to boot because there are no drivers, and then doesn't install the new version. Very frustrating.


What I'd like to see is a "slimmed down" version of the graphics drivers.

Right now my graphics driver carries a boatload of "utilities" of questionable utility - especially "hand written artisanal shaders" to improve quality in AAA titles, where NV or ATI optimize the game's original shaders. NVIDIA ships stuff for 3D glasses.

How about the driver packages only load these when asked to do so, e.g. when a game that can be optimized is installed or launched, or when 3D glasses are plugged in?


Hear, hear. Can they also please be less than 100 megs in size? I think nVidia's clock in at 300 right now. I just want the driver and the recording/streaming software. Maybe you could open up the required API to write a third-party ShadowPlay...


Yeah, the NV mobile driver clocks in at 292 MB (W7 x64) right now.

One thing that certainly blows up the size is that the NV driver installer bundles support for everything back to the old NV 8600M - which IIRC was released in 2007.

If there's one thing I certainly can't whine about, in times when phones get less than 2 years of updates, it's that NVidia still provides up-to-date drivers for a GPU chipset nearing a decade of lifetime.


I agree, but I recently bought a gtx1060, which meant I had to update to the latest drivers, which unfortunately don't support the other card I have in that machine (I forget what model, it's either a gt520 or a gt210). I only wanted to use that other card to connect extra monitors.

You can't run two Nvidia cards off different driver versions.

Anyway, the gtx1060 supports four monitors by itself which is enough but I needed to buy some cables.

Nvidia drivers are super bloated though.


I wonder if AMD drivers have the same attack surfaces as the nvidia drivers.

Edit: a quick google search turns up CVEs for the old catalyst driver, but none for the newer crimson drivers.


Too bad my laptop can't use those crimson drivers, as AMD no longer supports the APU part of the setup that makes things like external displays work (and will you please stop trying to be helpful and silently "upgrading" my drivers, Microsoft!).


You can disable driver updates through group policy.

https://technet.microsoft.com/en-us/library/cc730606(v=ws.10...
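
For what it's worth, I believe the same policy is backed by a documented registry value (ExcludeWUDriversInQualityUpdate, Windows 10 1607 and later), which helps on editions where the Group Policy editor isn't available:

    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v ExcludeWUDriversInQualityUpdate /t REG_DWORD /d 1 /f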


Could have sworn that MS had disabled GP access on "consumer" grade installs.


> and will you please stop trying to be helpful and silently "upgrading" my drivers, Microsoft!

Or rather: I finally got it to appear to work. The documentation didn't really say how it works, and why it works now I don't really know. I never understood it well and, instead, just threw things against the wall in various ways until they appeared to stick.

Another point is, after such a remote Microsoft change, I no longer really know what went into my system or relevant boot partition or how to rebuild it starting with what I already have. That is, when I built the system, I took careful notes on just what I did so that I could retrace if necessary; with remote changes from Microsoft, my notes are now inaccurate. My system is no longer reproducible. So, if next week I make a change that ruins my boot partition, then I can't rebuild to just before the change -- unless I have backed up that boot partition with, say, NTBACKUP which I do use. Indeed, with the system I am building now, I'm planning a boot drive with several bootable partitions and a second drive with backups via NTBACKUP (if that is still the best approach on Windows) of various increments of the boot partitions.

That is, in general I very much want to know, quite generally and in as much detail as possible, (A) just what the heck is on my system and (B) how to get back to some earlier state.

All this skeptical caution is a special case of rules:

(1) If it ain't broke, don't fix it.

(2) If anything can go wrong, then it will.

(3) There is no independence; if you change one thing, then no telling what else may be affected.

(4) The fundamental perversity of material objects.

etc.

I very well remember, from when I was at IBM's Watson lab and we visited some high end customers, how when IBM came out with a fix, update, change, or new version, the site would run the change for months just on a test system before they permitted it on-line for production work. Part of this was, if the system crashed for an hour, then the CIO could lose his bonus. Two such in one year, and he could lose his job. They very much did not want systems that crashed. They were very clear about that.

Once, out of IBM, I visited the main NASDAQ site in Trumbull, CT: They were doing their core work on Non-Stop systems. IIRC, their attitude was that their Non-Stop systems didn't stop and didn't need updates.

Here is some irony: First the vendor sells their system as the greatest. Second, the vendor says they have an update.

Hmm .... If their greatest needs an update, then how about the update?

To borrow from the movie The Treasure of the Sierra Madre, "Updates? What updates? We don't need no stink'n updates."


Project Zero is awesome. I'm not normally a fan of the GOOG, but these guys are doing some seriously good work.


Getting these issues fixed in the drivers is great, no doubt. But it's moot if nobody actually updates to those fixed versions. How hard are nVidia/Microsoft trying to push these out over Windows Update so that end users will actually benefit from all of this work?


Anyone who's into gaming probably has GeForce Experience installed, which manages drivers and gives notifications.

I want to say Microsoft will push them out a little later, but I can't be entirely sure since I've always used the nVidia path.

EDIT: Unfortunately the GeForce Experience is getting, as is typical, super invasive. Access to even basic settings requires an account (nVidia or Facebook account, etc).


Recently removed GeForce Experience. Graphics drivers should not require an account.

Considering that computer configuration is unique enough to enable fingerprinting across different browsers, I can't even see why it's required.

At least Experience tells you it's being creepy so you can remove it.


Notifications for new drivers are nice, but I'll get around to updating them anyway without notifications. GeForce Experience is terrible. The more invasive it gets, the less I want it installed.


Can someone give me a layman's definition of an "escape" and why they would be legitimately needed? Are they needed so callbacks can "escape" and be exposed to other classes? (does that make any sense?)


As the article indicates, they're similar to ioctl, which is a system call that, roughly speaking, allows arbitrary opaque blobs of data to be passed back and forth between a user-mode process and a kernel-mode driver. This is intended as a generic mechanism allowing drivers to expose arbitrary functionality to user-space. This enables the implementation of functionality that would not otherwise be possible because it was not foreseen and enabled by the design of the operating system.

Bear in mind that the system call ABI changes slowly and with much difficulty: once a version of a kernel is in production, it can stay in use for a long time; it can take a long time for new functionality to be broadly available, and breaking back-compat with applications compiled against older ABIs is Not Done. Dynamically loadable kernel modules and ioctl-like system calls make it much easier to bring new functionality to all the various kernels running around in the real world.

Given the complexity and rate of change in graphics tech, it makes perfect sense for there to be a general-purpose arbitrary functional-call mechanism for interaction between user-mode and kernel-mode graphics driver components. Microsoft (or the linux graphics subsystem maintainers, etc) just doesn't know enough about Nvidia/AMD's current and future requirements to nail down a more rigorously defined API.
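
As a rough illustration of the pattern (not Nvidia's actual private interface: the device name and control code below are made-up placeholders), the ioctl analogue on Windows looks like this:

    /* Open a device and push an opaque blob at its driver through the
       documented DeviceIoControl call; the kernel side dispatches on the
       control code and parses the blob however it likes. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE dev = CreateFileA("\\\\.\\SomeDevice",
                                 GENERIC_READ | GENERIC_WRITE,
                                 0, NULL, OPEN_EXISTING, 0, NULL);
        if (dev == INVALID_HANDLE_VALUE)
            return 1;

        unsigned char in[64] = {0};   /* opaque request the driver parses */
        unsigned char out[64];        /* opaque reply buffer */
        DWORD got = 0;

        if (DeviceIoControl(dev, 0x222000 /* placeholder control code */,
                            in, sizeof in, out, sizeof out, &got, NULL))
            printf("driver returned %lu bytes\n", got);

        CloseHandle(dev);
        return 0;
    }

Escapes enter the kernel through the display driver interface rather than DeviceIoControl, but the trust problem is identical: the kernel-mode side must validate everything that arrives in that blob, and the article's bugs are cases where it didn't.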


Thanks, I know nothing about this level of programming but this gives me a lot of good stuff to google.


It is very similar to an ioctl on Unix (DeviceIoControl on Windows), just specific to the Display/Video driver here. It is a way to send arbitrary data from usermode into kernelmode, for whatever reason.

You can imagine the usermode interface as:

    void escape(int command, void *param);
And then the kernelmode implementation would look something like:

    void escape(int command, void *param)
    {
        switch (command)
        {
            case COMMAND_FOO:
                do_foo((foo_param_t *)param);
                return;
            case COMMAND_BAR:
                do_bar((bar_param_t *)param);
                return;
        }
    }
The driver defines the params and what FOO and BAR are. This can be used to issue special commands that don't have an interface provided by MSFT. It is also used by any drivers that run in usermode (e.g. OpenGL, CUDA, etc) that communicate directly with the kernelmode ones. These interfaces are generally not public. The project zero researcher has disassembled the kernelmode driver and reverse engineered their format.

Does that help?


Definitely, thank you for the pseudocode examples!


It is pretty much just the name for their API.


Slightly off topic but with all the video expertise in this thread, I want to ask anyway! :-)

I'm currently selecting parts for my next computer, to be used for continued development of the Windows .NET software for the Web site for my startup and also for my first Web server available to beta testers and then to the public on the Internet.

So, sure, I need a video card. Of course, I will do some routine Web browsing, maybe watch a movie at YouTube or Netflix. But I have never played a video game and, trying to get my business going, have no intention of playing a video game.

So, looking at information on video cards, it appears that maybe the card should support hardware acceleration of Microsoft's DirectX version 12 and also maybe some recent version of OpenGL.

Question 1: Why should I move beyond just VGA? That is, why not get just a VGA card and not even a graphics card? What will I get from a graphics card that I really need and can't get from just VGA?

Question 2: If I get a graphics card, will DirectX 12 hardware acceleration on a graphics card help for some of Web browsing or movie watching?

Question 3: Same as Question 2 but for OpenGL?

Some people on this thread may have some good answers. As far as I can tell, good answers on the Internet are like hen's teeth -- it looks like everyone wants to sell graphics cards for the latest gaming experience.

Thanks!


Disclaimer: NVIDIA employee.

First of all, if your CPU has an integrated GPU, and you don't need more monitors than it supports (usually it's 3x1080p), that will be more than enough.

> Why should I move from just VGA, that is, get just a VGA card and not even get a graphics card?

I don't quite understand what you mean by VGA card. You mean something that has a VGA adapter and framebuffer(s), but the rendering is done on the CPU?

I wasn't aware those still exist outside some niche markets. I'd guess it'd cost about as much as an entry level GPU, which will take the load off your CPU.

My advice, if you don't have any iGPU on your CPU, is to just get the lowest tier graphics card. Those are <$100 new for the latest generation. You don't need latest, and probably don't need new.

When it comes to web browsing and watching videos, any remotely recent card will work fine. You may have issues with some fancy WebGL pages (i.e. browser games) but that hardly counts as everyday browsing.

Be sure to read a review of the card before purchasing!


Thanks. I was slowly beginning to conclude much of that.

The CPU I plan to use is the AMD FX-8350, with 8 cores running at 4.0 GHz and 125 Watts. So, no, it has no integrated graphics support.

For a "VGA card", I just meant a video card supporting all the old VGA standards but without a graphics processor. So, there would be no "hardware acceleration" of OpenGL 4.5 (or some such) or DirectX 12 (some version of). Yes, there would be a standard VGA plug (socket, connector, etc.) for the signal connection to the monitor, but many high end graphics cards also have that.

Yes, looking, it's possible to find just a VGA card, that uses an old PCI slot, for about $20. But, a low end graphics card can go for about $30 or $36 with 1 GB of memory of its own, a graphics processor, and "support", likely hardware acceleration, of OpenGL and DirectX.

Apparently by Windows 10, DirectX 12 is regarded as a standard part of Windows.

My old computer, which I assembled in 2007 and which, apparently due to motherboard hardware problems, does the blue screen of death (BSOD) -- really, the screen goes black instead of blue -- about five times a day, has an old nVIDIA GX 4000 with 64 MB of memory. As far as I know, the card has been fine. I never knew that the card had any graphics capabilities until two weeks ago, when I ran the standard Windows utility DXDIAG, which showed that the card supports DirectX 9, and the card put up a nice rotating cube of the DirectX logo. Okay. So, maybe the graphics processor in the card can accept a gazillion triangles in 3D from the CPU, motherboard, and applications software, do rotations, hidden line removal, shading, maybe texturing, etc. Okay, but since 2007 that is the first time I ever saw such a thing!

I have been concerned about statements, e.g., that some graphics card needs the PC's power supply to have a capacity of 300 Watts or more. Gads! That's a lot of power! Looking in more detail, apparently such graphics cards actually draw a maximum of only 25-40 Watts at 12 Volts, that is, <= 3.3 Amperes, which seems acceptable enough for the 650 Watt power supply I'm planning, the case cooling I'm planning, etc. I will be sure to use some of the standard ASUS software to monitor the 12 Volt lines from the power supply -- I doubt that the voltage will ever fall significantly below 12 Volts. The 12 Volt lines from the power supply are used for what, just the cooling fans, the hard disk drives, maybe the power on the USB ports, and, apparently, power to the PCI-Express slots? Gee, the pulse width modulation (PWM) of three of the cooling fans will put some fluctuation on the 12 Volt lines that will mess up a graphics card? I doubt that!

You are correct about WebGL -- I doubt I will be visiting Web sites that use that. I'm less clear about scalable vector graphics (SVG). I don't see even from 50,000 feet up how ordinary Web browsing, say, displaying JPG or PNG still images or playing MPG4, YouTube or Netflix, or DVD videos could be helped by having a graphics processor -- tough to find such explanations. Do graphics processors routinely help display fonts faster?

I will have a 2 TB hard drive for bootable partitions. I will install Windows 7 Professional 64-bit on two boot drive partitions, say, drive letters C and D, and use one of those for my remaining software development for my Web site. Using likely the standard Windows utility NTBACKUP, which I like (e.g., it will back up a bootable partition while it is running, likely much like how a relational database does a backup while it is executing transactions, and I can save the result to any disk drive I want just by an ordinary copy operation), I will save both bootable partitions to a second hard drive. Then if, say, partition D gets sick and the usual Windows restore is not good enough, I will boot partition C and restore the sick partition D from one of my NTBACKUPs on the second drive.

Some years ago when I was trying to install an Express (free) version of Microsoft's SQL Server, my boot partition contents were corrupted, really, destroyed, and I had to reinstall everything starting with an empty partition. Bummer. I want NEVER to have to do that again: Before I do any possibly dangerous maintenance, installations, or upgrades to a bootable partition, I will just save the whole partition with NTBACKUP. Then, if the partition gets messed up, I will just boot another bootable partition and restore the backup from NTBACKUP and try again.

Then I will install, again on two partitions, some version of Windows Server, likely 2012, and SQL Server of about the same vintage, and that will be the basis of my Web site as I go for beta testing and live on the Internet.

The Web site HTML sent to my users will be only just dirt simple HTML, say, up to date as of about 10 years ago, with just a little, simple CSS and nearly no JavaScript, no pop-ups, roll-overs, pull-downs, over-lays, or icons and no HTML <div> elements (tags?) -- dirt simple. I will have a simple logo graphics PNG I developed with just Microsoft's PhotoDraw, and that will be the only use of graphics. Net, for the Web site, I see no need for any graphics hardware, for development, server, or clients.

I see graphics cards from both nVIDIA and ATI for $30-$40, with 1 GB memory, OpenGL, and DirectX 12, that use a PCI-Express x16 version 2.1 slot. The Asus motherboard I have in mind has a PCI Express x16 2.0 slot, which I suspect one way or another will work well enough with a card that wants version 2.1. I suspect I will make a decision today.

I'm still not very clear on just why I need a graphics card instead of just an old VGA card, but for just another $16 I'm going to spend the money, accept whatever system management mud wrestling I have to do to get an appropriate device driver working, quit worrying about the card, and get on with the more important work.

Thanks for the info.


> Most of the vulnerabilities found ... were very basic mistakes, such as writing to user provided pointers blindly, disclosing uninitialised kernel memory to user mode, and incorrect bounds checking.

It's not just Windows drivers with these problems...


Some specific numbers:

The patches from Tuesday fixed a total of 16 CVEs. 11 are Windows only, 4 are Windows+Unix[1], 1 is Unix only. 3 of those (Windows) CVEs were reported externally.

http://nvidia.custhelp.com/app/answers/detail/a_id/4398

Older bulletins: http://www.nvidia.com/object/product-security.html

[1]: Unix in NVIDIA-speak generally means Linux+Solaris+FreeBSD



