NateyJay's comments

Why not both?


Two-way radios can often connect to the telephone network through a bridge at the radio repeater.


What he said


Recommended practice is to timestamp Windows drivers (and software) when they are signed. Without a timestamp, the driver is not trusted after the signing cert expires, which I guess is what happened here.

With a timestamp, as long as the signing date was within the signing cert's validity period, the signed driver continues to be trusted beyond the signing certificate expiration.
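A minimal sketch of that trust rule (function and argument names are mine, not Windows internals):

    from datetime import datetime
    from typing import Optional

    def driver_signature_trusted(cert_not_after: datetime,
                                 countersigned_at: Optional[datetime],
                                 now: datetime) -> bool:
        """countersigned_at is the time asserted by the trusted timestamp
        server's countersignature, or None if there is no timestamp."""
        if countersigned_at is not None:
            # Timestamped: trusted indefinitely, as long as it was signed
            # while the signing cert was still valid.
            return countersigned_at <= cert_not_after
        # No timestamp: trust lapses when the signing cert expires.
        return now <= cert_not_after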


That seems silly. Presumably a cert has an expiration date after which we might assume it's been compromised. If it has been compromised, then it could have been used to backdate a driver signed with it. In other words, if you don't trust the cert, you should not trust anything signed by it. Or is there another layer in this somewhere?


The timestamp server is a separate trusted entity that signs the signature asserting the date and time. It's not just metadata, it's effectively a separate signature.


Which just means the expiration date is meaningless.

If the driver was valid when it was signed, then revoking it will break the system. Not installing it is another story.


The expiration date is the fallback if you don't have confirmation from the timestamp server that it was signed prior to expiration.

Ideally it's not used except by the timestamp service, but it seems like a fairly reasonable fallback.


> The expiration date is the fallback if you don't have confirmation from the timestamp server that it was signed prior to expiration.

The fact that the driver was installed locally before the expiration should be taken as proof that the driver was signed before expiration.


Then you would need an internet connection just to install a driver. It would make getting your network driver installed pretty difficult.

You could look at the system clock but that was not designed to be secure for this purpose.


> Then you would need an internet connection just to install a driver.

If you think I'm proposing any changes to how drivers are installed, then you have misread me. I'm proposing a change to how already-installed drivers are handled: absent any new information, the code that was trusted yesterday should be trusted today, and be allowed to keep running.


Imagine a scenario where a driver is installed during a network outage and with an incorrect clock. Because you need to be able to install a network driver, the system will allow this security flaw. However, when the system knows better, it's reasonable to limit the damage by stopping the driver.

You could say that any damage has already been done, which is most likely true. But I can't fault them for mitigating it as much as possible.

I suppose you could modify the system to get external attestation of the time when the driver is installed and use that as a sticky bit - but it's a big complication, and it's much better if the driver is securely timestamped in the first place.
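A rough sketch of that sticky-bit idea (everything here is hypothetical; Windows has nothing like this today):

    from datetime import datetime

    # Hypothetical persistent store: driver name -> externally attested time.
    _attested: dict[str, datetime] = {}

    def record_attestation(driver: str, attested_now: datetime) -> None:
        """Run once, when the system first obtains a trustworthy external
        time (e.g. from a timestamp server) after the driver's install."""
        _attested.setdefault(driver, attested_now)

    def still_trusted(driver: str, cert_not_after: datetime) -> bool:
        """Sticky bit: if we ever attested a time inside the cert's
        validity window, keep trusting the driver after the cert expires."""
        t = _attested.get(driver)
        return t is not None and t <= cert_not_after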


> Because you need to be able to install a network driver the system will allow this security flaw. However when the system knows better its reasonable to limit the damage by stopping the driver.

The only way that the system "knows better" is by acquiring something like a certificate revocation list. The system does not know whether it was powered down for five minutes while the network outage was fixed, or for five years. When the system is powered back on with a working internet connection, it does not have any reliable way to tell whether the offline installation of the network driver occurred prior to the expiration, or after the expiration with a properly backdated driver and backdated system clock. There is no way to justify suddenly de-trusting a driver that's already been running simply by observing that you're in the future.


Even then you would only need to verify that once, and could save a timestamp in case the cert is revoked afterwards. Breaking a system that has already been verified is still unjustified.


Can’t Microsoft give you an error report when they do this, to let you know that what you are doing is probably very dumb?

I guess I don’t know at what point Microsoft would have their code and their contact information and be doing some kind of preflight check, or if that ever actually happens, and there are already so many ways to be very dumb with drivers...


>Which just means the expiration date is meaningless.

How so? It merely limits the dates on which you can sign code; after expiration, the code you already signed remains valid, but you can't sign any more code.


> it merely limits which dates you can sign code, after which the code you signed remain valid,

The whole problem here is that the code that was signed is not being treated as valid code beyond the expiration date.


A physical memtransistor will be a lot faster than emulating it in software, but you'll also be limited by the original connections and design of the chip. Not as easy to reconfigure as writing new code.


I mean, ideally I think it would be written as an FPGA-like piece of hardware where the connections can be synthesized from a high level description language. A la [0].

——

[0] https://web.stanford.edu/group/brainsinsilicon/index.html


Then you will rapidly run into the same problems that FPGAs have, which is that all that connective tissue isn't free and has to pay for itself. That turns out to be a pretty steep bar to leap.


Well, the problems of FPGAs have more to do with the synthesizability of high-level HDL and having to fit that generic representation into a predefined LUT model. It's alleviated quite a bit when you are designing an 'application specific' FPGA with a different substrate. IIRC MathStar tried to do this some time ago, and so did Ambrics; at the time it suffered from solution-looking-for-a-problem syndrome and failed.

The big idea here is that IF your base element (the memristor crossbar here) is suitable for such a rapidly reconfigurable bus architecture (which it seems like it is), then you can use it to synthesize a single neuron directly, which is a huge leap over the next-best GPU/TPU architectures built on the instruction fetch-decode-execute model. Based on what I read a few years ago, you can have 20M neurons simulated with memristors in about a cm² die. That is human-level integration density even if you totally ignore the vast difference in switching rate (100 Hz vs 1+ GHz).
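For rough scale (my back-of-envelope arithmetic, not from the parent; the cortical figures are approximate):

    # Compare the claimed die density against human cortex.
    cortical_neurons = 16e9        # human neocortex, approx.
    cortical_sheet_cm2 = 2500      # unfolded cortical sheet, approx.
    cortex_density = cortical_neurons / cortical_sheet_cm2   # ~6.4e6 per cm^2
    die_density = 20e6             # claimed memristor neurons per cm^2

    print(f"cortex: {cortex_density:.1e} neurons/cm^2")  # ~6.4e+06
    print(f"die:    {die_density:.1e} neurons/cm^2")     # 2.0e+07
    # ~20M/cm^2 is at or above cortical areal density, before even
    # counting the switching-rate difference (100 Hz vs 1+ GHz).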


This does suck, but you can actually use From: aliases if you set them up first in the Gmail interface. You also have to enable two-factor auth. Hard to be too mad about spam-fighting measures.


Sounds like I need to update my article. Thanks for this information.

But note that they are still violating the RFCs if they rewrite headers for people who fail to jump through these hoops. They can implement their spam protections (if that's the purpose) while remaining compliant by rejecting the emails in question.


Maybe I've misunderstood something, but if gmail didn't do this, what's to stop me from sending emails that look like they come from your address?


Nothing. The standard allows you to put anything you want in the From: header. Sometimes I send emails from Santa Claus or God. The envelope information is still there.
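For illustration, using Python's standard smtplib (addresses made up; assumes an SMTP server on localhost): the From: header is ordinary message text, set independently of the envelope sender:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "Santa Claus <santa@northpole.example>"  # arbitrary text
    msg["To"] = "you@example.com"
    msg["Subject"] = "Ho ho ho"
    msg.set_content("The From: header above is whatever I typed.")

    with smtplib.SMTP("localhost") as smtp:
        # The envelope sender (MAIL FROM) is passed separately and is
        # what the receiving server actually sees during the transaction.
        smtp.send_message(msg, from_addr="actual-sender@example.com")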


Actually, not "nothing." If the MSA doesn't like this, it is free to reject the email. But not to change its content.


Yes they do. A Faraday cage is equally good at blocking radio waves going out as coming in. They don't protect against conducted emissions though - along a power cable for example - so they're not a complete solution.


Perhaps the radiation wasn't mostly coming from the Antminer box itself - on each tick of the 700 MHz clock the rig draws current, the transformer propagates it back to the wall, and the power lines in the flat become a big antenna.

Powerline Ethernet has this problem, but it uses only a 25 MHz carrier wave, so it only bothers hams.


I think the popular perception of a learning curve is difficulty vs time.


The PWM backlight driver frequency should be much higher. This kind of effect is only perceptible at sub-kilohertz LED switching. The LEDs themselves are happy to be cycled into the megahertz, so the only reason for this on the MacBook is poor electronic design. A low frequency does provide reduced cost, slightly higher efficiency, and reduced auditory/RF noise emissions, but visual quality has to be the top priority.
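To see why only sub-kilohertz switching is perceptible, consider eye movement (my numbers; fast saccades are commonly cited at a few hundred degrees per second):

    # During a saccade, PWM flashes smear into a dashed trail across the
    # retina; dash spacing = eye speed / PWM frequency.
    saccade_deg_per_s = 400.0   # typical fast eye movement, approx.

    for pwm_hz in (200.0, 1_000.0, 20_000.0):
        spacing = saccade_deg_per_s / pwm_hz
        print(f"{pwm_hz:>8.0f} Hz -> {spacing:.3f} deg between flashes")
    # 200 Hz leaves flashes ~2 degrees apart (easily visible);
    # at 20 kHz they are ~0.02 degrees apart and blur together.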


Yep. I wonder if you give up some small amount of power efficiency at higher frequency, but it shouldn’t be much.


The article abstract (https://www.ncbi.nlm.nih.gov/pubmed/29176609) says that the old "Nyquist density" sensor arrays were designed to capture the theoretical maximum amount of data present. But it turns out those models were wrong.
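For context, the spatial Nyquist rule those arrays were built around (textbook figures, not from the abstract): capturing detail up to f cycles per degree requires receptor spacing of at most 1/(2f) degrees.

    # Spatial Nyquist: sample spacing <= 1 / (2 * max spatial frequency).
    peak_acuity_cpd = 60.0      # approx. human foveal limit, cycles/degree
    max_spacing_deg = 1.0 / (2.0 * peak_acuity_cpd)
    print(f"max receptor spacing: {max_spacing_deg * 60:.2f} arcmin")  # 0.50
    # Foveal cone spacing is in fact about 0.5 arcmin (Nyquist-matched);
    # the paper's point is that models built on this assumption were wrong.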

