smitherfield's comments

> because you don't have a reason to reinvent the wheel [in Rust]

This is a somewhat comical statement in a thread about "Rewrite it in Rust."


Nah, the thread is about people redoing the whole car, adding airbags, swapping out the carburetor, installing power steering, and maybe throwing in a new stereo and a paint job.

sed vs. sd, for example https://github.com/chmln/sd The semi-lovable but reliable old jalopy overhauled to match modern practices.


That's if you look at major PUBLICIZED attacks on TLS endpoints. It's quite plausible that the people who've found (i.e. are looking for) attacks based on incorrect crypto aren't publicizing them.


Sure, but there's no evidence of that.


No evidence? We know for a fact that US, Russian, Chinese, British, Israeli etc. intelligence agencies are looking for crypto vulnerabilities, and we know for a fact that they do not publicize the vulnerabilities they find.


Yes, I'm aware of many, many people looking for crypto vulnerabilities. I'm not aware of many exploits in the wild.


Mondale proposed some interesting ideas during his Presidential campaign, in particular a national industrial policy, that I think we would have done well to take heed of.


I think a useful concept with this is the legal doctrine of adverse inference.[1] If one of the parties to a lawsuit conceals or destroys important evidence, it is assumed that that evidence would have been unfavorable to the party which concealed or destroyed it.

So, while we may not be able to know for sure how COVID-19 originated, we can certainly draw an adverse inference from the behavior of the Chinese government.

[1] https://en.wikipedia.org/wiki/Adverse_inference


And arguably the CCP has lost far more face by trying to cover everything up than if they'd admitted there was a leak in the first place (if there was one).

In any case, they should have sent out the alarm, and worked quickly and cooperatively to halt it. For example, they should have cancelled outbound flights from Wuhan while the virus was raging there and they were barricading apartment blocks. It looks, at best, negligent that they did not.


This may or may not be true, but it's the same sort of reasoning that was used to justify the Iraq War, and we all know how that turned out.

Before the war, Saddam was acting unquestionably suspect by not allowing inspectors free access to locations and constantly moving material and people around. In truth, it was a pretense. A chemical weapons program was expensive and hard to keep safely hidden, so Saddam simply pretended that one existed by acting like he had something to hide.

As for China, it could be something as simple as regional managers hiding poor response times and safety violations that would have their heads rolling. It doesn't mean that there was a leak, intentional or unintentional.

Disclaimer: I'm playing devil's advocate here. Having a coronavirus outbreak in the same city as your military installation studying coronaviruses is suspect.


But it wasn't an adverse inference in that case, at least not from Saddam's perspective. He wanted the world to believe he had WMDs, because he believed, not unreasonably (c.f. North Korea), that this would deter military action by the U.S. and his other enemies (Iran, Syria, Israel, Saudi Arabia).


>"...If one of the parties to a lawsuit conceals or destroys important evidence..."

Then how do you treat this Anthrax archive destruction done with the blessing from FBI? https://en.wikipedia.org/wiki/2001_anthrax_attacks


I would (and do) consider it very suspicious.


You may be right about China.

But the author of this article is not an epidemiologist.

He is not an expert on infectious diseases.

He is a former Republican congressman and a retired plastic surgeon.

Since Republican leaders have consistently lied about the pandemic for their own political advantage, and this author has no particular expertise in the matter, I'll make the inference that he has nothing constructive to add to this conversation.

https://en.wikipedia.org/wiki/Greg_Ganske


Show me the incentives and I will show you the outcome.

> we can certainly draw an adverse inference from the behavior of the Chinese government.

> it is assumed that that evidence would have been unfavorable to the party which concealed or destroyed it.

This is a good model only within the US sphere of influence. Outside it, you might as well be throwing darts, since people play by other incentives.


Why would people outside the US sphere of influence conceal evidence that was favorable to them?


I think we can draw parallels from "Everyone wants to do the model work, not the data work"

https://research.google/pubs/pub49953/


"If you were innocent, you would let us search your home."

There is a very fine line between destroying evidence, and refusing an investigation.

...and, don't get me wrong - I believe it's entirely possible that COVID-19 escaped from the Wuhan Lab - but a refusal to be party to a politically charged investigation should not imply guilt.

Your rationale is similar to assuming someone is guilty because they refuse to answer police questions.


This is more like if the police come to search your house, and find that you've burned it down, or you've barricaded yourself inside with guns and hostages.

But, let's assume for the sake of argument that your analogy is the correct one. You know where they assume that if you don't let the police search your house, or if you don't answer police questions, you must be guilty? China.

So, we can just as easily judge the Chinese government by another heuristic: The Golden Rule


You have the golden rule almost 100% backwards. It isn't "do unto others what you think they'd do unto you".


> You know where ...if you don't answer police questions, you must be guilty? China.

Are they our role-model now? Should we destroy our own principles in this case because that's what they would do?


This has been my guiding heuristic - China has acted unmistakably suspect.

They've got a political and ideological character that requires controlling narratives and erasing any trace of incompetence (this threatens the illusion of supreme power).


If their political and ideological character requires controlling narrative, then surely any single instance means nothing.

If I have an espresso every morning, the fact that I have one tomorrow morning means nothing. If I don't have one tomorrow, if I deviate from my usual behaviour, that might mean something significant. But following one's normal routine is just… normal.


The difference is that the ethical implications of you buying your espresso daily don't make you inherently less trustworthy.

The "routine" of burying evidence, silencing dissidents, and constant propaganda does absolutely reduce China's trustworthiness. To the point where we can and probably should always assume the worst of that regime.


You misunderstand.

What you said (and which I agree with) is that they always want to control the narrative. Not when they've done anything wrong, but always. I agree with that part — it's a regime that wants to control the narrative always, just so the populace won't get used to the existence of anything beyond the control of the regime.

However, since the regime always wants to control the narrative, you cannot assume that wanting to control a particular narrative means anything more than "they're acting as usual". You can assume that the regime is untrustworthy in general, but you cannot assume anything particular about this instance.

Your argument is tantamount to "any control freak can and probably should be assumed guilty of everything".


> However, since the regime always wants to control the narrative, you cannot assume that wanting to control a particular narrative means anything more than "they're acting as usual".

When dealing with a compulsive liar, you don't approach any situation with "This time maybe they aren't lying". They make up lies constantly about everything in order to obscure when they are actually lying.

You're right that you cannot make assumptions about what the truth is in any particular instance, but in every instance it is reasonable to assume they are lying and then go about finding the truth.

And frankly I suspect that China lies all the time not just to obscure their real lies, but because they are actually engaged in shitty behavior all the time. It's not even a secret that they are currently involved in genocide. If that's not out of the question for them then I don't think anything is, including engineering a plague.


> I don't think anything is, including engineering a plague.

People seem to forget that China was claiming for a long while that the virus wasn't transmissible (I'm sure people will equivocate that they really only claimed there was no evidence of transmission, but it amounts to the same thing), only shortly after for all the videos to surface with people passing out in the streets and the police welding people in their houses, etc.


How long was that long time?

I read a timeline once, and from what I remember it was a considerable number of days. But not more days than I would expect from any big organisation. It takes time to push data and wording up and down an orgchart until all the necessary people have signed off.

I've waited longer for routine code reviews in supposedly agile teams.


You only control a narrative that you expect to work against you.


"Or have everything you do on the web be phoned to Google, to improve your advertising experience."

Chrome already does this.


Speaking of, why both -ansi and -std=99 [sic][1]?

[1] should be -std=c99


I used these comments[1] as the source for writing the CFLAGS. Did I misunderstand?

[1] https://stackoverflow.com/a/2193647


All the discussion ITT so far has been about the concept or hardware implementation of full-memory encryption. I'm wondering if anyone has thoughts about the proposed API.


Yeah, that's one of my biggest pet peeves when looking at other people's code (along with unnecessary dynamic allocations in general). One of the reasons I perhaps irrationally still prefer C++ to Rust is the pervasive use of dynamic arrays of known static size in the latter's documentation, and how it makes fixed-size arrays much less ergonomic to use than dynamic arrays.


Why wouldn't an implementation along these lines be performant?

  #include <cstddef>
  #include <tuple>
  #include <utility>
  #include <vector>
  using namespace std;

  template<typename... Ts>
  class SoA : public tuple<vector<Ts>...> {
          // ...
          template<size_t... Is>
          tuple<Ts&...> subscript(size_t i, index_sequence<Is...>) {
                  // Pull element i out of each column and bundle the references
                  return {get<Is>(*this)[i]...};
          }
  public:
          // ...
          auto operator[](size_t i) {
                  return subscript(i, index_sequence_for<Ts...>{});
          }
  };


Prepending would be O(1) because it's creating a new array instead of prepending in place. Still bugs me (seems inelegant, even if not necessarily inefficient) so I wrote my own version: https://news.ycombinator.com/item?id=18988075#18998572


> Prepending would be O(1) because it's creating a new array instead of prepending in place.

No, you still need to copy the old array to the new array.

FWIW, Ruby may already be allocating some space before and after the array to accommodate a few pre-/appendages.


>No, you still need to copy the old array to the new array.

That's just a lock (nontrivial but O(1)) and a memcpy (technically O(n) but trivial, and O(1) for the common case if it's implemented with vector instructions), plus in any event the sums-of-neighbors method has to be at least O(n) on an idealized Von Neumann machine because it must read every element of the source array and also write every element of the destination.


In other words, O(n), not O(1).

> technically O(n) but trivial

"Technically" O(n) is the only O(n). There isn't some widespread colloquial use of Big O notation where O(n) means something else. Whether it's trivial is beside the point, but for a large n, O(n) in both time and space can be prohibitive, and it may become important that I don't use such an algorithm. For example, if I have 8 GB of data and 12 GB of working memory I can't satisfy those space requirements.

> and O(1) for the common case if it's implemented with vector instructions)

What is the common case in your view? memcpy in the general case is O(n). That you can perform multiple copies in parallel might affect the real time, but it doesn't affect the time complexity because O(kn) = O(n) for a constant k even if that k = 1/16 or however many copies you can perform at once.

> plus in any event the sums-of-neighbors method has to be at least O(n) on an idealized Von Neumann machine because it must read every element of the source array and also write every element of the destination.

O(3n) = O(2n) = O(n)


> "Technically" O(n) is the only O(n).

In idealized algorithmic analysis, but not necessarily real life. "Amortized O(1)," which I assume you concede is a commonly-used, meaningful and legitimate term, means "technically" an idealized O(>1) but O(1) in practice.

Calling memcpy inside a Ruby method call is amortized O(1) because for any "n" that fits within available memory, it will always be much faster than the other things in a Ruby method call, which involve dozens of locks, hash table lookups with string keys, dynamic type checks, additional Ruby method calls and so forth.

Likewise, computational complexity on an idealized Von Neumann machine isn't always the same on a real computer, in both directions. Dynamic allocations are theoretically O(n) but may be O(1) if the program never exceeds the preallocated space. Or suppose there were a loop over an array of pointers which dereferenced each pointer; the dereferences are theoretically O(1) but may be O(n) if they evict the parent array from the cache.

> What is the common case in your view?

Such as an array small enough that it can be copied with 10 or fewer vector load/stores.

> O(3n) = O(2n) = O(n)

Yes, that's my point. It's impossible to implement the example in less than idealized O(n) time, so O(n) and O(1) operations are equivalent complexity-wise WRT the entire method.


> In idealized algorithmic analysis, but not necessarily real life.

Big O notation is used for idealized algorithmic analysis. If you want to talk about real life, you can count cycles, seconds, watts etc.

> "Amortized O(1)," which I assume you concede is a commonly-used, meaningful and legitimate term, means "technically" an idealized O(>1) but O(1) in practice.

Yes, but I wouldn't take O(1) on its own to imply amortized complexity. Not that pretending that an array copy is O(1) in practice is particularly useful here since if you measure a copy operation in practice, you'll find that the time it takes scales roughly linearly with the size of the array. Not to mention that the space complexity is O(n) no matter how you put it.

> Such as an array small enough that it can be copied with 10 or fewer vector load/stores.

Are other cases conversely "uncommon"? My point here is that this is entirely your opinion and doesn't pertain to whether an array copy is O(1) or O(n) complex.

> Yes, that's my point. It's impossible to implement the example in less than idealized O(n) time, so O(n) and O(1) operations are equivalent complexity-wise WRT the entire method.

Not in terms of space.


Array copies are O(n), not O(1).



The one where you yourself even say it's O(n)? Big-O is concerned with asymptotic complexity. It's O(n) until you show us it's not (e.g. implementation code, not musings.)


Here's my second reply to him, where I myself point out that idealized Von Neumann machines don't exist in real life, and certain idealized O(n) operations (such as memcpy) may in real life for any possible "n" be cheaper than some baseline constant C (such as the cost of a Ruby method call): https://news.ycombinator.com/item?id=18988075#19001585

