annilt's comments | Hacker News

Is there a way to reliably detect AI-generated content? It feels like at some point most people will use AI to generate content and then alter it as they wish. (This is what I do when I'm writing code.) Maybe this is already common practice among writers.


I think if there is no good solution, doing nothing might be the best option. We generally forget that this is an option; in many cases there are ways to sidestep and avoid the problem entirely.


I'm not sure they are aware of all the variables. A 20% reduction in only one gender doesn't make much sense to me.


That was strange to me as well. More women than men get shingles, but I'm not sure whether that gap is large enough to explain the difference seen in the study.


So after witnessing all these problems, more projects will start out closed source or with restrictive licenses (GPLs) instead of OSS or more permissive licenses. This is how the OSS community is being degraded. A few tech giants are disrupting the entire OSS community. It may be legally/morally OK for AWS to do this (or not, I don't care), but we will certainly end up with less OSS because of it.


Closed source software will not attract talented developers unless a lot of money is poured into it. It is okay to have closed source, but it's not okay to take the contributions of the open-source community and then later make the project closed source.


The GPL is an open-source license.


Just want to add one thing: x86 has stronger memory-ordering semantics. It doesn't have to work that way behind the scenes; it only has to appear, by the end of the block, to have worked that way. So x86 does a lot of reordering, store combining, etc. IMHO the performance difference between ARM and x86 is barely related to the ISA, and in the M1's case it's definitely not; there is a lot more going on than just taking advantage of the weaker memory model.
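To make "stronger semantics" a bit more concrete, here is a minimal C++ sketch of the classic message-passing pattern (my own illustration, not something from the article):

    // Thread 1 publishes data, thread 2 waits for the flag and then reads it.
    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<int>  data{0};
    std::atomic<bool> ready{false};

    void writer() {
        data.store(42, std::memory_order_relaxed);
        // Release: everything written before this store must be visible
        // to any thread that later observes ready == true.
        ready.store(true, std::memory_order_release);
    }

    void reader() {
        // Acquire: once the flag is observed, the data store is visible too.
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        assert(data.load(std::memory_order_relaxed) == 42);
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
    }

On x86-64 the release store and the acquire load compile to plain movs, because TSO already guarantees this ordering; on AArch64 they become stlr/ldar (or explicit barriers). That's the ISA-visible part of the difference; how aggressively the core reorders behind the scenes is a separate microarchitecture question.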


Having to appear to have worked that way does impose restrictions in the multiprocessor case. ARM chips naturally do all of that too; the memory model simply gives them far more freedom to reorder things.

One couldn't build an x86 version of the M1, mostly because there is no way to make an instruction decoder that wide for x86.

And the performance penalty the M1 pays when running in TSO mode strongly implies that yes, the weaker memory model does play a major role. Not the biggest, but definitely not insignificant. Tens of percent here and tens of percent there combine into a ridiculous perf boost.


I'm not a web developer, but I feel like text-only/small websites should be promoted. Unnecessary images on this page, weird backlink stuff between the comments... and this is a website about web development. Not looking good to me (I'm a dumb user), yet still better than many others.


The article is about that. Bell inequalities proved there is no 'predestined' effect (if you mean 'hidden variables' by that).
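For reference, the relevant bound is the CHSH form of Bell's inequality, with two measurement settings per side:

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    |S| <= 2            for any local hidden-variable model
    |S| <= 2*sqrt(2)    for quantum mechanics (Tsirelson's bound)

Experiments consistently measure S above 2, which is what rules the local hidden-variable picture out.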


No local hidden variables. Unless there's been a new development?


Yes, you’re right. I meant local hidden variables.


Aww. I was hoping for a new development haha.


But isn't superdeterminism still possible? I thought that was implied by the article's first assumption, the one physics is loath to abandon: that the experimenters have free will, so to speak.


Is superdeterminism even falsifiable?


None of the interpretations of quantum mechanics are. By design, they all predict exactly the same outcome for any conceivable experiment. The subject is entirely philosophical, not scientific.


They make predictions, which can be falsified. They even make different predictions: Scott Aaronson agrees that WF poses problems for the standard Copenhagen interpretation. Sean Carroll is on record somewhere saying that, e.g., objective-collapse models predict an in-principle measurably different evolution of a system's entropy than many-worlds does.

I suppose I should have asked what predictions SD actually makes.


The Copenhagen interpretation and MWI don't make different predictions from each other, but there are other theories that do. For example, pilot-wave theory makes different predictions. There are also some superdeterministic theories that make new predictions (Gerard 't Hooft is one advocate of such theories, I believe). Unfortunately, I don't know of any concrete examples. Here is some more information on the subject from Sabine Hossenfelder:

http://backreaction.blogspot.com/2019/07/the-forgotten-solut...


Objective-collapse theory is a modification of QM, not merely an interpretation.

Superdeterminism on the other hand is not even a theory. It cannot be falsified by design.


Global hidden variables are the predestined effect. The argument against global hidden variables is that they require godlike meddling across the whole universe to populate an infinite number of arbitrary, asymmetric details.


Accepted as instantaneous.


That's just false.


Then explain it / give us a link please.


Why do I need to do that to combat your unsupported assertion? Why don't you show some evidence instead that faster-than-light entanglement has been "accepted"?

The fact is, there are theories of QM that do not assume that entanglement happens faster than light. The Many-Worlds theory is one that has no need for such a hypothesis. And more generally, since you need to send the results via a classical (no faster than light) communication channel, there's no way to be sure that the entanglement happened faster than light.



The problem is that this software could be made as fast as it used to be, but instead of investing time in performance and resource usage, developers see more value in extra features. So you end up with fully featured but slower-than-ideal software. Even though it is slower, it's still within acceptable limits for the majority of users.


> developers see more value in extra features.

Yet another post blaming developers when it's the product managers and business development people that decide the direction of the product.


I'm not blaming developers. Indeed, extra features are more valuable (for users, marketing, etc.) than performance most of the time. So it's completely normal. More hardware will lead to more features and more bloat. There is no one to blame for that.


> Yet another post blaming developers when it's the product managers and business development people that decide the direction of the product.

Developers on this very site will happily deliver a slow-running JS Electron turd if that means they don't have to learn a new tool or use a process that isn't "fun" to them. Blaming just the business owners when the developers themselves refuse to do their job well is a bit strange.

Kinda reminds me of life in my socialist state - everyone did their minimum to tick the box they had to do and then just slacked off the rest of the time. Everything was kinda shit but it's not like anyone got paid more to do a job well, amirite?


Oh? Then can you explain why the same phenomenon happens in FOSS software?


He uses Chrome as an example. You can think of a website instead: its 'start-up' time always matters, and almost all websites are slower than they were 10 years ago.

