gammarator's comments | Hacker News

Most colleges don’t have billion dollar endowments.

Serious question: is there anything other than their own scruples keeping these guys from siphoning off a few billion dollars for themselves?

Criminal prosecution?

Why would they be prosecuted? The FBI has been gutted, and they'd just be pardoned anyway.

Seems like a good wake-up call that all of these responsibilities shouldn't be the unilateral purview of the executive branch then.

Wake-up call? It’s a little late to close the barn doors.

Who else would they be?

The government has to appoint somebody to actually carry out law. There must always be an executive branch to execute the law.

The people running these agencies are all confirmed by Congress. If Congress didn't want DOGE to have access to these systems, then it wouldn't have confirmed the appointment of people who would give them access. Or conversely, it would impeach the appointees if it didn't like it.

This is the strength and weakness of a single-party system (granted, the US has multiple parties, but one party is currently in control). The party does what the party wants, and if that's not what you want, it's tough.


The US Constitution at its heart is based on a system of checks and balances, both between branches of government and the Federal government and States:

"Checks and Balances in the Constitution"

<https://www.usconstitution.net/checks-and-balances-in-the-co...>

That balance has been largely eviscerated presently.


> The US Constitution at its heart is based on a system of checks and balances, both between branches of government and the Federal government and States:

First, this is all a non-sequitur to my argument. When the three branches are all in agreement on something, there is no reason for any of them to attempt to stop another branch. This is the case when one party has control over all three branches. It's not like China, North Korea, or Russia don't have legislatures and judges; it's just that they're in agreement with their president.

But to your point, the Constitution is not a document of checks and balances. It's just the agreed-upon manner in which the government will operate, and Congress really has no checks on its power. Congress can impeach and remove the president and judges; it's supposed to be the supreme branch.

Control-f check [1] => 0 hits

Control-f balance [1] => 0 hits

Some of these things that people call "checks and balances" are just straight up not in the constitution. "Judicial review" is not in the constitution.

[1]: https://www.archives.gov/founding-docs/constitution-transcri...


I believe most people will understand that there is an extraordinarily long and vast tradition and literature on US Constitutional checks and balances, as my earlier link should have amply demonstrated. Google Scholar presently turns up another 359,000 results should that not have proven sufficiently persuasive:

<https://scholar.google.com/scholar?q=constitution%20checks%2...>

At the time the US Constitution was written, political parties did not exist, nor were they anticipated, though they did in fact develop rather quickly as the US political system evolved. As such, the idea that a party might control one or more branches of government was never planned for, and might be considered variously (a) a misfeature of politics-as-instituted, or (b) a grievous oversight of the framers. Probably some of A, some of B.

Rather, states and branches were anticipated to have their own interests and to act on them. To some extent that's emerged, but the overwhelming power has resided with parties since the late 18th century.

As with other doctrines (e.g., judicial review, a concept fiercely wielded by so-called "originalists"), much if not most US legal and common law theory has evolved over time, occasionally through amendments but far more often through case law and simple convention.


Further: "checks and balances" is a concept originating with Montesquieu in his book The Spirit of the Laws (1748), itself building on John Locke's Second Treatise of Government, and it is explicitly referenced in the Federalist Papers, written by several of the framers of the US Constitution. Most notable is Federalist 51 (by Alexander Hamilton or James Madison), February 8, 1788, titled "The Structure of the Government Must Furnish the Proper Checks and Balances Between the Different Departments", which begins:

TO WHAT expedient, then, shall we finally resort, for maintaining in practice the necessary partition of power among the several departments, as laid down in the Constitution? The only answer that can be given is, that as all these exterior provisions are found to be inadequate, the defect must be supplied, by so contriving the interior structure of the government as that its several constituent parts may, by their mutual relations, be the means of keeping each other in their proper places. Without presuming to undertake a full development of this important idea, I will hazard a few general observations, which may perhaps place it in a clearer light, and enable us to form a more correct judgment of the principles and structure of the government planned by the convention...

The Spirit of the Laws: <https://en.wikipedia.org/wiki/The_Spirit_of_Law>

Federalist 51: <https://guides.loc.gov/federalist-papers/text-51-60>

Again, the concept is foundational and was well-established in the context in which the Constitution was drafted.


We do require a different constitutional order, but I wouldn't hold my breath on it being established any time soon.

They aren't unilateral. Trump is violating the law.

The problem is that the law doesn't spring off the page to enforce itself. Prior presidents haven't chosen to just ignore it to this degree.


Yes, they're too busy siphoning off a few trillion dollars.

Unless the systems are so fragile that they can remove all traces of it (and I want to believe the systems are so complex and redundant that no infiltrator like these people can see the whole picture), they would / should face severe consequences for defrauding the state. They are not above the law, and if Trump pardons them for it (assuming he's still in office by then), the pardon should not apply because he'd be in on it. I don't know what checks and balances are available for that case though.

> they would / should face severe consequences for defrauding the state. They are not above the law

I don't know how to reply to a statement this naive. What about the past 8 years makes you think these people are not above the law?

> I don't know what checks and balances are available for that case though.

SCOTUS declared the president immune to prosecution. The only check on a rogue president is a 60+ seat majority of the opposite party in the Senate, which hasn't happened since the 1980s.


There are other, extralegal, checks on a rogue president. I do not advocate them. But they do exist.



FYI: Cloudflare failure page.

"should not apply" - ha. Did you not just see the January 6th pardons? This is the new coup.

There are no remaining checks and balances; the Republicans control all three branches of government and will pardon everyone involved.

Imagine adding intentional rounding error for a system that handles trillions. How many transactions is that? Take 2 cents from every transaction?

I, too, saw Superman 3!

Or something like when the subroutine compounds the interest and uses all these extra decimal places that just get rounded off, so they round them all down and drop the remainder into an account they opened.
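That scheme (the Superman III / Office Space plot) is easy to sketch. Here's a toy version, with made-up balances and rate, using Python's `decimal` module so the sub-cent remainders come out exact:

```python
from decimal import Decimal, ROUND_DOWN

def accrue_interest(balance, rate):
    """Compute interest to full precision, credit only whole cents,
    and return the dropped sub-cent remainder separately."""
    exact = balance * rate
    credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    skimmed = exact - credited  # always under a cent per transaction
    return credited, skimmed

# Hypothetical balances; over millions of accounts the fractions add up.
total_skim = Decimal("0")
for balance in ["1034.57", "250.10", "99999.99"]:
    credited, skim = accrue_interest(Decimal(balance), Decimal("0.0425"))
    total_skim += skim
```

Each individual remainder is below a cent, which is exactly why nobody notices until someone audits the side account.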

You know the answer.

Hmm. If it's possible now, I wonder if it was happening before?

Not an expert myself, but I don’t believe $200k/year counts as a “FAANG salary” for anyone with founder-level responsibility.


I'd argue that there aren't any single positions at large tech companies with founder-level responsibility, by design.

It's pretty decent compensation, though: roughly the base salary of a mid-level position at Amazon, with substantially better equity terms (no cliff, no one deciding your bonus will be cut without your input, etc.). It's a far cry from the PG ideal of salaryless founders surviving on instant ramen in their garage.


I think the parent's point is that $200K would be just the base at FAANG, but as a startup founder you're missing out on the massive RSUs.


Yes, I got that. What I'm pointing out is that your upside isn't capped, unlike at a place like Amazon, where your bonus will get cut if the RSUs do too well, and that you have some degree of control over any compensation shenanigans and layoffs.

It's not unequivocally better, but it's certainly a trade-off that some people would be willing to make.


I had no idea this could happen. How is the bonus capped if RSUs do too well?


Do you, Patrick, believe that “Chicago’s African American community is impoverished in part because of extractive practices of vice entrepreneurs?” Phrasing it this way winks that you don’t think so, but perhaps I am misreading you.

However, the conclusion of your article seems to be that Bally’s (a vice entrepreneur) is about to further impoverish them, this time under the guise of ownership but as usual with the support of the local political elites. So for consistency I think your answer should be “yes.”

You’d make a bigger difference in your hometown by conveying this message directly to the folks being targeted, rather than the HN crowd.


Patrick has mentioned several times lately that he writes for a broad audience, in a way that’s easy for people with influence to share. He didn’t write this essay for HN; it’s incidental that this article is posted here.


There's an amusing symmetry between the lines he quotes, which carefully suggest without actually claiming that e.g. the operating company will pay out all its profits in distributions, and the way everything he himself writes likewise suggests without actually claiming.


Patrick used to be better at obfuscating his true views with absurd language but I’ve noticed him going full mask off recently.


… Bluesky offers a clean, strictly chronological following feed as well as your choice of algorithmic feeds.


These aren’t “forests” like in other parts of the West so much as cliffs covered in dry, scrubby brush. I’m pessimistic that they could be systematically cleared or burned in a controlled way.


Burning in a dense residential area…no. Draconian clearing of all trees and brush except for selective fire-aware landscaping…yes, but you are paying significant money to make the residential area look uglier (in some eyes; for others it's just a High Desert aesthetic), which is a hard sell.


The residential area being on fire looks significantly uglier to me.


Other places with similar climates and geographies either have the same issue or manage it better.

Greece is a good example of also not managing this properly, with its own regular massive fires, while national parks around Cape Town and other parts of South Africa do regular controlled burns in very rocky, hilly terrain.


Here’s an extended comment by another astrophysicist: https://telescoper.blog/2025/01/02/timescape-versus-dark-ene...

The most important bit:

> The new papers under discussion focus entirely on supernovae measurements. It must be recognized that these provide just one of the pillars supporting the standard cosmology. Over the years, many alternative models have been suggested that claim to “fix” some alleged problem with cosmology only to find that it makes other issues worse. That’s not a reason to ignore departures from the standard framework, but it is an indication that we have a huge amount of data and we’re not allowed to cherry-pick what we want.


The thing is, this is not really an alternative model. It's actually bothering to do the hard math based on existing principles (GR) and existing observations, dropping the fairly convincingly invalidated assumption of large-scale uniformity in the mass distribution of the universe.

If anything, the standard model of cosmology should at this point be considered the alternative, as it introduces extra parameters that might be unnecessary.

So yes, it's one calculation. But give it time. The math is harder.


This has the same number of free parameters as LambdaCDM. Also, this result only looks at supernovae, i.e. low-redshift sources. LambdaCDM is tested on cosmological scales.

Very interesting, but “more work is needed”.


That's not the case if, as is increasingly speculated, lambda is not constant over time: figure two parameters for a linear evolution and three for a quadratic one.
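For context, a standard way to write this (a common parameterization in the literature, not something taken from the timescape papers) is the CPL equation of state, where a linear evolution in the scale factor already costs two free parameters:

```latex
% CPL parameterization of an evolving dark-energy equation of state
% w_0 and w_a are free parameters; a is the scale factor (a = 1 today)
w(a) = w_0 + w_a \,(1 - a)
% w_0 = -1, w_a = 0 recovers a cosmological constant (plain LambdaCDM)
```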


> dropping the fairly convincingly invalidated assumption of large scale uniformity in the mass distribution of the universe.

The problem with that is then you need a mechanism that creates non-uniformly distributed mass.

Otherwise, you are simply invoking the anthropic principle: "The universe is the way it is because we are here."


You don’t need a mechanism to point out a fact contradicts an assumption, eg, our measurements show non-uniform mass at virtually all scales (including billions of light years). There simply is no observable scale with uniform mass.

Obviously there’s some mechanism which causes that, but the mere existence of multi-billion light year structures invalidates the modeling assumption — that assumption doesn’t correspond to reality.


Yeah, the ~1B-ly non-uniformity is pretty much there. The ~10B-ly uniformity is still early days, but it's looking more and more likely as more data roll in (unless there is a systematic problem).


I think that can be mitigated in three ways: our understanding of inflation is flawed; there were more "nucleation" sites where our universe came to be; or the already-theorized baryon acoustic oscillations could introduce heterogeneity in the universe.

Maybe it's a combination of these, maybe something else. If nothing else, perfect uniformity is less probable than a mass distribution with variance (unless there is a phenomenon like inflation that smooths things out, but inflation too was introduced to explain the assumption of a homogeneous universe). I concede, however, that explaining the small variance in the CMB with our current understanding is hard once you drop the homogeneity assumption.


> The problem with that is then you need a mechanism that creates non-uniformly distributed mass.

You need no such thing. That's like saying "I refuse to acknowledge the Pacific Ocean being so damn large without a mechanism." You don't need that; it just is. This doesn't preclude the existence of such a mechanism, but for any (legit) science, mechanistic consideration should be strictly downstream of observation.


> The problem with that is then you need a mechanism that creates non-uniformly distributed mass.

The mechanism is gravity; and we have good observational evidence that the mass distribution of the universe is not uniform, at least at the scales we can observe (we can see galaxy clusters and voids).


That the calculation is harder, in a world of functionally limitless compute, is sort of interesting. Where do we go from here?


That sounds like regression.

If this problem of regression occurs as regularly as your quote implies, then the fault is not in these proposed alternatives, or even in the likely-faulty existing model, but in the gaping holes in our ability to test these things quickly and objectively. That is why us dumb software guys have test automation.


You are oversimplifying science, especially theoretical physics. At the point where we are, there are no quick or cheap tests, and there is no objectivity. The space of possibly correct theories is infinite, and humans are simply not smart enough to come up with frameworks to objectively truncate it. If we were, we would have made progress already.

There is a lot of subjectivity and art to designing good experiments, not to mention a lot of philosophical insight. I know a lot of scientists deny the role of philosophy in science, but I see all the top physicists in my field liberally use philosophy (not philosopher-type philosophy but physicist-type philosophy) to guide their scientific exploration.


> and humans are simply not smart enough to come up frameworks to objectively truncate the space.

We are, but some people stubbornly resist such things. For instance, MOND reproducing the Tully-Fisher relation and being unexpectedly successful at making many other predictions suggests that any theories purporting to explain dark matter/gravitational anomalies should probably have MOND-like qualities in some limit. That would effectively prune the space of possible theories.

Instead, they've gone in the complete opposite direction, basically ignoring MOND and positing different matter distributions just to fit observations, while MOND, against all odds since it's not ultimately correct, continues to make successful predictions we're now seeing in JWST data.
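The Tully-Fisher point above comes from the deep-MOND limit, where the rotation curve flattens at v^4 = G·M·a0. A rough sketch of that prediction (the galaxy's baryonic mass here is an illustrative assumption, not a fitted value):

```python
# Deep-MOND limit: for accelerations well below Milgrom's scale a0,
# the circular speed flattens at v^4 = G * M_baryonic * a0,
# i.e. the baryonic Tully-Fisher relation.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # Milgrom's acceleration scale, m s^-2 (empirical fit)
M_SUN = 1.989e30   # solar mass, kg

def flat_rotation_speed(baryonic_mass_kg):
    """Asymptotic flat rotation speed (m/s) predicted by deep MOND."""
    return (G * baryonic_mass_kg * A0) ** 0.25

# Illustrative: a galaxy with ~6e10 solar masses of baryons
v_kms = flat_rotation_speed(6e10 * M_SUN) / 1000.0
```

For that assumed mass this lands in the ~170-180 km/s range, the right ballpark for a large spiral's flat rotation curve.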


Note that when I say "humans," I mean the whole worldwide social institution/network of scientists, with all its inherent hierarchies and politics, because that is the social network that ultimately comes to some consensus on what the best theories at any given moment are.

It's indeed possible, in fact "necessary," that some individual scientists within this network, by luck or brains, come up with much better theories; sometimes those theories are accepted by others and sometimes they are not. But ultimately all that matters is the consensus.


Holy fuck, guy. Take a step back and do some self-reflection. Any time people post about physics on here, it's all emotions about how hard life is. With so much longing for sympathy, it's amazing anything in the field ever gets published.

Unless you are looking for research grants, stop crying about consensus and instead return to evidence and proofs. There will always be a million sad tears in your big sad community. If that is your greatest concern, it's going to take you a million years to prove what you already know, because all the sad people you are showing it to are just as sad and self-loathing about social concerns as you are.


I am not. You are using bias as an excuse for poor objectivity. I am fully aware that astrophysics involves a scale and diversity of data beyond my imagination, but that volume of data does not excuse an absence of common business practices.

> The space of possible correct theories is infinite

That is not unique to any form of science, engineering, or even software products.

> and humans are simply not smart enough to...

That is why test automation is a thing.


I think automated hypothesis testing against new data in science is itself an incredibly difficult problem. Every experiment has its own methodology and particular interpretation, often you need to custom build models for your experimental setup to test a given hypothesis, there are lots of data cleanup and aggregation steps that don't generalise, etc. My partner is in neuroscience, for instance, and merging another lab's data into their own workflows is a whole project unto itself.

Test automation in the software context is comparatively trivial. Formal systems make much better guarantees than the universe.

(not to say I think it's a bad idea - it would be incredible! - but perhaps the juice isn't worth the squeeze?)


> Every experiment has its own methodology

That is bias. Bias is always an implicit default in any initiative and requires a deliberate, concerted effort to identify.

None of what you said is unique to any form of science or engineering. Perhaps the only thing unique to this field of science, as well as to microbiology, is the sheer size and diversity of the data.

From an objective perspective, test automation is not more or less trivial for any given subject. The triviality of testing is directly determined by the tests written and their quality (speed and reproducibility).

The juice is always worth the squeeze. It's a business problem that can be answered with math in consideration of risk, velocity, and confidence.


Respectfully I disagree - the situation is far more complex in science than software engineering disciplines.

I agree that different tests require different amounts of effort (obviously), but even the simplest "unit tests" you could conceive of for scientific domains are very complex, as there's no standard (or even unique) way to translate a scientific problem into a formally checkable system. Theories are frameworks within which experiments can be judged, but this is rarely unambiguous, and often requires a great deal of domain-specific knowledge - in analogy to programming it would be like the semantics of your language changing with every program you write. On the other hand, any programmer in a modern language can add useful tests to a codebase with (relatively) little effort.

We are talking hours versus months or even years here!

The experiment informs the ontology which informs the experiment. I don't think this is reducible to bias, although that certainly exists. Rather to me it's inherent uncertainty in the domain that experiments seek to address.

Business practice, as you use the term, evolved to serve very different needs. Automated testing is useful for building software, but that effort may be better spent in science developing new experiments and hypotheses. It's very much an open problem whether the juice is worth the squeeze - in fact the lack of such efforts is (weak) evidence that it might not be. Scientists are not stupid.


> We are talking hours versus months or even years here!

That is why there are engineers who specialize in how to perform testing so that it doesn't take so long. For example, long tests don't need to run at all if more critical short tests fail. The problems you describe for astrophysics are not unique to astrophysics, even if the scale, size, and diversity of the data are. Likewise, all the excuses I hear to avoid testing are the very same excuses software developers make.

The reality is that these are only 25% valid. On their face these excuses are complete garbage, but a validation cannot occur faster than the underlying system allows. If the system being tested is remarkably slow then any testing upon it will be, at best, just as remarkably slow. That is not a fault of the test or the testing, but is entirely the fault of that underlying system.


Uniqueness is not a requirement =) I still think you are over-generalising from experience in the software world. Science is about sense-making, finding parsimonious ontologies to describe the world. Software is about building reliable automation for various purposes.

They have orthogonal goals; why would you believe that automated testing would work the same way in both domains? I just don't see it.

Maybe you can elaborate on what you mean by automated testing of scientific hypotheses? I get the feeling we are talking past each other because we're both repeating the same points. Maybe we should focus on the 25% of excuses you've agreed are valid!


> I just don't see it.

That’s because you haven’t tried. You only test what you know, what’s provable. The goal isn’t 100% validation of everything. The only goal is error identification. Surely you know something in science, like the distance to Proxima Centauri. Start with what you know.

Then when something new comes along the only goal is to see which of those known things are challenged.

Testing doesn’t buy certainty. It’s more like insurance, because it successfully lowers risks in much shorter time. Like with insurance there aren’t wild expectations it’s going to prevent a house fire, but it will prevent unexpected homelessness.
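That "start with what you know" idea can be made concrete as a regression suite of pinned measurements: a model or pipeline runs against them, and anything that contradicts an established value fails loudly. The values and tolerances below are illustrative placeholders, not authoritative figures:

```python
# Pinned "known" quantities: (accepted value, allowed tolerance).
# Both numbers are illustrative; a real suite would source them carefully.
KNOWN = {
    "proxima_centauri_distance_ly": (4.2465, 0.01),
    "hubble_constant_km_s_Mpc": (70.0, 5.0),  # wide tolerance: H0 tension
}

def check_model(predictions):
    """Return the keys where a model's predictions fall outside tolerance.

    `predictions` maps quantity names to the model's predicted values;
    quantities the model doesn't predict are simply skipped.
    """
    failures = []
    for key, (value, tol) in KNOWN.items():
        if key in predictions and abs(predictions[key] - value) > tol:
            failures.append(key)
    return failures
```

A model that nails Proxima's distance passes that check even while a wildly wrong Hubble constant gets flagged, which is the point: error identification, not total validation.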


The wide range of preferences expressed here makes me think that the best apple is one grown near you—and different varieties are better suited for different climate regions.


Keep a look out for https://www.collinsfamilyorchards.com/ at your market, or try their CSA!


They also sell apples on their website for local delivery in the Seattle area (in semi-large quantities only). Their Honeycrisps are very good in my opinion; I usually buy 40 pounds a couple of times a year. They also have other good fruit in season (e.g. peaches and nectarines; the cherries are okay).


The SweeTango I had was cloyingly, almost artificially sweet. Can’t tell if it was an unlucky pick or I just have different preferences—I like an Ambrosia.

