Re: the grid connection backlog - much of the challenge of turning on new generation is simulating the increasingly complex ways the grid can fail due to all the interconnections. It’s a huge computational challenge, and there’s really not much incentive to speed it up.


API tool to automate all the stuff Postman makes painful: https://callosum.dev

Spec generation from request logs, automatic schema generation and validation, test generation (eventually), totally offline, no accounts or cloud sync necessary!

Been taking longer than I hoped but should be released soon (next week or two)


Sounds excellent, joined the wait list.


How granular are the durations? If they're days, for example, you could bucket them into N-week intervals (e.g. intervals with 7N consecutive days) to reduce the search scope. Then you only need to store start times, and can join or use a set to combine other ranges with the same availability (rough sketch below).
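A minimal sketch of that bucketing idea, assuming per-day boolean availability and N = 2 (14-day buckets); the names, bucket size, and toy data are all made up for illustration:

    # Hypothetical sketch: bucket per-day availability into fixed 7N-day windows keyed by
    # (start day, bitmask of free days), so ranges with identical availability can be pooled.
    from collections import defaultdict

    BUCKET_DAYS = 14  # 7 * N with N = 2; an assumed choice, tune to the data

    def bucket_availability(days_available):
        """Map each bucket's start day to a bitmask of the free days inside it."""
        buckets = {}
        for start in range(0, len(days_available), BUCKET_DAYS):
            mask = 0
            for i, free in enumerate(days_available[start:start + BUCKET_DAYS]):
                if free:
                    mask |= 1 << i
            buckets[start] = mask
        return buckets

    def group_by_pattern(resources):
        """Pool resources whose buckets share the same (start, availability-pattern) key."""
        groups = defaultdict(set)
        for name, days in resources.items():
            for start, mask in bucket_availability(days).items():
                groups[(start, mask)].add(name)
        return groups

    # Toy usage: two resources with identical availability collapse into one group per bucket.
    resources = {
        "room_a": [True] * 10 + [False] * 4 + [True] * 14,
        "room_b": [True] * 10 + [False] * 4 + [True] * 14,
    }
    print(group_by_pattern(resources))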


The duration granularity is days right now, but the total duration of the map might extend out for tens of years or so. I did try bucketing them into intervals hundreds of days long, but it didn't seem to help very much.


Yeah... they are using a single-core 13W measurement to project out to a 64x parallelization, with no mention of any overhead due to parallelization or the power needs of the supporting hardware. This is a key quote for me (page 12 of the PDF):

> The 1.3B parameter model, where L = 24 and d = 2048, has a projected runtime of 42ms, and a throughput of 23.8 tokens per second.

e.g. 64 x 13.67W = 874 Watts to run a 1.3B model at 23.8 t/s... I'm pretty sure my phone can do way better than that! Even at half that power, the figures they assert in the table are still overpowered for such a small model.
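Spelling that arithmetic out (the per-core wattage and token rate are the paper's projected numbers quoted above; the joules-per-token comparison is just my back-of-the-envelope framing):

    per_core_watts = 13.67       # paper's projected single-core power draw
    cores = 64                   # the 64x parallelization they assume
    tokens_per_second = 23.8     # projected throughput for the 1.3B model

    total_watts = per_core_watts * cores                  # ~875 W for the whole array
    joules_per_token = total_watts / tokens_per_second    # ~36.8 J per generated token

    print(f"{total_watts:.0f} W total, {joules_per_token:.1f} J/token")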


When you multiply by 64 you also get 64 times more tokens per second!! Your math is wrong.


That's their math: the 23.8 t/s is already the 64x figure, but they didn't 64x the other stats.


Published in 2010? Curious how much of it has survived since then?

I like “Design It” because of some of the workshop activities that are nice for technical folks who need to interact with stakeholders/clients (I’m in a consulting role so this is more relevant). Also it doesn’t lean hard on specific technical architectural styles, which change so frequently…


I can't think of many things that have changed in architecture since 2010. I'm not talking about fads but about actual principles.


Since 1970, to be fair... The people at the NATO Software Engineering conferences of '68 and '69 knew quite a bit about architecture. Parnas, my house-god in the area, published his best stuff in the 1970s.


probably containerisation is a big one, and also serverless computing

they aren't principles as such, but they certainly play into what is important and how you apply them


I mean shared hosting certainly existed but "the cloud" as we think of it today was much simpler and not nearly as ubiquitous. It doesn't really change the principles themselves but it certainly affects aspects of the risk calculus that dominates the table of contents.


Things are changing now, pretty fast. The architecture that is optimal for humans is not the same architecture that is optimal for AI. AI wants shallow monoliths built around function composition using a library of helper modules and simple services with dependency injection.


> I'm not talking about fads but about actual principles.

Most problems are not well-addressed by shallow monoliths made of glue code. It's irrelevant what "AI wants", just as it's irrelevant what blockchain "wants".


This response is entirely tribalist and ignores the differences between LLMs and ‘blockchain’ as actual technologies. To be blunt, I find it hard to professionally respect anyone that buys into these culture wars to the point where it completely overtakes their ability to objectively evaluate technologies. This isn’t me saying that anyone that has written off LLMs is an idiot. But to equate these two technologies in this context makes absolutely no sense to me just from a logical perspective. I.e. not involving a value judgment toward either blockchain or LLMs.

The only reason you’re invoking blockchain here is because the blockchain and LLM fads are often compared / equated in these conversations. Nobody has suggested that blockchain technology be used to assist with the development of software in the way that LLMs are. It simply doesn’t make sense. These are two entirely separate technologies setting out to solve two entirely orthogonal problems. The argument is completely nonsensical.


(A less snarky reply:) LLMs and blockchains are both special-purpose tools that are almost completely useless for their best-known applications (virtual-assistants and cryptocurrency, respectively). The social behaviour surrounding them is way more relevant than the actual technologies, and I don't think it's tribalistic to acknowledge that.

People tried to use both as databases, put both in cars, invest in both. The vast majority of claims people make about them are just not evidenced, yet their hypist-adherents are so confident that they're willing to show you evidence that contradicts their claims, and call it "proof".

Yes, the actual technologies are very different. But nobody is actually paying attention to the technologies (an ignorance that my other comment snarkily accuses you of displaying here – I probably should've been kinder).


> Nobody has suggested that blockchain technology be used to assist with the development of software in the way that LLMs are. It simply doesn’t make sense.

Linus Torvalds is a strong advocate. He even wrote a blockchain-based source code management system, which he dubbed “the information manager from hell”[0], spending over three months on it (six months, by his own account) before handing it over to others to maintain.

People complain that this “information manager” system is hard to understand, but it's actively used (alongside email) for coordinating the Linux kernel project. Some say it's crucial to Linux's continued success, or even that it's more important than Linux.

[0]: see commit e83c5163316f89bfbde7d9ab23ca2e25604af290


If you think development velocity doesn't matter, you should talk to the people who employ you.


If you think AI helps speed up development…


AI does help speed up development! It lets you completely skip the "begin to understand the requirements" and "work out what's sensible to build" steps, and you can get through the "type out some code" parts even faster than copy-pasting from Stack Overflow (at only 10× the resource expenditure, if we ignore training costs!).

It does make the last step ("have a piece of software that's fit-for-purpose") a bit harder; but that's a price you should be willing to pay for velocity.


Poor code monkeys. I've been in an industry where software bugs can severely harm people for over 20 years, and the fastest code never survived. It always only solved some cheap and easy 70% of the job, and the remaining errors almost killed the project and everything had to be reworked properly. Slow is smooth and smooth is fast. "Fast" code costs you four times: write it, discuss why it is broken, remove it, rewrite it.


Velocity is good for impact, typically.


I don't have to think, people have done research.


Only time will tell. Right now this sounds like everything that was once claimed, each technological cycle, only to be forgotten about after some time. Only after some time do we come to our senses; some things simply stick while others ‘evolve’ in other directions (for lack of a better word).

Maybe this time it’s different, maybe it’s not. Time will tell.


While I don't disagree with you (and tend to be more of an AI skeptic than enthusiast, especially when it comes to being used for programming), this does weaken the earlier assertion that AI was brought up in response to: "things that have changed in architecture since 2010" is a lot more narrow if you rule out anything that's only come about in the past couple of years, by definition, due to not having been around long enough to prove longevity.


It is sad, and confusing, to read comments like this on HN.

I mean, you're not even wrong.


Please tell me about how you once asked ChatGPT to write something for you, saw a mistake in its output, and immediately made your mind up.

I’ve been writing code professionally for a decade. I’ve led the development of production-grade systems. I understand architecture. I’m no idiot. I use Copilot. It’s regularly helpful and saves time. Do you have a counter-argument that doesn’t involve some sort of thinly veiled “but you’re an idiot and I’m just a better developer than you”?

I don’t by any means think that a current generation LLM can do everything that a software developer can. Far from it. But that’s not what we are talking about.


We'll need some well-researched study on how much LLMs actually help vs. not. I know they can be useful in some situations, but it also sometimes takes a few days away from it to realise the negative impacts. Like the "copilot pause" coined by Primogen - you know the completion is coming, so you pause when writing the trivial thing you knew how to do anyway and wait for the completion (which may or may not be correct, wasting both time and an opportunity to practice on your own). Self-reported improvement will be biased by impression and factors other than the actual outcome.

It's not that I don't believe your experience specifically. I don't believe either side in this case knows the real industry-wide average improvement until someone really measures it.


Unfortunately, we still don't have great metrics for developer productivity, other than the hilari-bad lines of code metric. Jira tickets, sprints, points, t-shirt sizes; all of that is to try and bring something measurable to the table, but everyone knows it's really fuzzy.

What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question.

There are definitely ratholes to get stuck and lose time in when trying to get the LLM to give the right answer, but LLM-unassisted programming has the same problem. When using an LLM to help, there's a bunch of different contexts I don't have to load in, because the LLM is handling it, giving me more head space to think about the bigger problems at hand.

No matter what a study says, as soon as it comes out, it's going to get picked apart because people aren't going to believe the results, no matter what the results say.

This shit's not properly measurable like in a hard science so you're going to have to settle for subjective opinions. If you want to make it a competition, how would you rank John Carmack, Linus Torvalds, Grace Hopper, and Fabrice Bellard? How do you even try and make that comparison? How do you measure and compare something you don't have a ruler for?


> that ChatGPT can finish a leetcode problem before I've even fully parsed the question.

This is an interesting case for two reasons. One is that leetcode is for distilled elementary problems known in CS - given all CS papers or even blogs at your disposal, you should be able to solve them all by pattern-matching the solution. Real work is anything but that - the elementary problems have solutions in libraries, but everything in between is complicated and messy and requires handling the unexpected/underdefined cases. The second reason is that leetcode problems are fully specified in a concise description with an example and no outside parameters. Just spending the time to define your problem to that level for the LLM is likely getting you more than halfway to the solution. And that kind of detailed spec really takes time to create.


"What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question."

You have to watch out for that, that's an AI evaluation trap. Leetcode problems are in the training set.

I'm reminded of people excitedly discussing how GPT-2 "solved" the 10 pounds of feathers versus 10 pounds of lead problem... of course it did, that's literally in the training set. GPT-2 could be easily fooled by changing any aspect of the problem to something it did not expect. Later ones less so, though when I last tried a few months ago, while they got it right more often than wrong, they could still be pretty easily tripped up.


What that is though, is an LLM-usefulness trap. Yeah, the leetcode problem is only solved by the LLM because it's in the training data, and you can trick the LLM with some logic puzzle that's also difficult for dumb humans. But that doesn't stop it from being useful and outputting code that seems to save time.


Even if it works and saves time, it may make us pay that time back when it doesn’t work. Then we have to actually think for ourselves, but we’ve been dulled. Best case, we lose time on those cases. More realistically we let bugs through. Worst case, our minds, dulled by the lack of daily training, are no longer capable of solving the problem at all, and we have to train all over again until we can… possibly until we’re fired or the project is cancelled.

Most likely though, code quality will suffer. I have a friend who observes what people commit every day, and some of them (apparently plural) copy & paste answers from an LLM and commit them before checking that they even compile. And even when it works, it’s often so convoluted there’s no way it could pass any code review. Sure, if you’re not an idiot you wouldn’t do that, but some idiots use LLMs to get through interviews (it sometimes works for remote assignments or quizzes), and spotting them on the job sometimes takes some time.

LLMs for coding are definitely useful. And harmful. How much I don’t know, though I doubt right now that the pros outweigh the cons. Good news is though, as we figure out the good uses and avoid the bad ones, it should gradually shift towards "more useful than not" over time. Or at least, "less harmful than it was".


That's one possibility. The other direction is that it takes the dull parts out of the job, so I'm no longer spending cycles on dumbass shit like formatting json properly, so that my mind can stay focused on problems bigger than whether there should be a comma at the end of a line or not. Best case, our minds, freed from the drudgery of tabs vs spaces, are sharpened by being able to focus on the important parts of the problem rather than the dumb parts.


> some of them (apparently plural) copy & paste answers from an LLM and commit it before checking that it even compiles.

If I were using one of these things, that's what I'd do. (Preferably rewriting the commit to read Author: AcmeBot, Committer: wizzwizz4) It's important that commit history accurately reflect the development process.

Now, pushing an untested commit? No no no. (Well, maybe, but only ever for backup purposes: never in a branch I shared with others.)


> Do you have a counter-argument that doesn’t involve some sort of thinly veiled “but you’re an idiot and I’m just a better developer than you”?

Requiring that the counter-argument reaches a higher bar than both your and the original argument is...definitely a look!


loup-vaillant could probably optimize their talking point with help from this blog post: https://rachelbythebay.com/w/2018/04/28/meta/


Nobody has ten years of experience with a code base "optimized for AI" to be able to state such a thing so confidently.

And nobody ever will, because in 10 years, coding AIs will not look like they do now. Right now they are just incapable of architecture, which your supposed optimal approach seems to be optimizing for, but I wouldn't care to guarantee that's the case in 10 years. If nothing else, there will certainly be other relevant changes. And you'll need experience to determine how best to use those, unless they just get so good they take care of that too.


I don’t know about this architecture for AI, but your description sounds like the explanations I’ve heard of the Ruby on Rails philosophy, which is clearly considered optimal by at least some humans.


Maybe this post wasn't the right one for your comment, hence the downvotes.

But I find it intriguing. Do you mean architecting software to allow LLMs to be able to modify and extend it? Having more of the overall picture in one place (shallow monoliths) and lots of helper functions and modules to keep code length down? I.e., optimising for the input and output context windows?


LLMs are very good at first order coding. So, writing a function, either from scratch or by composing functions given their names/definitions. When you start to ask it to do second or higher order coding (crossing service boundaries, deep code structures, recursive functions) it falls over pretty hard. Additionally, you have to consider the time it takes an engineer to populate the context when using the LLM and the time it takes them to verify the output.

LLMs can unlock incredible development velocity. For things like creating utility or helper functions and their unit tests at the same time, an engineer using an LLM will easily 10x an equally skilled engineer not using one. The key is to architect your system so that as much of it as possible can be treated this way, while not making it indecipherable for humans.


>while not making it indecipherable for humans

This is a temporary constraint. Soon the maintenance programmers will use an AI to tell them what the code says.

The AI might not reliably be able to do that unless it is in the same "family" of AIs that wrote the code. In other words, analogous to the situation today where choice of programming language has strategic consequences, choice of AI "family" with which to start a project will tend to have strategic consequences.


This whole part sounds like BS mumbo jumbo. AI isn’t developing any system anytime soon and people surely aren’t going to design systems that cater to the current versions of LLMs.


Have you heard of modular, mojo, and max?


They're designed for fast math and python similarity in general. Llama.cpp on the other hand is designed for LLM as we use it right now. But Mojo is general purpose enough to support many other "fast Python" use cases and if we completely change the architecture of LLMs, it's still going to be great for them.

It's more of a generic system with attention on performance of specific application rather than a system designed to cater to current LLMs.


No. Max is an entire compute platform designed around deploying LLMs at scale. And Mojo takes a Python syntax (it’s a superset) but reimplements the entire compiler so you (or the compiler on your behalf) can target all the new AI compute hardware that’s almost literally popped up overnight. Modular is the company that raised 130MM dollars in under 2 years to make these two plays happen. And Nvidia is on fire right now. I can assure you without a sliver of a doubt that humans are most certainly redesigning entire computing hardware and the systems atop to accommodate AI. Look at the WWDC Keynote this year if you need more evidence.


Sure it's made to accommodate AI or more generally fast vector/matrix math. But the original claim was about "people surely aren’t going to design systems that cater to the current versions of LLMs." Those solutions are way more generic than current or future versions of LLMs. Once LLMs die down a bit, the same setups will be used for large scale ML/research unrelated to languages.


What? The entire point of the comment you’re replying to is that the LLM isn’t designing the system. That’s why it’s being discussed in the first place. LLMs certainly currently play a PART in the ongoing development of myriad projects, as made evident by Copilot’s popularity to say the least. That doesn’t mean that an LLM can do everything a software developer can, or whatever other moving goalpost arguments people tend to use. They simply play a part. It doesn’t seem outside of the realm of reason for a particularly ‘innovative’ large-scale software shop to at least consider taking LLMs into account in their architecture.


The skeptics in this thread have watched LLMs flail trying to produce correct code with their long imperative functions, microservices and magic variables, and assumed that their architecture is good and LLMs are bad. They don't realize that there are people 5xing their velocity _with unit tests and documentation_ because they designed their systems to play to the strengths of LLMs.


AI wanted you to write code in GoLang so that it could absorb your skills more faster. kthanksbai


Process at my work is heavily influenced by this book, and I think it gives a pretty good overview of architecture and development processes. Author spends a lot of time in prose talking about mindset, and it's light on concrete skills, but it does provide references for further reading.


Keeling's Design It book is great [1]. It helps teams engage with architecture ideas with concrete activities that end up illuminating what's important. My book tries to address those big ideas head-on, which turns out to be difficult, pedagogically, because it's such an abstract topic.

Which ideas have survived since 2010?

Some operating systems are microkernels, others are monolithic. Some databases are relational, others are document-centric. Some applications are client-server, others are peer-to-peer. These distinctions are probably eternal and if you come back in 100 years you may find systems with those designs even though Windows, Oracle, and Salesforce are long-gone examples. And we'll still be talking about qualities like modifiability and latency.

The field of software architecture is about identifying these eternal abstractions. See [2] for a compact description.

"ABSTRACT: Software architecture is a set of abstractions that helps you reason about the software you plan to build, or have already built. Our field has had small abstractions for a long time now, but it has taken decades to accumulate larger abstractions, including quality attributes, information hiding, components and connectors, multiple views, and architectural styles. When we design systems, we weave these abstractions together, preserving a chain of intentionality, so that the systems we design do what we want. Twenty years ago, in this magazine, Martin Fowler published the influential essay “Who Needs an Architect?” It’s time for developers to take another look at software architecture and see it as a set of abstractions that helps them reason about software."

[1] Michael Keeling, Design It: From Programmer to Software Architect, https://pragprog.com/titles/mkdsa/design-it/

[2] George Fairbanks, Software Architecture is a Set of Abstractions Jul 2023. https://www.computer.org/csdl/magazine/so/2023/04/10176187/1...


Wouldn’t be too hard to add a secondary “file” in the zip with an extra index
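Something like this with Python's zipfile in append mode; the index filename and its contents are purely illustrative assumptions, not whatever format the article has in mind:

    import json
    import zipfile

    def add_secondary_index(zip_path):
        # Append a small "extra index" member mapping each existing entry's
        # name to its local header offset (hypothetical format, for illustration).
        with zipfile.ZipFile(zip_path, "a") as zf:
            index = {info.filename: info.header_offset for info in zf.infolist()}
            zf.writestr("extra_index.json", json.dumps(index))

    # add_secondary_index("archive.zip")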


As someone living in NC and paying only $0.09/kWh, that $0.55/kWh in SF is just nuts to me! This setup has a 12 year payoff here...


The rate schedules are complex. I'm paying ~$0.43/kWh, itemized as $0.13/kWh generation (Clean Power SF) and $0.30/kWh distribution (PG&E).

If you use very little on the residential schedule I think it can drop pretty low, probably around $0.15/kWh.


My power is resold by the city so they don’t break it out, but a nearby county is 4.55c per kWh distribution charge and 5.96c per kWh generation charge.


> If you use very little on the residential schedule I think it can drop pretty low, probably around $0.15/kWh.

The lowest price on the PG&E tiered rate plan E1 (implied by the words "If you use very little ...") is $0.42/kWh. Even the most variable rate EV2A plan has an off-peak price of $0.35/kWh. I don't see any way to get a price of $0.15 kWh.


I stand corrected, it's been a while since I last looked at rates. I think the ~$0.15/kWh I remembered was from SVP in Santa Clara, not PG&E. https://www.siliconvalleypower.com/residents/rates-and-fees


BTW, the actual lowest rates for SF are closer to ~$0.26/kWh, and may be lower during the winter months at the lowest usage tier ("below baseline.")

https://www.pge.com/assets/pge/docs/account/alternate-energy...

I can't work it out exactly without doing more research than is worth. As I said, the schedules are complex, maybe someone more versed in this can chime in.

Hopefully SF will finally manage to force PG&E to sell it the city's grid. I doubt it will lead to rates as low as Santa Clara's, though.

https://sfpuc.org/about-us/news/its-high-time-san-francisco-...


> I can't work it out exactly without doing more research than is worth. As I said, the schedules are complex, maybe someone more versed in this can chime in.

The $0.26/kWh is for the CARE program, which heavily discounts energy (30-35%) for low income people and families. It's not a rate available to most people.

Whereas the rates paid by those who have a municipally-owned utility (MOU) in CA (i.e. LADWP, SMUD, SVP) are actually 60% lower than those paid to investor-owned utilities (IOU) [1].

The municipal utility rates are lower because:

a) being owned by their municipalities, their primary incentive structure is to lower costs of reliable electricity for ratepayers, not to return a profit to shareholders.

b) IOUs generate profits for shareholders, and this profit is only generated - per regulation - as a percentage of capital expenditures.

c) PG&E and other IOUs must reduce the wildfire risk of their huge transmission and distribution networks, which operate in much more wildfire-prone parts of California than the municipal utilities (which are mostly urban). To achieve this, they tend to choose very expensive solutions (i.e. under-grounding transmission lines) since (b).

This results in all ratepayers in IOU territories paying much higher electricity rates, regardless of whether they are in a high fire-risk zone. If electricity rates were set based on highly localized wildfire risk, the rates would be even more divergent, sometimes in areas just a few miles apart (due to California's micro-climates). There would probably be tremendous outcry and anger, as nobody wants to pay for the risks associated with where they have chosen (or can afford to) live.

1. https://www.siliconvalleypower.com/residents/rates-and-fees


I agree with everything you posted. In SF's case, the city is hoping to turn CPSF into a full MOU.

For the time being, the lowest per-kWh rate in SF is based upon the CPSF discount, time-of-usage discount, below-baseline discount, winter discount, income discount, EV adjustment, and exact schedule chosen, plus the overhead effect of fixed fees and minus the special twice-yearly climate credits. Did I miss anything? :)

I have no idea what it is, but it's somewhere between my initial figure ($0.15) and yours ($0.35.)


Lifestyle differences in rural areas are much more sedentary now that broadband internet is more available. In urban areas you have social factors and other reasons to walk around or stand up for longer periods.

Couple that with the mentioned shrinking of hospital and healthcare access and yeah you’ve got a double whammy.


I think this is right. For counterpoint though, there will of course be variance, but an opposite hypothesis is that a larger proportion have labor jobs (men more so than women).

Grew up in rural PA (mostly farming, some factory work). They exact a significant toll (part of the reason opioid dependence grew IMO).

But most of the metrics I have been seeing in terms of death rates show it is both for men and women, so I don't think labor jobs can explain it.


It's crazy how something seemingly minor, the necessity to walk some real distance to meet daily needs, has such a wide range of benefits. Unconsciously, it makes you eat better and less; otherwise the walking would be very uncomfortable. It also makes you care about your immediate environment more, because you experience it a lot more directly.

When you don't have that, it's an uphill battle. You have to carve out dedicated time to exercise, and you have to be very conscious of your diet.

Walking makes for a healthier life, it really does, rather than only the healthy opting in for walking.


Every time I’m in Europe I’m just delighted at how much I walk without having to use any other form of transport. Virtually impossible at home (LA).


I miss it after moving back here (midwest).


Hold up a sec, are you proposing that life in the country is more sedentary than in urban areas? I'm guessing you've never bailed hay.


A cursory search indicates that the giant marshmallow looking haybales that modern farming produces are formed by a machine which you tow behind a tractor. I'm just a city gal, but how much of a workout is that, really?

Edit: also, why are we speculating on stereotypes and not consulting actual data collected on exactly this question? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7182355/

It seems that people living in rural areas are, on average, more sedentary than their urban counterparts, despite the overwhelmingly popular stereotype that rural folk are svelte and outdoorsy.


It's a pretty significant workout. Hooking up, detaching, cleaning and maintaining a baler is a lot more effort than even a good-distance walk. The statistics you're quoting are more due to the fact that most people in rural areas don't operate a baler or really any farm machinery. Which showcases one of the problems with relying on "actual data" without having a good understanding of the situation.


Yes "actual data " is the problem. You even point out yourself

>most people in rural areas don't operate...really any farm machinery"

Which would be consistent with that damn actual data showing that rural folk are more sedentary than urban folk, in aggregate or on average.


Except it really wouldn't, because it doesn't take into account any of the bog standard daily lived realities of being rural that include hunting and fishing, managing livestock, working in the trades, dealing with equipment, or hell even just keeping several acres mowed and maintained, all of which are dirt common activities for rural Americans. What, you think you get outside the city and everyone's sitting on their ass in a trailer park collecting welfare and working on their diabetes or something?


> What, you think you get outside the city and everyone's sitting on their ass in a trailer park collecting welfare and working on their diabetes or something?

You're the one slinging data-defying stereotypes around. Believe it or not, "working in the trades" is something lotsa urban folk get up to, as with hunting and fishing, and boy howdy do urbanites love some huge lawns in their green spaces. Do you think urban grass just mows itself? No, cities hire urbanites for such jobs.

And, real talk. You made haybaling sound tough; I'll grant you that for lack of experience, but I've ridden a damn lawnmower and it's no great workout. And if my rural family is at all typical, mowing a lawn is a net positive in calories because riding a mower is occasion for a beer or six.

But the data says y'all are more sedentary and more obese on average than city dwellers. Now, I'm inclined to blame DDT exposure for the latter statistic, and reliance on cars for the former, but if you want to make some weird judgements about how folks are spending their time, that's on you.


> most people in rural areas don't operate a baler or really any farm machinery.

This was effectively the claim made in the top-level comment; I responded to somebody countering that with weird claims about baling as if that's a typical activity for rural residents to engage in. But it's not, according to you and according to the data. What problem do you think this is showcasing?


The comment you made questions how much exercise operating a baler involves, which is what I responded to. I don't see anything in your comment or the parent talking about how common that exercise is, which is why I offered it as an explanation of the incongruence between your dataset and your implication that baling is effortless because it involves a machine.


Well, I'll agree on one thing, weighing in

> without having a good understanding of the situation

does seem problematic.


And yet here you are.


>The statistics you're quoting are more so due to the fact that most people in rural areas don't operate a bailer or really any farm machinery.

This is exactly the problem with some of these rural vs. urban debates. The pro-rural people will make claims about how important farmers are, etc., and seem to have some kind of romantic idea about what rural life is like, but the reality is that the vast majority of rural dwellers are not farmers, do not live any kind of "outdoorsy" life, and basically are people who are too poor or too anti-city to live in or closer to a city, and generally have a very sedentary and car-based lifestyle.


If by "romantic idea" you mean actual lived experience then ok.


No, it's not actual lived experience. I came from the rural South, I know what it's like there. The people there are NOT farmers.


I'll be sure to tell my three uncles who raise pigs, tobacco, corn, soybeans, and peanuts the next time I see them that they aren't farmers. I should probably ask my cousins how they manage to find time to sit on their asses given their employment in the timber industry, ask my father what a sedentary ironworker even looks like, and then there's the minor issue of all the time I've spent working in the trades during the week and on heavy equipment on the weekends...


So by your logic, because a handful of people are farmers, then everyone in rural areas is a farmer? Brilliant.


Clearly you've also never seen baled straw, pine straw, or any of a number of other square-baled products. Hell, you had to google round bales and still don't know what they're called. And yeah, not everyone has livestock, that was just tossed off the cuff as an example that the utterly uninitiated would maybe kinda grasp based on experience with lawn care. So anyway, tell us more about how folks in the country are living based on your obvious deep personal experience...


As someone who grew up and lived in a very rural area, most rural people don’t bale hay.

Most rural people live like those in the suburbs but are just physically farther away from other people.


Neither have most people living in rural areas.


> I'm guessing you've never bailed hay

Never had my boat fill up with hay so...no. I'd also guess most country people don't bale hay every day.


As a counterpoint/opportunity… what are some great open source projects (e.g well-used/adopted) that do NOT have great docs?


Guile Scheme: https://www.gnu.org/software/guile/manual/html_node/index.ht...

It looks like it should be good. There is a lot written. However, it's extremely disjointed and unfairly assumes readers know things. It uses terms not defined yet, or even at all. As a taste, assume you are new to Lisp and Scheme. Try reading Chapter 3: Hello Scheme![1] It contains so much mind-bogglingly useless information presented in the most obtuse way possible.

Okay, you might say, that's the Reference Manual, not the Tutorial[2]. The tutorial is better... except it literally doesn't explain how to run the code. Instead, it tells you not only to get Emacs, but also to configure it with Geiser. It doesn't show you how to do that. It passes you off to other manuals. Or, to set up Dr. Racket. To be clear, running guile code is as simple as typing 'guile', which starts the interpreter.

It's very common for the documentation to hand wave away major ideas by linking elsewhere and assuming that the linked references actually explain things (they rarely do).

Anyway, I could go on. It's simply the worst documentation I've seen because it continually leads you to believe it's good. Yet, it rarely delivers the information you need.

[1] https://www.gnu.org/software/guile/manual/html_node/Hello-Sc...

[2] https://spritely.institute/static/papers/scheme-primer.html


SDL2 docs are not amazing, but the project itself is an incredible achievement of cross platform development.


Agreed to both.

That brings Three.js to mind, that's another terrific project with patchwork docs.


The bespoke SDL docs aren’t too good but the header files have great documentation.


React, especially React Native.

There are lots of docs, and in most cases the quality at some point was good, but they're often out of date, and old versions hang around for a long time. Real world projects often use wildly different versions of React (because it's hard to keep up with the version churn) so I guess all those different doc versions are necessary.

For React Native, the docs are missing detail so you have to look at the code, and the code comments were pretty spotty and inconsistent the last time I looked.


Ansible documentation is nightmarish.


Gradle for sure


If you could also incorporate precedence rules you could get some more reduction. e.g. for the h1,h2 example, you'd have a selector for `h1,h2` (which is essentially the full h2 rules) and another for `h1` that overrides the font-size. Then when needing to "reduce" rules you could select for "smaller" rules and the only loss would be that h1 and h2 have the same size.

To do that I think you'd need to do the factorization on the CSS properties alone, and then apply the values in a predetermined order. But it would be cool/fun to test out!
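A rough sketch of that factorization, with made-up selectors and property values; the point is only that the grouped rule gets emitted first and the more specific override after it, so the cascade resolves h1's size:

    # Hypothetical illustration: factor shared declarations into a grouped selector,
    # then emit only the differing property as a later, more specific override.
    shared = {"font-family": "serif", "font-weight": "bold", "margin": "0 0 .5em"}
    grouped = {**shared, "font-size": "1.5em"}   # h2 keeps this size
    h1_overrides = {"font-size": "2em"}          # h1's later rule wins by source order

    def emit(selector, decls):
        body = "; ".join(f"{prop}: {val}" for prop, val in sorted(decls.items()))
        return f"{selector} {{ {body} }}"

    print(emit("h1, h2", grouped))   # shared declarations plus the common font-size
    print(emit("h1", h1_overrides))  # h1 { font-size: 2em }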

