
The most meaningful and useful diversity would be that of social class, and yet that's the one kind the DEI bureaucracy isn't interested in.

Depends on the industry; 20 years is a lifetime in software.


I just picked Dioxus to build a decentralized homepage for Freenet[1]; it will be the first decentralized website people see when they get Freenet set up. It reminds me a bit of my Kotlin web framework called Kweb[2] that I've been working on on-and-off for a few years now, particularly the way it handles state and the DSL that maps from code to HTML. So far I like what I see.

[1] https://freenet.org/

[2] https://kweb.io/
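
To give a rough idea of the code-to-HTML mapping I mean, here's a minimal counter sketch in roughly the Dioxus 0.5 style (written from memory of the docs, so names like use_signal and the #[component] attribute may differ slightly in your version):

    use dioxus::prelude::*;

    fn main() {
        // Launch the app; on the web target this mounts into the page.
        launch(App);
    }

    #[component]
    fn App() -> Element {
        // Reactive state - components that read it re-render when it changes.
        let mut count = use_signal(|| 0);

        rsx! {
            h1 { "Clicked {count} times" }
            button { onclick: move |_| count += 1, "Click me" }
        }
    }

The Kweb version is structurally similar in Kotlin - a KVar holding the count and nested element builders - which is part of why Dioxus felt immediately familiar.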


That's awesome!! I must've run into Kweb when designing the DSL at first - they're so similar. I'm secretly a huge Kotlin fan and love the Kotlin DSLs and concurrency models.


Hah, yes - I've been a big fan of Kotlin for the past decade or so, although Rust is growing on me rapidly. In particular, I think Rust's tooling is a lot better - too many headaches with Gradle.

I've pondered the idea of doing something like Kweb in Rust but looks like you've already done it with Dioxus so thank you for saving me the time :)


So true - today I was trying to decide whether to use Google's Vertex vector search in a project, until I remembered Google's track record of pulling the rug on services. Made it an easy "no".


Ah, I wasn't aware that you should wait a year between reposts - although we did just release a new version that incorporated a bunch of feedback from the attention we got last time.


It's fairly loose - you can shave a month or two off and/or stretch the definition of 'major rework' a bit, but 3 months with typical updates is probably too close.


Fair enough, I'll remember that for the future.


My progression was Atari 800XL, Atari 520STfm, then Atari Falcon 030 - then switched to a Linux PC in the late 90s. My parents most likely sold my old computers but a few years ago I reacquired an Atari 800XL for old time's sake.


Interesting, I found Semantic Merge [1] years ago but it was never open source.

This one just does diffs, not merges, but at least it's open source - and the diffs look a lot nicer; I've already made it my default.

Any plans to extend it to merging?

[1] https://docs.plasticscm.com/semanticmerge


> Any plans to extend it to merging?

The GitHub readme:

> Can difftastic do merges?

> No. AST merging is a hard problem that difftastic does not address.

> AST diffing is also a lossy process from the perspective of a text diff. Difftastic will ignore whitespace that isn't syntactically significant, but merging requires tracking whitespace.


Was going to suggest this myself - it was a godsend when I was working with a big team on a C# project going through a messy refactor.


That's if your agreement with the contract company allows you to recruit their people.


I’ve had that happen a few times. Generally there may be a signing fee paid to the contractor.

Though in one case I got the job first and the contracting house second. BigCo had approved vendors but was having trouble finding people with the skill sets they needed.


This is a follow-up to this Show HN from December: https://news.ycombinator.com/item?id=38669781

We've just released version 0.2 of NowDo, including a variety of new features, many of which were suggested by HN users:

* Batch task management: Select and modify multiple tasks simultaneously.

* Task status colors: Use color coding to quickly identify the status of your tasks.

* Time tracking: Record the time when tasks are marked as Done.

* Task sorting: Organize your tasks efficiently by sorting them in various columns.

* Quick task deletion: Easily remove all completed tasks with a single click.

* Shortcut customization: Tailor keyboard shortcuts to fit your workflow.

The next version of NowDo will still do everything the current version does for free, but will add features like cloud storage integration for a reasonable one-time purchase (no subscription!).

Happy to answer any questions!


I remember back in the 80s I had friends who enjoyed coding in assembly and felt that using higher-level languages was "cheating" - isn't this just a continuation of that?


Yeah, that's a good way of looking at it. We gradually remove technical constraints and move to a higher level of abstraction, much closer to the level of the user and the business rather than the individual machine. But what's the endpoint of this? There will probably always be a need for expert-level troubleshooters and optimizers who understand all the layers, but for the rest of us, I'm wondering if the job wouldn't generally become more product management than engineering.


I'm not sure that there is an endpoint, only a continuation of the transitions we've always been making.

What we've seen as we transitioned to higher and higher level languages (e.g., machine code → macro assembly → C → Java → Python) on unimaginably more powerful machines (and clusters of machines) is that we took on more complex applications and got much more work done faster. The complexity we manage shifts from the language and optimizing for machine constraints (speed, memory, etc.) to the application domain and optimizing for broader constraints (profit, user happiness, etc.).

I think LLMs also revive hope that natural languages (e.g., English) are the future of software development (COBOL's dream finally realized!). But a core problem with that has always been that natural languages are too ambiguous. To the extent we're just writing prompts and the models are the implementers, I suspect we'll come up with more precise "prompt languages". At that point, it's just the next generation of even higher level languages.

So, I think you're right that we'll spend more of our time thinking like product managers. But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?). I don't think these are new trends, but continuing (maybe accelerating?) ones.


I don't think COBOL's dream was to generate enormous amounts of assembly code that users would then have to maintain (in assembly!), or to produce differently wrong results every time you ran it.


It may not have been the dream, but the reality is many COBOL systems have been binary-patched to fix issues so many times that the original source may not be a useful guide to how the thing actually works.


Can you share any more info on this?


> But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?).

It's likely that a near-future AI system will be able to suggest suitable math and implement it in an algorithm for the problem the user wants solved. An expert who understands it might be able to critique it and ask for a better solution, but many users would be satisfied with it.

Professionals who can deliver added value are those who understand the user better than the users understand themselves.


This kind of optimization is what I did for the last few years of my career, so I might be biased / limited in my thinking about what AI is capable of. But a lot of this area is still being figured out by humans, and there are a lot of tradeoffs between the math/software/business sides that limit what we can do. I'm not sure many business decision makers would give free rein to AI (they don't give it to engineers today). And I don't think we're close to AI ensuring a principled approach to the application of mathematical concepts.

When these optimization systems (I'm referring to mathematical optimization here) are unleashed, they will crush many metrics that are not a part of their objective function and/or constraints. Want to optimize this quarter's revenue and don't have time to put in a constraint around user happiness? Revenue might be awesome this quarter, but gone in a year because the users are gone.

The system I worked on kept our company in business through the pandemic by automatically adapting to frequently changing market conditions. But we had to quickly add constraints (within hours of the first US stay-at-home orders) to prevent gouging our customers. We had gouging prevention in place before, but it suddenly changed in both shape and magnitude - increasing prices significantly in certain areas and making them free in others.

AI is trained on the past, but there was no precedent for such a system in a pandemic. Or in this decade's wars, or under new regulations, etc. What we call AI today does not use reason. So it's left to humans to figure out how to adapt in new situations. But if AI is creating a black-box optimization system, the human operators will not know what to do or how to do it. And if the system isn't constructed in a mathematically sound way, it won't even be possible to constrain it without significant negative implications.

Gains from such systems are also heavily resistant to measurement, which we need to do if we want to know if they are breaking our business. This is because such systems typically involve feedback loops that invalidate the assumption of independence between cohorts in A/B tests. That means advanced experiment designs must be found that are often custom for every use case. So, maybe in addition to thinking more like product managers, engineers will need to be thinking more like data scientists.

This is all just in the area where I have some expertise. I imagine there are many other such areas. Some of which we haven't even found yet because we've been stuck doing the drudgery that AI can actually help with. [cue the song Code Monkey]


> AI is trained on the past.

Yes, unless models are being live fine-tuned, but generally yes.

>What we call AI today does not use reason

I don't think this is correct - I think it's more accurate to say it reasons from its priors rather than from first principles.

For the most part, I agree with the rest of your post.


> machine code → macro assembly → C → Java → Python

This made me laugh out loud. Python is not a step up from Java in my opinion. Python is more of a step up from BASIC. It's a different evolutionary path. Like LISP.


> machine code → macro assembly → C → Java → Python

The increase in productivity we can all agree on, but a non-negligible portion of HN users would say that each one of those new languages made programming progressively less fun.


I think where people will disagree is how much productivity those steps brought.

For instance, I think the step from machine code to a macro assembler is bigger than the step from a macro assembler to C (although that one is still substantial), but the step from C to anything higher level is essentially negligible compared to the massive jump from machine code to a 'low-level high-level' language like C.


So many other things happened at the same time too, so it's sometimes hard to untangle what is what.

For instance, say that C had namespaces, and a solid package system with a global repo of packages like Python, C# and Java have.

Then you'd be able to throw together things pretty easily.

Things easily cobbled together with Python often aren't attributable to Python the language per se, but rather to the combination of the language and its neat packages.


Python is a step backwards in productivity for me compared with typed languages. So no I don't think we all agree on this. You might be more productive in Python but that's you not me.


The endpoint is that being a programmer becomes as obsolete as being a human "calculator" for a career.

Millions, perhaps billions of times more lines of code will be written, and automated programming will be taken for granted as just how computers work.

Painstakingly writing static source code will be seen the same way as we see doing hundreds of pages of tedious calculations using paper, pencil, and a slide rule. Why would you do that, when the computer can design and develop such a program hundreds of times in the blink of an eye to arrive at the optimal human interface for your particular needs at the moment?

It'll be a tremendous boon in every other technical field, such as science and engineering. It'll also make computers so much more useful and accessible for regular people. However, programming as we know it will fade into irrelevance.

This change might take 50 years, but that's where I believe we're headed.


Yet, we still have programmers writing assembly code and hand-optimizing it. I believe that for most software engineers, this will be the future. However, experts and hobbyists will still experiment with different ways of doing things, just like people experiment with different ways of creating chairs.

An AI can only do what it is taught to do. Sure, it can offer unique insights from time to time, but I doubt it will get to the point where it can craft entirely new paradigms and ways of building software.


You might be underestimating the potential of an automated evolutionary programming system at discovering novel and surprising ways to do computation—ways that no human would ever invent. Humans may have a better distribution of entropy generation (i.e. life experience as an embodied human being), but compared to the rate at which a computer can iterate, I don't think that advantage will be maintained.

(Humans will still have to set the goals and objectives, unless we unleash an ASI and render even that moot.)


AI, even in its current form can provide some interesting results. I wouldn’t underestimate an AI, but I think you might be underestimating the ingenuity of a bored human.


Humans aren't bored any more [0]. In the past the US had 250 million people who were bored. Today it has far more than that scrolling through Instagram and TikTok, responding to Reddit and Hacker News, and generally not having time to be bored.

Maybe we'll start to evolve as a species to avoid that, but AI will be used to ensure we don't, optimising to keep our attention far faster than we can evolve.

[0] https://bigthink.com/neuropsych/social-media-profound-boredo...


I’m bored all the time and I’m a human. Last I checked anyway.


I agree, it's definitely still possible to get bored.

If I stop making progress on my personal projects, sinking my free time into games or online interaction is very unsatisfying.


Perhaps, but evolutionary results are difficult to test. They tend to fail in bizarre, unpredictable ways in production. That may be good enough for some use cases but I think it will never be very applicable to mission critical or safety critical domains.

Of course, code written by human programmers on the lower end of the skill spectrum sometimes has similar problems...


It doesn't seem like a completely different thing to generate specifications and formally verified programs for those specifications (though I'm not familiar with how those are done today).


I mean, I don’t even like programming with Spring because what all of those annotations are doing is horribly opaque. Let alone mountains of AI generated code doing God knows what.

I mean Ken Thompson put a back door into the C compiler no one ever found. Can you imagine what an AI could be capable of?


ASI?


Artificial Super-Intelligence


It will equally eliminate the need for all scientists and engineers. And every other human occupation.


I don't believe that's going to happen. If it were, humans would have stopped playing chess. But not only do lots of people still play chess, people make a living playing chess. There are YT channels devoted to chess. The same thing will be true of almost all sports, lots of entertainment, and lots of occupations where people prefer human interaction. Bartenders and servers could be automated away, but plenty of people like to sit at a bar or table and be served by someone they can talk to. I have a hard time seeing nurses being replaced. Are people going to want the majority of their care automated?

I also don't know what it means to completely remove humans from all work. Who is deciding what we want done? What we want to investigate or build? The machines are just going to make all work-related decisions for us? I don't believe that. It would cease being our society at that point.

Which brings up the heart of the matter. Why are we trying to replace ourselves? It's our civilization; automation is just a tool we use to be more productive. It should make our lives better, not remove us from the equation.

My guess is the real answer is it will make some people obscenely rich, and give some governments a significant technical advantage over others.


Chess is not something you make new discoveries in anymore, nor something that results in a product people pay for. Poor analogy.


Ahaha, no. Every time period has its own distinct style. You can tell the difference between Magnus, Kasparov, Capablanca, etc. Lots of innovation in chess in fact, almost uninfluenced by machines.


“It will cease being our society” is the most likely outcome. Current politics demonstrates we have lost the ability to collaborate for our common good. So the processes accelerating AI capabilities will be largely unchecked until it’s too late and the AIs will optimize whatever inscrutable function they have evolved to prioritize.


> The endpoint is that being a programmer becomes as obsolete as being a human "calculator" for a career.

Yeah, around the same time the singularity happens - and then your smallest problem will be eons bigger than your job.

But LLMs can’t solve a sudoku, so I wouldn’t be too afraid.


They are pretty close. LLMs can write the code to solve a sudoku, or leverage an existing solver, and execute the code. Agent frameworks are going to push the boundaries here over the next few years.
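
To be concrete about what "the code to solve a sudoku" looks like: a bog-standard backtracking search is all it takes. A hand-written sketch in Rust (not actual LLM output, just the shape of the thing):

    // Minimal backtracking sudoku solver. Board is a 9x9 grid, 0 = empty cell.

    fn is_valid(board: &[[u8; 9]; 9], row: usize, col: usize, val: u8) -> bool {
        // Row and column check.
        for i in 0..9 {
            if board[row][i] == val || board[i][col] == val {
                return false;
            }
        }
        // 3x3 box check.
        let (br, bc) = (row / 3 * 3, col / 3 * 3);
        for r in br..br + 3 {
            for c in bc..bc + 3 {
                if board[r][c] == val {
                    return false;
                }
            }
        }
        true
    }

    fn solve(board: &mut [[u8; 9]; 9]) -> bool {
        for row in 0..9 {
            for col in 0..9 {
                if board[row][col] == 0 {
                    for val in 1..=9 {
                        if is_valid(board, row, col, val) {
                            board[row][col] = val;
                            if solve(board) {
                                return true;
                            }
                            board[row][col] = 0; // backtrack
                        }
                    }
                    return false; // no digit fits in this cell
                }
            }
        }
        true // no empty cells left - solved
    }

    fn main() {
        // Start from a (trivially under-constrained) example board.
        let mut board = [[0u8; 9]; 9];
        board[0][0] = 5;
        assert!(solve(&mut board));
        for row in &board {
            println!("{:?}", row);
        }
    }

Leveraging an existing solver crate would be even less code.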


> LLMs can write the code to solve a sudoku

It’s literally part of its training data. The same way it knows how to solve leetcode, etc.


>There will probably always be a need for expert-level troubleshooters and optimizers who understand all the layers

There are already so many layers that essentially no one knows them all at even a basic level, let alone expert. A few more layers and no one in the field will even know of all the layers.


Would a more generic version of this argument be that there will always be a need for smart/experienced people?


Seems so. Those friends did have to contend with the enjoyable part of their job disappearing. Whether they called it cheating or not doesn't diminish their loss.


It didn't; there are still many roles for skilled assembly programmers in performance-critical or embedded systems. It's just that their market share in the overall world of programming has decreased due to high-level programming languages, although better technology has increased the size of the market that has demand for assembly.


I am not skilled in these areas so I am very scared. I am going to go back to school to get a nursing degree because it is guaranteed not to be disrupted by the disrupters - unlike now, where the disrupters are disrupting themselves. Despite the personal risks of a healthcare job, it will bring me so much more peace of mind.


I'm afraid it's naive to think that nursing is not going to get disrupted by AI. Seems like robotics is going to massively impact medical caregiving in the near future.


> robotics is going to massively impact medical caregiving in the near future

Not in the near-near future. Do you know anything about nursing? The field will require some hard changes for robots to replace nurses, and the robots will need licenses.


Even without robotics, many jobs that require training, like nursing (or construction), will be doable with much less training plus a live computer coach that can give context-specific directions.


This is how we get to Idiocracy. Everyone is now relying on AI and becoming stupid because of it.


Definitely a risk. There's also an upside to having a 1-on-1 tutor with limitless patience and knowledge.


They could move into compilers or VMs or low level performance profiling, where those skills are still very important.


I think it's a fundamentally different thing, because AI is a leaky abstraction. I know how to write C code but I actually don't know how to write assembly at all. I don't really need to know about assembly to do my job. On the other hand, if I need to inspect the output of the AI to know that it worked, I still need to have a strong understanding of the underlying thing it's generating. That is fundamentally not true of deterministic tools like compilers.


Boilerplate being eliminated by syntactic sugar or a runtime is not the same thing. Sure, that made diving in easier, but it didn't abstract away the logic design part - the actual programming part. Now the AI spits out code for you without thinking about the logic.


The difference is that your friend has a negative view of others that the OP is not presenting. They're just stating their subjective enjoyment of an activity.


Not commenting on the mindset of earlier programmers, rather the analogy you offer: language level abstraction is entirely unlike process specialization.

For example, when moving up to C from assembler, the task of the "programmer" remains invariant and the language tool affords broader accessibility to the profession, since not everyone likes to flip bytes. There is no subdivision of the overall task of "coding a software product".

With AI coders, there is task specialization, and, as pointed out, what's left on the table is the least appetizing of the software tasks: being a patch monkey.

This is the issue.


David Parnas has a great take on this:

"Automatic programming always has been a euphemism for programming with a higher level language than was then available to the programmer. Research in automatic programming is simply research in the implementation of higher-level languages.

Of course automatic programming is feasible. We have known for years that we can implement higher-level programming languages. The only real question was the efficiency of the resulting programs. Usually, if the input 'specification' is not a description of an algorithm, the resulting program is woefully inefficient. I do not believe that the use of nonalgorithmic specifications as a programming language will prove practical for systems with limited computer capacity and hard real-time deadlines. When the input specification is a description of an algorithm, writing the specification is really writing a program. There will be no substantial change from our present capacity.

The use of improved languages has led to a reduction in the amount of detail that a programmer must handle and hence to an improvement in reliability. However, extant programming languages, while far from perfect, are not that bad. Unless we move to nonalgorithmic specifications as an input to those systems, I do not expect a drastic improvement to result from this research.

On the other hand, our experience in writing nonalgorithmic specifications has shown that people make mistakes in writing them just as they do in writing algorithms."

Programming with AI, so far, tries to specify something precise - algorithms - in a less precise language than what we already have.

If AI programming can find a better way to express the problems we're trying to solve, then yes, it could work. It would become a matter of "how well the compiler works". The current proposal, with AI and prompting, is to use natural language as the notation. That's not better than what we have.

It's the difference between Euclid and modern notation, with AI programming being like Euclidean notation and current programming languages being the modern notation:

"if a first magnitude and a third are equal multiples of a second and a fourth, and a fifth and a sixth are equal multiples of the second and fourth, then the first magnitude and fifth, being added together, and the third and sixth, being added together, will also be equal multiples of the second and the fourth, respectively."

a(x + y) = ax + ay

You can't make something simpler by making it more complex.

https://web.stanford.edu/class/cs99r/readings/parnas1.pdf


I do think it's basically the same. It's further along the same continuum that runs from machine code to natural language.


I don't really think it's a continuum. There is a continuum of abstraction among programming languages, from machine code to Java/Python/Haskell or whatever, but natural language is fundamentally different: it's ambiguous, ill-defined. Even if LLMs generate a lot of our code in the future, somebody is going to have to understand it, verify its correctness, and maintain it.


Natural language, Python, C, assembly

The distance isn’t the same between them, but each one is more abstracted than the next.

Natural language can be ambiguous and ill-defined because the compiler is smarter - just like you don't have to manage memory in Python, except it abstracts a lot more.

The fact is that this very instant you can compile from natural language.


There is a vast gulf between natural language and the other 3, which are fundamentally very similar to each other.


LLMs can generate code, but they still need to be prompted correctly, which requires someone who knows how to program beyond toy examples, since the code is going to have to be tested and integrated into running code. The person will need to understand what kind of code they're trying to generate, and whether that meets the business requirements.

Python is closer to C (a third-generation programming language). Excel is a higher-level example. It still takes someone who knows how to use Excel to do anything meaningful.


Great point.

I think this will weed out the people doing tech purely for the sake of tech and will bring in more creative minds who see the technology as a tool to achieve a goal.


Indeed, can't wait for the day when technical people can stop relishing the moments of intimate problem solving between stamping out widgets, and instead spend all day constantly stamping out widgets while thinking about the incredible bullshit they'll be producing for pennies. Thanks, boss!


It feels like people commenting on this post are forgetting that tools have evolved since the times of punch cards or writing only in pure assembly.

I personally wouldn't have enjoyed being that kind of programmer as it was a tedious and very slow process, where the creativity of the developer was rather low as the complexities of development would not allow for just anyone to be part of it (my own assumption).

Today we have IDEs, autocomplete, and quick visual feedback (inspectors, advanced debuggers, etc.), which allow people to enjoy creating and seeing the results of their work, as opposed to purely typing code for someone else.

So, I don't get why people jump straight to thinking that adding yet another efficiency tool would destroy everything. To me it seems to make developing simpler applications something which doesn't require a computer science degree, that's all.


> I personally wouldn't have enjoyed being that kind of programmer as it was a tedious and very slow process, where the creativity of the developer was rather low as the complexities of development would not allow for just anyone to be part of it (my own assumption).

I think your assumption is incorrect. I remember programming using punched cards and low-level languages, and the amount of creativity involved was no less than is involved now.


That’s like saying Shakespeare couldn’t be productive as a writer because he didn’t have a word processor.


The few people at the rightmost edge of the bell curve shouldn't be used as an example in this case.

The average attorney became much more productive after the introduction of the word processor.


So are you saying that you would rather live in a society where only lucky people could participate in a given field than make it accessible to more people?

