I've followed the GitLab migration and every package and distribution change that warranted community notification for more than a decade.
It's such an empowering feeling to have tracked all the changes to the distribution over that time. The Arch maintainer culture has managed to provide consistently high-quality communication and documentation.
Most of the news doesn't require action on my part, since it concerns a subsystem or package I don't use. They use the news channel sparingly, and the distribution is minimal and clean. News arrives only every other week or so and is succinctly written in one or two paragraphs.
It's a distribution for those who love precision and professionalism.
That's one thing (of many) I've always appreciated about Arch: on the rare occasions when I run `pacman -Syu` and it fails, I instinctively go to their homepage and look at the "Latest News", which invariably tells me what I need to do.
I bet there is huge potential in physical computation, but it seems like we'll need post-silicon or massively 3D circuitry to scale it out sufficiently.
Bio-electronics and organically grown components are possibilities too, but they're also still in the pure R&D stage.
Imagine growing a gigantic organic 3D circuit with trillions of electrically charged connections, capable of trillion-dimensional matrix operations at a fraction of the power requirements of current silicon.
We could give it a great name like Synaptic Architecture, or Neural Fibre or ...
This has been tried, and it has both worked and failed. It failed in some cases where governments built large clusters of apartment towers without much heed to community, or to spaces for urban renewal and local business.
And succeeded when there were more comprehensive plans than "just build housing". See for example many European countries and Singapore.
It needs to be an intelligent, holistic approach, or it can lead to fractured, failed communities.
Ridiculous. That list of tremendous people is a handful of founders who found success. There are plenty of tremendous founders we don't hear about who didn't do as well. There are plenty of tremendous people in regular jobs who can't act like iconoclastic dickheads because they'd be rightfully fired. More importantly, hugely successful organizations have found a way to reproduce their own success despite specific people, not because of them.
You want to build lasting success on good process, good organization and responsible management. People come and go. Organizations stick around.
If you're feeding it the specific nodes to scrape, iterating, and bug-fixing, what is ChatGPT doing other than giving you someone to talk with while you code?
Call me when I can ask an LLM to pull structured data in CSV form from website X and deliver it to me each morning. And it does it.
This is a hilarious comment. It's exactly the comment that was made in the 70s when workers had a larger share of national income and when average salaries could afford a house.
Indeed, soon thereafter, in the late 70s, we saw rounds of union busting, neoliberalism, Thatcherism and Reagan, and drastic cuts to taxes and employment security.
Inequality spiked, wages stagnated, many millionaires and billionaires were minted, and the growing frustration and malaise in the working classes generated the rise of nationalist and populist movements the world over.
It's becoming a bipartisan realization that we went too far, with both Republicans and Democrats calling for reshoring, investment in industry and, yes, worker protection. The most minimal, watered-down, corporate-friendly worker protection. This is 10% of what any radical would wish for.
And already, this comment. The reactionary spirit is deeply embedded.
Those things were reactions in many cases to overreach and an oversized role of the state, and a status quo that failed to innovate. Now we're (for good reason) going to go the other way.
I think it's a natural cycle: the market is now out of good new ideas, so we need innovation from someplace else, and we need to rebalance social equity too.
A REPL isn't just a REPL. You are comparing a modern-day Toyota Corolla to a spaceship sent from the future to the 80s. One is radical on a whole different level.
At least when it's backed by SLY or SLIME.
And I'm still wondering which of these things I can't do in a Python REPL? Note that macroexpansion doesn't count, because that's not a dimension of the REPL.
I don't think I can patch a function at runtime without losing state in Python either: the act of redefining the function causes the variables to be reset, but in Lisp the bindings are untouched.
I just did it, and it works perfectly fine. Debug-run your code; an exception will be thrown at the call site. Step up one frame from the exception (i.e., to module level), define the missing function, call again, and it succeeds, all without leaving the same REPL instance. Don't believe me? Try it.
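For the skeptical, here's a minimal sketch of the same idea using `python -i` rather than a debugger (the file and function names are made up):

```python
# example.py -- run with: python -i example.py
# The call below raises NameError because helper() doesn't exist yet.
# Thanks to -i you land at a >>> prompt in the same interpreter, where
# you can define helper and call main() again without restarting.

def main():
    return helper(21)  # helper is not defined yet

main()  # NameError: name 'helper' is not defined
```

At the prompt that follows the traceback, `def helper(x): return x * 2` and then `main()` returns 42, all in the same interpreter instance.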
I'll say it again: you guys are in plain denial, not about Python or Lisp as languages, but about how interpreters work. There's just nothing more to be said about this dimension of it.
What's being asked is whether, after defining the missing function, it's possible to clear the exception and continue the execution without having to restart from the beginning. This is very useful when you hit an exception after 10 minutes of execution. (This is a real use case which would have saved me untold hours.)
I hope it's possible somehow, but if you just load pdb (e.g. with %pdb in IPython), pdb is entered in post-mortem mode, from which it's impossible to modify code/data and resume execution. Setting a breakpoint (or pdb.set_trace()) would require knowing about the bug ahead of time. Does it only work when interrupting with a remote debugger, rather than on exception?
But wouldn't it be impossible anyway if the interpreter unwinds the stack looking for exception handlers before finding that there are none? In other languages/VMs, such as SBCL, the runtime can search up the stack for handlers and invoke the debugger before destructively unwinding.
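To make the post-mortem limitation concrete, a minimal sketch (the `helper` name is made up):

```python
import pdb

def main():
    x = 1
    return helper(x)  # NameError: helper is not defined

try:
    main()
except NameError:
    # Post-mortem hands you the dead frames to inspect: you can look at x,
    # and even define helper at the (Pdb) prompt -- but there is no command
    # to resume main() from the failed call, because the stack has already
    # been unwound by the time pdb sees it.
    pdb.post_mortem()
```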
The other guy up above claims this is a feature unique to calling functions rather than to all error states, and that the Lisp runtime specifically guards against this. If that's the case, then my answer is very simple: it would be trivial to guard function calls (all function calls) to achieve the exact same functionality in Python. I'm in bed, but it would literally take me 5 minutes (I would hook evaluation of the CALL_FUNCTION opcode). Now, it would be asinine, because it's a six-sigma event that I call a function that isn't defined. On the other hand, setting a breakpoint and redefining functions as you go works perfectly well, is the common case, and is simultaneously the kind of "repl driven development" discussed all up and down this thread.
Thank you, you're very helpful despite this raging flame war. I'm glad to hear you can hook opcodes like that, then you really can do anything. And I really need to give "set a defensive breakpoint and then step through the function" an honest go. Now that you say it, I realise I haven't.
>I'm glad to hear you can hook opcodes like that, then you really can do anything
Just in case someone comes around and calls me a liar: the way to do this is to spread the bytecodes out one per line and set a line trace. Then, when your bytecode of choice pops up, do what you want (including manipulating the stack) and advance the line number (CPython lets you advance and otherwise manipulate the line number).
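For what it's worth, on CPython 3.7+ there's a more direct route than the line-per-bytecode trick: frames expose `f_trace_opcodes`, which makes the trace function fire once per opcode. A rough sketch (the printout is just for illustration):

```python
import dis
import sys

def tracer(frame, event, arg):
    if event == "call":
        frame.f_trace_opcodes = True  # request per-opcode events in this frame
        return tracer
    if event == "opcode":
        op = frame.f_code.co_code[frame.f_lasti]
        print(f"about to execute {dis.opname[op]} at offset {frame.f_lasti}")
    return tracer

def target():
    return len("hello")

sys.settrace(tracer)
target()
sys.settrace(None)
```

Assigning to `frame.f_lineno` from inside a trace function is likewise how pdb's `jump` command moves the point of execution.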
Calling again and continuing are not the same thing. Sure, with the above trivial example they are. But if the parent function has non-idempotent code before the call to the missing function (like making some global change or other side effect), then calling again will give a different result than just continuing from the current state.
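A tiny made-up illustration of that point:

```python
counter = 0

def main():
    global counter
    counter += 1            # side effect runs before the failing call
    return helper(counter)  # NameError while helper is undefined

# Suppose main() fails once, leaving counter == 1. After defining helper,
# *calling again* re-runs the increment, so helper sees counter == 2;
# *continuing* from the failure point would have passed counter == 1.
```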
So is it possible to define the missing function and continue from the same state in Python? I don't think so, but I'm not a heavy Python user (just for small/medium scripts).
>So is it possible to define the missing function and continue from the same state in Python? I don't think so, but I'm not a heavy Python user
This is a pointless debate: someone has to catch the exception, save caller registers, and handle the exception (if there's a handler) or re-raise. Either you have to do it (by putting a try/except there) or your runtime always has to be defensively saving registers or something. Lisp isn't magic; it's just a point on a trade-off curve, and I have without a shadow of a doubt proven that that point is very close to Python (wrt the REPL). So okay, maybe clisp has made some design decisions that make it a hair more effective at resuming than Python. Cool, I guess I'll just ignore all the other Python features where there's parity or advantage, because of this one thing /s.
I'll take this as a "no" in answer to my sibling comment. I'm really sad CPython can't do that, but maybe some other Python can. It shouldn't necessarily be any slower for the interpreter to figure out where to jump to before saving the execution trace and jumping.
It's not "pointless"; I was tearing my hair out and losing days because I couldn't do this in CPython. Yes, I'd much rather use Python than Common Lisp regardless.
It works in code compiled from C++ too: define and install a signal handler for SIGSEGV (SIGKILL can't be caught), call a function whose symbol can't be runtime-resolved by the linker, the signal is raised and caught, define your function (in your asm du jour), patch the GOT to point from the original symbol to wherever the byte array with your asm lives, and voilà.
I'll say it again: what exactly do you think your magical lisp is doing that defies the laws of physics/computing?
> It works in code compiled from C++ too: define and install a signal handler for SIGSEGV (SIGKILL can't be caught), call a function whose symbol can't be runtime-resolved by the linker, the signal is raised and caught, define your function (in your asm du jour), patch the GOT to point from the original symbol to wherever the byte array with your asm lives, and voilà.
I don't need to do anything like that in Lisp. I just define the function and RESUME THE COMPUTATION WHERE IT STANDS in my read-eval-print loop. << important parts in uppercase.
> My point is very simple: I can do it too, in any language I want, and so there's nothing special about lisp.
The big difference is: "I can do it too" means YOU need to do it. Lisp already does it for me; I don't have to do anything. I don't want to know what you claim you can do with C++; show me where C++ does it for you.
Telling me "I can do it too" is not a good answer. Show me where the language implementation (!) does it for you.
Maybe because your entire lived experience has been ordered around your skin color, and around how it isn't a light yellow color. Powerful cultural strands exist in your community to push back against the constraints placed upon you because of your skin color.
Maybe seeing the one default skin color in tech be much more representative of white skin only serves to further remove your lived reality from public debate and understanding.
Nobody has a light yellow colour. It was chosen so as not to refer to any ethnicity in particular.
BTW, if your skin has actually turned yellow, you should get to the nearest hospital emergency room ASAP, because it can be a sign of acute liver failure.
Like another commenter said in a sibling thread, yellow is still a non-dark color, and yellow characters represent white people in cartoons like The Simpsons.