geocar's comments

> It's very very difficult to imagine that K developers can really read a mess like this as easily as one might read Go or whatever.

水落石出。 ("The water recedes and the stones appear" — i.e., it all becomes clear.)

> Has anyone tested this? Take a K program and ask a K developer to explain it?

I am not sure what you're asking. Do you want me to read it to you?

Here is me reading some other people's code:

https://news.ycombinator.com/item?id=8476633

https://news.ycombinator.com/item?id=22010223

Do you want me to read to you the JSON encoder (written twice) and the decoder in this way?

> Or maybe introduce a deliberate bug and see how long they take to fix it compared to other languages.

https://news.ycombinator.com/item?id=27209093#27223086

> You could normalise the results based on how long it takes them to write some other code.

https://news.ycombinator.com/item?id=22459661#22467866

https://news.ycombinator.com/item?id=31361023#31364262


Thanks, that was very interesting.

It seems to me that it has a lot of the same properties as regex. Looks like gobbledygook at first glance, but after learning it I can write them, and read them with some effort (depending on the complexity). However nobody would describe regexes as "readable", and they're quite error-prone. I definitely wouldn't want to write a whole program in regex.

Regexes shine most when they're used interactively, e.g. in one-off greps, or editors. There, readability doesn't matter at all, error-proneness doesn't really matter, and terseness is important. The problems start when people put those grep commands in scripts where the output isn't supervised by humans.

I wonder if the same is true for K - it started as a query language for one-off queries & investigations, and then people started saving those queries and making bigger programs?


> However nobody would describe regexes as "readable"

I would, and do, and I urge you to be less judgemental about things you do not know anything about because you will never learn anything new with that attitude.

> I wonder if the same is true for K - it started as a query language for one-off queries & investigations,

Why do you wonder this? I don't think it's true, but so what? Did you not read what I wrote? Seriously: Why do you put so much effort into trying to talk yourself out of learning how to do something that is obviously amazing to you?

I am telling you I can read this. I like this. I am not nobody, just someone you did not think existed. And I am telling you it is possible for you too.


> I would, and do, and I urge you to be less judgemental about things you do not know anything about because you will never learn anything new with that attitude.

The thing is, I am extremely familiar with regexes (I've even written a regex engine), so I know exactly how readable they are - even after knowing them really well. So the fact that you think they are still readable suggests to me that your judgement of K's readability is also suspect.

> Why do you wonder this?

It would be a reasonable explanation of why K exists.


> The thing is, I am extremely familiar with regexes (I've even written a regex engine), so I know exactly how readable they are - even after knowing them really well.

Your experience writing a "regex engine" once upon a time led you to believe regular expressions are difficult to read.

My experience maintaining a few million lines of perl over a couple of decades has led me to believe that I can read regular expressions with no discomfort.

The Real™ thing is you can get better at anything with practice, even this. But listen: I also think K is more useful than regular expressions, and I would have used less perl had I learned K sooner.

> So the fact that you think they are still readable suggests to me that your judgement of K's readability is also suspect.

It should make you suspect whether or not you have any idea what an expert actually is. I mean, the inventor of regular expressions tinkered with them for decades, and new advancements are still happening sixty years later!

You don't know what you don't know, and there is very little you can do about that except pay attention to people who can do things you do not know how to do yet, and reserve your judgement about how they do it until you can do it better.

The only thing k has in common with regular expressions is your claim they are both difficult, a claim I disagree with.

> > Why do you wonder this?

> It would be a reasonable explanation of why K exists.

You misunderstand me, perhaps on purpose, but I hope you and others will think about this: Why do you care why it exists when I have shown you something so much more amazing than an opinionated history lesson?

I think k exists to make programs that make money. Forever. Because a little bit of money from a lot of programs over a long time is worth a lot, k is fast to write in. Because sometimes getting the answer faster makes more money, k runs fast too. Because people are trusting their money with it, k runs very predictably. Because sometimes your vendor just changes the input format on a Friday night, it's important that it is easy to read and make changes to k programs.

Arthur said it was the keys to the kingdom.


In refining 8AAC36:4EF23C (Le Mars) in 00h 04m 45s 714ms I have brought glory to the company. Praise Kier. 7⃣0⃣2⃣3⃣8⃣ 0⃣3⃣7⃣4⃣0⃣ 5⃣6⃣5⃣8⃣4⃣ 3⃣7⃣7⃣0⃣0⃣ 1⃣6⃣6⃣0⃣0⃣ #mdrlumon #severance lumon-industries.com

:)


> What is objectively worse than buggy networking in systemd is having 26 different incompatible ways to configure networking in Linux.

False.

One of those 26 incompatible ways works just fine in the situation where systemd is clearly not working.

It is only your opinion that it is better to have "buggy" networking than to have "not buggy" networking that you think is difficult to configure. That is the exact opposite of "objectively."


> It is only your opinion that it is better to have "buggy" networking than to have "not buggy" networking that you think is difficult to configure. That is the exact opposite of "objectively."

I never said this.

Fix the bugs. Don't fill the well with yet another equally shite solution.


> > It is only your opinion that it is better to have "buggy" networking than to have "not buggy" networking that you think is difficult to configure. That is the exact opposite of "objectively."

> I never said this.

I know you didn't say it was your opinion. You said it was objectively worse. You said:

> What is objectively worse than buggy networking in systemd is having 26 different incompatible ways to configure networking in Linux.

and I think it's pretty clear people have experiences where systemd-networkd malfunctions in places where an rc.local (one of those 26 incompatible ways) works fine.

That is not objectively worse. So it's only your opinion.

> Fix the bugs.

You can fix your own bugs.

> Don't fill the well with yet another equally shite solution.

Do not confuse some fantasy future version of systemd that has had people like me work on it to make it better, despite its many many issues, with the rc.local script that exists today: Only one of these has a chance of existing.


> Only one of these has a chance of existing.

Fascist


> systemd already has encrypted boot support.

It has all of those words next to a bullet point, but the implementation is quite different, and I (like the presenter and probably many many others who are clearly not you) have more confidence in a simple fuse than in systemd[1].

[1]: https://app.opencve.io/cve/?vendor=systemd_project&product=s...

> At that point, any rational person would question the reasons for doing so.

That is excellent advice. The presenter has done something you clearly cannot. You should be rational, follow your own advice, and try to figure out what those reasons are (hint: some of them are actually in the video). That might take a few hours to a few weeks of reading depending on your own experiences, and that's just how life is sometimes.

> Like removing the transmission from a car out of spite then realizing you need a way to switch gears.

When I have a new gadget I want to produce, I'm responsible for all of the code in it, so productivity, reliability, performance, and size are important to me whether I have written the code or I have merely used it in my product. I do not understand the way these things are important to the systemd people (or even if they are important at all), so for me, systemd is off-the-table to begin with.

Or to put it another way: I never needed a car in the first place, because I'm making boats, and I'm not starting with a car just because I like the people who made the engine. Ignoring "solved problems" can just make everything more expensive, and maybe I only know that because I've seen enough people make that mistake, but sometimes this is true.

Keeping an open mind means allowing yourself to be convinced too.


Hear, hear! It's like throwing packages and frameworks at a problem without ever considering whether it would be easier or more efficient to roll your own.


I hope this lesson has become more clear with the recent DeepSeek upset. That proved that simply throwing more money and hardware at a problem doesn't guarantee a better outcome; innovation seems to favor the opposite. As the German saying goes: "


NixOS checks/enforces its own reproducibility via systemd in various ways. It seems unlikely to me that replacing something battle-tested with a bunch of self-rolled brittle scripts will make it more reliable.


This is somewhat ironic, given that it was systemd that replaced the battle-tested systems that came before it, and variants of your comment were used to argue against it.


Sysvinit was brittle as hell. It was bad at preventing races (hence no parallelization), in part because it couldn't give guarantees about which services had started when, or whether they had started at all.

At this point systemd and its unit files are in a really nice place, to the point where systemd can often guarantee correctness now.


"can often guarantee correctness"? So you're saying it can not at times?

"Battle-tested systems"? Are you talking about those bash scripts that weren't reliable? lmaoo


Is this a CSP thing? Can you get away with https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Re... and window.onerror?

Also, do you actually need the HAR file? or just a log of your servers' inputs/outputs from the clients' perspective? You can get that The Boring Way if you don't have a CSP issue, so maybe solve that issue?


> I still never use 301s for that reason. Things may have changed, but I dare not try!

I use 301 for http:->https: redirects because (a) I doubt we're going back, (b) it prevents some cleartext leaks (like the Host header), and (c) it is slightly cheaper.

> we never figured out how to get the browser to re-learn the responses for those pages without drastic measures.

If you control the target URL it is easy, just redirect back. Seriously: The browser won't loop, it'll just fetch the content again and, not seeing a 301 this time, will forget that nonsense ever happened. This is why 301 is usually a fine default for same-site redirects, or if the redirect target is encoded in the URL (such as in tracking URLs).
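
Here is a rough sketch of that using Python's stdlib http.server (the /old and /new paths, the class name, and the port are made up for illustration, not anyone's production config):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    class Undo301(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/new":
                # /old used to 301 here; now bounce everyone (including browsers
                # holding the cached 301) back the other way
                self.send_response(301)
                self.send_header("Location", "/old")
                self.end_headers()
            elif self.path == "/old":
                # serving real content again: a browser that bounced off /new
                # refetches this, sees no 301, and forgets the cached redirect
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"back home\n")
            else:
                self.send_response(404)
                self.end_headers()
    HTTPServer(("", 8080), Undo301).serve_forever()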

The big no-no is don't 301 to a URL you can't control unless you have the appropriate Cache-Control headers on the redirect.


Isn't there a https upgrade header specifically for this kind of thing?


Not to my knowledge. How exactly do you think it works?


426 Upgrade Required


> If you control the target URL it is easy, just redirect back. Seriously: The browser won't loop

Just uh... don't do this if you have a CDN in front of your site. We had an incident where CloudFront cached the 301s in both directions.


Yeah, that's a good point, but one way to think about a CDN is like a web browser that you control, so I say do it even with a CDN and remember you can always just flush the "browser" cache! (or in CloudFront's case: create an invalidation and wait a few seconds)
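
For reference, that flush is a single API call; a rough boto3 sketch (the distribution id and path here are hypothetical):

    import time
    import boto3
    cf = boto3.client("cloudfront")
    cf.create_invalidation(
        DistributionId="E2EXAMPLE",                        # your distribution's id
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/old*"]},  # the cached redirect path(s)
            "CallerReference": str(time.time()),           # any unique token
        },
    )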


Right. Grindr puts IP location and userid information in the ad exchanges so anyone programmatically buying knows which politician/public person is gay and where they are.

We also know who is fat because myfitnesspal does the same thing.

We also know who is pregnant, who has recently been raped, who feels vulnerable. And so on. You see an ad? We know a thing. We know if you like boobs even if you don’t.

Without trying to speak to what American governments and corporations have done with that knowledge, the "security" point is that the Chinese government has this knowledge as well, and the fear is that they can do something with it.

That being said, what Cambridge Analytica did (a British company) with this kind of knowledge is well-documented, so I can agree the fear is warranted by both those who seek to monopolise those powers, and those who seek to escape them.


It doesn't have to be large: My wristwatch does it.


I don't think the above comment is saying it has to be large, just that it's less obvious on a smaller clock.


> Why shouldn’t an array at the smallest possible index correspond to the beginning of the array?

Because then there is no good way to refer to the index before that point: You are stuck using -1 (which means you can't use it to refer to the end of the array), or null (which isn't great either).

> every programming language I know of that supports the concept of unsigned integer

Surely you know Python, which uses a signed integer as an index into its arrays: list[-1] is the last element of a list. If it had used one-based indexing then list[1] would be the first, which would be nicely symmetrical. It would also mean that list[i-1] could NEVER refer to a value after ‹i›, eliminating a whole class of bugs.
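
A quick sketch of that class of bug in today's 0-based Python (variable names are just for illustration):

    xs = [10, 20, 30]
    i = xs.index(10)        # 0, the first position
    xs[i - 1]               # 30: "one before the first" silently wraps to the END
    xs["hello".find("z")]   # also 30: find() returns -1 for "not found"
With one-based indexing, i - 1 would be 0 there, and 0 could only ever mean "before the start" / "not found", never the last element.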

> It’s also very natural to think of arr[i] as “i steps past the beginning of arr.”

I think it's more natural to think of arr[i] as “the ‹i›th element of arr” because it doesn't require explaining what a step is or what the beginning is.

The exact value of ‹i› matters very little until you try to manipulate it: Starting array indexes at one and using signed indexes instead of unsigned means less manipulation overall.

> find the convention used in many countries of numbering building floors starting with zero to be more logical

In Europe, we typically mark the ground-floor as floor-zero, but there are often floors below it just as there are often floors above it, so the floors might be numbered "from" -2 for example in a building with two below-ground floors. None of this has anything to do with arrays; it's just that labels like "LG" or "B" for "lower ground" or "basement" don't translate very well to the many different languages used in Europe.

The software in the elevator absolutely doesn't "start" its array of sense-switches in the middle (at zero).


> In Europe, we typically mark the ground-floor as floor-zero,

_Western_ Europe. Eastern Europe prefers 1-based numbering. The typically assumed reason is that thermal insulation, required for the colder winters, puts at least one flight of stairs between the entrance and the first numbered floor.


Python might have used array[~0] instead, where the ~ would be required to indicate 0-based indexing from the end of the list.

But I guess they wanted to iterate from the end [-1] back to the start [0], making it easy to implement a rotating buffer.
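
In fact ~i already works as an end-relative index in today's Python, because ~i == -i - 1:

    xs = [10, 20, 30]
    xs[~0]   # 30, the last element, since ~0 == -1
    xs[~1]   # 20, second from the end, since ~1 == -2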


> Python might have used array[~0] instead

Something like this was added to C#: arr[^idx], where ^idx is mapped to a special object that is typically optimized out. arr[^1] means the last element (^0 is one past the end).


[^n] indexing is mapped to an 'Index' struct by Roslyn, which can then be applied to any array or list-shaped type (it either has to expose an Index-accepting indexer, or a plain integer-based indexer and a Count or Length property). There really isn't much to optimize away besides the bounds check since there are no object allocations involved.

A similar approach also works for slicing the types with range operator e.g. span[start..end].


> I think it's more natural to think of arr[i] as “the ‹i›th element of arr” because it doesn't require explaining what a step is or what the beginning is.

Yes, but if you will eventually need to do steps on your array, you'd better opt for the framework that handles them better. I agree that if your only task is to name the elements, then 1-based indexing makes more sense: you've been doing that since diapers, and you do it with fewer errors.


Off by one issues are also an argument given in favour of no indexing.

    groups=Array.from({length:3},function(){return []})  // three distinct arrays; fill([]) would share one
    items.reduce(function(a,x,y){y=a.shift();y.push(x);a.push(y);return a},groups)
Array languages typically have a reshaping operator so that you can just do something like:

    groups:3 0N#items
Does that seem so strange? 0N is just null. numpy has ...reshape([3,-1]) which wouldn't be so bad in a hypothetical numjs or numlu; I think null is better, so surely this would be nice:

    groups = table.reshape(items,{3,nil})   -- numlu?
    groups = items.reshape([3,null])        // numjs?
Such a function could hide an ugly iteration if it were performant to do so. No reason for the programmer to see it every day. Working at rank is better.
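
For comparison, the numpy version mentioned above is just as short; a sketch, assuming the number of items divides evenly by 3:

    import numpy as np
    items = np.arange(12)
    groups = items.reshape(3, -1)   # -1 asks numpy to infer the trailing dimension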

On the other hand, Erlang is also 1-based, and there's no numerl I know of, so I might write:

    f(N,Items) -> f_(erlang:make_tuple(N,[]),Items,0,N).
    f_(Groups,[],_,_N) -> Groups;
    f_(G,Items,0,N) -> f_(G,Items,N,N);
    f_(G,[X|XS],I,N) -> f_(setelement(I,G,[X|element(I,G)]),XS,I-1,N).
I don't think that's too bad either, and it seems straightforward to translate to lua. Working backwards maybe makes the 1-based indexing a little more natural.

    -- groups is assumed to already hold the empty groups, e.g. groups = {{},{},{}}
    n = 0
    for i = 1,#items do
      if n < 1 then n = #groups end
      table.insert(groups[n],items[i])
      n = n - 1
    end
Does that seem right? I don't program in lua very much these days, but the ugly thing to me is the for-loop and how much typing it is (a complaint I also have about Erlang), not the one-based nature of the index I have in exactly one place in the program.

The cool thing about one-based indexes is that 0 meaningfully represents the position before the first element or not-an-element. If you use zero-based indexes, you're forced to either use -1 which precludes its use for referring to the end of the list, or null which isn't great for complicated reasons. There are other mathematical reasons for preferring 1-based indexes, but I don't think they're as cool as that.


Yes, that is what is so frustrating about this argument every single time it comes up, because both sides in the debate can be equally false, or equally true, and it's really only a convention and awareness issue, not a language fault.

It’s such a binary, polarizing issue too, because .. here we are again as always, discussing reasons to love/hate Lua for its [0-,non-0] based capabilities/braindeadednesses..

In any case, I for one will be kicking some Luon tires soon, as this looks to be a delightful way to write code .. and if I can get something like TurboLua going on in TurboLuon, I’ll also be quite chuffed ..

