
That doesn't seem to be an issue now - I tried it with Grapejuice and didn't get banned. From the Grapejuice Discord, this also seems to be the case for other Roblox-on-Linux users.


Disabling js doesn't make it hang anymore for me.


The original keywords - https://github.com/mortdeus/legacy-cc/blob/master/last1120c/...

Interestingly, long was commented


It makes sense long would have needed a comment. It needs a comment because “long” and “double” are terrible names for data types. Long what? Double-length what? Those type names could easily have had the opposite meanings - long floating point and double-length integer. WORD/DWORD are nearly as bad - calling something a "word" incorrectly implies the data type has something to do with strings.

If you don't believe me, ask a non programmer friend what kind of thing an "integer" is in a computer program. Then ask them to guess what kind of thing a "long" is.

The only saving grace of these terms is they’re relatively easy to memorise. int_16/int_32/int_64 and float_32/float_64 (or i32/i64/f32/f64/...) are much better names, and I'm relieved that’s the direction most modern languages are taking.

(Edit: Oops I thought Microsoft came up with the names WORD / DWORD. Thanks for the correction!)


> ask a non programmer

Why should a non programmer understand programming terms? Words have different meanings in different contexts. That's how words work. There is no need to make these terms understandable to anyone. The layman does not need to understand the meaning of long or word in C source code.

Ask a non-golfer what an eagle is, or ask a physicist, a mathematician and a politician the meaning of power.

Word and long may have been poor word choices, but asking a non-programmer is not a good way to test it.


> Ask a non-golfer what an eagle is, or ask a physicist, a mathematician and a politician the meaning of power.

All those fields could do with less jargon. Especially since in many cases there is a common word.

Lawyers especially give me the impression that they use jargon to obscure their field from regular folks.

Our field, being new, should not make the same mistakes, and yet here we are, where the default "file viewer" on Unix is cat, the pager is called "more", etc.

I don't see any reason why those types could not have had descriptive names from the start. Well, there was one: lack of experience, so ¯\_(ツ)_/¯


Dejargonification is great; I agree that jargon is a real barrier to entry. However, after having dealt with lawyers (and doctors of all stripes: MDs, PhDs, ...), I've come to see jargon for what it is: a weakly typed pointer. It's exactly analogous to a struct of function pointers with a user cookie in C.

Jargon lets you summarize whole concepts in one word. A lot of jargon is functional in the math/programming sense: you can pass 'arguments' to the jargon. The jargon adds levels of abstraction that let its users communicate faster, with fewer errors and higher precision.

For instance, the distinction between civil and criminal matters; or the distinction between malfeasance, misdemeanor, and felony.
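
To make that analogy concrete, here's a minimal C sketch of a "struct of function pointers with a user cookie" - all of the names below are invented purely for illustration:

    #include <stdio.h>

    /* One jargon "term" bundles behaviour (a function pointer) with an
       opaque cookie carrying the domain-specific context. */
    struct jargon {
        const char *term;
        void (*explain)(void *cookie);  /* expands the term on demand */
        void *cookie;                   /* context the term implicitly carries */
    };

    static void explain_felony(void *cookie) {
        printf("felony: a serious crime (%s)\n", (const char *)cookie);
    }

    int main(void) {
        struct jargon j = { "felony", explain_felony, "definition varies by jurisdiction" };
        j.explain(j.cookie);  /* one word, a whole concept dereferenced */
        return 0;
    }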


> Lawyers especially give me the impression that they use jargon to obscure their field from regular folks.

Really? You think that lawyers (and physicians) use Latin/Greek words in order to confuse regular folks?

These fields are very old, and changing the meaning of something like mens rea or lateral malleolus is going to require A) an exact drop-in replacement, which will probably be just as obscure, and B) retraining of an entire set of people.

Our field is similar in that we have a mountain of jargon that's largely inaccessible to regular folks. The point of those words is not to converse with regular folks but to convey information to others in the field with as little ambiguity as possible.


In England many Latin terms were replaced with English ones in 1998 with the adoption of the new Civil Procedure Rules, in an effort to make justice "more accessible" (alongside other reforms). These changes arguably added confusion - for example, it's much more obvious that "writ" is a specialist term than "claim", which replaced it.


Or ask a non-mathematician what a ring, a field, and a group are.


It's about cognitive load. The more jargon diverges from concepts you have already learned and used for years, the harder it is to adapt. As an experiment, try replacing all of your variable names and types with numbered variables: v1, v2, v3, etc. Then compare that to replacing them with words - not completely random words, but actively misleading ones, the complete opposite of what they represent: length/height, mass/speed, etc. You'll find the latter maddeningly hard to deal with, because your brain keeps bringing in a whole lot of context that is just wrong, so you are constantly fighting your own brain.

Intuitive things come from prior experience. They are a kind of inertia that you just have to work with.
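
As a crude illustration of that renaming experiment (all names here are invented), compare opaque numbered names with actively misleading ones:

    #include <stdio.h>

    /* Numbered names: opaque, but at least they don't lie to you. */
    static double v1(double v2, double v3) { return v2 * v3; }

    /* Misleading names: "length" is really a speed, "height" is really a
       time, and the result called "mass" is really a distance. */
    static double mass(double length, double height) {
        return length * height;  /* actually distance = speed * time */
    }

    int main(void) {
        printf("%f %f\n", v1(3.0, 2.0), mass(3.0, 2.0));
        return 0;
    }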


Jargon that builds on intuition can be its own problem. Jargon, by its very definition, has explicit technical meaning in a specific domain. Intuition in words is based on vernacular usage. It is vanishingly unlikely that the vernacular usage aligns with the domain-specific usage of a given term. This leads to plenty of false assumptions, and forces people to disambiguate jargon-usage vs vernacular-usage, which may both be found in a single piece of writing.

See economics for a great example of jargon-vernacular crossover.


> Jargon that builds on intuition can be its own problem.

Sure it can, which is why you gotta be double-careful naming things and not try to take metaphors too far. A jargon term needs to crisply identify the crux of the concept and not confuse with irrelevant or misleading details.


> A jargon term needs to crisply identify the crux of the concept and not confuse with irrelevant or misleading details.

Agreed. I've never seen a vernacular term fill this role well.

If you need to learn the technical concepts either way to be effective, might as well give them a name that doesn't conflict with another definition most people know.


I think "binary tree" is a decent example. It's not only a visual depiction of how the data structure is laid out, but the metaphor of "leaves" does also transfer over. It is possible to take it too far, trying to fit "bark" into the concept, which is of course, silly. Calling it a "dubranchion" or some such would be a disaster, IMHO.


"Binary tree" is not a vernacular term.

In economics, "cost" is a good example. This is a distinct concept from "price". "Comparative advantage" is another term in economics; this is perhaps not used in vernacular conversation, but I can tell you from personal experience that it certainly doesn't convey to most people the definition understood by someone with an education in economics -- the vernacular reading doesn't imply the jargon definition.

It seems to me that the difference is how the jargon is used. I imagine that someone without a CS background would quickly realize, when overhearing a conversation about binary trees, that the subject is something other than a type of flora.

I can tell you with confidence borne from frustrating experience that using economics jargon, such as that I mentioned above, with a lay audience gives the audience no such impression that the terms mean anything other than what they perceive them to mean.


Good variable names (and type names) matter for legibility. They should be clear, unambiguous, short, memorable and suggestive. Unambiguous is usually more important than short and memorable.

The word 'long' is ambiguous and unmemorable. And the type "word" is actively misleading. If you called a variable or class 'long', it wouldn't pass code review. And for good reason.

'Power' is an excellent example of what good technical terms look like. "Power" has a specific technical meaning in each of those fields, but in each case the technical meaning is suggested by the everyday meaning of the word "power". Ask a non-physicist to guess what "power" means in a physics context and they'll probably get pretty close. Ask a non-programmer to guess what "integer" means in programming and they'll get close. Similarly computing words like "panic", "signal", "file", "output", "stream", "motherboard" etc are great terms because they all suggest their technical meaning. You don't have to understand anything about operating systems to have an intuition about what "the computer panicked" means.

Some technical terms you just have to learn - like "GPU" or "CRDT". But at least those terms aren't misleading. I have no problem with the term "double-precision floating point" because at least it's specific.

"long", "double", "short" and "word" are bad terms because they sound like you should be able to guess what they mean, but that is a game you will lose. And to make matters worse, 'long', 'double' and 'short' are all adjectives in the english language, but used in C as nouns[1]. They're the worst.

[1] All well-named types are named after a noun. This is true in every language.


"long" and "short" are adjectives in C. The types are "long int" and "short int" and the "int" is implied if it's not present. A declaration like

    auto long sum;
declares a variable named "sum" of type "long int" and of automatic storage duration.

The "double" comes from double-precision floating point. In the 1970s and 1980s anyone who came near a computer would know what that meant. Anyone who ever had to use a log table or slide rule (which was anyone doing math professionally) would know exactly what that meant.

There are good sound reasons for the well-chosen keywords. Just because one is ignorant of those reasons does not mean they were not good choices.


Well, you can certainly think of "long" and "short" as adjectives, but the grammar doesn't treat them that way.

"long", "short", "signed", "unsigned", "int", "float", and "double" are all type specifiers. The language specification includes an exhaustive list of the valid combinations of type specifiers, including which ones refer to the same type. The specifiers can legally appear in an order. For example, "short", "signed short", and "int short signed", among others, all refer to the same type. (They're referred to as "multisets" because "long" and "long long" are distinct.)


You really need to consider the context before making broad statements like this.

By your own standards, "word" and "long (word)" are actually excellent terms, because they're conventions used by the underlying hardware, and using the same conventions absolutely makes sense for a programming language that's close to the hardware:

https://en.wikipedia.org/wiki/Word_(computer_architecture)


> Microsoft’s WORD/DWORD are nearly as bad. Don’t call something a “word” if it doesn’t store characters.

"Word" as a term has been in the wide use since at least 50s-60s, you can't really blame MS for that

https://en.wikipedia.org/wiki/Word_(computer_architecture)


A 'word' on a 32-bit x86 is 32 bits, but Microsoft's WORD is 16 bits.

You can _absolutely_ blame Microsoft for using it to mean something that it isn't.


Yes, int_16/int_32 or something like that makes a lot more sense. Today, not when this compiler was written.

The PDP-7, PDP-9, and PDP-15 have 18-bit words, and the PDP-10 has 36-bit words. The world had not settled on 16/32/64 bits at all.

Even the Intel 80286's far pointers only address a 24-bit physical space.


Unsure why you're being downvoted - IIRC the original C programmer's reference spoke explicitly about how "int" meant the most efficient unit of storage on the target machine.

Admittedly I read that more than 30 years ago :-O


C’s type naming had real value when it was first designed, at a time when the industry hadn’t yet fully standardised on the 8-bit byte, 32/64-bit words, IEEE floating point, etc.

The fact that "int" could be 16 bits on a PDP-11, 32 on an IBM 370, 36 on a PDP-10 or Honeywell 6000 - that was a real aid to portability in those days.

But nowadays, that’s really historical baggage that causes more problems than it solves, yet we are stuck with it. I think if one was designing C today, one would probably use something like i8,i16,i32,i64,u8,u16,u32,u64,f32,f64,etc instead.

When I write C code, I use stdint.h a lot. I think that’s the best option.
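
For example (just a generic sketch, nothing project-specific):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t  flags   = 0x2A;        /* exactly 8 bits on every platform  */
        int32_t  balance = -123456;     /* exactly 32 bits on every platform */
        uint64_t counter = 1ULL << 40;  /* exactly 64 bits on every platform */

        printf("%" PRIu8 " %" PRId32 " %" PRIu64 "\n", flags, balance, counter);
        return 0;
    }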


An int in C was 16 bits until about 1980, when Unix started being ported to larger machines. C and Unix were originally just for the PDP-11.


As this paper [0] explains, the initial version of the C compiler for PDP-11 Unix was finished in 1972. And less than a year later (1973), people had ported the C compiler (but not Unix) to Honeywell 6000 series mainframes, and shortly thereafter to IBM 370 series mainframes as well. (Note the text of the paper says "IBM 310" in a couple of places – that's a typo/transcription error for "370".) Both were "larger machines" – the Honeywell 6000 had 36 bit integer arithmetic with 18 bit addressing; the IBM 370 had 32 bit integer arithmetic with 24 bit addressing.

Alan Snyder's 1974 masters thesis [1] describes the Honeywell 6000 GCOS port in some detail. In 1977, there were three different ports of Unix underway – Interdata 7/32 port at Wollongong University in Australia, Interdata 8/32 port at Bell Labs, and IBM 370 mainframe port at Princeton University – and those three had C compilers too.

[0] https://www.bell-labs.com/usr/dmr/www/portpap.pdf

[1] https://apps.dtic.mil/dtic/tr/fulltext/u2/a010218.pdf (his actual thesis was submitted to MIT in 1974; this PDF is a 1975 republication of his thesis as an MIT Project MAC technical report)


Here’s something cool: the source code to Snyder’s compiler: https://github.com/PDP-10/Snyder-C-compiler


Unix was originally written for an 18-bit machine.


That was before C, which is what we are talking about.


If you need an 18-bit int it would be called int18 under this scheme. And encountering things like int18 in code, at least you would never wonder what kind of int it was.


It could not be clearer that you didn't read the page referenced by the comment you replied to.

"long" was commented out.


No, the names are fine and self-evident after glancing through K&R for 15 minutes.

The real mistake in retrospect is that int and long are platform dependent. This is an amazing time sink when writing portable programs.

For some reason C programmers looked down on the exact width integer types for a long time.

The base types should have been exact width from the start, and the cool sounding names like int and long should have been typedefs.

In practice, I consider this a larger problem than the often cited NULL.


It made more sense in an era when computers hadn't settled on 8-bit bytes yet. A better idea (not mine) is to separate the variable type from the storage type. There should be only one integer variable type with the same width as a CPU register (e.g. always 64-bit on today's CPUs), and storage types should be more flexible and explicit (e.g. 8, 16, 32, 64 bits, or even any bit-width).
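
A hedged sketch of that split in today's C - register-width 'fast' types for the working variables, exact widths only where the layout matters (the struct and function below are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Storage: explicit widths, because they define the layout. */
    struct record {
        int32_t  id;
        uint16_t flags;
        int8_t   level;
    };

    /* Working variable: one "natural" integer, at least register-friendly. */
    static int_fast64_t sum_levels(const struct record *r, size_t n) {
        int_fast64_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += r[i].level;
        return sum;
    }

    int main(void) {
        struct record recs[2] = { { 1, 0, 3 }, { 2, 0, 4 } };
        printf("%lld\n", (long long)sum_levels(recs, 2));
        return 0;
    }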


And that creat has no e on the end.


It's a perfectly good word in Romanian.


int is neither 32 nor 64. Its width corresponded to a platform's register width, which is now even more blurred: today's 64 bits are often too big for practical use, CPUs have separate instructions for 16/32/64 bit operations, and we have mostly agreed that int should be 32 and long 64, but the latter depends on the platform ABI. So ints with modifiers may be 16, 32, 64 or even 128 on some future platform. The intN_t are different, fixed-width types (see also int_leastN_t, int_fastN_t, etc. in stdint.h; see also limits.h).

Also, don’t forget about short, it feels so sad and alone!
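
A quick way to see how these differ on your own platform (output varies; that's the point):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        printf("int           : %zu bytes\n", sizeof(int));
        printf("long          : %zu bytes\n", sizeof(long));
        printf("int32_t       : %zu bytes\n", sizeof(int32_t));        /* always 4     */
        printf("int_least32_t : %zu bytes\n", sizeof(int_least32_t));  /* >= 4         */
        printf("int_fast32_t  : %zu bytes\n", sizeof(int_fast32_t));   /* >= 4, "fast" */
        return 0;
    }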


> It makes sense long would have needed a comment.

I think you misunderstood. There's no explanatory comment. The "long" keyword is commented out, meaning that it was planned but not yet implemented.

    ...
            init("int", 0);
            init("char", 1);
            init("float", 2);
            init("double", 3);
    /*      init("long", 4);  */
            init("auto", 5);
            init("extern", 6);
            init("static", 7);
    ...


Hmm, given its position in that sequence, it looks like it means a floating-point type larger than a double.


(a) FORTRAN used ‘DOUBLE PRECISION’ since the '50s, so ‘double’ would be immediately obvious.

(b) Many important machines had word sizes that were not a multiple of 8.


Fortran has been using `int8`, `int16`, `int32`, `int64`, and `real32`, `real64`, `real128` kinds officially for at least two decades. `double precision` has long been declared obsolescent, although it is still supported by many compilers. Regarding types and kinds, (modern) Fortran is tremendously more accurate and explicit about the kinds of types and values.


In C, long is not the name of a data type; it is a modifier. The default type in C is int, so if you say long without another data type (such as double, for example), it means long int.


That made me curious. From section 2.2 of the 2nd edition of K&R, 'long' is not a type but a qualifier that applies to integers (not floats though), so you can declare a 'long int' type if you prefer.


Float doesn't make sense either. What is floating?

(I know it's floating point, but it's the same as long/double).


String doesn’t make sense either. But that’s how words acquire new meanings.


It's short for "Hollerith string". Nobody wants to type out that man's name every time they want to deal with the data type. Also, most programmers know zero about computer science.


That seems to be inaccurate. According to Wikipedia, string constants in FORTRAN 66 were named in honor of Hollerith, and the actual wording in the standard is: "4.2.6 Hollerith Type. A Hollerith datum is a string of characters. This string may consist of any characters capable of representation in the processor. The blank character is a valid and significant character in a Hollerith datum." Apparently the term "string of characters" is assumed to be self-explanatory here, and independent of the "Hollerith" nomenclature. The connection to Hollerith is via punched cards, for which the _encoding_ of characters as bit patterns (hole patterns) was defined; but Hollerith doesn't seem to be directly related to the concept of character strings as such.

It is probably rather by chance that we ended up with the term "string (of characters)", as opposed to for example "sequence of characters". In a different universe we might be talking about charseqs (rhymes with parsecs) instead of strings.


> Also, most programmers know zero about computer science.

What made you say that?


I have known a very good programmer, I'd say one of the best I have met, and he had extremely, surprisingly little knowledge of computer science and math (not having a formal education may have been a contributing factor). He coded in JS and Ruby.


Well, 'character string' could also make a strange impression on the uninitiated.


There is no short or unsigned either. Maybe int modifiers were pending work?

for is missing too.


Git might not be the best way to manage recipes, especially when you're targeting users who might never have heard of it. Sure, the idea of 'viewable history' is nice, but the average cook would have trouble with it. A wiki seems much better suited for this.

To add a recipe, you need to learn Markdown and create a PR. Learning Markdown might not be hard, but having to create a GitHub account and then open a pull request might put them off.


I don't think "people who are good at cooking and bad at technology" is the target audience here. The target audience is tech people tired of ads and nonsense.


Horcruxes are similar to what emmanueloga_ has mentioned. A horcrux was a special object in which Harry Potter's lead antagonist, Voldemort, stored part of his 'soul', so that even if he died, someone could revive him using the horcruxes. I haven't kept up with Harry Potter for a year now, so I might be wrong about the exact definition.


A horcrux is a plot device where the protagonists need 2fa to send a HUP or TERM to the misbehaving process.


> A horcrux is a plot device where the protagonists need 2fa to send a HUP or TERM to the misbehaving process.

Okay, I didn't literally LOL, but you did earn a really big grin and even a chortle. Well done.

BTW, I would totally read "Harry Potter and the Protocols of Security". Some of the "Methods of Rationality" fan fiction by Eliezer Yudkowsky nods in that direction (eg. the Death Eaters' opsec).


I've heard good things about "Methods of Rationality". Worth reading?


I think so, particularly if you've read Rowling's books and were annoyed by many of the protagonists and supporting characters for a variety of reasons.

If nothing else, "Methods" succeeds in giving agency to more characters, including the villains (not necessarily to their, or Harry's, benefit), and explores/tests the "system" of magic in more depth.


>> and explores/tests the "system" of magic in more depth.

I particularly liked the section where Harry is trying to find out how magic "works". He starts with the gross physicalities: the materials the wands are made of, the sounds of the recited spells. He ends up with the mathematics underlying physics, learning how to create new spells. He uses his newfound knowledge to create a very powerful weapon spell, which he uses to kill a mountain troll. If you want to know how the spell works, you'll have to read the book. I highly recommend it. (I've read it more than once.)


I liked the exposure of the DWIMian (rather than strictly Newtonian) physics of flying broomsticks, although it should be noted that the DWIMian behavior is based on the physics of low-speed, high-friction ground transportation rather than the sourceless 'intuition' the author blames it on.


Definitely, even without the original.


Older and weaker hashing algorithms are probably better for this; sha384 and upwards produce hashes that might be too long to use as passwords on some websites. ProtonMail trims anything longer than 72 characters. See - https://www.reddit.com/r/ProtonMail/comments/khrzhe/pm_ignor...


This isn't good security advice. Taking trunc(32, hex(sha512)) will still give you a result that is cryptographically stronger than the 32-character hex(md5sum) would give you.

For more security, you can of course encode the sha512 hash in a format other than hex so that those 64 bytes take fewer characters. The hex encoding is only one of many encodings.

But the main point is that the solution to needing to store a shorter value is not to use a weaker hashing algorithm, but to truncate the result.

This is the reason that sha-512/256 exists as a truncated sha-512 even though sha-256 already existed.
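
A minimal sketch of "truncate a strong hash" in C, assuming OpenSSL's one-shot SHA512() is available (link with -lcrypto; this only illustrates truncation, it is not a recommendation of any particular password scheme):

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *input = "example passphrase";
        unsigned char digest[SHA512_DIGEST_LENGTH];  /* 64 bytes */

        SHA512((const unsigned char *)input, strlen(input), digest);

        /* Keep only the first 16 bytes -> 32 hex characters: the same
           length as an MD5 hex digest, but derived from SHA-512. */
        for (int i = 0; i < 16; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }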


This is even worse advice! If you have a small input, you want a hashing function whose output matches the size[1] of the input as closely as possible. Collisions aren't important here (so "strength" isn't important), just the randomness. MD5 and the SHA families give you (mostly) random distributions. The bias in the results is practically meaningless.

[1] https://crypto.stackexchange.com/questions/12822/are-the-sha...


I was only trying to point out the apparent randomness that hashes give. Randomness is the key here, since probably no one is going to brute-force an unhashed password - the password would already be known. Not all websites automatically truncate a password, although yes, using the first 'n' letters would be a good idea. Some websites straight up say the password is too long, and you might have to try and guess the limit.

I don't think the algorithm matters here, only the length.


The problem is that some websites have very old password rules, like requiring uppercase characters and symbols. So you would need a hashing function which produces these, and you'd have to change the algorithm per website depending on which password characters they allow and disallow...


Just take the first n characters of the hash then.


Oh, sorry for that. Didn't realize. I live in India and thought it didn't make a difference. Changing the title.


It's all good.

The difference is subtle and depends on the country, state/province, and even down to the local municipality. I'd have no idea how to correctly title this for, say, South Sudan or East Timor.


Did you use machine learning?


