
Quit then. Do what you want.


I keep wondering how articles make it to the front page. This one, for example, is hosted on Substack, a service with a terrible user interface that tries to make you think someone is doing you a favour by letting you read their writing without paying, whereas... the opposite is true.

The article itself is irrelevant, but since I read it, I can tell that it is a classic nonsensical piece on the benefits of listening to children. From what I could find, the author also lacks any formal qualification in teaching.

In other words, a complete waste of time.


Does anyone find this interesting?

I respect Peter Norvig as a programmer and a problem solver. I took a course taught by him in the early MOOC days that I really enjoyed.

What I don't understand is how something like this makes it to the top of Hacker News.

I used to visit HN to get smarter, lately I feel that I am getting dumber.


I find it mildly amusing as a reminder about some of the irrational exuberance floating around the ether.


I just thought it was an amusing little break between more in-depth HN articles.


I have the same feeling; that's why I decided to filter out all that noise for myself and created this blog (https://rubberduck.so). I read the posts every day, filter out the "amusing" topics, and keep the ones that I actually come back to read again later. It is published on Mondays and updated every day with the articles from the previous day's https://news.ycombinator.com/front


> I took a course taught by him in the early MOOC days that I really enjoyed.

I think I took the same one, but I remember Sebastian Thrun being the better instructor.


Probably people from Google want to put a positive spin on things after the company killed another product.


How is this low level exactly?


Quoting the GitHub README:

> In general, ante is low-level (no GC, values aren't boxed by default) while also trying to be as readable as possible by encouraging high-level approaches that can be optimized with low-level details later on.


With no GC, is there compile-time segfault checking like in Rust?


Author here. That is the purpose lifetime inference is meant to serve: it automatically extends the lifetimes of `ref`s so that they are long enough. A key advantage is avoiding lifetime annotations in code, at the cost of some control, since the lifetime is handled for you. You can also opt out by using raw pointer types, though, and easily segfault with those.


Pointers and manual memory management apparently.


Low is relative :-D My thought exactly.

IMO the only low-level language is assembly. Everything else is some form of abstraction. C/C++ and the like I tend to call lower-level, since even in 2022 they are closer to the hardware, and sugared languages like Python, C#, JS, and the like I call high-level.


Even assembly languages are abstractions.


Strangely, the language "below" assembly, Verilog, is a lot more abstract: it tries to look like C while generating hardware, so writing it is more like figuring out how to trick it into doing what you want.


That's because an HDL is not a lower-level machine language; it is a language used to implement a machine that consumes a machine language.

Consider what happens when you implement an x86 emulator in Python: you're using a high-level language to implement a machine on a particular substrate (a simulation inside another machine). This simulated x86 CPU executes machine code, and you'd call that machine code the "native" or "lowest-level" language with respect to that particular machine.

You can see how that choice of machine language bears no relationship with the language used to implement the underlying machine.
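
To make that concrete, here is a toy version of the same idea, sketched in TypeScript rather than Python (the three-opcode instruction set and all names are made up for illustration):

    // A toy "machine" whose machine language (three opcodes) is unrelated
    // to the language the machine itself is written in.
    type Instr =
      | { op: "load"; value: number }   // put an immediate into the accumulator
      | { op: "add"; value: number }    // add an immediate to the accumulator
      | { op: "print" };                // print the accumulator

    function run(program: Instr[]): void {
      let acc = 0; // the simulated machine's only register
      for (const instr of program) {
        switch (instr.op) {
          case "load": acc = instr.value; break;
          case "add": acc += instr.value; break;
          case "print": console.log(acc); break;
        }
      }
    }

    // The "native" language of this machine is the Instr encoding above,
    // even though the machine itself is implemented in a high-level language.
    run([{ op: "load", value: 40 }, { op: "add", value: 2 }, { op: "print" }]); // 42

Swap the implementation for Python, or for gates described in Verilog, and the machine language stays exactly the same, which is the point.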


It's not because of anything intentional, it's because Verilog is poorly designed.

https://danluu.com/why-hardware-development-is-hard/

https://danluu.com/pl-troll/


It's not, or only partially. "To try to bridge the gap between high and low level languages, ante adapts a high-level approach by default, maintaining the ability to drop into low-level code when needed." says the website.


Author here. To me, the lack of a pervasive tracing GC and values not being boxed by default are important for low-level languages. That, and maintaining the ability to drop down and use low-level constructs like raw pointers for optimization or primitives for new abstractions, is essential.


I am really not getting this. If you run TypeScript on both ends, can't you just share the source code that defines the types?


You kinda can share function signatures with TypeScript's import type.

And that's what tRPC does, but it adds some generics in the middle to make things seamless.

Explanation here: https://colinhacks.com/essays/painless-typesafety
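
A bare-bones illustration of the import type part (the file layout and endpoint are made up for the example; two files shown in one snippet):

    // server/api.ts -- the real implementation lives only on the server
    export async function getUser(id: string): Promise<{ id: string; name: string }> {
      return { id, name: "Ada" };
    }

    // client/users.ts -- only the *type* is imported, so none of the
    // server's code ends up in the client bundle
    import type { getUser } from "../server/api";

    type GetUser = typeof getUser;

    // Reuse the server function's signature for a typed fetch wrapper.
    export const fetchUser: GetUser = async (id) => {
      const res = await fetch(`/api/users/${id}`);
      return res.json();
    };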


You could argue that this is actually adding a lot of (nicely knit) seams.


I think that's a good characterization!


Basically tRPC lets you define a router containing all your endpoints in a single structure. Then you can import the type definition of your router and make typed API calls without needing to wire everything together with types.
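
Roughly like this, as a sketch against tRPC's v10-style API (the URL, procedure name, and zod schema are placeholders; two files shown in one snippet):

    // server.ts -- every endpoint lives in one router
    import { initTRPC } from "@trpc/server";
    import { z } from "zod";

    const t = initTRPC.create();

    export const appRouter = t.router({
      greet: t.procedure
        .input(z.object({ name: z.string() }))
        .query(({ input }) => `Hello, ${input.name}!`),
    });

    // Only this *type* crosses to the client; there is no code generation step.
    export type AppRouter = typeof appRouter;

    // client.ts -- import the router type and get end-to-end typed calls
    import { createTRPCProxyClient, httpBatchLink } from "@trpc/client";
    import type { AppRouter } from "./server";

    const client = createTRPCProxyClient<AppRouter>({
      links: [httpBatchLink({ url: "http://localhost:3000/trpc" })],
    });

    const greeting = await client.greet.query({ name: "Ada" }); // inferred as string

Rename or retype a procedure on the server and the client call sites stop compiling, which is the "typed API calls without wiring" part.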


How hard can it be for Firefox to embed its own recursive resolver that talks only to the root servers? If you are really concerned about privacy that’s the only way to go. Other than that it makes little sense to me to trust one company over another.


Why do I want an application doing its own DNS resolution at all when that's actually the job of the OS?


Ideally you wouldn't. But until operating systems default to using DoH with a trusted resolver themselves, this approach is the lesser of the evils.


If you think that users can set up and configure their DNS servers, then this feature (DoH) is completely useless.

I guess the point is valid for users who don't want to / are not allowed to / cannot configure the DNS server of their operating system.


I'd have less issue with this if Mozilla ran the servers themselves. I already put a lot of trust in Firefox; I have zero reason to trust Cloudflare.


This wouldn't solve any of the problems that DoH does, because DNS queries issued by a recursive resolver are themselves in cleartext and so vulnerable to a hostile network.
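
For contrast, here is what a DoH lookup looks like from the client's side, sketched in TypeScript against Cloudflare's documented JSON endpoint; the queried name travels inside an ordinary HTTPS request instead of a cleartext UDP packet:

    // Resolve a name over DoH. An on-path observer only sees a TLS
    // connection to the resolver, not which name is being looked up.
    async function dohLookup(name: string): Promise<string[]> {
      const res = await fetch(
        `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`,
        { headers: { accept: "application/dns-json" } }
      );
      const body = await res.json();
      return (body.Answer ?? []).map((a: { data: string }) => a.data);
    }

    console.log(await dohLookup("example.com")); // the A records for example.com

The resolver itself still sees every query, of course; DoH only hides them from the network in between, which is exactly the trust trade-off being argued about here.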


This was superseded by Sonic Pi https://sonic-pi.net/.

The author has said that he would do it in Erlang if he were starting again now.

Here's a talk with him and the late Joe Armstrong. https://www.youtube.com/watch?v=4SUdnOUKGmo


One of my favourite talks has a demonstration of FizzBuzz in Sonic Pi. It still blows me away.

https://www.youtube.com/watch?v=6avJHaC3C2U - whole talk is worth a watch

https://www.youtube.com/watch?v=6avJHaC3C2U#t=42m25s - timestamp


That's a shitty solution. The whole point is to keep the website public.


I think it depends on who your audience is.

For "general" audience, I would use a proof-of-work puzzle 10 seconds long and a basic question captcha with human review.

But for a site which is primarily for tech-savvy people, with an emphasis on backwards compatibility (HTTP auth is supported almost universally, even by Mosaic), I can't think of a better option. I'm not that interested in SEO, since the software is my main target.
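
For what it's worth, the puzzle half of that is simple to sketch. A minimal version in TypeScript (the difficulty, challenge string, and function names are illustrative, not from any particular implementation):

    // The client must find a nonce such that sha256(challenge + nonce) starts
    // with N zero hex digits. Solving costs real CPU time; verifying is one hash.
    import { createHash } from "node:crypto";

    const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

    function solve(challenge: string, difficulty: number): number {
      for (let nonce = 0; ; nonce++) {
        if (sha256(challenge + nonce).startsWith("0".repeat(difficulty))) return nonce;
      }
    }

    function verify(challenge: string, nonce: number, difficulty: number): boolean {
      return sha256(challenge + nonce).startsWith("0".repeat(difficulty));
    }

    const challenge = "per-visitor-random-string"; // issued by the server
    const nonce = solve(challenge, 5);             // the slow part, done by the visitor
    console.log(verify(challenge, nonce, 5));      // cheap server-side check: true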


What was the problem with the Ap


>When a new designer joins Facebook, they go through several days of what’s called “Design Camp”

Maybe the author can also tell us why Facebook's UI is so hostile to the end user.

