
I get the same results. Google search, at least the first page, is pretty bad these days.


Same here (for getting those results). I've now installed a browser extension that stops Google's AI-generated answers from appearing.

Google's results are like pushing a button that says "Lie to me." Except it's worse. It's "Lie to me sometimes" -- just to make it harder to spot.


Same here.

I wonder how long it'll take Google to fix this...


> 2) I have yet to encounter a software development kit like Visual Basic that makes it easy to write executable files with a GUI.

Lazarus with FreePascal is pretty close with its drag and drop way of creating GUIs and it generates executables.


It's a lot harder to learn. I just commented on another discussion agreeing with someone who made the same point.

If you already know Pascal it's probably great, but if you don't it's going to take a lot longer to learn to use it.


Also Gambas3 (https://gambas.sourceforge.net), among others.


It does not support the OS with the most users.


Not really comparable: Apple Silicon is not an embeddable device the way the Jetson series is meant to be. And the Jetson will never replace an Apple laptop or even a Mac Mini for regular users.


Sure, but it's a small desktop device, popular for development.


The price is interesting. What I don't like about these devices is that they are supported for a few years and after that you are on your own. I have the original Jetson Nano and they stopped providing any updates for it; if you can use it without an internet connection it will work just fine for years.


Yup, and trying to hack your OG Jetson Nano onto a later Linux is very hard due to the proprietary Linux / NVIDIA firmware and drivers. It's not just a matter of being on your own for OS upgrades; it straight up won't even be able to use the GPU for CUDA. Pretty annoying.


Apparently they differentiate between their regular modules and the dev kits: The Jetson Nano you mentioned is supported until 2027, but their dev kit is EOL.

For the Jetson Orin Nano the module is supported until 2032, but the dev kit likely not.

Source: https://developer.nvidia.com/embedded/lifecycle


I think the latest Orin series supports UEFI, and there is "Bring Your Own Kernel" support for more recent releases[1], but you still need to apply the patches. Also, the Tegra GPU driver "nvgpu" has been open source in this form for quite some time in the Linux4Tegra tree, but I don't see it in the patch list. So it's still a bit of a mixed bag, I think. But UEFI and 6.x kernel support is at least an improvement...

[1] https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide...


Well, that's how embedded computers are meant to be used, just like everything before the Internet.


Hopefully someone from Google reads HN and can do something or talk to someone.


> Hopefully someone from Google reads HN and can do something or talk to someone.

IIRC, that's pretty much the only way you can get real help from them: have your problem go viral on social media, and hope one of the elect sees it and has pity.

As they say: you're not their customer, you're their product.


I wonder how you can implement such a law without forcing people to identify themselves online? Will they enforce a digital ID that you need to use to access the web or social media?


No comment on the implementation, but I wonder if there's some value in just allowing parents to be able to point to this and say "No, little Fred, you're not allowed to have an Instagram account until you're 16. It's the actual rule."


Yep, the "everyone else has BLAH" argument is a strong one. If we collectively take action through government to set a standard it is MUCH easier to shut down those self-fulfilling claims.


I'm listening to Australian radio right now and a group of mothers just made this exact point.


Seems to me that the better solution is to give parents the ability to observe their kids' activities, and for <16 accounts to be able to operate only when tied to an adult account, which can observe activity...

Of course many will say this can be abused, but all technology can be abused, and the reason we're in this mess in the first place is because OS designers haven't figured out that the relationship between parent and child is an important one which should be strengthened, not made weaker...


Ah yes, teenagers, people with famously little time on their hands and a penchant for following rules.


> allowing parents

What? Why would parents need permission from their government to forbid their kid from having an Instagram account? They're parents, so they can engage in parenting.


Not allowing as in "giving them permission", but "allowing" as in enabling them to do so.

Right now if a parent says "You can't have Instagram. Because I say so." the kid's answer will be "But I will be a loser noob if I can't. All my friends are on it. Jenny has 5k followers!"

Vs after the ban: "You can't have Instagram. This is the law." "But mom! Some of my friends are on it. Jenny has 1k followers!" "Is that so? I will ask Jenny's mom if she knows about that."

It is not going to stop absolutely everything. (The same way prohibiting underage drinking doesn't stop teens from drinking entirely.) But it will put a serious damper on it and fracture the social networks into smaller, more underground ones.


Allowing as in “enabling”, obviously.


Because not everyone has a computer science degree.

They probably want to allow their kids to use a computer, so it would be very easy for the kid to go to Instagram when they aren't looking.


I’m guessing you don’t have kids


I have three kids. They have access to devices they use primarily for reading and language/music lessons. They don't use social media and would likely pay a decent level of attention if (in addition to us having explained concerns about social media for children) we indicated that there was government advice/ruling around this.


> we indicated that there was government advice/ruling around this.

Why would any pre-teen / teen care about what government thinks? They care what their friends think about that TikTok they saw during recess, though.


The government is currently tendering for providers of different systems. See here [1] and here [2]:

Tender documents released on Monday show the technical trial is slated to begin “on or around 28 October”, with the provider also expected to assess the “effectiveness, maturity, and readiness” of technologies in Australia.

Biometric age estimation, email verification processes, account confirmation processes, device or operating-level interventions are among the technologies that will be assessed for social media (13-16 years age band).

In the context of age-restricted online content (18 years or over), the Communication department has asked that double-blind tokenised attribution exchange models, as per the age verification roadmap, and hard identifiers such as credit cards be considered.

[1] https://www.innovationaus.com/govt-readies-age-verification-...

[2] https://www.biometricupdate.com/202409/australia-launches-te...


The source for "double-blind tokenised attribution exchange models" is this report from July 2024, from the Australian eSafety Commissioner: https://www.esafety.gov.au/sites/default/files/2024-07/Age-A...

They note that existing age verification setups largely either rely on providing ID or on a combination of manual and automated behavior profiling (face recognition, text classification, reports from other users), both of which have obvious privacy and/or accuracy issues. For the "double-blind tokens", they point to a summary by LINC explaining how they _could_ be implemented with zero-knowledge proofs, but I could not find an article or a practical implementation (could just be a mistake on my part, admittedly).

At _best_ you end up with a solution in the vein of Privacy Pass - https://petsymposium.org/popets/2018/popets-2018-0026.pdf - but that requires a browser extension, a functioning digital ID solution you can build on top of, and buy-in from the websites. Personally, I also suspect the strongest sign a company is going to screw up the cryptographic side of it is if they agree to implement it...
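
To make the token flow concrete, here is a toy sketch (mine, not from the eSafety report or the LINC summary) of the blind-signature idea that Privacy Pass-style schemes build on, with insecure textbook RSA numbers: the issuer attests to an age claim without ever seeing the final token, and a website can verify the token without learning who presented it.

    /* Toy RSA blind-signature flow: the issuer signs an age attestation it cannot
       later recognize, and a verifier checks it without learning the identity.
       Textbook-sized numbers only; nothing here is cryptographically secure. */
    #include <stdio.h>
    #include <stdint.h>

    /* (base ^ exp) mod m by square-and-multiply */
    static uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t m) {
        uint64_t result = 1;
        base %= m;
        while (exp > 0) {
            if (exp & 1) result = result * base % m;
            base = base * base % m;
            exp >>= 1;
        }
        return result;
    }

    /* modular inverse via the extended Euclidean algorithm (assumes gcd(a, m) == 1) */
    static uint64_t inv_mod(uint64_t a, uint64_t m) {
        int64_t t = 0, new_t = 1;
        int64_t r = (int64_t)m, new_r = (int64_t)(a % m);
        while (new_r != 0) {
            int64_t q = r / new_r;
            int64_t tmp = t - q * new_t; t = new_t; new_t = tmp;
            tmp = r - q * new_r; r = new_r; new_r = tmp;
        }
        if (t < 0) t += (int64_t)m;
        return (uint64_t)t;
    }

    int main(void) {
        /* Issuer (e.g. a government ID service) keypair: textbook RSA, n = 61 * 53. */
        const uint64_t n = 3233, e = 17, d = 2753;

        /* The claim being attested ("holder is over 16"), encoded as a number. */
        const uint64_t msg = 42;

        /* 1. User blinds the claim with a random factor r before sending it. */
        const uint64_t r = 7;  /* must be coprime with n; fixed here for the demo */
        const uint64_t blinded = msg * pow_mod(r, e, n) % n;

        /* 2. Issuer checks the user's age out of band and signs the blinded value.
              It never sees msg or the final token, so it cannot link them later. */
        const uint64_t blind_sig = pow_mod(blinded, d, n);

        /* 3. User strips the blinding factor; sig is now a plain signature on msg. */
        const uint64_t sig = blind_sig * inv_mod(r, n) % n;

        /* 4. A website verifies the token against the issuer's public key only. */
        printf("token valid: %s\n", pow_mod(sig, e, n) == msg ? "yes" : "no");
        return 0;
    }

A real deployment would use a proper VOPRF or ZKP construction plus an actual identity check at issuance; the only point of the sketch is that issuance and redemption are unlinkable, which seems to be the property the "double-blind" label is getting at.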


> "a functioning digital ID solution"

A functioning digital ID solution seems like table stakes for anything in 2024.


The operative part being "that you can build on top of", because the "ID token" approach means it now has to act as essentially a mini OAuth provider for many other websites, not just government services.


Yes. But I don't think this type of regulation should care about whether it's technically possible to do today. You could make a New York proposal-type ban, e.g. banning algorithmic feeds to minors, even if it's difficult to tell who is a minor and who isn't. Social media companies would either solve the ID problem or simply stop using those algorithms outright.


Sales of stick-on mustaches will skyrocket.


It's a bit wild that instead of parents just being responsible and teaching their children properly, we'll resort to neutering privacy and freedom on the Internet.


Making it illegal could make it taboo: kids are less likely to talk about it for fear of "getting caught", and less talking means less usage.

This is not like porn which is a solitary activity: on social media you have to be social and let everyone know… at least for traditional actually-social media, not content-consuming apps like TikTok.

It's similar to alcohol usage: you can't stop it completely, but also you don't have 50% of kids bringing it to school.


> Making it illegal could make it taboo: kids are less likely to talk about it for fear of "getting caught", and less talking means less usage

So like piracy?

> This is not like porn which is a solitary activity: on social media you have to be social and let everyone know… at least for traditional actually-social media, not content-consuming apps like TikTok

You don't? You can stay perfectly anonymous.


A drop-down list of birth dates/years "works" for most age-restricted sites - I guess the logic is that if a user is lying about their age, it's not the site's problem.

The article states that sites must demonstrate they are taking reasonable measures to enforce this, though - a lot will come down to how courts interpret that. If they go to the extremes of the KYC laws in Australia, I imagine a significant fraction of adults will not want to verify their age.


> I guess the logic is that if a user is lying about their age, it's not the site's problem.

If the law is to have any teeth at all, it should be the problem of the service provider.

Say for example that a banned feature for minors is having media feeds based on past watching behavior. Lacking a reliable age verification it's simple for social media companies to remove the feature entirely for all users, if it's unreasonable or impossible for them to implement age verification.


> extremes of the KYC laws in Australia

Can you provide more details about this statement? I've never heard anything about it in HN discussions.


I'm not sure how unusual it is internationally, but KYC laws in Australia will generally require 100 points of identification, usually satisfied by showing your passport and driver's licence. Other options include recent utility bills, your birth certificate, Medicare card, etc.

The system wasn't really designed for the internet era, and I think a lot of people would not be happy about handing all that personal info over to TikTok or Facebook.


As much as I am grateful for most of GDPR, it has shown that leaving the implementation of anything to websites is a recipe for disaster.

It's gonna be cookie banners 2.0.

I bet a lot will just ask for a credit card number, like in those old scam fake-porn websites from the late 90s/early 2000s.


There's a near-zero chance of getting caught driving without a license. Despite there being no driver's-license checking mechanism, people generally don't drive without a license because of the mere fact that it's illegal.

Societal signaling is pretty powerful.


RE ".....how can you implement such a law..."

Request that the social media platforms implement the restriction. The large social media platforms have billions of dollars in cash, so if they "really want to implement it" it should not be a problem.

However, I expect social media companies to drag out every reason why they cannot implement it, since it does not benefit the social media company... and would reduce its user base.


> Request that the social media platforms implement the restriction. The large social media platforms have billions of dollars in cash, so if they "really want to implement it" it should not be a problem.

I'm sure that Facebook, Google and TikTok will be delighted to make it mandatory that Australians send in photos of their face, passport and driving license.

But is it good for Australia to have their citizens hand such mountains of PII to unaccountable foreign megacorporations?


RE "...is it good for Australia to have their citizens hand such mountains of PII to unaccountable foreign megacorporations...." NOT NEEDED , Mega Corp needs to have office in Australia where such documents are checked. Document never leaves Australia.


You don't!

That's exactly what they're aspiring to here, following on from a well-established pedigree of Australian lawmakers and their dysfunctional relationship with the Internet.


You do! It already happens - just not for everyone.

Example:

https://m.facebook.com/help/582999911881572


I don't think lawmakers should necessarily describe HOW things are done. Here it's enough to say "unless you can be 100% sure your user is above age X, you can't provide them service Y or feature Z".

It might not even be the desired outcome to have identification, the better outcome could be to have feature Z stripped for all users (for example video feeds based on past watching behavior).


It’s happening on porn sites in some states in the US right now. When you visit the site, they ask you to validate with your ID.


Hell of a time to run a VPN or a blackmail service... Porn site profiles with activity history + real traceable identities will make the Ashley Madison leak look quaint.


How long are VPN services for consumers like that going to be viable? All the Five Eyes countries are trending in the same direction, and the US isn't shy about pressing other countries to follow its regulations, with the threat of being sanctioned.


How so? Ashley Madison was a service for cheating. This would be for people watching porn - how is that worse?

The history is extremely unlikely to be available to the ID validator (beyond the domain at most). VPNs can't see the actual history either.


They're probably referring to the scope. Very few people were directly impacted by Ashley Madison (though there was at least one reported suicide due to the leaks), but lots of people watch porn and most of those people would not be too keen on their browsing history being leaked even if it's relatively tame, and especially if it's not.


The funny thing these days is that all porn is tailored to appear as far from "tame" as imaginable.

The average PornHub user's history will be full of weird incest shit at the very least, not because of any specific interest in the genre but because so much generic heterosexual porn is labeled as such. Looks really bad for you if it makes the newspaper.

So even "tame" leakage is 100x more embarrassing than it ought to be, and thus snooping on bf/husband's devices to humiliate them over their porn usage is normalized on relationship subreddits. Same goes for them plugging your email address into the password reset form to try to verify whether you have an account on any given site.


> weird incest shit

Frankly, Stephen King stories are much weirder than any porn. Imagine enjoying watching monsters eating people.


[flagged]


I'm not sure the incest is being consciously pushed. In Japanese animated, drawn, and live-action porn, for example - a popular yet totally separate ecosystem from Western pornography - the same kind of incest stuff (hell, even worse incest pornography, between blood relatives and even involving children) has been extremely popular. It seems like most people who watch pornography move on to riskier genres, and incest pornography is a very easy step down from relatively normal genres.


I was about to call you on this, but apparently you're right - the CEO of PornHub is indeed a Rabbi. I'd love to be a fly on the wall in strategic planning for PornHub's moderation team. The addiction strategy is working, of course, but it's probably not just about the money - I imagine every decision in that industry has a moral justification. I don't buy that they're just amoral; I think they just have a different set of morals, and it would be interesting to see what those are.


You can still call someone out for blatant antisemitism when they're being blatantly antisemitic whether the thing they're saying is true or not. Do you think someone's being a Rabbi or being Jewish has anything to do with running a large adult content site?

I mean come on they literally created a throwaway account named "hey rabbi" to make that comment.


If the FBI didn't shut it down, it's all tame, no?


"The domain at most" can be quite sufficient for blackmail


You don't necessarily need to actually attempt to globally enforce it. It's like speeding, right? Everybody knows the law, and a lot of people choose to break it. We can't check everybody's speed all the time, so instead we selectively enforce.

The real change, though, comes from parents' perceptions. Right now there are age limits of 14 years old on most social media platforms, but most parents just see this as a ToS thing, and nobody cares about actually violating it. Once it becomes law, the parents are suddenly responsible (and liable) for ensuring their children are not breaking the law by accessing social media. It's not going to stop everybody, but it'll certainly move the needle on a lot of people who are currently apathetic to the ToS of social media platforms.


Not true. Only the social media companies will be liable. It’s an important part of the legislation.


Yeah, you're right about the liability part. But regardless, as a parent of teenagers, being able to justify an unpopular decision with "it's the law" instead of "research shows it's potentially bad for you in the medium to long term" is extremely valuable.


The government is being deliberately non-prescriptive about that, as they are about what qualifies as 'social media' (statement of fact - no comment on the approach itself). Ideally the legislation is accompanied by a government digital service that allows 3rd parties to verify age _without_ divulging full identity, but I don't see that side of things being discussed anywhere down here :(


They seem pretty clear [1] about what social media is:

Social networks, public media sharing networks, discussion forums, consumer review networks.

[1] https://www.esafety.gov.au/sites/default/files/2023-12/Phase...


They haven't got the competence to implement it even if they wanted to.


Well, funny you should mention that - the AU government ID system (used to access govt services like medicare and tax), has very recently been rebranded from MyGovID to MyID. Most states have already got digital drivers' licences.


Same as alcohol. If you supply your kids with alcohol, or even have it at home and they get drunk without your knowledge, you'll be in trouble.


Not sure that is the best example, as many states have exceptions that allow parents to legally give their children alcohol, so the scenario you devised could be completely legal.


Law should not be excessively prescriptive, especially in the case of rapidly-evolving technologies (and business sectors) for all the obvious reasons.

What's far more useful is to propose effective incentives and disincentives, and let the participants work this out for themselves. There are some useful principles and examples which come to mind:

- Business is profit-oriented. Attack the basis of profits, in a readily-identifiable and enforceable way, and activity which pursues those markets will tend to dry up.

- Business is profoundly risk-averse. Raise the risks of an activity, or remove protections or limitations on liability (e.g., Section 230 of the CDA in the US), and incentives to participate in that activity will be greatly reduced. Penetrating corporate and third-party veils would be particularly useful here, in the case of service providers (aiding and abetting a proscribed commerce) and advertisers (profiting by the same). Lifting any limitations on harms which might occur (bullying, induced suicides, addiction, or others) would similarly be crippling.

As to how age might be ascertained:

- Self-reporting. Not terribly reliable, but a decent first cut.

- Profiling. There are exceedingly strong indicia of age which can be derived, including from a particular account's social graph, interests, online activity, location data (is the profile spending ~6h daily at an elementary school, and not lunching in the teachers' lounge?), etc. One strong distinction is between legislation and regulation, where the latter is imposed (usually with a rulemaking process) through the executive branch (SCOTUS's Loper v. Raimondo being a phenomenally stupid rejection of that principle). Such regulation could then, on a more flexible basis, identify specific technical means to be imposed, reviewed, and updated on a regular schedule.

- Access providers. Most people now access the Internet through either fixed-location (home, work, institutional) providers, or their own mobile access provider. Such accounts could well carry age (and other attestation) flags which online service providers could be obliged to respect as regards regulation.

Jumping in before a few obvious objections: no, these mechanisms are not perfect but I'll assert they can be practically effective; and yes, there are risks for authoritarian regimes to abuse such measures, but then, those are already abusing present mechanisms. I'd include extensive AdTech-based surveillance in that, which is itself ripe for abuse and has demonstrated much of this already.

(That said, I'd welcome rational "what could possibly go wrong" discussion.)


Remember the Silicon Valley episode where they would have been fined $21 billion for not verifying the age of PiperChat users? Same way. All companies are one slippery slope away from being fined by Missouri for not protecting children enough. Or Australia.


Maybe it is just me, but I find it strange (almost dehumanizing) to have to live or sleep in such a pod for months or years. It is like the plot of a sci-fi dystopia. Also, the price is outrageous: $700/month for half a closet with a mat.


People are willing to be uncomfortable to achieve their goals.

The first person interviewed is from China, and it reminded me of an article I read in the English paper in Shanghai.

A government official was answering foreign busybodies by saying that the comparison isn't to Western wealth, it's to itinerant farmers sleeping in the fields at night.

For some people, having a roof over their head and four solid walls is an improvement.


> the comparison isn't to Western wealth

It is built right in the middle of Western wealth; it's crazy to say "this island of exploitation is fine because poor people exist elsewhere" with a straight face. An expensive, minimal-standards "home", but think how it will make your dreams come true!


All right, compare it to sleeping on the streets of San Francisco at night. That exists right in the middle of western wealth, too.


It is fine because they consider their options to be worse.

The comparison is sane because it's trying to point out how entitled we are.


There are billions of people on earth whose living standards could be made better by making yours worse.


It’s straight out of Neuromancer, though Gibson may have borrowed the idea from Japan.


If you want to reduce IQ discrimination, you would also need to forbid high-IQ people from using AI to augment their intelligence, or you will simply move the level of discrimination to a slightly higher IQ for everyone.


But that would encourage students to intentionally do poorly on IQ tests so they can get more AI help.


It was confusion over ** being used as the exponentiation (raise-to-a-power) operator in languages like Fortran or Python, which just validated the grandparent's point about C's syntax being confusing for people who come to it from other programming languages.

In Python and Fortran for example:

    3 ** 2 = 9
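
For contrast, here's a minimal C sketch (my own illustration, not from the thread): C has no exponentiation operator at all, so pow() from <math.h> does the job, and a stray ** only ever parses as multiplication followed by a pointer dereference.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* No ** operator in C; exponentiation goes through pow(). */
        printf("%g\n", pow(3, 2));   /* prints 9 */

        /* "a ** b" is still legal C, but it tokenizes as a * (*b). */
        int x = 2;
        int *px = &x;
        printf("%d\n", 3 ** px);     /* 3 * (*px) == 6, not 9 */
        return 0;
    }

(Compile with -lm on platforms where the math library isn't linked by default.)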


Just curious, can you use GCC with an SDF account? What C compilers are available, and how recent are they?

