Hacker News | somnic's comments

I'm curious how you're using the word "eusocial" here, because I've only heard it used to describe the structure of bee or ant (or naked mole rat) colonies.


Really social, never bothered by anything, absurdly friendly, and cool peeps. And, of course, Scandinavian furniture and always smoking weed. :o)

Stipulative definitions, since the meanings of words vary from person to person:

"Eusocial" to me means "prone to socializing".

"Prosocial" to me means "aware of the interests of the group and seeks to positive contributions to the groups and individuals."

"Asocial" to me means "neural or indifferent towards others."

"Antisocial" to me has multiple meanings either "avoids people", "doesn't get along with people", or "is indifferent to or against the interests of others".


If I recall correctly, what's happening here is that a property is valued based on its rent, and the owner can borrow against that valuation to make other investments. When demand drops, lower rents would reduce the property's valuation; the owner would no longer be eligible to borrow as much against it and may not have the liquidity to readjust. So it's more profitable and secure to keep the asking rent high and the unit vacant. That suggests the solution is more regular and rigorous revaluations of property, or limits on how much it can be leveraged.
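A toy sketch of that incentive, assuming a simple cap-rate valuation and a loan-to-value cap (all the numbers here are hypothetical, just to show the shape of the trade-off):

```python
# Hypothetical figures: the property is valued by capitalizing its asking rent.
CAP_RATE = 0.05   # valuation = annual rent / cap rate
MAX_LTV = 0.60    # lender allows borrowing up to 60% of the valuation

def borrowing_capacity(monthly_rent):
    valuation = (monthly_rent * 12) / CAP_RATE
    return MAX_LTV * valuation

high = borrowing_capacity(2000)   # keep the asking rent high, unit sits vacant
low = borrowing_capacity(1500)    # lower the rent to fill the unit

print(high)  # 288000.0
print(low)   # 216000.0
# Filling the vacancy at the lower rent would shrink the owner's borrowing
# capacity by $72,000, which can matter more than the forgone rent income
# if the owner is heavily leveraged against the valuation.
```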


I think coordinating the norms of a foraging troop, all with a similar understanding of their environment and society, through individual action is a little more tractable than coordinating the norms of an economy consisting of millions of people with wildly varying levels of specialization and knowledge.


Yes, it is more tractable. But cultures with norms for what's acceptable and what isn't exist all over the world.

For example, a culture of data privacy exists in some European countries more than in the US. Laws like GDPR are a result of this culture; they are not dictatorial imperatives going against what people want and practice in their lives.

The people usually set the norms. The law only enshrines them.


And yet the people of Europe individually chose to use social media that abused their privacy. You're making the case for collective action.


Yes, they do individually choose to use social media. But fewer do, and those who do use it less. It is a cultural difference with very little government guidance. And it makes a difference in how profitable this business model is here, as well as what laws are passed in the EU.

Yes, I agree with naturally arising collective action through culture. I argue that this may be all we need, and that we can do just fine without government intervention. I don't think government intervention can be as robust as cultural norms anyway. Tech companies constantly find ways to dance around laws. They can't dance around their primary source of income choosing not to be exploited.

And this doesn't have to be all-or-nothing. If more people value privacy, a culture will develop that reduces exploitative business practices like surveillance capitalism. If fewer people value privacy, a culture will develop that enables their exploitation. It's a gradient.


I admit I'm a bit confused by the reward function; as given, it seems to provide the same score regardless of correctness because of the squaring. And even if that's a mistake and the reward is supposed to be negative for incorrect answers, the policy that optimizes for it is to output 1 for anything with less than a 50% chance of being true and 10 for anything over 50%. Is that how RL is typically done?
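To make that claim concrete, here's a sketch under an assumed reward shape (my assumption, not necessarily the paper's): the model states a confidence c from 1 to 10 and gains +c if correct, -c if wrong. The expected reward is then linear in c, so the optimum sits at one extreme or the other:

```python
# Assumed reward: confidence c in {1..10}, +c if correct, -c if wrong.
# Expected reward given true probability p of being correct:
#   c*p - c*(1-p) = c*(2p - 1)
def expected_reward(c, p):
    return c * p - c * (1 - p)

def best_confidence(p):
    # Pick the confidence that maximizes expected reward.
    return max(range(1, 11), key=lambda c: expected_reward(c, p))

print(best_confidence(0.4))  # 1  -> minimize exposure when likely wrong
print(best_confidence(0.6))  # 10 -> go all-in when likely right
```

Because the expected reward is linear in c, nothing in between 1 and 10 is ever optimal, which is the degenerate policy described above.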


I have to assume that someone has run a trial on training these models to output answers to factual questions along with numerical probabilities, using a loss function based on a proper scoring rule of the output probabilities, and it didn't work well. That's an obvious starting point, right? All the "safety" stuff uses methods other than next-token prediction.
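For reference, a proper scoring rule is one whose expected loss is minimized by reporting the true probability. The Brier score is the simplest example; this sketch is generic, not any lab's actual training loss:

```python
# Brier score: loss = (q - y)^2 for reported probability q and outcome y.
# It's "proper": if the true probability of y=1 is p, the expected loss
#   p*(q-1)^2 + (1-p)*q^2
# is minimized exactly at q = p, so honest reporting is optimal.
def expected_brier(q, p):
    return p * (q - 1) ** 2 + (1 - p) * q ** 2

p = 0.7
qs = [i / 100 for i in range(101)]
best_q = min(qs, key=lambda q: expected_brier(q, p))
print(best_q)  # 0.7 -> reporting the true probability minimizes the loss
```

Training against a loss like this (rather than a rule that rewards extreme confidence) is what would, in principle, incentivize calibrated probability outputs.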


The safety stuff seems to be mostly trying to locate mechanisms (induction heads, etc) and isolating knowledge, in the pursuit of lobotomizing models to make them safe.

You could RLHF/whatever models on common factual questions to try to get them to answer those specific questions better, but I doubt there'd be much benefit outside of those specific questions.

There are a couple of fundamental problems related to factuality:

1) They don't know the sources, and source reliability, of their training data.

2) At inference time all they care about is word probabilities, with factuality only coming into it tangentially as a matter of context (e.g. factual continuations are more probable in a factual context, not in a fantasy context). They don't have any innate desire to generate factual responses, and don't introspect on whether what they are generating is factual (though that would be easy to fix).


My DuckDuckGo results are starting to have summaries, courtesy of Bing, that don't reflect the content of the associated site and contain plausible falsehoods, and the content-farming, keyword-spamming, AI-generated SEO slop goes without saying at this point. It'd be very nice if these models weren't also polluting the resources that people use to try to verify things.


It might be sort of habitual. If I'm reaching out to support for a piece of software or hardware I've paid money for, then it's actually fair to be pissed off if they're not providing what I paid for. The same doesn't apply to OSS, but a lot of people treat repo issues as if they're corporate tech support.


One thing about the nature of open source software development is that the positive feedback channels are kind of limited. Sure, if someone opens an issue or a pull request on your project that's potentially a positive signal, but it's never a purely positive one. In pretty much every other place people put things they've created online there's room for reviews and comments to help other people decide whether to try something, so if you make something good you can get a lot of people saying what they like about it. This doesn't solve the negativity and entitlement of others, but it likely makes things feel more balanced.

Open source software, culturally and mechanically, doesn't have much room for unqualified positivity.


It's a bit hard to imagine Apple of 10 or 20 years ago releasing a product like this without a clearer idea of how people would actually fit it into their daily lives. A lot of their successes have been about reducing friction, and while this is convenient relative to other VR products as far as I can tell, there still doesn't seem to be anything that VR is easier for than non-VR alternatives so far.


I'm not entirely convinced Apple even wants it to succeed. It's a stepping stone toward augmented reality: a true 'forget you're wearing it' glasses product that naturally complements your surroundings.

I don't think anyone truly thinks people are all going to go to work, strap on a headset, and be disconnected from their surroundings all day; that's never going to be a thing. Popping on a pair of glasses, however, is a much more approachable and user-friendly experience. We're just not there yet.


There have been smart glasses made by Snapchat and Facebook; those didn't take off either.

Glasses are not visually subtle in the way smartwatches and phones are. It's one thing for everybody to be carrying the same iPhone, but people won't wear identical one-design-fits-all glasses when out and about.


I'm not sure I'd call those smart. They're basic little camera specs. We both know that's not what we mean when we discuss usable AR.


For glasses, people will care more about how they look than what they do. The fashion barrier is something designers haven't put enough thought into, and unsurprisingly, smart glasses haven't taken off.


It's because there's a bean counter at the top. It's obvious this tech was all made for a glasses form factor that's still at least 10 years away; the gamble that they'd figure it out in time didn't pay off.

A visionary at the top just wouldn't launch until the product was right, but a bean counter is counting how many beans have been spent, wants a return on the decade of dev time, and so rushed it out in a headset form factor instead.


>Instead of more gratuitous parametric modeling, we need to think about urban epistemologies that embrace memory and history; that recognize spatial intelligence as sensory and experiential; that consider other species’ ways of knowing; that appreciate the wisdom of local crowds and communities; that acknowledge the information embedded in the city’s facades, flora, statuary, and stairways; that aim to integrate forms of distributed cognition paralleling our brains’ own distributed cognitive processes.

I don't know if the authors of this article are substantially less guilty of abstraction and impracticality than Sidewalk Labs or whoever. Knowing how pigeons understand cities might be interesting, but I'm not sure it's interesting enough to center urbanism around. The wisdom of local communities can be seen in angry letters to the council about how bike lanes are satanic, and contemporary statuary seems to be about giving a well-connected artist a big chunk of cash to make something that the residents of a city find unpleasant to look at.


> The wisdom of local communities can be seen in angry letters to the council about how bike lanes are satanic

This is such a stupid dismissal, and it provides a great example of what the article is trying to point out: this paternalistic, ultramodernist, nearly authoritarian "we know better" mentality.


Okay, to put it differently: hyper-local community decision-making, in isolation, might make decisions that are good for that particular community at the cost of the rest of the city. People don't want bike lanes on their street because they mean less on-street parking and more difficulty pulling out onto the road, even if they broadly think cycling is a good idea. People can make reasonable decisions in the interests of the "community", and it still leads to dysfunction. That doesn't mean "we know better"; it means coordination has to happen at a higher level.

