127361's comments

What about a special reduced uplink power mode, with only 128 kbps of bandwidth, for example? Additionally, you could conceal the dish from terrestrial detectors, for example by placing it in a forest clearing. Those two measures combined might make it much more difficult for the Iranian government to find a Starlink dish.


I wonder if SpaceX has enough clout to influence the ITU decisions somehow, through political lobbying, for example?


Maybe in the eyes of an Elon Musk fetishist. Realistically, no.


The USG does


From the article: "With current techniques, it will be hard for victims to identify who has assaulted them and build a case against that person."

The author is trying to claim it is as serious as assault. That is madness. It's defamation at most.

Some very primitive instinct is involved in anything to do with sex, which explains why we are so irrational about it.


The GNOME code of conduct is just so toxic, I encourage people to disobey it deliberately in protest, e.g. by making some offensive jokes.

"Influencing or encouraging inappropriate behavior. If you influence or encourage another person to violate the Code of Conduct, you may face the same consequences as if you had violated the Code of Conduct." - I encourage other people to violate the Code of Conduct, in an act of disobedience, as long as they stay within the law.


What about reducing our usage of the Internet and using local resources instead? Personally, I have local mirrors of various code repositories, and thousands of ebooks. And if you want to nearly eliminate surveillance entirely, you can air-gap your computer.

So we shift back from the collective (networked) systems to a more individualistic local information store? We already have local AI models, which is a step in the right direction.


Nobody should ever be prosecuted for viewing or reading anything, anywhere. Doing so is a hallmark of a de-facto police state. It's a fundamental liberty we have forgotten about as the frog has slowly boiled over the last 40 years.


Do you honestly believe that viewing CSAM shouldn't be a crime? Is this really the logical end-point of this style of techno-libertarianism?


Yes. Imagine how easy it is to get people you dislike arrested with a bit[1] of CSAM.

On the other hand, I feel that creating and/or distributing CSAM should result in instant death for the creator.

[1] https://en.wikipedia.org/wiki/United_States_v._Solon

EDIT: Creating and/or distributing.


This is just the tip of the iceberg:

Bluetooth keystroke-injection in Android, Linux, macOS and iOS:

https://news.ycombinator.com/item?id=38661182

These laws which criminalize viewing or reading things need to go now.


Yes, absolutely. It is today's equivalent of moral puritanism. It is absolutely inconceivable to me that a 21st-century society could be prosecuting people for viewing images. Distributing and making those images should remain a crime, of course. But absolutely not viewing. And we can continue to shut down the sites hosting such material, and encourage those who come across them to report them.

It is not acceptable to prosecute someone for the act of viewing a web site, regardless of its content. Doing so is absolute totalitarianism.

In a free society, people should be able to browse any part of the Web without fear of punishment, it is fundamental. That includes exploring the darknet, one of the most interesting parts of the Web, without having to fear prosecution for unintentionally coming across child pornography.


They've joined the Linux Foundation; does that mean the models are going to be eventually censored to satisfy the foundation's AI safety policies? Would that include ensuring the models don't generate content that's non-inclusive or against diversity policies?


Currently the main policy is only around copyright, with nothing about AI safety: https://www.linuxfoundation.org/legal/generative-ai

Also, in the full spirit of open source, if the LF really forces something the group disagrees with, we will just fork.

All the other alignment policies are optional for groups to opt in to.

So I would not worry so much about that. The group already has a plan in the event we need to leave the Linux Foundation, for example if the USA regulates AI training (since the LF is registered in the USA).


Downvoted, because it's a very trolly way to ask this. Especially given the foundation doesn't have an AI safety policy from what I've seen. Let's be better than this...


It is trivial to fine tune any model (whether a base model or an aligned model) to your preferred output preferences as long as you have access to the model weights.


Not trivial for the general public at all. Furthermore, you need much more memory for fine-tuning than for inference, often making it infeasible for many machine/model combinations.
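A rough back-of-the-envelope sketch of why fine-tuning needs so much more memory (the bytes-per-parameter figures are rule-of-thumb assumptions for fp16 inference vs. full fine-tuning with Adam, not measurements):

```python
# Rule-of-thumb memory estimate for a 7B-parameter model.
params = 7e9

# Inference: fp16 weights only (2 bytes per parameter).
inference_gb = params * 2 / 1e9

# Full fine-tune with Adam: fp16 weights (2) + fp16 gradients (2)
# + fp32 master weights (4) + fp32 Adam m and v states (8)
# = roughly 16 bytes per parameter, before counting activations.
finetune_gb = params * 16 / 1e9

print(round(inference_gb), round(finetune_gb))  # → 14 112
```

LoRA-style parameter-efficient tuning narrows this gap considerably, which is partly why hosted fine-tuning services can be cheap.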


If you are running a local LLM already (which no one in the "general public" is), then the bar is really not that much higher for fine-tuning (either for an individual or a community member to do).

And you don't need any additional equipment at all. When I say trivial, I really do mean it - you can go to https://www.together.ai/pricing and see for yourself - a 10M token, 3 epoch fine-tune on a 7B model will cost you about $10-15 right now. Upload your dataset, download your fine-tuned weights (or serve via their infrastructure). This is only going to get easier (compare how difficult it was to run inference on local models last year to what you can do with plug-and-play solutions like Ollama, LM Studio, or Jan today).

Note also that tuning is a one-time outlay, and merges are even less resource intensive/easier to do.

To put things in perspective, tell me how much cost and effort it would be to tune a model where you don't have the weights at all in comparison.
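For what it's worth, the quoted price roughly checks out as a per-token rate (the $0.40 per million trained tokens here is an illustrative assumption backed out from the $10-15 figure above, not a published price):

```python
# Hypothetical sanity check of the fine-tuning cost quoted above.
dataset_tokens = 10_000_000   # 10M-token dataset
epochs = 3
price_per_million = 0.40      # USD per 1M trained tokens (assumed rate)

trained_tokens = dataset_tokens * epochs              # 30M tokens seen
cost = trained_tokens / 1_000_000 * price_per_million
print(f"${cost:.2f}")  # → $12.00
```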


Running a local LLM: download LM Studio, install it on Windows, use the search function to find a popular LLM, click "download", click the button to load the model, chat.

Fine-tuning: obtain a dataset for your task (this in itself is not trivial), figure out how the service you linked works (after figuring out that it exists at all), upload the dataset, pay, download the weights - OK, now how do you load them into LM Studio?

It's all subjective, of course, but for me there's a considerable difficulty jump there.



Roundup being used as a "drying agent"; even I didn't know about that one. So much for me complaining about Sucralose and Aspartame, now we have glyphosate added to our foods, in large quantities. Shit.

https://en.wikipedia.org/wiki/Crop_desiccation#Systemic_desi...

And I wonder what the combined effects of all these chemicals in our foods are on our health, something the studies don't usually cover.

https://www.theguardian.com/commentisfree/2018/dec/06/the-we...


The Wikipedia link says that its use as a drying agent in the States is uncommon... though not in Canada.


The police keep track of this, and it can be used to validate audio recording evidence.

https://en.wikipedia.org/wiki/Electrical_network_frequency_a...
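The core idea can be sketched in a few lines: find the mains-hum component near 50/60 Hz in a recording, then compare its drift against the grid operator's logged frequency. A toy version on a synthetic signal (the sample rate, hum amplitude, and search band are made-up illustration values):

```python
import numpy as np

# Synthesize a 10 s "recording": faint 50 Hz mains hum buried in noise.
fs = 1000                        # sample rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
audio = 0.05 * np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(t.size)

# Recover the hum frequency with an FFT peak search near 50 Hz.
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(audio.size, 1 / fs)
band = (freqs > 45) & (freqs < 55)       # only look where mains hum lives
hum_hz = freqs[band][np.argmax(spectrum[band])]

print(round(hum_hz, 1))  # → 50.0
```

A real ENF system does this on short sliding windows to get a frequency-over-time curve, then matches that curve against the utility's records to date-stamp the recording.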


