Hacker News | joefkelley's comments

The evidence presented in the article does not match the claims.

The emails they include show that there were meetings between DHS and Twitter and between DHS and Stanford, on the topic of election integrity. And that there was a Signal chat (I guess this is kind of sketchy).

But there's no evidence of censorship or anything politically-motivated that I can discern.


How long can people pretend censorship is not becoming more abundant? How can democracy exist with an abundance of subjective censorship? There was an entire ticketing portal for government agents to request takedowns of content for subjective reasons. How is that not censorship?


I guess I can join in decrying the decline of 100% free speech in the abstract, but does that mean I shouldn’t be critical of whether an article’s evidence supports the claims it’s making?


I welcome critical points of view, but the fact remains that few are allowed to even see posts like this, much less discuss them with logic. The root comment here dismisses the article without any argument. How is a ticketing portal between government and social media companies for post takedowns on subjective matters not evidence of censorship? Why is our government subsidizing moderation?


The burden of proof is on the person making a positive claim. So I didn't think I really needed any more argument than "the article's argument is insufficient for me to believe their claims". I wasn't trying to make any more general point about censorship or social media.

I don't have a problem in general with a ticketing system for the government to request specific kinds of moderation. There do exist valid carve-outs of free speech, and I see no reason why government and industry couldn't collaborate on that. If it were used to censor political speech, then yes, I would have a problem with that. But I haven't seen evidence of that.


It is in the article.

> In 2020, CISA officials and personnel from EIP were often on emails together, and CISA’s personnel had access to EIP’s tickets through an internal messaging system, Jira, which EIP used to flag and report social media posts to Twitter, Facebook, and other platforms.

Here is a more detailed article with links to many of the documents, but I am unsure of the trustworthiness or biases of this source. It is hard to find mainstream coverage or discussion on this topic.

https://www.realclearinvestigations.com/articles/2023/11/06/...


Do you feel that a language has to have something at the language level to prevent NPEs?

In my experience, Scala does pretty well without it.

I guess is your point that the language should make it impossible to write bad code, not just make it easy to write good code?


Just a heads up, Scala 3 actually has a compiler flag (`-Yexplicit-nulls`) that makes reference types exclude `null` as a valid subtype, so every nullable variable has to have a type signature like `String | Null`.


> 3) Don't transparently download and install stuff without user interaction, regardless of where it comes from!

This is an interesting one. I totally get your point. But also users are terrible about updating their software if you give them the choice. Automatic updates have very practical security benefits. I've witnessed non-technical folks hit that "remind me later" button for years.


> I've witnessed non-technical folks hit that "remind me later" button for years.

Doesn't that then become their problem and responsibility?


> I've witnessed non-technical folks hit that "remind me later" button for years.

Maybe take the hint and add a "no" button instead of this manipulative "remind me later" shit.


> If you buy youtube premium you'll very likely see more ads for youtube premium

Really? I have youtube premium and I can't recall seeing ads for it. Why would they advertise a product to people that already have that product?

> Google Ads could have a flag that turns it's data collection/sharing off and replaces it with micropayments to any site that user visits that are running Google Ads.

FWIW, this flag already exists, except you don't have to do micropayments: https://adssettings.google.com/

I guess you still see ads with this setting; they're just not personalized. Hypothetically you could imagine a "stronger" setting that doesn't just do away with personalization, but does away with ads altogether by allowing the user to "outbid" any advertiser. But I suspect there would be some surprised users who get a bill for hundreds of dollars after doing some particularly high-value searches like "personal injury lawyer" or "mortgage" or something.

And if it were a flat rate, my intuition is that the fee would have to be much higher than most would expect or be willing to pay.
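To make that hypothetical concrete, here's a toy sketch of an "outbid the advertisers" setting. Everything here is an assumption for illustration: the function name, the bid values, and the second-price-style charge are mine, not anything Google actually offers.

```python
# Toy model of the hypothetical "outbid the advertisers" setting from the
# comment above. Bids and the second-price-style charge are assumptions.

def resolve_slot(advertiser_bids, user_bid):
    """Return (show_ad, charge_to_user) for one ad slot."""
    top_bid = max(advertiser_bids, default=0.0)
    if user_bid > top_bid:
        # The user outbids every advertiser: no ad is shown, and the user
        # pays what the winning advertiser would have (second-price style).
        return False, top_bid
    return True, 0.0

# A cheap query vs. a high-value query like "personal injury lawyer":
print(resolve_slot([0.02, 0.05], user_bid=1.0))   # no ad, small charge
print(resolve_slot([40.0, 55.0], user_bid=1.0))   # ad shown, no charge
```

This also illustrates the surprise-bill problem: the same fixed user bid suppresses ads on cheap queries but silently loses (or, if raised, gets expensive) on high-value ones.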


> Hypothetically you could imagine a "stronger" setting that doesn't just do away with personalization, it does away with ads altogether by allowing the user to "outbid" any advertiser.

Google built this, it was called Contributor, and it wasn't very popular.


I'm a Google engineer who interacts a little bit with this kind of stuff.

There are a few settings for this kind of thing. You can take a look at https://myaccount.google.com/data-and-personalization

The "Ad personalization" bit is probably what the parent comment is referring to. But it sounds like you're interested in the "Web & App Activity" bit, which will turn off the non-ads usage of your data. To a certain extent at least, since there are some grey areas.

For example, I'm on the team that sends Google Shopping emails. If you click a button to track the price of a specific TV, we'll still send you an email if that price drops even if you've opted out of "Web & App Activity". But if you've just been browsing shopping pages for TVs, we won't send you a general email about TV deals if you've opted out. Both of those cases are in some literal sense "web activity" but it's still pretty clear what the user expects.

But you might imagine: if you're tracking the price of a specific TV and have opted out of Web & App Activity, should we send you an email if a near-identical TV drops in price? We probably wouldn't, and we don't have anything like this today, but it's not quite as clear-cut. And Google has so many features across different teams that I can imagine there's at least one where some privacy reviewer made a different call than you would have.


Forgive the grumpy old man approach here, but can't this all just stop?

The price of a TV (or almost any common product) is driven more by large scale factors (cost, competition, features and brand) than the latest 4 dollar shift because of some promotion in some channel somewhere.

We are optimising for the wrong things. Computers are supposed to be our agent in the digital realm, reaching out for us, in our best interests.

If we turn off all storage of personalised information completely and utterly, 95% of everything I ask can be answered from context (please show me cheap TVs).

Look, I don't object to the idea behind the sandbox (it's always been weird that browsers tell the server what fonts and other settings exist; I mean, who ever optimised for that?).

What bothers me is that it's a Google ID. Just let me have a few U2F IDs: this is my shopping ID, track it if you may. When I take it out of the slot, stop tracking me.

I am sure I am feeling extra grumpy today, but when we stop tracking and trying to find ways to make me buy, and start finding ways to make my life better, that's when we have a digital revolution.


Yes, we need (again) an inversion of control. At first, advertising links only tracked the origin website, so advertisers could pick the right sites to place links on. But soon advertisers realized they could gain more granular information by using more active technology like cookies, tracking GIFs, Flash, etc. So they began tracking users and their journey across all websites, with all the privacy problems that entails. In terms of privacy, I think the only acceptable level is one the user has to deliberately engage with before "tracking" starts (meaning advertising content must be displayed alongside publisher content, neither personalized nor tracked through the advertising company, unless the user has engaged with it).

I also think marketing/advertising is bad, since it distorts value perception, but I remain pragmatic: we can't ban all advertising that easily.


I wanted to expand on this a bit.

If we are ever going to manage the flood of data out there, we need digital agents acting on our behalf. The AI behind the Facebook feed or the Google ad tech is a very early beta of those agents, and "on our behalf" may seem a stretch. But that's not just a cheap snide remark: I think "on our behalf" is a fundamental issue of how they will be built, designed, and regulated.

Richard Thaler has promoted libertarian paternalism, which seems to me a very good default setting for digital agents working on our behalf. It's not that simple, however.

I hope we shall see a medical approach to managing data, where everything is shaped by the best interests of the data subject. But there are three major world views; let's call them European, US, and Chinese for oversimplified ease.

The US view is caveat emptor: one should be rugged individualist enough to calculate the best co-pay arrangement and, by extension, work out one's preferred Facebook privacy settings. If you get it wrong, you will be prey for the payday loan industry.

The European view is that by default it should be good for the individual (although, as a Brit, I should sneak in that the state gets to define "good"). And I suspect the Chinese view is that it should be good for Confucian society, and the state may have some say in who is in and out of that.

As we enter the new era of tech regulation, these conflicting world views are becoming entangled, and we coders will be on the front line.

Choose a side, and be prepared for some very weird outcomes.


This is replacing one way of tracking with another. Can we stop it with the tracking entirely instead?

I would be happy to voluntarily provide a list of topics I would like to get ads for if I knew that this helps get rid of the cookie and tracking mess we have right now.

Think of a questionnaire à la Netflix that you spend 3 minutes answering so that it captures your preferences.

The current system we have in place is not just creepy, but also crappy. I see it on YouTube, both the ads and the video suggestions are very poor.


And yet there is no way to turn off tracking, because the data will be collected anyways, and will be used one way or the other eventually.

As long as this is connected to a Google account, well, then that's one thing. One doesn't need to agree to those terms.

Google is continuously trying to transcend this. Look at AMP, which is an attempt to run the entire internet through Google for tracking purposes. The strategy is always the same: Promise some sort of consumer benefit under the condition that everything will be tracked. Yes, one can opt out of targeted advertising, but Google still owns the data and uses it internally.

This new play is the exact same thing. As are all other efforts to track people, logged in or not.

Your choice is this:

Have an account and have everything you do tracked by Google (or the next company), give them all your data to use in their algos, and agree to terms that state that they can do whatever they want with the data. Perhaps then you can disable targeted advertising, although that also doesn't really work, does it?

Or, second choice, do not have a Google account. However, then you are subject to subversive attempts to track everything you do and use the data whether you want it or not, illegal or not.

This goes against some basic tenets of law in many countries. My data is my property, not yours. Google is powerful enough to not care, but make no mistake: People working for Google and making these decisions should be in jail. Any malice and hate targeted toward them is 100% justified.


Do you use any signals from people who opt out of Web and App activity to feed into models that are used not just for measurement but for targeting?

What if someone is identified by a model as being in market for a TV and then opts out? Would they still be classified as in market at that point?

I work in digital media, feel free to get technical with your response.


I'll hedge this by saying I can only speak for the teams I've worked on, plus my somewhat limited understanding of company-wide policies.

The answer to both questions is no. If you opt out, your data is not used for modeling or targeting or anything. Perhaps some internal reporting that isn't used for anything other than like PMs wanting to understand user behavior? Even that I'm not sure about.

If you are identified as being in market for something based on activity and then opt out, you will no longer be classified as being in market. That classification will be deleted, though perhaps not immediately; within some reasonable time frame, say 24 hours or so.


I think each couple has a slightly different balance on what level of collaborative decision making they can expect, and this is actually a big factor in compatibility.

For instance, I take your approach for most purchases under two thousand dollars or so. If I want to buy myself a new computer or whatever, I'd mention it to her, but ultimately I'm probably going to get it even if she thinks I shouldn't. I know some couples where this isn't the case, even if they have the means. Their price threshold for making the decision together is much lower.

But on career changes we make decisions together. For instance, she recently made a change that will result in her making less money, especially in terms of long-term career trajectory. But her stress level and overall happiness are much better. And she knows my income was a good amount higher anyway, and it ultimately won't affect things like when we retire or our quality of life that much.

But then it would be pretty shitty of me to change to a lower-paying profession down the road without her OK. She has sacrificed her earning potential with this kind of commitment in mind and maybe wouldn't have if she didn't know she could count on me to make future decisions with her collaboration.

I'm not saying either end of the spectrum is necessarily better. Just that there are pros and cons and it's more important to be in agreement.


I could be wrong (I'm not a finance guy) but this isn't my understanding of how fractional reserve banking works.

If Apple deposits $200 billion and the ratio is 1/10, then the bank can loan out $180 billion of that $200 billion; it must keep $20 billion (1/10th) in reserve.

In your scenario, the bank has negative $1.8 trillion in reserve.


You are right as far as you went, but it doesn't stop there. The $180 billion all but has to end up in a bank account somewhere, and that bank will then lend out a further 9/10ths of it. The limit of that action repeating over and over implies that 1/ratio dollars are created for every dollar initially deposited. As a rough theory.

But exactly how it all works out depends on a country's legal and financial implementation details.
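The repeated re-lending described above is a geometric series, and it can be sketched in a few lines. This is just the textbook money-multiplier model, not how any real banking system is implemented; the round count and numbers are illustrative.

```python
# Rough sketch of the money-multiplier idea: each deposit is re-lent at
# (1 - reserve_ratio), and the re-lent money gets deposited again elsewhere.
# This is the simple geometric-series model, not real-world banking.

def total_money_created(initial_deposit, reserve_ratio, rounds=1000):
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit                 # this round's deposit adds to the total
        deposit *= (1 - reserve_ratio)   # amount loaned out and re-deposited
    return total

# With a 1/10 reserve ratio and a $200 billion deposit, the series
# approaches initial_deposit / reserve_ratio = $2 trillion:
print(total_money_created(200e9, 0.1))
print(200e9 / 0.1)  # the closed-form limit of the geometric series
```

The loop converges quickly because each round shrinks by a factor of 0.9, which is why the closed-form limit `initial_deposit / reserve_ratio` is a good summary of "1/ratio dollars are created."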


Ah! Thanks for explaining, that makes sense.


I'm with you 100%. I thought censorship meant "not allowing you to speak". I guess now it means "not actively helping you profit from your speech".


According to much of HN, any moderation is censorship.


Roughly, we have three types of color-sensitive cone cells in our eyes. They each have different behavior in terms of how much they "react" to different wavelengths of light. Take a look at this chart: https://en.wikipedia.org/wiki/Trichromacy#/media/File:Cones_...

Each individual wavelength activates all three to some extent: think about this as a point in 3d space. For example, 400nm corresponds to something like (0.1, 0.05, 0.0), from that chart. 500nm might be (0.1, 0.4, 0.3).

But we experience a mix of many different wavelengths at once. So we can experience not only these individual points in 3d space, but also any linear combination of them. For instance, a mix of half 400nm and half 500nm light might be "sensed" by us as (0.1, 0.225, 0.15), even though maybe there's no individual wavelength that corresponds to that point. Linear mixes of any number of wavelengths cover the entire gamut of what we can perceive.

The question then for someone picking primary colors for an additive display is: if I can only do a linear mix of three wavelengths, what wavelengths should I pick? What covers the biggest subset of the whole perceptible gamut? It just so happens that red, green, and blue do the best.

If you swapped green for yellow, there would be a section in that 3d space that you could no longer create. Specifically, the area where M cones are strongly activated compared to L and S. Unsurprisingly, this would be the greenest greens.
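The linear-combination step above can be written out directly. The cone-response triples here are the illustrative values from the comment, roughly read off a sensitivity chart, not real colorimetric data.

```python
# Sketch of how a mix of wavelengths activates the three cone types:
# the perceived response is just an intensity-weighted sum of the
# per-wavelength responses. Response values below are illustrative.

def mix_response(components):
    """Linearly combine cone-response triples.

    components: list of (weight, (c1, c2, c3)) pairs, one per wavelength.
    """
    return tuple(
        sum(weight * resp[i] for weight, resp in components)
        for i in range(3)
    )

# Rough cone responses for two pure wavelengths (from the comment):
resp_400nm = (0.1, 0.05, 0.0)
resp_500nm = (0.1, 0.4, 0.3)

# A 50/50 mix of 400nm and 500nm light:
mixed = mix_response([(0.5, resp_400nm), (0.5, resp_500nm)])
print(mixed)  # roughly (0.1, 0.225, 0.15)
```

The "which primaries cover the biggest gamut" question is then about how much of the reachable 3d region you can span with non-negative weights on just three such triples.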


Thank you, as a painter this has bothered me for a long time.


The tough thing is of course you are right. It's obviously in your best interest to vote against new housing if you own a house. But it's not in society's best interest.


But it's really not. Homelessness and destitution also reduce property values. Lots of people are leaving California because it's so awful.


> But it's not in society's best interest.

1. Why are you so sure? What makes you an authority on the whole society's interests? 2. Even if so, are you saying that collective interests trump individual rights? This is socialism 101. Government will decide who should live (and work) where, and who should own what.


1. I am not sure. This is all just my layman's interpretation. I'll admit I could be very wrong and have no expertise. I'm just discussing. 2. No, in general I believe there's a difficult balance to be struck between individual rights and collective interests. I tend to lean more toward individual rights actually.

I would love it if local governments would stop restricting the rights of local developers and allow the free market to determine what is built where more often.

