
The private information is shared with YouTube/Google, so the assumption is that anyone who is an agent of Google is in on the secret. If it must only be in the hands of one person or a small group of people at Google, you'd best go to those individuals directly, not through the overarching entity of Google as a proxy.



> so the assumption is that anyone who is an agent of Google is in on the secret

I think there is a difference here between "expectation" and "assumption".

Without the ability to do a third-party audit, I agree the only reasonable assumption to make is that everyone is in on the secret, and when dealing with sensitive information that is always the assumption you should go with.

However, as an expectation, I expect SaaS and social network providers (and by extension most of the HN crowd) to be better.


There may be a difference, but it seems you have them flipped. It is reasonable to assume that they have controls to limit who is able to see information[1], but one must go in with the expectation that every acting agent has access.

[1] Of course, since you don't know who the individuals are, you still have to place your trust in every single agent who works for the entity you chose to entrust. As such, nothing is gained by restricting access. It remains that if it is important that it be private to only one or a few, you must go to those individuals you trust directly. Giving them private information by proxy will always be subject to man-in-the-middle-ing.


I think you have it backwards: an expectation is a standard (the term is used loosely here) that someone should be meeting. We expect people to do the right thing, but sometimes must, as in this case, assume they are doing the wrong thing.

Applied here, the expected and right thing to do is to follow the principle of least access. However, we must assume Google is not doing this, because there is insufficient evidence that they are, and there is actual evidence that they don't have sufficient controls to limit who is able to see information.


Right, expectation is the standard. The standard is that anyone who is an agent of the entity you have entrusted is also considered trustworthy. After all, giving full trust to an entity you only trust partially is nonsensical.

However, you make a fair point that it is reasonable to assume that entities you trust are willing to go above and beyond, for various reasons.


> Right, expectation is the standard. The standard is that anyone who is an agent of the entity you have entrusted is also considered trustworthy.

To clarify, I am the second person here telling you that that is not the expectation. The expectation, and/or the right thing to do, and/or "the standard we expect them to meet", is that Google follows the standard security principle of least privileged access, meaning each employee can only access data they need to see, with proper permission acquired beforehand, auditing during, and abuse-detection & alerting afterwards.
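
For illustration, here is a minimal sketch of that pattern (the names and data here are hypothetical, not Google's actual tooling):

    from datetime import datetime, timezone

    # Hypothetical pre-approved grants: (employee, resource) pairs authorized beforehand.
    APPROVED_GRANTS = {("alice", "user:123/messages")}
    AUDIT_LOG = []  # every access attempt is recorded while it happens
    ALERTS = []     # unapproved attempts are flagged for review afterwards

    def access_resource(employee, resource):
        # Least-privilege gate: permission beforehand, auditing during, alerting afterwards.
        allowed = (employee, resource) in APPROVED_GRANTS
        AUDIT_LOG.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "employee": employee,
            "resource": resource,
            "allowed": allowed,
        })
        if not allowed:
            ALERTS.append("unapproved access attempt: %s -> %s" % (employee, resource))
            raise PermissionError("%s has no grant for %s" % (employee, resource))
        return "<contents of %s>" % resource

    # access_resource("alice", "user:123/messages")  -> allowed, and logged
    # access_resource("bob", "user:123/messages")    -> logged, alerted on, and denied

The point is that access is denied by default, every attempt leaves a record, and unapproved attempts generate alerts, rather than any employee being able to read anything silently.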

Unfortunately, they don't meet this expectation that we have of them. Your own expectations and/or standards might be lower, like you described.


The expectation is placed on the entity the trust is given to. Unless you go to individuals directly, there can be no such expectation of individuals. There may be an assumption that the entity you have given trust to will "do the right thing" with individual agents, but there is no such expectation, as you have already trusted the entity, meaning that you have already trusted its agents. What have you gained by keeping information away from people you have already entrusted with the information? If you cannot find the trust to give them, why are you giving it?


Your own expectation is covered in the second paragraph of my previous post; I am describing the expectation, which is what I and the other poster have described. You speak of this as an assumption, but you have it backwards: given Google's history of failing to meet our expectations, we assume they will continue to fail to do so.

Your question of "why" boils down to asking, What is to be gained by employing the principles of least privileged access, as well as proper authorization, auditing, and alerting? The answer to that question is beyond the scope of this post, but I trust that you understand or can understand the benefits of these principles.


> I am describing the expectation

Yes, the expectation is that Google, and therefore its agents, are trustworthy. You would not give them your information otherwise. Who happens to be working at Google at any given moment is irrelevant. You have chosen to entrust an entity with a revolving door of individuals. Fundamentally, no expectation of who will access the information is defined. If that is important, you must go to the individuals directly.

You might assume that Google will "do the right thing" by working to keep the information away from those who don't need it, but that is entirely up to them. Hell, they might even do that, but then cycle all of their agents through positions where access is needed... In the end, if they choose not to, nothing about the trust expectation has changed.


> Yes, the expectation is that Google, and therefore its agents, are trustworthy.

That is your expectation, not the expectation. 2 people have told you what the expectation is. You, 1 person, have shared that you have a different expectation. This is okay, but you aren't speaking for us, or for the majority here, only yourself.*

The expectation is the one that we have cited, nothing less. It's great if they meet your expectations, but that's not good enough for us.*

Speaking only for myself here, I believe there's a reason that the principles I cited exist, rather than 'the company lets any employee access anything with no permission or record of it'.

* – paragraphs void and I am wrong when majority changes: I've gotta listen to the people, too! ;)


No witnessing of a person telling anything of the sort has occurred. Software has made a suggestion of that nature. Let me ask, software: how has this confusion of software for a person managed to occur?

Furthermore, getting back to the topic at hand, expectations in an exchange cannot be defined by an individual. They must be defined and agreed upon by all acting entities. Google has made its end of the exchange clear with no evidence of wavering just for Nintendo, implying that Nintendo shared in the standard list of expectations.


> No witnessing of a person telling anything of the sort has occurred. Software has made a suggestion of that nature. Let me ask, software: how has this confusion of software for a person managed to occur?

It's unclear, but you appear to be accusing me and at least 1 other poster of being "software", presumably as some sort of insult implying that our posts are somehow similar to ChatGPT output?

I think now is a good time for me to disengage.


I questioned where the idea that there are people on an Internet forum comes from. I don't see any people, only the output of software.

There was nothing about the quality of that output or an attempt to insult the software. By what mechanism could software even be insulted?



