
> The US benefits from the TOR being private.

Slight correction: The US benefits from TOR being private to _everyone but the US_


I’m glad I didn’t have to scroll too far to see your comment.

In fact, a major power wins by creating a moat just big enough that only they can cross.


everybody does such shenanigans, bro.

you don't have to be a major power to do such stunts.

everybody and their uncle are already doing it. look into your life to see the truth of this.


I mean, you do have to admit that by objecting to "massive surveillance nets", you're actively helping criminals who have things to hide, even if you don't.

If you think that's worth it, that's up to you, but you do have to admit that your position helps those with antisocial goals. You'll probably argue that "massive surveillance nets are inherently antisocial", but we both know that's not any more true than saying that "absolute freedom of speech is inherently antisocial". Arguably true, but wholly subjective.


It's just an observation: this proposal aimed at penalizing bad drivers gets upvoted and generally supported, but proposals aimed at hunting down child pornographers get attacked as dangerous overreach. "I have nothing to hide" is an invalid argument for E2E encryption backdoors, but it's the correct way to think about a dashcam botnet.

It's just an interesting insight into the collective tech consciousness.


>you do have to admit that by objecting to "massive surveillance nets", you're actively helping criminals who have things to hide

This is such a disingenuous, gaslighting way of framing criticism that it would make politicians blush. I hope none of your "positions" could ever be used for anything bad, ever, or it's your fault whether you want it or not.


Why is that not the point?

You violated a law and received a penalty. You're not disputing that you violated said law, but are instead trying to justify it with "barely didn't stop" and "it's 3am and there is no traffic".

Isn't the point that you got punished for doing something you would have gotten away with had no one been watching?


because maybe the point is "The basic premise of democracy is that the citizens/ordinary people are trusted as the ultimate source of the law, and the law is to serve them, not them to serve the law."

Nice twist to the premise at the end, but no, the point is that the person got punished for using sound and reasonable judgement in a situation where the regulation (not law) was ill thought out.


"Sound and reasonable judgement" to save a couple seconds?

That still just seems like rationalization of bad behavior.

You're right that the basic premise of democracy is that citizens can be trusted as the source of the law, but it seems to me that this particular citizen can't actually be trusted? I mean, they're demonstrating a lack of integrity, are they not?


> That still just seems like rationalization of bad behavior.

I think the issue is that you're taking as fact that "in order to be safe, you must come to a full stop at a red light before turning right", and that not doing so is, indisputably, "bad behavior". I dispute that. I think in many situations it is just as safe to nearly-but-not-completely come to a full stop before continuing, and it's entirely fine behavior.

The law has some difficulty encoding that. (Not that it's impossible, but it's difficult, and enforcement perhaps gets weirder if you try.)

Let's take a related example: jaywalking. In many places, you can get a ticket for crossing the street somewhere where there isn't a crosswalk, or crossing against a red light or a don't-walk sign. I was taught as a child how to look both ways and only cross when and where it's safe to do so. I don't need a sign or stripes on the road to tell me that (though I do appreciate those things as hints and suggestions). Hell, in some places (Manhattan comes to mind), if you don't jaywalk, everyone around you will look at you funny and get annoyed with you.

California, recognizing this, finally eliminated most jaywalking laws a year and a half ago[0]. You can only get cited here if you've failed to do what your parents told you, and you're crossing when it's not safe to do so.

Stopping fully at a red light before turning right is, IMO, similar enough. For many (most?) intersections, you're only going to be a teeny tiny fraction of a percent safer coming to a full stop. So why bother?

[0] Let's also remember that jaywalking laws exist only because car manufacturers wanted them. Walking in the street!? How absurd! Streets are only for our beautifully-produced cars! Not you grubby plebeian pedestrians. Away with you!


> I think in many situations it is just as safe to nearly-but-not-completely come to a full stop before continuing, and it's entirely fine behavior.

I'm sure the multiple people who would have hit me if I hadn't jumped out of the way (because they were looking the other way to see if cars were coming) thought the same.

> Let's take a related example: jaywalking.

When walking, your vision of your surroundings is not impaired, and you're not operating heavy machinery. The worst you can do is get yourself killed. With a car, the most likely scenario is to kill someone else.


You're talking about someone who, from their description, slowed down to something like 0.1mph instead of absolute zero. At 3am, in an empty road. How is that bad behaviour, lack of integrity, and a sign someone can't be trusted?


Integrity is commonly defined as "doing the right thing, even if no one is watching", is it not?

I highly doubt this person would have rolled through the light if a cop were sitting at the intersection watching them, and they knew they were being observed.

To several other posters' points, the specific regulation in question exists for safety reasons. Those safety reasons don't go away just because you don't think they apply in the moment. I'm sure every person who has hit (or been hit by) another person when rolling through a right turn like that thought their judgement in the moment was reasonable, too. I'm also sure not every one of those would have been prevented by coming to a complete stop and looking before making the turn, but certainly some of them would have, which is a net positive for everyone. This comes at a cost of a handful of seconds, which seems like the most trivial of inconveniences, and one wholly worth paying every time.


I don't actually disagree with some level of automated enforcement, but I do disagree with your phrasing/justification of it.

I just don't believe violating the law is always wrong, always bad, or always unsafe. While I would agree that most people are bad at risk assessment, and most people are not good drivers, the law should be flexible enough to deal with cases where breaking it is absolutely fine to do.

As a perhaps weird and imperfect analogy, killing another person is illegal... except when it isn't. The law recognizes that sometimes, even if in rare cases, killing another person is justified. This is why we have different words: "homicide" is sometimes not "murder" or even "manslaughter"; sometimes it's "self-defense".


I wholly agree that violating some laws is entirely justified.

However, I don't think any violation is justified by what more or less amounts to laziness and the desire to save an inconsequential amount of time.


Or sometimes it is the death sentence.

I agree with you, FWIW.


> violated user privacy since the beginning

But why is there an expectation of privacy on the web, anyway? When browsing the web you're utilizing hundreds or thousands of other peoples' infrastructure, going to sites that are public or semi-public (aka "public spaces"), and otherwise interacting with strangers in shared spaces.

In the physical world, there is no common expectation of privacy in public spaces. If you go to a shopping mall, you may be observed by either the mall itself or its client stores, and this is not a "violation of your privacy". You elected to go to a public space, knowing you might or would be observed by any other occupant of that space. If you go to your friend's house, you may be observed by their security cameras, if they have them. This is also not really a violation of your privacy. You could also have a conversation with your friend about not being on their cameras, and maybe come to an agreement--presumably because they trust you. This isn't something that scales to hundreds of strangers, much less thousands or millions.

I think the core issue I see with the stance you appear to hold is this: privacy is not a fundamental feature of anything. Any interaction with another person involves the exchange of information, and the amount of information exchanged is neither fair nor necessarily finite. How much information is exchanged is not just a function of the purpose of the interaction. It's also determined by the abilities of all parties to observe and analyze the interaction. Trying to argue that privacy should be a fundamental right seems to ignore the obvious: that forcing any particular level of privacy requires making sacrifices that significantly impede the quality of interactions, especially those involving parties that do not necessarily trust each other.

If you care about privacy, interact with other privacy-minded individuals and establish a common set of norms amongst yourselves, and acknowledge that those norms will be different for every group. This is sane and defensible.

What isn't is demanding that the world adhere to your standards of privacy, while claiming that it's some sort of right that should be protected by someone other than yourself.

Understand that privacy cuts both ways: most of the world is apparently fine with having no expectation of privacy in public places--online or offline. They're okay with this because it makes antisocial behavior harder to hide, and makes pro-social behavior easier to verify. If everyone demanded there be an expectation of privacy in those public places, the world would be very different, and not in a good way. Distrust would be the norm, and cooperative endeavors would be bogged down by the endless need to make sure those you're working with are indeed who they say they are.

If you want good examples of what it looks like when privacy in public spaces is the norm on a large scale, look no further than the cryptocurrency world. Not exactly a shining example of a "pro-social culture".


> In the physical world, there is no common expectation of privacy in public spaces.

Oh, please. This is a false equivalence if I ever read one.

When you're in public, nobody is building a profile on you. Nobody is tracking where you go, what you buy, how you behave, and what your interests are. Nobody is then storing this data and getting rich from it in perpetuity.

If privacy online were limited to companies knowing just your IP address and general location, and not storing this data, _that_ would be equivalent to public spaces in the real world. The fact that it's not, and that companies actively try to gather as much information as they can about you, often in scammy ways that force governments to come up with new regulations, where getting fined for violating them is just the cost of doing business... is beyond nefarious and beyond any reasonable defense.

> privacy is not a fundamental feature of anything

Huh? Privacy has been a fundamental human right for millennia, codified as part of many legal jurisdictions.

> Any interaction with another person involves the exchange of information, and the amount of information exchanged is neither fair nor necessarily finite.

Again, the difference is in how this information is used. The keywords here are _reach_ and _consent_. We implicitly consent to sharing a limited set of information when interacting with another person or in public, but when that information and its use goes far beyond what we implicitly agree to, then this needs to be clarified, and explicit consent must be given.

> Understand that privacy cuts both ways: most of the world is apparently fine with having no expectation of privacy in public places--online or offline.

This boils down to two things: a) most of the world doesn't understand the implications of the transaction they're a part of when they use online services. The use of their data is hidden behind cryptic privacy policies (which are often violated and don't explain the full picture), and data collection happens in obscure ways without any consent.

And b) even if they're aware of this, most users simply accept the transaction because "they have nothing to hide", or the value they get from the service is greater than their privacy concerns.

Users are often never given the option of an alternative business model, and even when they are, the data collection machine never stops.

Companies know that the allure of getting something for "free" is far more profitable than putting up a paywall, and so they justify extracting as much value as they can from their users' data. We're stuck in an endless cycle of having "free" services where the business transaction is far more valuable to the company than its users.

And it's in the companies' best interest to keep the users, the general public and governments in the dark as far as the extent and details of how value is actually extracted. This is the most vile business model we've ever invented.

> If you want good examples of what it looks like when privacy in public spaces is the norm on a large scale, look no further than the cryptocurrency world.

What privacy? The blockchain of most cryptocurrencies is a public ledger where all transactions are visible to everyone. Transactions are done almost exclusively via exchanges that follow strict KYC regulations. So it's pretty easy to identify the person behind any transaction.
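
To illustrate just how public it is: anyone can enumerate every transaction in the newest block through a public block explorer. A minimal sketch, assuming the Esplora HTTP API that blockstream.info exposes:

    import json, urllib.request

    API = "https://blockstream.info/api"  # public Esplora instance, no auth needed

    def get(path):
        with urllib.request.urlopen(API + path) as resp:
            return resp.read().decode()

    tip = get("/blocks/tip/hash")                        # hash of the newest block
    txids = json.loads(get("/block/" + tip + "/txids"))  # every txid in that block
    print("block", tip, "holds", len(txids), "publicly visible transactions")

No credentials, no permission: the entire transaction graph is world-readable, which is exactly what chain-analysis firms build on.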

Following your logic, should we ban all cash transactions, since they're even more private than cryptocurrencies?

You have a twisted sense of the world, friend. My guess would be that you work for one of these companies, and this is how you rationalize the harm you're putting out in the world.


> Oh, please ... is beyond nefarious and beyond any reasonable defense.

I don't know. This seems overly reductive. People build profiles on you all the time, just by looking at you. What you wear, how you talk, how you hold yourself--these things all present a large amount of information to anyone willing and able to observe it.

Companies invest large amounts of money into understanding customer movements in their stores, to better optimize product placement and things like checkout flow. Even sole proprietors will observe their customers' patterns to get a better understanding of how their store is performing. If a worker in a cafe sees the same person come in multiple times and order the same drink each time, is that not also "building a profile" on said customer?

A public IP address and a general location is what... maybe 12 bytes of information? You broadcast a lot more than 12 bytes of information when you pass through a public place, especially when you consider all the possible forms of information you could be broadcasting. You're being overly reductive to try to make your point, and I don't really buy it.
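
Where does 12 come from? An IPv4 address packs into 4 bytes, and a coarse latitude/longitude pair into two 4-byte floats; a quick sanity check with illustrative values:

    import socket, struct

    ip = socket.inet_aton("203.0.113.7")      # illustrative IPv4 address -> 4 bytes
    loc = struct.pack("!ff", 37.77, -122.42)  # illustrative coarse lat/lon -> 8 bytes
    print(len(ip + loc))                      # 12 bytes total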

> Huh? Privacy has been a fundamental human right for millennia, codified as part of many legal jurisdictions.

Being "codified as part of many legal jurisdictions" does not make something a "fundamental right". To take a slightly extreme example, treating women as property is "codified as part of many legal jurisdictions" in the world today, and yet it would be insane to use that to argue that "treating women as property is a fundamental human right".

You may believe that privacy is a fundamental human right, but it's not because it's been codified in "many legal jurisdictions". The logic just doesn't follow.

> Again, the difference is in how this information is used. The keywords here are _reach_ and _consent_. We implicitly consent to sharing a limited set of information when interacting with another person or in public, but when that information and its use goes far beyond what we implicitly agree to, then this needs to be clarified, and explicit consent must be given.

When did "reach" and "consent" come into this? Why are these "keywords" here? Merely _existing_ in the world is consenting to sharing "private" information with other people (or even things!) in the world. It's unavoidable. I'd argue further that it's not just avoidable, but _necessary_ to the building of a functioning society--especially any when governed by any form of democracy. It's not a matter of consent, it's merely a fact of life.

You don't explicitly consent to any level of information sharing when you go out into the world, but some amount of it is unavoidable. In fact, I'd even go so far as to argue that it's impossible for you to ever know how much information you share when you venture into the world. You can never know how much someone else can deduce about you from any given interaction, because that would require knowing what only another person can know. You also can never know how far that information might spread--because again, this would require knowing others' intentions _before ever interacting with them_, which is impossible.

> This boils down to two things: a) most of the world doesn't understand the implications of the transaction they're a part of when they use online services. The use of their data is hidden behind cryptic privacy policies (which are often violated and don't explain the full picture), and data collection happens in obscure ways without any consent.

I would largely agree with this, because I believe this is how "public spaces" already function. Data collection always happens without your consent. If I pass you on the street and note that you're wearing an Apple Watch, I have obtained information about you that you did not (explicitly) consent to share with me. I can draw reasonable conclusions from that information that might inform me of other aspects of your life that you also did not (explicitly) consent to share with me. This is how public spaces work!

> And b) even if they're aware of this, most users simply accept the transaction because "they have nothing to hide", or the value they get from the service is greater than their privacy concerns.

Again, also agree with this, because I think it's simply true. The "data" that you're sharing with a lot of these companies would be worthless to you yourself (because the value comes from correlations across many data points). The fact that it has value _to a given company_ when it might be worthless to you yourself is part of what gives that company's services value--even to you.

As an example (in an area I'm familiar with), if no company ever collected information about how their website was used (e.g. with a product like Datadog or Sentry), they would be faced with a much larger cost to improve their business. The testing burden would be far higher, they would have no insight into whether or not their product actually works for real users, they would only receive a fraction of the number of bug reports they would receive through automation, and so on. These are all things that reduce the cost the company has to pay in terms of keeping the product in working order and increasing its feature set. These services exist for _very good reasons_, and they're pervasive because they provide a large amount of value to their customers.
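
To make that concrete, here's a minimal sketch of the client-side setup such a service involves, using the Sentry Python SDK (the DSN is the documentation-style placeholder, not a real project):

    import sentry_sdk  # pip install sentry-sdk

    sentry_sdk.init(
        # Placeholder DSN in the style of Sentry's docs; a real project uses its own.
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
        traces_sample_rate=0.1,  # report performance data for 10% of transactions
    )

    # From here on, any unhandled exception is reported automatically --
    # the "does it actually work for real users" signal described above.

One init() call buys the vendor crash reports and usage signals the team would otherwise never see. Which kind of gets to your next point: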

> Users are often never given the option of an alternative business model, and even when they are, the data collection machine never stops.

Users are given the option of alternative business models all the time. I've lost count of how many "privacy-preserving" services I've seen that promise to never collect your data, or never share your data, or whatever. I've also lost count of how many of them have died a slow death, because a) people don't care about that as a value proposition, b) they don't improve their services as quickly as their competitors, c) they're more costly than their competitors, and d) they just suck as services because they fail at basic usability.

There are significant upsides for _all parties_ when it comes to certain kinds of data collection (e.g. the telemetry data mentioned above, flow analysis in stores, etc). These things come with no downside other than some bogeyman of "oh noes they have my data".

> Companies know that the allure of getting something for "free" is far more profitable than putting up a paywall, and so they justify extracting as much value as they can from their users' data. We're stuck in an endless cycle of having "free" services where the business transaction is far more valuable to the company than its users.

Again, fully agree, but I also understand _why_ this is the case. There are _significant_ upsides to many kinds of data collection, and those in favor of "enhanced privacy" to this day _cannot_ provide viable alternatives. The fact that it's extremely hard to build a service on a paid subscription that does not subsidize its development/research/feature set through data collection of some kind should be ample evidence that this is the case.

> And it's in the companies' best interest to keep the users, the general public and governments in the dark as far as the extent and details of how value is actually extracted. This is the most vile business model we've ever invented.

Kind of agree with this, except the last bit. I don't see why it's vile at all. Certainly it is in some cases (see the general trend of enshittification), but I don't think the only answer is "make everything _more_ private". That only increases distrust, which is the last thing we need right now.

> What privacy? The blockchain of most cryptocurrencies is a public ledger where all transactions are visible to everyone. Transactions are done almost exclusively via exchanges that follow strict KYC regulations. So it's pretty easy to identify the person behind any transaction.

Okay, here's a Bitcoin transaction[1]. Please identify who the participants are.

> Following your logic, should we ban all cash transactions, since they're even more private than cryptocurrencies?

No, why would we do that? That doesn't follow from pointing at cryptocurrency as a place where minimization of trust and maximization of privacy are the norms. This just seems like you're throwing out whatever other bad things you can attach to the strawman you've constructed from my argument.

> You have a twisted sense of the world, friend. My guess would be that you work for one of these companies, and this is how you rationalize the harm you're putting out in the world.

Wildly incorrect, as well.

But here's the thing... I'm not the one saying "when I go out in public, all people can get from me is the equivalent of a public IP address and a general location". I'm trying to point out that when I interact with the world, there's depth to the interactions that I literally _cannot_ know, and that I'm sharing information with the world that I'm not even aware that I'm sharing.

I'd argue that taking the world of public spaces filled with (very real) people who have as much depth (or more) to their lives than your own, and viewing that as "just a bunch of public IP addresses and general location information" is the truly twisted view of the world. People aren't cartoon cutouts, and places full of real people are also full of a wealth of information that is essential to the functioning of our societies.

The answer isn't to take all that information away--it's to get better at using it, for whatever definition of "better" we can all manage to agree on.


I won't reply to all your points, but I'll say this: I don't think I'm being overly reductive as much as you're being dismissive about the lengths tech companies go to build a profile of their users. The major difference between someone building a mental profile based on my interaction with them in the real world and the online profile companies build based on my digital footprint is that the real-world profile is not stored perpetually and shared, or rather sold, behind my back to countless interested parties that will later use it to psychologically manipulate me, whether that's to get me to buy something I didn't need, or to change my mind about a political or social topic.

Additionally, my real-world profile is helpful in building tangible relationships, and I have full control over what I choose to share with the world. I may not be aware of all the information I'm sharing, but I'm certainly in control of it.

This is what privacy essentially is. Consent and reach _are_ keywords, since I get to decide what and how much I share publicly, and what I keep private. I can at any point stop interacting with whoever I want, at which point they only have a fading mental profile of me.

This is not the case with digital profiles, since companies use dirty tricks to access as much of my data as possible, routinely exceed their boundaries, and, most crucially, they keep this data and continue profiting from it in perpetuity. Even principles like the GDPR's right to be forgotten are not sufficient to put an end to this, since once my profile is sold on the data broker market, it is forever part of it, and will likely end up back in the hands of the company that claims to have deleted it.

This is why I maintain that these practices are vile and nefarious, and why we have far too few and far too lenient regulations to control them.

Re: tracing the Bitcoin transaction, I don't claim to be able to do this personally, just that it's technically possible. There are companies like Chainalysis that do this kind of service, and there are many examples of law enforcement agencies tracking individuals using these techniques. I'm only pointing out the flaw in your argument that cryptocurrencies are some kind of privacy standard, and extrapolating it to cash money, which would presumably foster even less of a shining example of a "pro-social culture".

In any case, I've exhausted my arguments on this topic, and it's clear we have very different views about privacy, so let's agree to disagree. Cheers!


> Consent and reach _are_ keywords, since I get to decide what and how much I share publicly, and what I keep private. I can at any point stop interacting with whoever I want, at which point they only have a fading mental profile of me.

I'll just point out that this is where I find the flaw in your argument: you _don't_ get to decide what and how much you share in public. You might try, but you will never succeed, because it's impossible to not share what you don't realize you're sharing. I think this is why I find your argument entirely unconvincing: if you're incapable of knowing the extent of what you're sharing with other people in public, informed consent is impossible by definition. And if you don't know the extent of what you're sharing, how can you know how far that information can reach?

Sure, being online makes collection and archival of what you share easier, but "more privacy" doesn't address the root cause, because the root cause is that you cannot be fully aware of what context you share with others in the world--both online and off. I don't think this is a problem, because I think this kind of unintentional information sharing provides all kinds of benefits to society at large. It's the kind of thing that drives "gut feelings" and other forms of intuition, and we'd be impoverished if everyone adopted the kind of privacy-maximizing standpoint you're advocating above.

But opposing viewpoints are good, so I'm not super concerned if I fail to convince you of anything, really. Cheers!


And yet "free market forces" are often the reason why monopolies and oligopolies arise...?

Monopolies are entirely consistent with free market economics. After all, if there's clearly a "best product" for a particular niche, it's entirely rational (free market actor) behavior for everyone to use the same product, leading to its monopoly in that market segment.

I don't understand why people think this isn't/won't be/shouldn't be a common result of "free market forces".


> Monopolies are entirely consistent with free market economics

Not in the least. Literally in the first-semester, Economics 101-type class that any business/economics/etc. student would take, it is covered clearly that monopolies are violations of free markets.

A free market isn't a euphemism for anarchy or "no rules", it's a specific economic term. The things it is free of include artificial price floors or ceilings, barriers to entry, anti-competitive practices, etc. In other words, monopolies, oligopolies, cartels, monopsonies, etc are all violations of a free market. You do not have a free market if there is a monopoly supplier.


Economics 101 also assumes perfect information symmetry, perfect competition, and spherical cows. In other words, it only vaguely models reality... Good enough to teach the basic concepts, but without diving into the nuance that makes the basic models break down.

If one competitor is far enough ahead of the rest, they can maintain that lead, given that they can extract sufficient momentum from their early-mover advantage. If they keep this up long enough, competitors never reach sufficient scale to prevent them from becoming a monopoly (at least over their local market segment).

None of this requires anti-competitive behavior; simply good execution on the part of the leader.

Unless, of course, you're suggesting that "free markets" also involve government intervention to suppress their lead in the market...


And many times the sources of these monopolies come from special privileges given out by governmental authorities, such as the U.S. FDA


Or even more basic government-enforced restrictions like IP laws. If you want a survival-of-the-fittest anarchy economy, then those won't exist either. Neither will legal protections against espionage or circumvention of whatever technical means you come up with to try and get all that back.


> Monopolies are entirely consistent with free market economics.

This is a fair critique. I'm approaching this from an admittedly American perspective in which "free market" colloquially implies competition - but I recognize that competition is not inherently a free market concept.

Good callout!


Yeah, none of that is true.

Starlink doesn't have a debris problem, thanks to its low orbit. Any debris generated deorbits on timeframes of a few months to a few years. Starlink also has zero reported Earth debris strikes that I can find.

Where do you get this misinformation from?


> Regarding hacking - many people learned java writing bots for Diablo 2.

I remember it being Javascript, not Java. I forget the name of the tools involved, but I do remember creating a lot of Javascript bindings in C or C++, so that scripting engines would have interfaces to various bits of game engine code.


I feel like this says something about the economics of running a SaaS company where the customer retains full control over all their data, including usage data?

Perhaps it's simply not feasible at the price point users are willing to pay?


Why would you sign up for a wait-list for a feature, only to deny the creators of that feature the data they can use to make said feature even better?

Do you not want it to continue to get better?


Your assumption that I want to give them data to make the feature better is wrong.


But why don't you? That seems like a wildly selfish take.


> This is such unhelpful, unsympathetic advice.

I think writing this advice off as unhelpful is actually more harmful than offering it in the first place. I think this is very real and very helpful advice. Is it hard to follow? Speaking from experience: absolutely.

Self-control is like a muscle: exercise it frequently, and it gets stronger. It also gets tired and needs rest, and it atrophies with disuse.

And like exercise, it's almost always beneficial. Even folks with physical disabilities see very real benefits from exercise, even when it's hard and painful! I used to live next door to a man who walked with a cane and very obviously struggled to go up and down stairs... And yet any time I would offer him help, he would refuse, because he knew the effort would keep him as mobile and active as he could be, given his circumstances--and to accept that help would actually harm him in the long run by accelerating the decline in his abilities. I doubt I would have his level of discipline were I in his situation, and to this day I envy him for it.

I think going so far as to say "telling someone to exercise self-control is unhelpful/unsympathetic" is exactly analogous to telling someone exercise is harmful. Not "too much exercise is harmful", but "any exercise is harmful", which is obviously untrue.

I'll be the first to acknowledge that humans are innately lazy, and that exercise is hard/boring/inconvenient/whatever. However, we do no one justice by giving them reasons to excuse that laziness. Justifying a lack of internal effort/ability should be an explanation of last resort, not the baseline.

Put differently: very few people are physically incapable of doing a pushup (or whatever other basic exercise you want to reference) due to actual physical limitations. Most who cannot simply haven't put in the work to reach the point where they can.

[E] This turned out longer than I anticipated. It turns out I feel strongly about this, and think it's one of the most toxic aspects of the society I inhabit. People should be encouraged to push their abilities, not given excuses not to. It's all too easy to accept those excuses as truth, and this prevents us all from reaching our highest potential. This feels like a net harm to society and a driver of very real inequality.


> Self-control is like a muscle: exercise it frequently, and it gets stronger. It also gets tired and needs rest, and it atrophies with disuse.

Addiction is not a matter of self-control.


It absolutely is. Most people become addicted to something because they lacked the self-control to say "no" the first time... And the second time... And so on until it became an addiction.

Also, by saying this you're insulting every person who has broken an addiction by way of exercising their self-control until it's strong enough to overcome the addiction. Some of us don't appreciate that.


It's 100% a matter of self-control.


Addiction is a psychological disorder - it is not at all a matter of self-control.

Just as being sad is not the same thing as being clinically depressed, and worrying about a review is not the same thing as having an anxiety disorder, people need to stop equating really liking something and being addicted to said thing.


> Addiction is a psychological disorder - it is not at all a matter of self-control.

And self-control is not psychological?

> people need to stop equating really liking something and being addicted to said thing.

Self control is precisely the difference between really liking something and consuming a reasonable/healthy amount of it vs being addicted to it and overconsuming.


> And self-control is not psychological?

The disorder that you dropped from "psychological" is actually semantically important, you know.

> Self control is precisely the difference between really liking something and consuming a reasonable/healthy amount of it vs being addicted to it and overconsuming.

If you were unable to get the hint from the other examples I gave, I'll be plain: you are terribly mistaken, addiction is a disorder of the brain.

What moral failing do you assume people with mood or anxiety disorders have?


> The disorder that you dropped from "psychological" is actually semantically important, you know.

Lack of self control can be described as a 'psychological disorder'. Also you completely avoided the actual question :) - maybe try answering it instead of playing semantics?

> I'll be plain: you are terribly mistaken, addiction is a disorder of the brain.

So you replaced 'psychological disorder' with 'disorder of the brain'? That's supposed to make your point less pointless?

> What moral failing do you assume people with mood or anxiety disorders have?

Again, nowhere did I mention the word 'moral'. Try to address what I actually said instead of imagining strawmen in every comment.

> If you were unable to get the hint from the other examples I gave

No, I didn't get any 'hint' from your pointless examples. You see, I try to read what you said directly and respond directly, without any imagined 'hints', strawmen, or meaningless semantic games around vague and ambiguous terms.


This is such a false dichotomy. Even if you assume addiction can be a disease, self-control is still absolutely an element in overcoming it.

It's also not necessarily an either/or thing, and the existence of people who have broken addictions through exercise of their own self-control completely dismantles your black-and-white argument.


No it's not. That's a lazy way of thinking. Addiction isn't that simple, and deluding yourself into thinking it's that simple does yourself a disservice. It's not a moral failing, it's a disease.


And it's not lazy or delusional to claim addiction is as simple as being a disease?

If anything, that's a more lazy approach than saying it's a failing of self-control, which is itself a complicated and complex issue (hence my paragraphs and qualifications and whatnot above).

Writing it off as "a disease" removes all agency from the individual involved, whose lack of self-control is probably why they're addicted in the first place. That's true regardless of whether or not the addiction itself is a disease or a lack of self-control.

Saying "it's a disease" is also hugely insulting to the many people who have broken addictions by improving their self control. Even if you consider it a "disease", every person who has accomplished this has proven that the cure sometimes really is "improve your self-control".

And if that works in some cases, how can you prove whether or not it will work in another case without trying it first?


Self control is not 'simple', nor did I say anything about a 'moral failing'. 'Deluding yourself' by arguing against a position you imagined does yourself a disservice and just seems like a simple straw-man.


Whatever the addiction, you'll only get past it by assuming and believing it is under your control, paradoxically. Playing the victim is, as very often in life, the opposite of empowering.


Few things are. The world would be a better place if people would try to help each other more.


Wanting people to understand that they can improve their own self-control is trying to help them. With more self-control comes more agency and more control over one's own outcomes.

How is that being spun as a bad thing? I don't understand how we've reached a point that it's considered more "helpful" to teach people they have no control over their actions, rather than helping them gain more control over those actions?

