
It's easy to hate on big companies. But can we just applaud Cox for having patched this within a day? That's incredible.


To be honest I would be very surprised if this was Cox as an organization and not just one or two very passionate workers who understood the severity of the issue and stayed after hours fixing it for free.


That was the most shocking part of the entire article! Unfortunate this vuln existed but clearly engineers there have enough teeth to get stuff done.


Agreed. Bugs happen, bug fixes don’t always happen (especially quickly)

That being said, we could all do with a bit more input sanitization, and I hope Cox learned their lesson here.


Seems more like a configuration error. Load balancer balancing over a few hosts, one of them misconfigured. Most likely over 2 hosts, given the 50/50 success ratio of the intruder test. If that’s the case then it’s easy to fix in such a timeframe.


Are they using AI to filter the requests? That would be a match made in heaven!


They normally use "machine learning". In the past, they used CatBoost.

https://blog.cloudflare.com/how-cloudflare-runs-ml-inference...


We use a lot of stuff. Here's another example: "To identify DGA domains, we trained a model that extends a pre-trained transformers-based neural network."

https://blog.cloudflare.com/threat-detection-machine-learnin...


Are some of these models/techniques open source?

It would be nice to integrate them into on prem solutions.


No. We open source a lot of stuff (most recently Pingora: https://blog.cloudflare.com/pingora-open-source) but the models used for protection services are not open.



Cloudflare is really the leader in this game


AI 8-ball says... allow "I'm afraid I can't do that, Dave".


The issue is also that most email clients will automatically convert URLs to links in HTML emails. So if a URL is put in e.g. your name field, it will still be clickable despite there being no <a> tags.
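As a rough illustration of the input-sanitization point (a hypothetical helper, not any particular product's code): reject URL-like content in free-text fields such as a name before it is ever echoed into an outgoing email.

    # Hypothetical helper (illustration only): refuse URL-like content in
    # free-text fields that get echoed into emails, since many mail clients
    # auto-link bare URLs even without <a> tags.
    import re

    URL_LIKE = re.compile(r"(https?://|www\.|\w+\.\w{2,}/)", re.IGNORECASE)

    def sanitize_name(value: str, max_len: int = 100) -> str:
        value = value.strip()[:max_len]
        if URL_LIKE.search(value):
            raise ValueError("names may not contain URLs")
        return value

    print(sanitize_name("Jane Doe"))            # passes
    # sanitize_name("see evil.example/login")   # would raise ValueError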


I would pick whichever stack that I would be most productive in.

A Laravel app hosted on Laravel Vapor (AWS Lambda) with a MariaDB database. That would let me get up and running quickly, at low cost, and without having to worry about scaling for a long time.

Using Tailwind and VueJS or AlpineJS for the frontend.


This. Your fastest stack is not my fastest stack. If you want to learn the ‘fastest’ frameworks, that is a totally different decision than getting an MVP out the door. That is an educational one… which is totally valid, just not in an MVP sense.

The goal of the MVP is ascertaining product-market fit; everything else is waste. Use what you know and optimize later. If your MVP can handle 1m calls a second, you have failed (unless that was natively supported by the framework).


I think that's a very reductive view. One cannot try every stack to find their fastest one. This is why people ask about other people's experiences. Someone might have a better solution, and a convincing argument, so you could try it and become more productive.


I have a relatively terrible answer that I would recommend to no one but it works for me. That's the fastest stack for me.

The question wasn't "what is the fastest stack" or "of all stacks which is the fastest for you" but rather akin to "what is the fastest stack for you". Which is often the one that you are productive in.

It's almost always not worth learning a new stack to prototype something unless the goal is to learn the new stack.


That’s usually my take, but I still worry about a few things:

* which stack will still be around in 1/2/5 years?

* which stack will other teammates or future devs be productive in?

I’m still searching for a very light, productive open source stack that is well accepted and, if not future-proof, at least well backed.


For an MVP, none of those things matter. Just rewrite it in a different stack later if you realize that the one you picked doesn't fit your requirements long-term. Worrying about that stuff before you have users/customers is just a waste of time and energy.


But does that really happen? It seems that we have a lot of bloated, buggy, inefficient code out there because it was initially built using something that was 'quick and dirty' for the MVP and was never rewritten properly once it caught on.

I have clients that still use Excel spreadsheets for their database instead of using a real one just because their data was initially stored there and they never changed. New features were added incrementally over time and it became costly to break everything for a complete rewrite. So they limp along forever because management won't let them do it right until it absolutely breaks.


Yeah, it's definitely a cultural shift from the way most software is built today, but it's a better way to do it in most cases. I was also assuming a startup environment in my comment; established companies can generally afford to do more work up front to make the foundation more robust, and they are (slightly) more likely to have a better idea of what their customers want ahead of time.

But to answer your question more directly, it does happen, it's just uncommon. Where I've seen it done successfully, the rewrites have been piecemeal, not all at once, so that definitely helps with the buy-in factor.


You should check out https://wasp-lang.dev then. It's probably one of the fastest stacks out there for React + ExpressJS at the moment.


While not strictly e-commerce, this was an important design choice when we designed our spreadsheet upload tool for Geocodio.

It should be possible to go through as much of the process as possible without having to sign up first.

It certainly makes it technically more difficult to develop, but it's incredibly powerful and user friendly.


We’ve done the same thing with our Chrome extension. Someone can access our 2 week free trial without so much as entering an email address. We do this partly so it can be easily tried out by young students, partly because it makes things nicer for all users, and partly because it reduces customer support and refund requests.

The downside is that we can’t put folks in an e-mail drip campaign, which would help us educate our users and increase our conversion rate. But we hate spam, so we don’t view not-spamming as much of a downside.


I'm pretty sure there was one product I abandoned in spite of it being somewhat useful - all because they started to send me tips-and-tricks emails after I signed up.


Neat! I built something similar a while ago, to scratch my own itch. It's just using text messages instead of a dedicated app (for good or for worse).

It's funny how similar the domain name is.

https://nudge.sh https://github.com/geocodio/nudge.sh


We got ~175 bare metal dedicated servers for maximum performance within our infrastructure.

We also have a separate instance of our SaaS running on AWS in the “cloud” for security and compliance reasons.


I'm a huge fan of Drone CI. Loads of customization options, and a simple, powerful UI. Easy to self-host and scale as docker containers.

https://www.drone.io


That’s a super cool concept, but doesn’t this fall a bit under “security by obscurity”?


The "security by obscurity" one-liner is one of my favorite examples of the sort of black-and-white thinking that is harmful to software engineering.

The truth is that playing defense is as much an exercise of technical design as it is economics.

Yes - if someone finds the SSH port, they have a window of opportunity, and you will be owned if you are not properly securing your server through the normal channels.

However, now they only have a small window of opportunity (say, 30 seconds). This does a few things:

- It takes time (money) to attack a target. Without access to the OTP secret, randomly assigning ports dramatically increases the cost (time) of attacking you (see the sketch below). Throw in a tar pit and it's even worse. If you're not a high-value target, the attacker moves on.

- Now, any failed authentication attempt against your SSH server is a highly credible threat. Repeat attempts are even more suspect - you are being targeted, and they probably have the OTP secret. Effectively, you are able to resource your team more efficiently, because you can filter out the noise.

Security is not black and white. If you are a valuable enough target, someone will find a way in. Defense in depth helps you manage your defense with limited resources.
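To make the OTP-derived port idea concrete, here is a minimal sketch (my own illustration assuming a TOTP-style shared secret, not the tool from the article): both sides map the current 30-second time step to a port, so a client holding the same secret can compute where to connect without the port ever being advertised.

    # Sketch (illustration only): derive a rotating SSH port from a
    # TOTP-style shared secret and the current 30-second time step.
    import base64, hashlib, hmac, struct, time

    def current_port(secret_b32: str, interval: int = 30,
                     low: int = 20000, high: int = 65000) -> int:
        """Map the current time step to a port in [low, high)."""
        key = base64.b32decode(secret_b32, casefold=True)
        step = int(time.time()) // interval
        digest = hmac.new(key, struct.pack(">Q", step), hashlib.sha1).digest()
        # Dynamic truncation as in RFC 4226, then fold into the port range.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return low + code % (high - low)

    # Both sides compute the same value; the server rebinds sshd (or updates
    # a firewall redirect) whenever the value changes.
    print(current_port("JBSWY3DPEHPK3PXP"))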


I like this analysis, but I'm having a hard time seeing the advantage over port-knocking, which could also be randomized using OTP and would never reveal the SSH server to a port scan.

Zero window seems better than a 30 second window.

Excellent points otherwise.
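For comparison, a minimal client-side sketch of a knock (again only an illustration; the port sequence could be rotated from a TOTP secret in the same spirit, and the matching server-side rules, e.g. knockd or nftables, are left out):

    # Client-side port-knock sketch (illustration only): fire short-lived
    # connection attempts at a secret sequence of ports, then SSH in.
    import socket

    def knock(host: str, ports: list[int], timeout: float = 0.3) -> None:
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                s.connect((host, port))   # refused/filtered is fine;
            except OSError:               # only the attempt matters
                pass
            finally:
                s.close()

    # knock("192.0.2.10", [7000, 8000, 9000]), then ssh to the host as usual.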


> The "security by obscurity" one-liner is one of my favorite examples of the sort of black-and-white thinking that is harmful to software engineering.

My primary problem with people going for "security through obscurity" is that very often there are things which are intentionally obfuscated or obscured, in such a way that the implementer thinks that their method of hiding things will provide a significant measure of security. But then all of the other more common sense security precautions that should be implemented before the obscurity are ignored.

If I had a dollar for every industrial/embedded/M2M/IOT type thing that tries to be secure through obscurity but has other gaping holes in it, once you're familiar with the technical workings of the product...


> defense in depth helps you manage your defense with limited resources.

Indeed, but might there not be better/cheaper ways to secure SSH?

Something that doesn't involve custom configuration that needs to be maintained... like a VPN to a jump-host... Or...?

Configuring and maintaining custom hacks is not cheap.


Agreed - I would not deploy this as-is, but the idea is interesting, and I could imagine the concept being given some UX love in a different application.


In the same way that passwords, private keys, and safe combinations are security by obscurity, sure.


Except that passwords, private keys, and safe combinations cannot be guessed in 0.1 seconds the way that a port can (65k possible values and a SYN packet really isn't large). There is a line to be drawn between high-entropy secrets and using an unpredictable port number.


Where are people getting the idea that "listening address" means port number? The title of the page literally says IP address...


Ah, a misunderstanding on my part. For what it's worth, it doesn't say IP address, just listening address, which I took to mean whatever place (an IP, port tuple) it listens on. Unless one has a huge IP range (uncommon with typical setups), I can see how "people" take that to mean the port changing by default if they, like me, don't read carefully enough.


Huge ranges of IPv6 addresses are extremely common; virtually no ISP can be bothered to allocate smaller blocks than a /64. The minimum allocation recommended for an ISP is a /32, which contains 65k /48 networks.

You also know they don't mean port numbers because there's no such thing as a 6-digit port number.

From TFA: "Imagine your SSH server only listens on an IPv6 address, and where the last 6 digits are changing every 30 seconds"
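For illustration, one way such a rotating suffix could be derived (a guess at the general mechanism, not necessarily the article's exact scheme): HMAC the current 30-second time step with a shared secret and splice 24 bits (6 hex digits) onto the fixed upper bits of the address.

    # Sketch only (not necessarily the article's scheme): rotate the last
    # 6 hex digits (24 bits) of an IPv6 address every 30 seconds, keeping
    # the upper 104 bits fixed.
    import hashlib, hmac, ipaddress, struct, time

    def current_address(base_addr: str, secret: bytes, interval: int = 30) -> str:
        step = int(time.time()) // interval
        digest = hmac.new(secret, struct.pack(">Q", step), hashlib.sha256).digest()
        suffix = int.from_bytes(digest[:3], "big")   # 24 bits = 6 hex digits
        base = int(ipaddress.IPv6Address(base_addr))
        return str(ipaddress.IPv6Address((base & ~0xFFFFFF) | suffix))

    # Example with a documentation prefix; the host would add this address
    # to its interface and bind sshd to it for the current interval.
    print(current_address("2001:db8::", b"shared-secret"))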


Yes, but that's only bad if it's your only security.


Layering has additional costs, like requiring additional client configuration and software and (in this case) only working over IPv6.

The number one step any public‐facing SSH server should take is to switch from password auth to keys only. Anyone who’s still concerned can put it behind a WireGuard VPN. Layers typically added beyond that (like changing port, etc.) don’t even register on the security scale, so to speak.

The tweet that inspired the post mentioned port knocking which has always been rather ridiculous given those alternatives.


The simple fact that SSH is over IPv6 already leaves out 99% of potential hackers, aka bots.


I have some "cloud" VMs which have been around several years, they're attacked constantly, I haven't seen a single incoming IPv6 source to date.

I wonder if there's a "missed opportunity" for hackers there, given how many presumably forget to configure their IPv6 firewall.


The search space is too enormous. Which is why you (somebody with an SSH server) should do IPv6 and then forget this other weirdness. [You should do publickey and so on, but I mean the Tosh stuff is "weirdness"]

Suppose you can write code that tries to connect to the SSH server on one million IP addresses per hour; if one responds, you attempt an attack. You can try all of the servers in the entire IPv4 Internet in a few months, even with a pretty naive algorithm.

But if you do this with IPv6 you won't ever finish trying addresses and indeed almost certainly won't find even one server (let alone successfully attack one) in your lifetime.

So immediately you need a more expensive attack method. Maybe you buy a supply of "passive DNS" (name -> address answers stripped of information about who asked; many big DNS providers sell this), which is not cheap and not well suited to this problem, but it gets you somewhere. You pull out IPv6 addresses and try to SSH connect to them from your supplied list. This could work, but now you have to hope that your potential victims revealed themselves to you; all the juicy SSH servers in the world are invisible otherwise.
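A rough back-of-the-envelope check of those numbers (the one-million-probes-per-hour rate is just the figure assumed above):

    # Illustrative arithmetic only, at the 1M-addresses-per-hour rate.
    probes_per_hour = 1_000_000

    ipv4_hours = 2**32 / probes_per_hour              # entire IPv4 Internet
    ipv6_years = 2**64 / probes_per_hour / 24 / 365   # a single IPv6 /64

    print(f"IPv4 sweep: ~{ipv4_hours / 24:.0f} days")   # ~179 days
    print(f"One IPv6 /64: ~{ipv6_years:.1e} years")     # ~2.1e+09 years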


Actually it is far better than having no security at all


It's another layer of security. There have been exploits of OpenSSH in the past, so this may be prudent.


The trouble with extra layers is that at some point they result in complexity, which, in my experience, is more likely to be the root cause of a security problem.

I'm not saying this little demo is a disaster or anything. But for example, perhaps it requires an awareness of this scheme in an external firewall's rules, and maybe another machine pops up in the rather large IPv6 range that's now available.

At the extreme, these sorts of approaches can bring a lack of clarity about which layer is providing the actual security.


I actually looked into this a few months ago and if memory serves, the last default setup authentication bypass was in something like 2003. Since then, I think the worst thing has been user enumeration. And 2003 was a very different world in terms of how much we cared about hardening, so ssh being reliable throughout all that time is really quite something.


No, the TOTP is[0] a shared-secret bitstream (like a stream cypher), which provides new randomness per use (obscurity means it relies on the attacker not knowing how it works in the first place). This is very weak security, similar to a combination lock or PIN, and should not be used in place of proper SSH crypto, but it is actual security. (The point, IIUC, is to quickly exclude drive-by attackers, so that serious/targeted attacks are higher above the noise floor.)

0: assuming I'm not being overly charitable


Security by obscurity is when you hide implementation details to improve security. Secrets are not obscurity, randomness is not obscurity.


Except that if they know you are using this technique (e.g. from snooping traffic) then it is straightforward to bypass, either by tailgating onto a recent connection attempt (if they can snoop) or just brute forcing it (they can test the whole key space in seconds).

There must be better ways to leverage long term shared secrets, recent authentication success, etc. I'd like to see something like Signal's ratchet mechanism.


I think the time aspect makes it OK; otherwise TOTP itself should be abandoned on the same principle. (SSH still has password auth.)


security by obscurity and defense in depth are synonyms.


No they are not. It doesn't help to have obscurity in depth.


I don't know why people knock "security by obscurity" in general; it's a great defense for many threat models that an average individual falls under. A lot of credential-stealing viruses just look for specific folders or file names, for example, and some vulnerability scanners look for specific ports, etc.


I view it as a cost-benefit analysis between implementing security by obscurity versus actual security. If you have real security, then you don't need security by obscurity. If you have security by obscurity, you still need real security. So it's obvious which is a better ROI. That's not to say security by obscurity layered on top can't be useful, such as filtering out noise, but I think the point most people are trying to make is "this thing is not a solution to your security problem, it is a potentially dangerous distraction".


> If you have real security, then you don't need security by obscurity. If you have security by obscurity, you still need real security.

If "real security" was a real thing and anybody knew how to actually do that, things like the Colonial Pipeline ransom-ware attack, etc. wouldn't happen all the time. As people have been saying since David Lightman in 1983 "Hey, I don't believe that any system is totally secure."

> That's not to say security by obscurity layered on top can't be useful

I think that most people who would implement this (or similar schemes) realize exactly that, and are practicing "defense in depth". Could it be a "dangerous distraction?" Sure, in principle. But I don't see any particular reason that this would be more so than other elements of a "defense in depth" strategy.


"real security" means security that is not dependent on secrecy of implementation to remain secure. In the context of this post, it means configuring key-based access and disabling password-based access. If you do this, then the security-by-obscurity-based technique in the OP is unnecessary and redundant. Could key-based SSH access theoretically be cracked? Maybe (probably not, but let's say maybe for the sake of argument). But if so, a rotating listening address is probably no obstacle to an attacker of that caliber.


> Could key-based SSH access theoretically be cracked? Maybe (probably not, but let's say maybe for the sake of argument).

Of course it could. Unless you're really going to posit that there are no bugs in any widely deployed ssh server implementations. Doesn't seem very likely to me.

Anyway... if you're being specifically targeted by a highly advanced adversary, it probably doesn't matter what you do. I tend to assume that most of us, most of the time, are not in that position, and should employ a layered, "defense in depth" strategy. Whether or not this specific technique is something worth deploying or not is an open question to me. My position is simply that we shouldn't just dismiss it out of hand without deeper consideration.


It’s because they’ve heard some security experts say it (about things like moving your open mail relay to port 7384 to ‘secure’ it) and it’s the only thing they know about security.


We're doing something similar, requiring email verification in certain cases based on past traffic, IP address source, etc. Unfortunately, we had to straight-up block known temporary email addresses because there was too much abuse.
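A minimal sketch of that kind of check (the blocklist here is a tiny hypothetical stand-in; production setups typically pull a maintained list or query a reputation service):

    # Sketch: reject signups from known disposable-email domains.
    # The set below is a tiny illustrative stand-in for a maintained list.
    DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

    def is_disposable(email: str) -> bool:
        domain = email.rsplit("@", 1)[-1].strip().lower()
        return domain in DISPOSABLE_DOMAINS

    print(is_disposable("alice@example.com"))    # False
    print(is_disposable("bob@mailinator.com"))   # True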

