Beyond the fact that it is a total scam, it also creates a lot of animosity among employees. Because there is no set limit (heck, there are rarely even guidelines), people often feel it's applied unfairly within their teams and orgs.
If there aren't guidelines, that's where the problem lies, and management should be all over this. One size fits all. "Unlimited" is a nonsense term, but it shouldn't be taken literally, and abuse needs to be called out.
I completely agree. I've worked at three places with unlimited PTO, and none of them had guidelines. I think the moment you set that expectation, the illusion of it being unlimited goes away.
> but more importantly it takes just doing fantastic work when the opportunities present themselves.
It has taken me a decade to realize this is the key to success. Just do your very best work, as often as possible, and let the rest figure itself out.
I accidentally created a data warehouse that became the necessary backbone to launch a massive new org (100M+ revenue). I was building it for a relatively small near-real-time Elasticsearch cluster to generate data reports for an application. But I thought, gee, if I suck in a bunch of other data from other sources and clean it, I might find a use for it some day. Little did I know that someone else would piggyback on it to build a POC for a giant business expansion.
I've been stuck in the mud for a bit and had been slowly forgetting this one papercut at a time. Since this is usually not something I'm susceptible to, it has been troubling me.
Last year I pulled off a minor coup by making it cheaper and easier to run a big chunk of our code. One of the senior people was trying to get me to toot my own horn, but the actual work was like one week of cleverness and five weeks of bookkeeping, and so it felt too slog-like to celebrate.
But a couple weeks later when I was doing some housework it dawned on me that it was 'only' six weeks of work because of a bunch of fairly thankless things I'd invested a great deal of time in over the previous 18 months. I'd been dribbling out appetizers for a while, this was the first big dish, and I'm not sure it's the main course, so it's not like I missed my big chance, but I probably should have honored it more.
I'd had an initial 'I told you so' feeling in the heat of the moment that I pushed down as feeling petty, and delivered instead a fairly lukewarm "this is what I am talking about." This is, in fact, one of the things I'm talking about, but I think I'm past trying to recruit people (all the fun people to recruit are gone). I'm sadly just following the Campsite Rule and splitting my priorities between things that help me do my job, and things that might help me do the next one.
In that respect, I've been chasing a dragon (two, in fact) for the last couple of months instead of acknowledging that paragraphs like the one above are a pretty good indication I should be focusing all of my energy on job hunting instead. Wisdom is on a continuum and there's a lot of room still for foolishness even if you've got a lot of things figured out.
There is another dimension to this that most people haven't talked about: remote and gig work. For most small and medium businesses, it's cost-prohibitive to hire remote employees; for larger companies it's very easy. This has created a regional imbalance.
Lots of knowledge-work and call-center opportunities are moving to full-time remote, paying employees the same, with better benefits, to never leave their house.
This article amounts to someone whining because they didn't take the necessary action to prevent a dumpster fire. AWS's Managed Elasticsearch has tradeoffs, and you should understand them before choosing it, but AWS is not to blame if you've under-provisioned your cluster and imbalanced your shards.
Lack of security? AWS offers very granular, per-index authorization that is tied into IAM in the same way you would configure S3 or DynamoDB. If users are failing to implement good policies, AWS is not to blame.
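To illustrate (the account ID, role, domain, and index names here are all made up), a resource-based access policy that locks a role down to a single index path can be attached with boto3; something like this is roughly the shape of the per-index controls:

```python
import json
import boto3

# Hypothetical account, role, domain, and index names -- for illustration only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/reporting-reader"},
        "Action": ["es:ESHttpGet", "es:ESHttpPost"],
        # Restrict the role to a single index path on the domain.
        "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/reports*/*"
    }]
}

es = boto3.client("es", region_name="us-east-1")
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    AccessPolicies=json.dumps(policy),
)
```

The catch is that once an IAM principal is in the policy, requests against those index paths have to be SigV4-signed, which is where a lot of setups fall over.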
Haven't had time to productise it yet. I think doing this makes you quite a bit safer, because it means you don't end up giving up and allowing more than you need. However, you still need to understand which actions shouldn't be allowed, so it's not the whole solution.
Netflix open sourced a similar tool that watches API calls for a Role and then suggests minimum privilege changes to the attached policy document: https://github.com/Netflix/repokid
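If I remember right, it leans on IAM's access-advisor data (collected via Netflix's Aardvark) rather than raw API logs. Purely as a sketch, the underlying introspection is available through boto3; the role ARN below is hypothetical:

```python
import time
import boto3

iam = boto3.client("iam")
role_arn = "arn:aws:iam::123456789012:role/my-service-role"  # hypothetical

# Kick off an access-advisor report for the role.
job = iam.generate_service_last_accessed_details(Arn=role_arn)

# Poll until the report is ready.
while True:
    details = iam.get_service_last_accessed_details(JobId=job["JobId"])
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(1)

# List the services the role has actually authenticated against.
for svc in details.get("ServicesLastAccessed", []):
    if svc.get("LastAuthenticated"):
        print(svc["ServiceNamespace"], svc["LastAuthenticated"])
```

Anything that never shows up in that list is a candidate for removal from the role's policy, which is roughly the idea behind the tool.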
That's interesting. That can only work if there's some way of introspecting permissions - which I didn't realise existed. Mine works by experiment. I wonder how fine grained their way is.
If you don't have prior experience in networking or permissions, it will take some time for you to understand the concepts to properly secure a standalone server. The same concept applies to AWS. You are paying for the hardware, not someone to hold your hand through the process.
And if you can't figure security out by yourself, pay someone to hold your hand.
>And if you can't figure security out by yourself, pay someone to hold your hand.
This. Security is as much a tradition as it is a set of technologies. It's better to learn from a master than from a costly mistake, and it's better to learn how to do it than to pay to have it done for you.
I feel like this attitude (which is very common) holds us back from developing more reliable systems. When something fails, we don't ask what we can do to improve the system; instead, we point the blame at users. It's the easy way out: instead of designing better systems, we just tell the user to 'do better next time'.
The more difficult a system is to use properly, the more we should demand an alternative. If your users keep making the same mistake over and over again, then at some point you have to start asking yourself what you need to improve.
On the other hand, I always go back to something my grandpa (who himself was an industrial engineer) used to say: "It is impossible to make things foolproof, because fools are always so ingenious."
Unlike every other AWS managed service I had used up to that point, when I was using Amazon ES about a year ago there was no integration with any sort of VPC offering, and there was no clear published guide on how to establish such a connection. I ended up doing so with a hacky bastion-based architecture, but most other teams I saw using ES at the time just didn't bother.
I don't think you're wrong, it's a complete offering, and yet:
- If you want to ingest data with Kinesis Firehose, you can't deploy the cluster in a VPC.
- You can enable API access to an IP whitelist, an IAM role, or an entire account. You can attach the policy to the resource, or to an identity, or call from an AWS service with a service-linked role. That's all good, perhaps a little complex, but as you said, nothing too different from S3 or DynamoDB, except for the addition of IP policy. Why not security groups? Is DENY worth the added complexity?
- However, you can't authenticate to Kibana with IAM as a web-based service. Recently they added support for Cognito for Kibana; otherwise one would have to set up a proxy service and whitelist Kibana to that proxy's IP, then manage implementing signed IAM requests if you want index-level control (see the sketch below). Cognito user pools can be provisioned to link to a specific role, but you can't grant multiple roles to a user pool, so you have to create a role and user pool for every permutation of index access you want to grant. You also have to delegate ES cluster access to Cognito, and deploy them in the same region.
All told, even a relatively simple but proper implementation of ES+Kibana with access control to a few indexes using CloudFormation or Terraform would require at least a dozen resources, and at least a day of a competent developer's time researching, configuring, and testing the deployment. Probably more to get it right.
Ultimately there is nothing wrong with the controls AWS provides, but plenty that can go wrong with them.
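For the curious, the "signed IAM requests" part above is itself non-trivial. A rough sketch of signing an ES query by hand with botocore's SigV4 helpers (the region, endpoint, and index below are placeholders) looks something like this:

```python
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"                                              # assumption
endpoint = "https://search-my-domain.us-east-1.es.amazonaws.com"  # hypothetical
credentials = boto3.Session().get_credentials()

# Build the request and sign it with SigV4 for the 'es' service.
request = AWSRequest(method="GET", url=f"{endpoint}/reports/_search")
SigV4Auth(credentials, "es", region).add_auth(request)

# Send it with the signed headers; IAM then evaluates the policy per index path.
response = requests.get(request.url, headers=dict(request.headers))
print(response.status_code, response.json())
```

Multiply that by every client, proxy, or dashboard that needs to talk to the cluster and you get a sense of where the developer-days go.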
Check out the second-to-last line of that post. They make the same statement in the docs. Lots of services are getting VPC endpoints so traffic never has to hit the public web, but Firehose isn't one of them (yet).
This is a non-issue. The URLs are way too unique to guess (you'd have an easier time guessing an email/pass/2FA). And one's ability to access the URL at all is the same as their ability to access the bytes of the image. Once accessed, they could capture and share either.
Agreed, I was looking for something more substantial than this, too. I was thinking it was some clever unpatched way to scrape semi-public Google Photos links. Turns out it's just the sharing feature working as expected.
I would love to see the look on the face of the guy at the Apple Store when OP pulls out the multimeter. No disrespect, but in my experience they're good at handling common issues and terrible when you have something unique.
Serious question, what's the real harm in this since it's just public keys? Just allowing a server to discover all the other servers you may have been talking to?
In most cases, no real harm. However, it does give away some information about you which can be used to fingerprint you. This data is also, I'm 99% sure, transmitted in plaintext, so a passive adversary can gather this information as well. For most uses I wouldn't worry about it. But, if you're an attacker, say forcing your way onto an SSH server with a weak password, it can be a valuable source of information for identifying you.
> This data is also, I'm 99% sure, transmitted in plaintext
I was curious about this, so I did some research.
First, if you run `ssh -v`, you can see that there's a key exchange (e.g., Diffie-Hellman), then a cipher and MAC are negotiated, and only once you get to the user-authentication portion do your public keys get sent to the server.
So, only Alice and Bob can see the public keys: not Mallory.
Ah yes, you're right! I remembered there is some stuff transmitted in plaintext at the beginning, but it's just the normal SSH algorithm negotiation.
If you have multiple ssh keys, that can easily make you run out of login attempts. I have a key per server/client pair (because I'm weird), all stuffed into my ssh-agent, so that breaks basically all logins for me.
LOL. I love how full of his own crap Tom Dale has become.
Ember is a good framework with a lot of potential, but more than anything it needs a good PM. Ember devs just spent nearly 14 months rewriting the layout engine (Glimmer 2.0) that they had spent the prior 8 months writing (Glimmer). And what do they have to show for it? A 100ms or so speed boost on complex views. They should instead focus on fixing up the crufty parts of their API and adding features they've been promising users for years. Instead, Tom Dale has been off writing FastBoot, an even more niche component of Ember that very few apps will even use.
React and Angular are far from perfect, but one upside is they have parent companies to help keep them focused.
Having worked on Glimmer 2 myself, I don't think it's accurate to say that Ember devs worked on it for 14 months (that is, that it took 14 months of work to save 100ms). It had initial work done for a long time and then it sat for a while unchanged, because it's an open-source project that people work on when they have time. Months later, work started up again to get it out the door.
I think that since LinkedIn is "all in" on Ember it will help to get some of the cruft removed and things moving forward. LinkedIn has probably the largest and most complex Ember app in existence, so there are a lot of learnings to take and use to improve the framework and ecosystem.
Well the Glimmer 2.0 effort started around the same time Ember 2.0 shipped (Aug 2015). That was 16 months ago, Glimmer 2.0 shipped about 2 months ago, ergo 14 months. I agree that it wasn't under development the entire time, but it was a blocker for many other features the entire time.
Basically, when 2.0 landed, routable components couldn't be worked on until Glimmer components were completed. Angle brackets, improved pods, etc., same.
So even if it wasn't under dev that whole time, it held back other features that would have really improved Ember.
Yeap. Google Cloud Platform customer here; we have a team of devs unable to log in to our infrastructure. I sure hope we don't have a dumpster fire in production :)