Yeah, I remember when Amazon's AWS was new and people said "hey, it's cool, but not secure." Then AWS added all these security features but with a caveat: BTW, security is your responsibility.
Here we are. I guess we can blame the users and not any shitty security architecture slapped on AWS.
The only mistake AWS made was making buckets originally public by default. It’s been many years since that’s been the case. At this point, you have to be completely ignorant to be storing PII in a public bucket.
It's literally, and I do mean this literally, 1 click to block all public traffic to an S3 bucket. It can be enabled at the account level, and is on _by default_ for any new bucket. What exactly more do you want?
> It's literally, and I do mean this literally, 1 click to block all public traffic to an S3 bucket.
I'm reasonably certain that for quite a while blocking all public access has been the default, and it is multiple clicks through scary warnings (through the console; CLI or IaC are simpler) to enable public access.
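For reference, enabling the same protection outside the console is one CLI call per bucket, or one per account. This is a sketch using the real AWS CLI commands; the bucket name and account ID are placeholders:

```shell
# Block all forms of public access on a single bucket
# (bucket name is a placeholder)
aws s3api put-public-access-block \
  --bucket my-example-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Or enforce it account-wide, covering every bucket
# (account ID is a placeholder)
aws s3control put-public-access-block \
  --account-id 123456789012 \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

The account-level setting is the belt-and-suspenders option: it overrides any bucket policy or ACL that would otherwise grant public access.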
A real user will be worse … but that’s kinda the point.
The most valuable thing you learn in usability/research is not if your experience works, but the way it’ll be misinterpreted, abused, and bent to do things it wasn’t designed to.
The section on performance management is circular and vague: a good one is motivating and a bad one is demotivating. OK. Glad we got that out of the way.
The whole intro reads like a puffy resume, with lots of gilding. There's even a section of gushing testimonials.
And he puts his name in the title so you don't even have to read the author byline. Total cheese.
One of my more effective uses of AI is for rubber duck debugging. I tell it what I want the code to do, iterate over what it comes back with, and adjust the code ("now rewrite foo() so 'bar' is passed in"). What comes back isn't necessarily perfect, and I don't blindly copy and paste, but that isn't the point. By the end I've worked out what I want to do, and some of the tedious boilerplate code is taken care of.
I had some results last week that I felt were really good - on a very tricky problem, using AI (Claude 3.7) helped me churn through 4 or 5 approaches that didn't work, and eventually, working in tandem, "we" found an approach that would. Then the AI helped write a basic implementation of the good approach which I was able to use as a reference for my own "real" implementation.
No, the AI would not have solved the problem without me in the loop, but it sped up the cycle of iteration and made something that might have taken me 2 weeks take just a few days.
It would be pretty tough to convince me that's not spectacularly useful.
I've tried it, and ended up with the completely wrong approach. It didn't take that long to figure out, but it still wasted a good half hour. Would have been horrible if I didn't know what I was doing, though.
> some of the tedious boiler-plate code is taken care of.
For me that is the bit which stands out, I'm switching languages to TypeScript and JSX right now.
Getting Copilot (+ Claude) to do things is much easier when I know exactly what I want, but not here and not in this framework (PHP is more my speed). There's a bunch of stuff you're supposed to know as boilerplate, and there's no time to learn it all.
I am not learning a thing though, other than how to steer the AI. I don't even know what SCSS is, but I can get by.
The UI hires are in the pipeline and they should throw away everything I build, but right now it feels like I'm making something they should imitate in functionality and style, better than a document could convey, though not in code cleanliness.
My experience with ChatGPT is underwhelming. It handles really basic language questions faster and more easily than Google now: questions about a function signature, or things like "how do I get the first n characters of a string." But once I start asking more complex questions, not only does it often get them wrong, but if I point out the answer is wrong and ask it to try again, it will often give me the same answer. I have no doubt it will get there, but I continue to be surprised at all the positives I hear about it.
Agreed, it's always nicer for me to have something to work with, even if by the end it's entirely rewritten.
It helps to have it generate code sometimes just to explore ideas and refine the prompt. If it's obviously wrong, that's OK; sometimes I needed to see the wrong answer to get to the right one faster. If it's not obviously wrong, then it's a good enough starting point that we can iterate to the answer.
I love throwing questions at it where previously it would have been daunting because you don't even know the right questions to ask, and the amount of research you'd need to do to even ask the proper question is super high.
It's great for ideating in that way. It does produce some legendary BS, though.
> although the AI gave me crap code it still inspired the answer
This is exactly my experience using AI for code and prose. The details are like 80% slop, but it has the right overall skeleton/structure. And rewriting the details of something with a decent starting structure is way easier than generating the whole thing from scratch by hand.
What model? What wrapper? There's just a huge amount of options on the market right now, and they drastically differ in quality.
Personally, I've been using Claude Code for about a week (since it was released) and I've been floored by how good it is. I even developed an experimental self-developing system with it.