
Yeah, I remember when Amazon's AWS was new and people said "hey, it's cool but not secure." Then AWS added all these security features but with a caveat: BTW, security is your responsibility.

Here we are. I guess we can blame the users and not any shitty security architecture slapped on AWS.


The only mistake AWS made was originally making buckets public by default. It's been many years since that was the case. At this point, you have to be completely ignorant to be storing PII in a public bucket.

> shitty security architecture slapped on AWS

It's literally, and I do mean this literally, 1 click to block all public traffic to an S3 bucket. It can be enabled at the account level, and is on _by default_ for any new bucket. What more do you want, exactly?


> It's literally, and I do mean this literally, 1 click to block all public traffic to an S3 bucket.

I'm reasonably certain that for quite a while blocking all public access has been the default, and it is multiple clicks through scary warnings (through the console; CLI or IaC are simpler) to enable public access.
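
For anyone curious what that setting looks like outside the console, here's a minimal sketch using Python and boto3 (the bucket name and account id are made up); the account-wide version goes through the S3 Control API:

  import boto3

  # Block all four categories of public access for a single bucket.
  s3 = boto3.client("s3")
  s3.put_public_access_block(
      Bucket="my-example-bucket",  # hypothetical bucket name
      PublicAccessBlockConfiguration={
          "BlockPublicAcls": True,
          "IgnorePublicAcls": True,
          "BlockPublicPolicy": True,
          "RestrictPublicBuckets": True,
      },
  )

  # Same switch at the account level (applies to every bucket you own).
  s3control = boto3.client("s3control")
  s3control.put_public_access_block(
      AccountId="123456789012",  # hypothetical account id
      PublicAccessBlockConfiguration={
          "BlockPublicAcls": True,
          "IgnorePublicAcls": True,
          "BlockPublicPolicy": True,
          "RestrictPublicBuckets": True,
      },
  )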


A real user might be worse. A program is less flexible (maybe) and more consistent (definitely) than a meatspace CBL.

The goal is not realism but a kind of ready-made "you must be this tall to ride the rollercoaster" threshold.


A real user will be worse … but that’s kinda the point.

The most valuable thing you learn in usability research is not whether your experience works, but the ways it'll be misinterpreted, abused, and bent to do things it wasn't designed for.


Very clever. Reminds me of using Alexa to test your pronunciation of foreign words. If Alexa has no idea, you probably said it wrong.

It's not mass spying. The NSA is just making time capsule backups for everyone. Stop being so dramatic.

In a hundred years, when it gets published, it's gonna be the bomb hilarious. Totes.


Elon Musk, for example, appears to be wholly self-taught as a coder. Do you want Elon doing your code reviews?

I want him to call me a pedo while I'm trying to save people stuck in a cave :D

Lol. Nice one.

The section on performance management is circular and vague: a good one is motivating and a bad one is demotivating. OK. Glad we got that out of the way.

The whole intro reads like a puffy resume, with lots of gilding. There's even a section of gushing testimonials.

And he puts his name in the title so you don't even gotta read the author byline. Total cheese.


I believe there's nothing wrong with backing up your own media. Not sure how YouTube DRM impacts your 'backups' tho?

It’s just a pattern of throwing DRM everywhere, and restricting where and how I can watch stuff. I don’t like it.

If ya don't like monetized content tactics, how about enjoying the public domain?

A clever idea. Has anyone tried it?

And if you want to understand the theory behind Skinner's Verbal Behavior, check out

https://bfskinner.org/wp-content/uploads/2020/11/978_0_99645...


I had a minor desire to build a feature where the effort was slightly higher than the reward, so although I knew I could struggle through it, I didn't bother.

After years of this, I decided to give an AI a shot at the code. It produced something plausible-looking, and I was excited. Was it really that easy?

The code didn't work. But the approach made me more motivated to look into it and I found a solution.

So although the AI gave me crap code it still inspired the answer, so I'm calling that a win.

Simply making things feel approachable can be enough.


One of my more effective uses of AI is for rubber duck debugging. I tell it what I want the code to do, iterate over what it comes back with, and adjust the code ("now rewrite foo() so 'bar' is passed in"). What comes back isn't necessarily perfect and I don't blindly copy and paste, but that isn't the point. By the end I've worked out what I want to do, and some of the tedious boilerplate code is taken care of.
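
To make that "pass 'bar' in" kind of adjustment concrete, here's a hypothetical Python before/after (foo and bar are just the placeholder names from the prompt above; the actual body is made up):

  # Before: foo() reads bar from module scope, a hidden dependency.
  bar = {"greeting": "hello"}

  def foo_before():
      return bar["greeting"].upper()

  # After the "rewrite foo() so 'bar' is passed in" step: the dependency
  # becomes an explicit parameter, which is easier to test and reuse.
  def foo(bar):
      return bar["greeting"].upper()

  print(foo_before())              # HELLO
  print(foo({"greeting": "hi"}))   # HI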

I had some results last week that I felt were really good - on a very tricky problem, using AI (Claude 3.7) helped me churn through 4 or 5 approaches that didn't work, and eventually, working in tandem, "we" found an approach that would. Then the AI helped write a basic implementation of the good approach which I was able to use as a reference for my own "real" implementation.

No, the AI would not have solved the problem without me in the loop, but it sped up the cycle of iteration and made something that might have taken me 2 weeks take just a few days.

It would be pretty tough to convince me that's not spectacularly useful.


I've tried it, and ended up with the completely wrong approach, which didn't take that long to figure out, but still wasted a good half hour. Would have been horrible if I didn't know what I was doing, though.

Yes, that's one of the bigger traps. In my case I knew what needed to be done and could've done it on my own if I really needed to.

A novice with no idea could blunder through but get lost quickly.


> some of the tedious boilerplate code is taken care of.

For me, that's the bit that stands out. I'm switching languages to TypeScript and JSX right now.

Getting Copilot (+ Claude) to do things is much easier when I know exactly what I want, but that's not the case here, and not in this framework (PHP is more my speed). There's a bunch of stuff you're supposed to just know as boilerplate, and there's no time to learn it all.

I am not learning a thing though, other than how to steer the AI. I don't even know what SCSS is, but I can get by.

The UI hires are in the pipeline, and they should throw away everything I build. But right now it feels like I'm making something they should imitate for functionality and style (it conveys those better than a document would), though not for cleanliness.


The idea of untangling AI generated typescript spaghetti fills me with dread.

It’s as bad as untangling the last guy’s TypeScript spaghetti. He quit, so I can’t ask him about it either.


When you ask an AI to do a thing, you at least have some minimal record of intent; you just need to retain the prompt.

Most of the time, programmers don’t record all of the assumptions and design decisions in code or documentation.


Weird argument, but I’m down to bikeshed a bit.

Most of the time there's an old ticket in the work tracker.

I’d bet those tickets contain at least as much info as any prompt.


My experience with ChatGPT is underwhelming. It handles really basic language questions faster and more easily than Google now: questions about a function signature, or questions like "how do I get the first n characters of a string." Things like that. Once I start asking it more complex questions, not only does it often get them wrong, but if you tell it the answer is wrong and ask it to try again, it will often give you the same answer. I have no doubt it will get there, but I continue to be surprised at all the positives I hear about it.
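
For what it's worth, that particular question is a one-liner in most languages; here it is in Python, just as an illustration (the comment doesn't say what language they're using):

  # First n characters of a string via slicing; works even if n > len(s).
  s = "hello world"
  n = 5
  print(s[:n])  # "hello"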

What language are you writing in? I mostly write Go these days, and have often wondered if it is uniquely good in that language, given its constraints.

Agreed, it's always nicer for me to have something to work with, even if by the end it's entirely rewritten.

It sometimes helps to have it generate code just to explore ideas and refine the prompt. If it's obviously wrong, that's OK; sometimes I needed to see the wrong answer to get to the right one faster. If it's not obviously wrong, then it's a good enough starting point that we can iterate to the answer.


I love throwing questions at it that previously would have been daunting, because you don't even know the right questions to ask, and the amount of research you'd need to do just to ask a proper question is super high.

It's great for ideating in that way. It does produce some legendary BS, though.


It looks like a variation on the Stone Soup story: https://en.wikipedia.org/wiki/Stone_Soup

> although the AI gave me crap code it still inspired the answer

This is exactly my experience using AI for code and prose. The details are like 80% slop, but it has the right overall skeleton/structure. And rewriting the details of something with a decent starting structure is way easier than generating the whole thing from scratch by hand.


> I decided to give an AI

What model? What wrapper? There's just a huge number of options on the market right now, and they differ drastically in quality.

Personally, I've been using Claude Code for about a week (since it was released) and I've been floored by how good it is. I even developed an experimental self-developing system with it.


I prefer open source models.

I had a similar experience, but found that with a little prodding I was even able to get it to finish the job.

The result was a little messy, so I asked it to refactor it.

Of course, not everything lends itself to this: often I already know exactly what code I want, and it's easier to just type it than to corral the AI.

