
You didn't replace a SQL Analyst, you just gave them a query generator. End data consumers don't understand the data model, assumptions, quirks, etc. If they fire the analyst, they are going to wind up drawing a lot of bad conclusions on anything more complicated than simple aggregations.



Maybe 5% of the business/operations people I've worked with would even want to do this. The rest prefer a smart human to walk them through building a report - most of the time this is because they don't actually know what they need, and they actually need an expert to sit down and figure it out with them.


The longer I'm in the business, the more I feel like my value as a software developer is identifying and asking clarifying questions.


You nailed it. At the beginning of my career I thought that success and value to the organization was all about technical skills.

30 years later I now understand that most successful projects need people with modest or average technical skills and outstanding communication skills.

It doesn't matter if you have super-genius engineers; if the business people don't really understand the problem they're trying to solve, then you're going to end up with a crap solution (maybe shiny, fast, and beautiful, but still crap).


I think I saw a comic years ago detailing a discussion that went something like this:

A: Eventually we won't need programmers; people will just tell the computer what they need and it will generate the code for them.

B: True, there's actually already an industry term for a specification that's detailed enough to generate a working program from.

A: Oh, what is it called?

B: Code.


Yup. I think it was from Commitstrip.

edit: found it. https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...?


Yeah that's the one.


People don't realize how imprecise specs can be until they have this conversation for the first time.

"Okay, I understand what to do when {thing} happens. What do you want done when {not thing} happens?"

"Oh, umm..."

"{Not thing} can happen, right?"

"Oh yeah, all the time."

"So was the plan when it does?"

"I'm not sure..."


IME, this is the number one reason outsourcing (whether on-shore or off-shore) fails for many projects.


We've been prompt engineers all along.


Touché.


When I was new, I saw one of my more experienced colleagues ask a few questions that together saved the company more than $1 mil each year. ChatGPT might be a threat to automate some low-level tasks or help eliminate bugs, but it is nowhere near ready to evaluate the context of a system, understand its history, or think* through the consequences of a major business decision.

* or think at all, in any meaningful way.


Though if an AI comes along with the capacity to include more context (i.e., all company financials, communications, market analysis, etc.), it might be even more effective than a human with precise context.

Communication might be strictly email in the future, or something else that could be pipelined into the "AI" for context. Video calls might make it in too at some point. Face-to-face meetings strictly prohibited.


I agree with you to a point, but I think the only reason it can't understand the context of a system is that it hasn't been trained on that system's code and documentation, which is obviously coming soon.


I'm not sure training these models on code and documentation will make that much of a difference. These models struggle significantly with subtlety, relevance, and correctness. They also don't have a theory of their own knowledge or confidence, and so tend to "hallucinate" and put out confidently worded nonsense, especially for complex and nuanced topics.

A big part of my job in software is having a sharp grasp of my own ignorance, the ability to weigh a variety of tradeoffs, and the ability to convey my confidence in my own and my team's abilities. I'm not sure this is possible for this generation of AI.


The hallucination part is due to a lack of constraints. The AI can recognize constraints, but it can't recognize what it doesn't know for lack of context.

Prohibit all physical meetings. Force all communication through mediums that can be pipelined. Feed everything in (accounting, contracts, law, etc.). The work will then be to architect the AI to produce the optimal response.

The first company to figure this out will be too far ahead. I don't think anything will come anywhere close to competing, including nation states.


The problem is not the system, but the context of the system.


True, so true.

We work as translators. We translate intentions into actual descriptions.


The same is true in technical sales. Obviously you need a sufficiently technical background to be able to _do_ stuff, but the primary value you bring is probing further than "we want to do X". The prospect/customer has maybe done what they are asking twice; you've seen it done hundreds of times.


You just described most SQL analysts I've worked with.


Yeah, that is an issue a friend of mine has right now with his team. He randomly ended up as the manager of the data analytics team despite having no analytics background (or really much programming background either). One of his main frustrations is that the analysts do not understand the data model or the business. Before he became their manager, his team just took report requests, wrote some SQL, and delivered some numbers, without understanding what those numbers were supposed to mean or the quirks of the underlying data.


Clearly, the correct move here is to replace the entire team with Looker and let every department head create their own dashboards to track arbitrary KPIs with no understanding of the product or underlying data model. /s


Who needs a data model when you can just dump a bunch of Excel sheets and database dumps in a shared folder, call it a data lake, and encourage anyone who needs a report to just throw it together in the free version of Power BI?


The correct move seems to be to replace the team with a set of oracle bones. From the very limited information we have it seems to me like the team was used to justify whatever conclusions were already made before consulting them.


The whole practice is kind of a proof-of-work scheme for credibility and liability laundering: even if you intend the team/company to follow your conclusions regardless of what anyone else thinks or says, getting some analysts or outside consultants to burn a non-trivial amount of time and money evaluating the situation before rubber-stamping your proposal may be what it takes to sell your ideas to the rest of the team/company. Such an exercise is especially important if you want to protect yourself from having your head served on a platter after your hare-brained idea fails spectacularly.


> He randomly ended up as the manger of the data analytics team despite no analytics background

I assure you, it wasn't random. It was punitive. ;)


Honest question: how rampant is this kind of bullshit politicking in tech companies, and how does this differ by industry, size, company age, etc.? My friends who have worked at big names like F and T have described the cultures there as follows: "it's as if everyone who studied too much in high school and college missed all the parties and bullshit, so they're making up for it 'on campus' here". And the stories from folks at G are not really better, though allegedly they tend to exercise politics more in the quiet-sabotage style. I feel like I'm describing races from Star Trek; company cultures are weird.


I'm pretty sure the GP was a joke.

Whoever made the decision very likely wasn't intending to punish the person. It was only the consequence.


And it is not really a punishment, yet at least. He likes his new job despite the mess he was put in. He loves it because he learns so many new things (analytics, how to be a better manager, etc.) and because he has managed to make his team actually deliver a lot of value to the company in spite of these problems. It is possible that the challenge will be too big for him and it will burn him out, but so far he likes it.

And, no, it is very unlikely to have been intended as a punishment. More likely he got it because he was too interested in getting good reports. The usual accidental volunteering: if you are too interested in something, you end up doing all the hard work.


Definitely a joke. It only FEELS punitive!


The larger the company, the worse this becomes. When you hit mega level, like government, the IT side barely knows what the business does and has no idea what anything means. If you're lucky, they may know how to keep your application from completely failing. If you need any major development, you need to compete against everyone else in the government for IT resources.


This is a key lesson I had to learn as a manager: people learn what they need to turn their inputs into outputs well enough not to be fired. It's management's job to internalize the business and its needs and make sure that the individual contributors have an accurate mental model of the world.

I wonder what it would look like if we had people across the business working on the same problem together rather than a game of telephone, which is how these data requests end up.


I think that's because most SQL analysts who understand the data model, assumptions, quirks, etc. usually get promoted into other positions.


Unless they actually like what they do...

It can actually be pretty rewarding to be the person who knows most about the data in the company, while solving logic puzzles during the day.

PS: I do hope most analysts solve more interesting problems than the ones in TFA.


Is that TFA as in RTFM?


Yes, maybe that acronym was in fashion a decade ago. And maybe on another forum like /.

But yes, similar meaning; just replace the M with A = Article.

https://news.ycombinator.com/item?id=28436632


Yeah, IME the good business-savvy SQL analysts get sucked into non-SQL roles and nail it because they write their own scripts.


Why so? Do you mean scripts are better than SQL?


A file of SQL code is commonly referred to as a script.


*SQL scripts


Except now with no accountability whatsoever. Just a magic box that you get to worship.


You can easily add more components to start thinking about models/assumptions/etc., like adding interfaces to data catalogs (e.g. AWS Glue Data Catalog).

As part of a POC, I built a similar bot, though without recursion for debugging and iterative query building. It does the following:

- It predicts the most probable entities from the question.

- It searches the AWS Glue Data Catalog for the most probable/useful tables.

- It builds an Athena SQL query from the N most useful tables.

It obviously gets it catastrophically wrong sometimes, but hell, it was a 3-hour POC. If you can build better indices that map entity->table relationships, it should get better at searching tables. Add this kind of recursive/iterative debugging of queries, and you get at least something near a junior-level SQL analyst.

These kinds of bots are analogous to Stable Diffusion: they DO need a good prompter/puppeteer/solution-verifier. Most non-senior data analysts also need one anyway.
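
For anyone curious, here's roughly what that pipeline looks like in Python/boto3. This is a sketch, not the actual POC code: call_llm is a hypothetical placeholder for whatever LLM API you use, and the database/bucket names are made up.

    # Rough sketch of the 3-hour POC pipeline described above. Not production code:
    # - call_llm() is a hypothetical stand-in for your LLM API of choice.
    # - The S3 bucket name is made up; swap in your own.
    import boto3

    glue = boto3.client("glue")
    athena = boto3.client("athena")

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around an LLM (ChatGPT, etc.)."""
        raise NotImplementedError

    def guess_entities(question: str) -> list[str]:
        # Step 1: predict the most probable entities mentioned in the question.
        answer = call_llm(f"List the business entities referenced in: {question}")
        return [e.strip() for e in answer.split(",") if e.strip()]

    def find_candidate_tables(entities: list[str], max_results: int = 5) -> list[dict]:
        # Step 2: search the Glue Data Catalog for the most probable/useful tables.
        tables = []
        for entity in entities:
            resp = glue.search_tables(SearchText=entity, MaxResults=max_results)
            tables.extend(resp.get("TableList", []))
        return tables

    def build_and_run_query(question: str, tables: list[dict]) -> str:
        # Step 3: have the LLM draft an Athena SQL query from the N most useful
        # tables, then submit it. No iterative debugging here; that's the
        # recursive part the POC skipped.
        schema_hint = "\n".join(
            f"{t['DatabaseName']}.{t['Name']}: "
            + ", ".join(c["Name"] for c in t.get("StorageDescriptor", {}).get("Columns", []))
            for t in tables
        )
        sql = call_llm(f"Write an Athena (Presto) SQL query answering '{question}' "
                       f"using only these tables:\n{schema_hint}")
        resp = athena.start_query_execution(
            QueryString=sql,
            ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # made-up bucket
        )
        return resp["QueryExecutionId"]

    question = "How many active customers did we have last month?"
    tables = find_candidate_tables(guess_entities(question))
    execution_id = build_and_run_query(question, tables)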


That's not really fair to the human. The time to learn basic SQL may be longer, but the algorithm will never develop intuition or independence, while many junior analysts will eventually operate effectively on their own.

It’s a neat tool for analysts as a query generator - I would use it in situations where I’m not familiar with the schema, but it would become less useful as I learn.


Yeah, maybe I should've said "first-weeks/months" level junior data analyst.

But hell, as an analyst I would've paid a lot for a tool that searched intelligently through giant data warehouses (or whatever consultants call them now) and at least gave you probable matches.

Now that same thing exists and you can even finetune its "DSL" towards your own organization.


We're building https://www.olli.ai/ to help with this. Users can ask questions in natural language, but more importantly, Olli can suggest what questions to ask based on the data.


But it does lower the bar for asking questions a lot. Before you ask a human, you have to do a sanity check. The most obvious case is asking the same thing as last week, but it could also be something readily available elsewhere.



