Hacker News

Excluding QA wasn't a plus. Quality works better when there's an advocate from the beginning.

Problem is, a lot of companies only engage QA for post-build integration testing. They try to run it as a mini-waterfall within the agile cycles, which doesn't play well with agile and drags the process out. At that point, leaving them to their own horizontal meetings seems like a plus, but there's a ton of opportunity cost in not having them in the primary conversation.

Other companies try to leverage QA for unit tests, but those really should be written by whoever wrote the interface, as part of ensuring their intended contracts are fulfilled. That makes writing unit tests not really a good QA role, beyond making sure they exist.
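As a sketch of that division of labor (the class and test names here are illustrative, not from any real codebase): the developer who writes an interface also writes the unit tests pinning down its contract, and QA's role reduces to checking that such tests exist and actually cover the contract.

```python
class Stack:
    """Toy interface; its author owns the contract and the tests below."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()


# Contract tests, written by the interface's author alongside the code
# (pytest-style, discovered by the test_ prefix):
def test_lifo_order():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2  # last in, first out
    assert s.pop() == 1


def test_pop_empty_raises():
    try:
        Stack().pop()
    except IndexError:
        pass  # the documented contract for an empty stack
    else:
        raise AssertionError("expected IndexError")
```

The point is that only the author knows which behaviors are contract and which are incidental, so they are the right person to encode that distinction.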

The exception would be if you're going to pull QA in and pair-program/test TDD style--which is its own potential boondoggle. But you need real SDETs at that point, as opposed to scripters. Real SDETs aren't terribly common, and are not terribly cheap as they're niche specialists.

The best solution I've seen is to split the advocacy/short-term roles and the test/architecture/long-term development roles (even if it's two hats on one person), have them in the meeting for the former and have the horizontal run their own projects with milestone syncs for the latter.

At that point you can treat the short-term role as a team member, and the long-term roles as an internal vendor or external dependency (i.e. no expectation of control). Any sort of one-size-fits-all approach won't work.




Just curious - when you say "(real) SDET," can you expand on what that means for you? I ask because I started out as an SDET, but my title is now QA Automation Engineer. I sort of see myself as a developer working in a QA role - I work on testing frameworks and the architecture of those frameworks, and then write integration tests against whatever a company's product is - whether it's an internal API, an external-facing API, a single-page website, or a complicated multi-page website. I was just curious if there were additional responsibilities or tech stacks I could be learning that other companies' "real" SDETs work with. In my nearly 6 years in this niche, I've only been at two companies, and they let me do my thing, so I'm not really sure if I'm "missing out" on knowledge or something.

I am also curious as to what you mean by "Quality works better when there's an advocate from the beginning." In my experience, even when (or if) I am invited to agile Sprint Grooming or Planning sessions, I am mostly there to report on how a bug works and how severe I think the issue is - no matter how well I document a bug I found, or whether product is there (as they should be deciding severity). I'm not sure how to meaningfully leverage my QA experience into helping developers drive their design and architecture toward more bug-free, quality code. I do notice that once I join a company and find/report zillions of bugs, their code slowly gets better over time, but isolating an incident and then measuring my impact is nigh impossible.

Anyway, I know this has nothing to do with the topic, I just wanted to see what you thought because SDET/QA talk is rare to find on this website.


Not the parent poster, but I would guess that when they say "(real) SDET" they are referring to someone who creates automated tests and testing scripts, possibly uses white-box / gray-box approaches, and writes test cases for sub-components and layers.

Compare that to someone who only does black-box testing, manually, via a user interface.


It's actually more that an SDET has classically been the one who writes the tools and infrastructure that QA Automation Engineers would use to write scripts.

It implies an education/experience background equivalent to any other T&I Software Engineer's, but usually with additional experience in QA. Alternatively, it could be a QA engineer who would genuinely qualify for a general SWE promotion path, but who chooses to stay in quality as a domain. Either way, it implies qualifications equivalent to a SWE at the same level.

The "real" bit is because over the last half decade or so, SDET has been blurred by title inflation. Scripters with no real experience maintaining longer-term or larger software projects, who wouldn't be hireable as generalist SWEs, have been getting the title.

That's one reason you still get "lesser than SWE" status arguments about SDETs, when IME they actually tend to command a little more in the market because of the need for experience in two separate disciplines.

My response to the comment above you has a lot more detail expanding on that.


I've been in/around QA for about 20 years now, after 5 years as an app dev and doing a career semi-shift during the 2001 downturn. For the last ten years or so, I alternated between SWE and SDET titles depending on the job, and before that between QA and QA Automation Engineer. That history is where I'm coming from here.

In short, there's been a lot of title inflation in QA, in part to avoid the "black box tester" stigma for both the employees and the organizations alike.

One of those inflations has been calling what were QA Automation Engineers--generally people writing scripts--SDETs. I saw it start to rev up in the late 2000s, and saw it accelerate (maybe not coincidentally) in the early 2010s, shortly after a GTAC talk where they claimed SDETs make 30% more on average than QA. I suspect a lot of people asked for title changes, or a lot of managers realized they'd also look 30% better managing SDETs than QA and advocated for them.

Thing is that SDET as originally conceived was/is supposed to be a Software Engineer who specializes in testing, with the same qualifications and training as any mainline SWE. The normal domain of SDET would not be scripts, per the other response, it'd be tools, infrastructure and harnesses, maybe seeding test frameworks for incremental development during test creation, usually along with some level of QA thought leadership.

The primary difference there would be the ability to scope, implement, and maintain longer-term software engineering projects. A QA Automation Engineer might be as skilled a coder, but test scripts tend to be short bursts of isolated code and don't require as much architectural or lifecycle experience.
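To illustrate the distinction (hypothetical names, a deliberately minimal sketch): a QA Automation Engineer's test script is a short burst of isolated code, while the SDET owns the reusable harness it plugs into, which is where the architecture and lifecycle concerns actually live.

```python
# A one-off automation script: short, isolated, nothing long-term to
# maintain. (The client API here is hypothetical, for illustration only.)
def check_login_script(client):
    resp = client.post("/login", {"user": "qa", "pw": "secret"})
    assert resp["status"] == 200


# The kind of code an SDET would own: reusable infrastructure with its own
# design decisions -- retries, result aggregation, an extension point that
# many such scripts plug into -- and a maintenance lifecycle to match.
class Harness:
    def __init__(self, client, retries=2):
        self.client = client
        self.retries = retries
        self.results = []  # (case name, outcome) pairs for reporting

    def run(self, case):
        last_err = None
        for _ in range(self.retries + 1):  # retry flaky cases
            try:
                case(self.client)
                self.results.append((case.__name__, "PASS"))
                return
            except AssertionError as err:
                last_err = err
        self.results.append((case.__name__, f"FAIL: {last_err}"))
```

The script above could be written and thrown away in an afternoon; the harness has to survive years of changing products and contributors, which is exactly the scoping/maintenance experience being described.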

But SDETs are also useful in cases where you want to blur SWE and test, since most SDETs have heavy experience in both and can talk shop and build rapport with both sides. That comes in very handy when pairing with a mainline SWE, hence mentioning it as a near-requirement for that process.

No offense to you and your journey, but I'd have never stepped back from an SDET role to a QA Automation Engineer. It really is objectively worth less in the market, for one thing, and there really is a stigma there with QA in the title. It shouldn't be that way, but it is, and inflation has been the natural response.

Re: advocating for quality from the beginning,

Testing is by nature quality control. You can do it early, you can do it late, but at the end of the day it's a screening process.

That's different from Quality Assurance, which is monitoring and influencing the process and decisions as they're made, to promote and advocate for quality when determining the good/fast/cheap project management compromises.

That includes arguing for testability as a primary requirement, arguing against overly complex requirements likely to produce emergent bugs you can't easily catch with QC, pointing out gotchas (hey, everyone is on vacation in December, expect rushed, crappy check-ins in November), basically anything you can do to reduce risk.

So in that role you're not a tester, per se, but a Subject Matter Expert in quality, hopefully influencing a whole team toward testable products, intelligent context-driven processes, good use of the test pyramid, static analysis and other code quality techniques (think SonarQube), etc.
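One concrete (and hypothetical) example of arguing for testability at design time: pushing for dependencies like the clock to be injected rather than hard-coded, so the QC side can actually exercise the edge cases later.

```python
import datetime


# Hard to test: the clock is buried inside the logic, so a test can only
# ever see "today's" answer.
def is_weekend_untestable():
    return datetime.date.today().weekday() >= 5  # 5=Sat, 6=Sun


# Testable variant: the clock is injected, so a test controls it and can
# pin down Saturday, Sunday, and weekday behavior deterministically.
def is_weekend(today=None):
    today = today or datetime.date.today()
    return today.weekday() >= 5
```

It's a toy, but the pattern scales: the earlier this kind of seam is argued for, the cheaper every downstream test becomes.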

All this is over and above just testing, because you simply can't test quality into a project; you can only delay shipping until it's good enough. That type of gatekeeping is an adversarial tension fraught with both process and organizational risk, and it should be avoided.

When you're an obstacle, people will have the natural tendency to work around you and resent you when they can't, which makes it an unsustainable dynamic to rely on as your 80% case. Ideally, most testing should just confirm what you already know: you have a solid process that caught all major bugs before you ever did a final pass.

Even if your org is going to just go full QC instead of QA (which is pretty typical, unfortunately) being able to absorb the full architecture and use cases around what you'll be testing is vital for coming up with an efficient and sustainable test strategy.

If you've been in QA any length of time, you know that these qualities--efficient and sustainable--aren't exactly associated with that corner of the process. A lot of that is because QA/test is often relegated to reporting on how a bug repros and how severe the issue is. You could be doing so much more, but that requires cooperation from the people around you.

When I was doing my time as a QA Engineer before swinging back to SDET, I did this by leveraging my SWE experience to talk shop with devs in low-level terms until they trusted me. But it's a long road, particularly if you didn't have a flip-flop like I did.

I hope that answered your questions. It does reflect a degree of personal bias, of course, but it's pretty well-informed by a long time in this corner of the industry.



