"Understand the importance of the Trinity of delivery: Delivery manager, product owner, tech lead"
I'm only familiar with teams too small to have separate roles like this. How does good software planning scale down to smaller teams -- say 5 or 10 people in the whole organization? Ultimately someone has to be responsible for the same concerns, but I wonder how it maps.
In general I'd love to see a comparison of software teams at different sizes. What are the key, identified roles in a company of 5, 50, or 500? What are habits that smaller organizations ought to borrow from larger ones?
I'm a lead on a team of 13 right now: myself, 6 developers, a QA lead, 2 testers, analyst, scrum master, and product owner.
We're one of ~15? teams in my organization (part of an overall IT shop of ~3000), but we're fairly separate from the others in business and technologies. We have a delivery manager for our team and a couple other efforts. There's a dedicated tech lead of tech leads, program level PMs, lead product owner, 2 architects. There are also some other folks from the business side, change management, etc that we work with regularly.
We're going through a bit of a change at the moment, removing the QA and analyst roles, spreading scrum masters across 2 teams, and limiting team size overall. So by the end of the year I expect to have myself, the product owner, the analyst moving into the scrum master role, 1 QA as a developer, 1 more QA maybe as a developer (we'll see how he adapts), and 4 developers.
It is a really big organization with dedicated departments for security, security testing, application security, performance testing, accessibility testing, brand management, change & release, test data, DB, DB security, gateways, shared components. And those are only the folks I interact with - there are mainframe folks, server maintenance (Linux and Windows), and the list goes on.
Some of that is moving to the teams now, some later. It's a pendulum though, things will be more generalized and distributed until something breaks spectacularly and someone will ask "why don't we have dedicated testers? why are we trusting every ol' dev with DB access?" and so on.
There are many industries which have found value in having specific people do the paperwork and deal with process. I don't think it's needed under 10 or so people if you have a good team, but once you're well into the 20+ range, I don't understand why so many devs are so set against giving this type of work to someone else.
It's certainly not what I want to be doing day-to-day, yet our tools aren't smart enough today for me to do well in a larger org without it.
Yes and no. On the useful side, they own some of the process inherent in large organizations. But they also act as a conduit to push that process to team members.
In practice, our scrum masters have been a place for the PMs, Analysts, and QA leads to go as their positions have been displaced. I would like to see more technical scrum masters that can own technical impediments, but that doesn't seem to be happening yet.
I also think that spreading scrum masters across teams will disconnect them from the specifics of the work and allow them to act more as coaches than non-technical team members.
They forgot QA lead. I know it's currently posh in some companies to do away with QA (the YOLO software development methodology?). I've found that trying to deliver software without a strong QA lead on equal footing with the tech lead and PM/PJM is a recipe for disaster. QA is often the only source of truth you can rely on when you want an accurate assessment of where things stand.
The thing is that tests generally have to be a core part of the development process now, so if the QA isn't being driven by the devs then they aren't doing software engineering quite up to contemporary standards.
Having said that, I have had a lot of jobs on small teams, especially some years ago, where testing was inadequate or not really part of the development process, and I always tried to convince them to hire one good QA person. They never listened. Instead, on a few occasions, a user or business analyst would be assigned to QA, which I explained was a joke. But I guess I have just not had the best-budgeted projects or something.
I don't think testing and QA hit the same sweet spot. Trying to automate acceptance tests often gets into brittle territory, doubly so on mobile apps. Obviously it depends a lot on the nature of the product, but in a lot of cases human QA (perhaps with QA engineers building some partial automation) leads to a higher quality product at a lower cost than trying to get fully automated test coverage for everything.
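To make "brittle territory" concrete, here's a minimal sketch (Python with Selenium; the URL, page structure, and data-testid hooks are all illustrative assumptions, not a real app) of the difference between a locator that breaks on every layout tweak and one anchored to something the team controls:

    # Hypothetical checkout acceptance test; URL and selectors are made up for illustration.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://staging.example.com/checkout")

    # Brittle: tied to the exact DOM layout; any redesign or reordered <div> breaks it.
    # buy_button = driver.find_element(By.XPATH, "/html/body/div[3]/div[2]/form/div[5]/button[1]")

    # Less brittle: keyed to a stable hook the devs own and agree not to rename.
    buy_button = driver.find_element(By.CSS_SELECTOR, "[data-testid='checkout-submit']")
    buy_button.click()

    status = driver.find_element(By.CSS_SELECTOR, "[data-testid='order-status']")
    assert "Order confirmed" in status.text
    driver.quit()

Even the less brittle version still needs a human to notice when the confirmation copy or the flow itself changes, which is part of why QA engineers doing partial automation tends to beat chasing full coverage.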
I was a QA manager for a while. It's been over 10 years since I've worked somewhere that had any sense of QA/Test.
Now we have Horde, err, Scrum and DevOps. Proudly. It's like the industry just gave up.
I don't know what it's like at the allegedly highest-functioning teams (environments) at Google or Facebook. But out here in the wild, it's madness.
Dan Luu has been best at capturing and articulating "contemporary standards" for what passes as "quality" in software development. http://danluu.com (I wish I was 1/2 as smart as Dan.)
"Delivery manager" is basically how people say "project manager" now. Their job is to set priorities, triage incoming issues, make sure everyone knows what everyone else is doing (when relevant); coordinate with third parties or external integrations, etc. Product owner is who you're building the product for - in client services it would be your client; for internal projects it would be who is "driving" (more bizspeak) the project - could be the CEO or a marketing person or a VP or whatever. Tech Lead is responsible for making technical decisions, architecture, hopefully writing some code, etc.
"Project Manager" has negative stigma. I'm not entirely sure why, but I think the perception has to do with the way that project management might be done in larger organizations - concern for minutia and TPS reports over actual accomplishments.
The Project Management Institute and their certification process has led to a situation where project management is seen as a domain-agnostic discipline. As a result, many certified project managers know little about the domains they are managing. Furthermore, coming from the construction industry, many of the traditional project management processes and thought is linear and waterfall in nature. They have taken steps to adopt agile and iterative processes in the past few years, but bear great legacy costs when it comes to applying project management to technology projects (where many projects fail for a variety of reasons).
The minimum team size is one: A single person building a project.
I'd guess the minimum successful team size is in the 2-3 range. At the very least you must have a client or customer of some kind and a developer or implementer.
"What's the budget and the value proposition?" should be #1. Projects without a clear purpose (this happens way more than you might think) are sinking ships you've got to get away from.
I would add "Make sure every stakeholder has the same idea what the project will be". I have seen a lot of projects where once you talk to all stakeholders it becomes pretty clear that there is no shared understanding of what we are trying to achieve. Especially be wary of senior managers injecting their pet ideas.
See also: https://www.youtube.com/watch?v=8fz-AowdiL8 - a 1 hr webinar from the folks behind Microsoft Press' recent Software Requirements books (Seilevel) on how to document these requirements and make sure folks are on the same page.
This 1000 times. I can't tell you how often I've seen this happen. Many times it's at least a factor in failed projects.
Clues this might be happening: senior managers pulling you aside to discuss their ideas for the project. Getting different questions on project updates than the goals you're building towards in scrum. Constantly changing short term goals.
While one is spending their time on the first points imagining infrastructure (for usage that might one day materialise), the company goes out of business because they failed to solve a problem anyone cared about.
And continually revisit it. When you're knee deep in requirements, corner cases, and industry best practices, you can lose sight of why you were building the damn thing in the first place, and end up with something that doesn't satisfy that need.
How do you define done? Software is never done.
A better way to look at things is whether a feature is in good enough shape and whether moving on to working on other things is likely to be more beneficial.
You're not building "software"; you're building something for someone to use. Done is defined by what that person needs to do their work. Done should be defined up front, or at least as far ahead as possible, so it's clear.
One of the first questions I ask when probing about the state of a project or task is "how do you know when you're done?" Too often, the answer is a shrug, or a deer in headlights look!
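One lightweight way to avoid the shrug is to write "done" down as executable acceptance criteria before the work starts. A minimal sketch (pytest style; report_generator and its API are hypothetical placeholders for whatever the user actually needs):

    # Hypothetical "definition of done" for a monthly reporting feature, written up front.
    # report_generator is a placeholder module, not a real library.
    from report_generator import generate_monthly_report

    def test_report_covers_requested_month():
        report = generate_monthly_report(year=2016, month=3)
        assert report.period == "2016-03"

    def test_report_totals_match_line_items():
        report = generate_monthly_report(year=2016, month=3)
        assert report.total == sum(item.amount for item in report.line_items)

    def test_report_exports_to_csv():
        report = generate_monthly_report(year=2016, month=3)
        assert report.to_csv().startswith("date,description,amount")

When these pass, you're done; anything beyond them is a new conversation with the person you're building for.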
Super important even if you're doing client based work (whether pure custom development or customizations on top of a platform).
Your customers will each have their own individual pet features that they care about, but if you don't drive toward business value (meaning you just build whatever random shit the client tells you to build) then it will be a failure 100% of the time.
Might want to add company size and team size(s). Many of these points suggest it is aimed at small to medium-sized companies. For an enterprise, the checklist does not entirely hold due to more specialized roles. Though there are some good tips.
Scrum = Shite verbosity about agile process and training for mgmt larva.
Agile = Hyperbole.
AWS = I don't know better and my developers are < 35 years old or on the make (+ stock). My customers are just stupid.
Security = big $$, modest skills, certs from the neglected CompTIA and Northcut industries. I'm a CEH!! So is every script kiddie on the internet.
tester: can't write working code, useless to the degree that you come in when they need you to justify JUnit crap numbers you generated when drunk. This will scale if 12 Coors == 12 Saudi virgins.
analyst: I'm an experienced tester.
tech lead: best resume and best bully of the bunch. I can understand Stack Exchange!
QA: I like your Doxygen docs and the test numbers, but I have a problem connecting to server(x) at ip(x) from Greenland at 2pm on a Sunday. Any ideas?
team: Bunch of backstabbing co-moderators.
My primary requirement is a demonstration that the site can be torn down and recovered at any time – you never know when you will have a DNS problem or someone takes over your domain.
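In practice that can be a recurring drill. A minimal sketch (Python; the make targets, hostname, and health endpoint are assumptions about your own tooling, not a prescription):

    # Hypothetical tear-down-and-recover drill; commands and hostnames are placeholders.
    import socket
    import subprocess
    import urllib.request

    subprocess.run(["make", "destroy-staging"], check=True)    # tear the environment down
    subprocess.run(["make", "provision-staging"], check=True)  # rebuild from scratch (IaC, backups, DNS)

    # Verify DNS still resolves and the site actually serves traffic again.
    ip = socket.gethostbyname("staging.example.com")
    with urllib.request.urlopen("https://staging.example.com/health", timeout=10) as resp:
        assert resp.status == 200, "site did not come back after recovery"
    print("recovered, resolves to", ip)

If nobody can run the drill without hunting down tribal knowledge, you don't really have a recovery story.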
Unrelated minor rant: Pocket is hopeless when dealing with bullet points. It so often skips list items, especially if they have links in them. I understand that they are trying to avoid including navigation stuff, but it's overzealous way too often.
I wish I could integrate Instapaper with my Kobo ereader instead.
You might feel like you're providing harsh yet practical advice, but you're not explaining your reasoning which is why you're getting downvoted. If you forced yourself to explain your ideas you might discover some aren't as concrete as you think, and you might strengthen some ideas that work. Posts like this have a "I have it all figured out" attitude that I would argue indicates a lack of interest in learning, and I don't think that's what you want.
God forbid any of them gets home in time to eat dinner with their family! I can't think of anything worse than a sustainable work/life balance and employees who work the hours their contract specifies.
How do you identify superstars? Do you look at their code, or do you simply see the UI when it's done? I've seen a couple of superstars/JS ninjas who produced unmaintainable crap, but a lot of it, and it took actually looking at their code to see why. They didn't last.
Fun to be around if you're their boss, has a confident yet subtly incorrect opinion on every single aspect of programming, climate science, dietology and immigrants that are not them, personally stabbed one of the "weaklings" in the face, knows lots of GoT trivia, "weaklings" burned out while fixing their bugs.