Dev-owned testing: Why it fails in practice and succeeds in theory (acm.org)
119 points by rbanffy 15 hours ago | 147 comments




The conversation is usually: devs can write their own tests. We don't need QA.

And the first part is true. We can. But that's not why we have (had) QA.

First: it's not the best use of our time. I believe dev and QA are separate skillsets. Of course there is overlap.

Second, and most important: it's a separate person, an additional person who can question the ticket, and who can question my translation of the ticket into software.

And lastly: they don't suffer from the curse of knowledge on how I implemented the ticket.

I miss my QA colleagues. When I joined my current employer there were 8 or so. Initially I was afraid to give them my work, afraid of bad feedback.

Never have I met such graceful people, who took the time to understand something and talked to me to figure out where there was a mismatch.

And then they were deemed not needed.


There are layers to this:

1) There are different types of tests, for different purposes. Devs should be writing some of them. Other types and forms of testing, I agree, are not in many devs' sweet spot. In other words, by the time code gets thrown over the wall to QA, it should already be fairly well vetted, at least in the small.

2) Many, but far from all, QA people are just not skilled. It wasn't that long ago that most QA people were washed-out devs. My experience has been that while testing isn't in the sweet spot of many devs, they've been better at it than the typical QA person.

3) High quality QA people are worth their weight in gold.

4) Too often devs look at QA groups as someone to whom they can offload their grunt work they don't want to do. Instead, QA groups should be partnering with dev teams to take up higher level and more advanced testing, helping devs to self-help with other types of testing, and other such tasks.


> Too often devs look at QA groups as someone to whom they can offload their grunt work they don't want to do.

That's a perfectly legitimate thing to do, and doing grunt work is a perfectly legitimate job to have.

Elimination of QA jobs - as well as many other specialized white collar jobs in the office, from secretaries to finance clerks to internal graphics departments - is just a false economy. The work itself doesn't disappear - but instead of being done efficiently and cheaply by dedicated specialists, it's dumped on everyone else, on top of their existing workloads. So now you have a bunch of lower-skill busy-work distracting the high-paid people from doing the high-skill work they were hired for. But companies do this, because extra salaries are legible in the books, while the heavy loss of productivity isn't (instead it's a "mysterious force", or a "cost disease").


The problem of handoffs makes this work far from cheap.

And tests are not dumb work. TDD uses them to establish clarity, helping people understand what they will deliver rather than running chaotic experiments.

Highly paid people should be able to figure out how to optimize and make code easy to change, rather than ignoring technical debt and making others pay for it.

QA just postpones fixing the real problem: code that is hard to change.


The best QA people I've worked with were effective before, during, and after implementation - they worked hand in hand with me to shape features so they were testable, collaborated on the implementation of the harness for the additional testing they wanted to do beyond what was useful for development, and followed up with help finding and fixing bugs and adding regression tests to prevent that category of error from happening again.

At the very least I want someone in QA doing end-to-end testing using e.g. a browser or a UI framework driver for non-web software, but there's so much more they do than that. In the same way I respect the work of frontend, backend, infrastructure, and security engineers, I think quality engineering is its own specialized field. I think we're all poorer for the fact that it's viewed as a dumping ground or "lesser"


> Many, but far from all, QA people are just not skilled

Most (but not all) devs are just not skilled


>High quality QA people are worth their weight in gold.

They absolutely are, but I've only met a couple high quality QA people in my career.


That's because we don't value QA in the way that matters.

If you're a talented SDET, you're probably also, at least, a good SDE.

If you'll make more money and have more opportunity as an SDE, which career path will you follow?


Also, most people passionate about software would rather be building than testing, especially if pay is at least equal.

Testing is probably my favorite topic in development and I kind of wish I could make it my "official" specialty but no way in hell am I taking a pay cut and joining the part of the org nobody listens to.

That, and this

> Many, but far from all, QA people are just not skilled

can also be said of developers.


I am going to say that outside the HN echo chamber, it is closer to all than to the other side. Have you been to Fortune 1000 non-software corps? If you threw away 90% of their IT people, people would barely notice. Probably just miss John's cool weekend stories on Monday (which is basically almost weekend!). LLMs drive this home, painfully; we come into these companies a lot and it is becoming very clear most can be replaced today with a $100 Claude Code subscription. We see the skilled devs using Claude Code on their own dime as one of their tools, often illegally (not allowed yet by company legal), and the rest basically, as they always did, trying to get through the week without getting caught snoring too loud.

I've also met about the same number of high quality developers in my career.

Most people are mid.


The average QA/SDET I've worked with is far, far less capable than the average SDE.

Best QA people I worked with were amazing (often writing terrific automated tests for us). The worst would file tickets just saying "does not work"

I sometimes suspect that the value of a QA team is inversely proportional to the quality of the dev team.


> I sometimes suspect that the value of a QA team is inversely proportional to the quality of the dev team.

My experience has been that this is true, but not for the reason you likely intend. What I've seen is the sort of shop that invests in low tier QA/SDET types are the same sorts of shops that invest in low tier SEs who are more than happy to throw bullshit over the wall and hand off any/all grunt work to the testers. In those situations, the root cause is the corporate culture & priorities.


> There are different types of tests, for different purposes.

I'm unconvinced. Sure, I've heard all the different labels that get thrown around, but as soon as anyone tries to define them they end up being either all the same thing or useless.

> Devs should be writing some of them.

A test is only good if you write it before you implement it. Otherwise there is no feedback mechanism to determine if it is actually testing anything. But you can't really write more than one test before turning to implementation. A development partner throwing hundreds of unimplemented tests at you to implement doesn't work. Each test/implementation informs the next. One guy writes one test, one guy implements it, repeat, could work in theory, I guess, but in practice that is horribly inefficient. In the real world, where time and resources are finite, devs have to write all of their own tests.
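
To make the loop concrete, here is a minimal test-first sketch in Python. The slugify function is purely hypothetical; the point is only the write-one-failing-test-then-implement rhythm described above.

    import re

    # Step 1: the test comes first. Run it while slugify() is still missing (or a
    # stub that raises NotImplementedError) and watch it fail - that failure is
    # the evidence the test actually exercises something.
    def test_slugify_collapses_spaces_and_lowercases():
        assert slugify("  Hello  World ") == "hello-world"

    # Step 2: the smallest implementation that makes the test pass; the next test
    # and the next slice of implementation then inform each other, as above.
    def slugify(text: str) -> str:
        return re.sub(r"\s+", "-", text.strip()).lower()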

Tests and types exist for the exact same purpose. Full type systems, such as those seen in languages like Lean, Rocq, etc., are monstrous beasts to use, though, so as a practical tradeoff we use "runtime types", which are much more tractable, in the languages people actually use on a normal basis. I can't imagine you would want a non-dev writing your types, so why would you want them to write tests?

> High quality QA people are worth their weight in gold.

If you're doing that ticketing thing like the earlier comment talked about, yeah. You need someone else to validate that you actually understood what the ticket is trying to communicate. But that's the stupidest way to develop software that I have ever seen. Better is to not do that in the first place.


I very rarely worked with good QA.

In my mind a good QA understands the feature we're working on, deploys the correct version, thoroughly tests the feature, understanding what it is and isn't supposed to do, and if they happen to find a bug, they create a bug ticket where they describe the environment in full and what steps are necessary to reproduce it.

For automation tests, very few are capable of writing tests that test the spec, not the implementation, follow sound technical practices, and properly address flakiness.

For example, it's very common to see a test that clicks the login button and, instead of waiting for the login, waits 20 seconds. Which is both too much, and 1% of the time too little.
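
For illustration, a minimal sketch of that anti-pattern and its fix using Selenium's explicit waits in Python; the URL and element IDs are made up, and the post-login "dashboard" element is just an assumed success signal.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.test/login")  # hypothetical login page

    driver.find_element(By.ID, "username").send_keys("alice")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Flaky version: a fixed pause is too slow on good days, too short on bad ones.
    # time.sleep(20)

    # Sturdier version: wait for the actual post-login state, with an upper bound
    # that is only paid in full when something really is slow.
    WebDriverWait(driver, 30).until(
        EC.visibility_of_element_located((By.ID, "dashboard"))
    )
    driver.quit()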

Whenever I worked with devs, they almost always managed to do all this; sometimes they needed a bit of guidance, but that's it. Very, very few QA ever did (not that they seemed too bothered by that).

A lot of QA have expressed that devs 'look down' on them. I can't comment on that, but the signal-to-noise ratio of bug tickets is so low that often you have to do their job and repeat everything as well.

This has been a repeated experience for me with multiple companies and a lot of places don't have proper feedback loops, so it doesn't even bother them as they're not affected by the poor quality of bug reports, but devs have to spend the extra time.


I'll espouse the flip side of this:

I've worked with a handful of excellent QA. In my opinion - the best QA is basically a product manager lite. They understand the user, and they act from the perspective of the user when evaluating new features. Not the "plan" for the feature. The actual implementation provided by development.

This means they clarify edge cases, call out spots that are confusing or tedious for a user, and understand & test how features interact. They help take a first draft of a feature to a much higher level of polish than most devs/pms actually think through, and avoid all sorts of long term problems with shipping features that don't play nicely.

I think it's a huge mistake to ask QA to do automation tests - Planning for them? Sure. Implementation? No. That's a dev's job, you should assign someone with that skillset (and pay them accordingly).

QA is there to drive quality up for your users; the value comes from the opinions they form after using what the devs provide (often repeatedly, like a user) - not from automating that process.


Right - the best QA people need only be as technical as your user base. Owning QA environment, doing deploys, automated testing, etc are all the sort of things that can live with a developer.

They are there to protect dev teams from implementing misunderstandings of tickets. In a way a good Product Manager should wear a QA hat themselves, but I've seen fewer good PMs than good QAs....


Reminds me of how often I've felt a little envious as a dev of how much influence QA people had on effective specification. Whenever a spec appears a little ambiguous (or contradictory) the QA person becomes judge and their decisions effectively become law.

Yes - devs are great at coding, so get them to write the tests, and then I, a good tester (not to be confused with QA), can work with them on what good tests to write. With this in place I can confidently test to find the edge cases, usability issues, etc. And when I find them, we can analyze how the issue could have been caught sooner.

Coz while devs with specialties usually get paid more than a generalist, for some reason testing as a specialty means getting a pay cut and a loss in respect and stature.

Hence my username.

I wouldn't ever sell myself as a test automation engineer, but whenever I join a project the number one most broken technical issue in need of fixing is nearly always test automation.

I typically brand this work as architecture (and to be fair there is overlap) and try to build infra and tooling less skilled devs can use to write spec-matching tests.

Sadly, if I called it test automation I'd have to take a pay cut and get paid less than those less skilled devs who need to be trained to do TDD.


I think there are 3 'kinds' of QA who are not really interchangeable as their skillsets don't really overlap.

- Manual testers who don't know how to code at all, or at least aren't good enough to be tasked with writing code

- People who write automated tests (who might or might not also do manual testing)

- People writing test automation tools, managing and designing test infra, etc. - these people are regular engineers with engineering skillsets. I don't think there's generally a difference in treatment or compensation, but I also don't really consider this 'QA work'

As for QA getting paid less - I don't agree with this notion, but I see why it happens. Imo an ideal QA would be someone who's just as skilled in most stuff as a dev (except they do something a bit different) and has the same level of responsibility and capacity for autonomy - in exchange I'd argue they deserve the same recognition and compensation. And not giving them that leads to the best and brightest leaving for other roles.

I think it's amazing when one gets to work with great QA, and can rest easy that anything they make will get tested properly, and you get high quality bug reports, and bugs don't come back from the field.

Also, it bears mentioning (it's self-evident to me, but might not be to everyone) that devs should be expected to do a baseline level of QA work themselves - they should verify the feature is generally working well and write a couple of tests to make sure this is indeed the case (which means they have to be generally aware of how to write decent tests).


> A lot of QA have expressed that devs 'look down' on them. I can't comment on that, but the signal-to-noise ratio of bug tickets is so low that often you have to do their job and repeat everything as well.

When I was a lead, I pulled everyone, (QA, devs, and managers) into a meeting and made a presentation called "No Guessing Games". I started with an ambiguous ticket with truncated logs...

And then in the middle I basically explained what the division of labor is: QA is responsible for finding bugs and clearly communicating what the bug is. Bugs were not to be sent to development until they clearly explained the problem. (I also explained what the exceptions were, because the rule only works about 99.9% of the time.)

(I also pointed out that dev had to keep QA honest and not waste more than an hour figuring out how to reproduce a bug.)

The problem was solved!


Communicating a bug clearly is testing/QA 101

In my experience, I find that management doesn't understand this, or otherwise thinks it's an okay compromise. This usually comes with the organization hiring testers with a low bar and a "sink or swim" approach.

Having worked with both good and bad QA...

The biggest determinant is company culture and treating QA as an integral part of the team, and hiring QA that understands that expectation. In addition, having regular 1:1s with both the TL and EM to help keep them integrated with the team, provide training and development, and make sure they're getting the environment in which they can be good QA.

And work to onboard bad QA just as we would a developer who is not able to meet expectations.


I used to work with a QA person who really drove me nuts. They would misunderstand the point of a feature, and then write pages and pages of misguided commentary about what they saw when trying to test it. We'd repeat this a few times for every release.

This forced me to start making my feature proposals as small as possible. I would defensively document everything, and sprinkle in little summaries to make things as clear as possible. I started writing scripts to help isolate the new behavior during testing.

...eventually I realized that this person was somehow the best QA person I'd ever worked with.


How did misunderstanding a feature and writing pages on it help? I'm not sure I follow the logic of why this made them a good QA person. Do you mean the features were not written well, and so writing code for them was going to produce errors?

In order to avoid the endless cycle with the QA person, I started doing this:

> This forced me to start making my feature proposals as small as possible. I would defensively document everything, and sprinkle in little summaries to make things as clear as possible. I started writing scripts to help isolate the new behavior during testing.

Which is what I should have been doing in the first place!


I worked with someone a little while ago who tended to do this: point out things that weren't really related to the ticket. And I was happy with their work. I think the main thing to remember is that the following are two different things

- Understanding what is important to / related to the functionality of a given ticket

- Thoroughly testing what is important to / related to the functionality of a given ticket

Sure, the first one can waste some time by causing discussion of things that don't matter. But being REALLY good at the second one can mean far fewer bugs slip through.


Most of the time QA should be talking about those things to the PM, and the PM should get the hint that the requirements needed to be more clear.

An under-specified ticket is something thrown over the fence to Dev/QA just like a lazy, bug-ridden feature is thrown over the fence to QA.

This does require everyone to be acting honestly to not have to belabor the obvious stuff for every ticket ('page should load', 'required message should show', etc.). Naturally, what is 'obvious' is also team/product specific.


I think noticing other bugs that aren't related to the ticket at hand is actually a good thing. That's how you notice them, by "being in the area" anyway.

What many QAs can't do, and what for me separates the good ones from the not-so-good ones, is understanding when findings aren't related and simply reporting them as separate bugs to be tackled independently, instead of starting long discussions on the current ticket at hand.


So QA leadership should notice that the testers are raising tickets like this and step in to give the testers some guidance on what and how they are testing. I've worked with a client's test team who were not given any training on the system, so they were raising bugs like spam-clicking a button 100 times, quickly resizing the window 30 times, pasting War and Peace... Gave them some training and direction and they started to find problems that actual users would be finding.

I didn't mean reporting things that you wouldn't consider a bug and just close. FWIW tho, "Pasting War and Peace" is actually a good test case. While it is unlikely you need to support that size in your inputs, testing such extremes is still valuable security testing. Quite a few things are security issues, even though regular users would never find them. Like permissions being applied in the UI only. Actual users wouldn't find out that the BE doesn't bother to actually check the permissions. But I damn well expect a QA person to verify that!

What I meant, though, were actual problems/bugs in the area of the product that your ticket is about, but that weren't caused by your ticket / have nothing to do with that ticket directly.

To make an example: say you're adding a new field to your user onboarding that asks users what their role is, so that you can show a better-tailored version of your onboarding flows, focusing on functionality that is likely to be useful to them in their role. While testing that, the QA person notices a bug in one of the actual pieces of functionality that's part of said onboarding flow.

A good QA understands and can distinguish what is a pre-existing bug and what isn't and report it separately, making the overall product better, while not wasting time on the ticket at hand.
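
As a rough sketch of the backend permission check mentioned a couple of paragraphs up: an API-level test that verifies the server refuses the call even though the UI hides the button. The endpoint, token, and status codes here are hypothetical.

    import requests

    BASE = "https://api.example.test"  # hypothetical service

    def test_admin_delete_rejected_for_regular_user():
        # Token for a user the UI would never show the delete button to
        # (assumed to come from a test fixture).
        headers = {"Authorization": "Bearer regular-user-token"}
        resp = requests.delete(f"{BASE}/admin/users/42", headers=headers)
        # The button being hidden is not enough; the API itself must refuse.
        assert resp.status_code in (401, 403)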


If a QA person (presumably familiar with the product) misunderstands the point of a feature how do you suppose most users are going to fare with it?

It's a very clear signal that something is wrong with either how the feature was specified or how it was implemented. Maybe both.


I took GP's meaning to be that the QA person in question sucked, but them being the best meant the other QA folks they've worked with were even worse.

That's not at all what they meant. They meant they ended up raising their own quality bar tremendously because the QA person represented a ~P5 user, not a P50 or P95 user, and they had to design around misuse and the sad path instead of the happy path; forcing that is actually a good quality in a QA.

Let's call the person in question Alex. Having to make every new feature Alex-proof made all of the engineers better.

Did it? Sounds like making things "Alex proof" may have involved a large amount of over-engineering and over-documenting.

It's possible but I'd guess they are probably not worse than the average user.

Ha, that's certainly a way to build things fool-proof.

There’s definitely a bimodal distribution of QA people by capability. The good ones are great. The bad ones are infuriating.

The lack of respect and commensurate compensation at a lot of companies doesn't help. QA is often viewed as something requiring less talent and often offshored, which layers communication barriers on top of everything. I've met QA people with decent engineering skills that end up having the most knowledge about how the application works in practice. Tell them a proposed change and they'll point out how it could go wrong or cause issues from a customer perspective.

This 100%

Companies think QA is shit, so they hire shit QA, and they get shit QA results.

Then they get rid of QA, and then the devs get pissed because now support and dev have turned into QA, and customers are wondering how the hell certain bugs got out the door.


Yeah, and then we started expecting them to code. Which has not gone well. And the thing is, if you have the suspicious mind of a top-rate QA person and you can code well, you’re already 2/3 of the way to being a security consultant or a red team engineer and doubling your salary.

This is why your duty in engineering is to drag QA into every specification conversation as early as possible so that they can display that body of knowledge.

Yes. The best QA people are gold. Infuriating at times, but gold.

> end up having the most knowledge about how the application works in practice

The best I've worked with had this quality, and were fearless advocates for the end-user. They kept everyone honest.


I was at a company once where they were talking about trying to do a rewrite of an existing tool because the original engineers were gone. But the requirements docs were insufficient to reach feature parity, so they weren’t sure how to proceed. Once I got the QA lead talking they realized he had the entire spec in his head. Complete with corner cases.

The problem is usually in the company culture and hiring process.

Are the QA people & team treated like partners, first class citizens, and screened well the way you would an SWE?

Or are they treated like inferior replaceable cogs, resourced from a 3rd party consulting body shop with high turnover?

You get what you hire for.


We hired a guy with an English Lit degree as QA. He was super smart, and really self-motivated. He learned full-stack dev, and wrote a fcking amazing dashboard and test config wizard in like half a year. (This was before AI)

People at that point had been complaining for YEARS about tests being hard to run.

He then left for a dev role at another company in a short time.


And “they had huge impact & left quickly” is actually a good outcome, right?

Better than underhiring to set the whole endeavor up for failure


Necessary but insufficient. On several projects where I was the lead I started honoring the QA lead's attempts at vetoing a release. I was willing to explain why I thought changes we had made answered any concerns that QA had, or did not, but if the lead said No then I wasn't going to push a release.

If you're consistent about it, you can restore a sizable chunk of power to the QA team, just by respecting that No. With three people 'in charge' of the project instead of two, you get restoration of Checks and Balances. Once in a while Product and QA came to me to ask us to stop some nonsense. Occasionally Product and Dev wanted to know why things were slipping past QA. But 2/3rds of the interventions were QA and Dev telling Product to knock something off because it was destroying Quality and/or morale.

God do I miss QA and being able to go 2 against one versus the person whose job description requires them to be good at arguing for bullshit.


Why would you upskill as a QA when you can become a dev? Every single QA person I know only became a QA as a stepping stone. That's how it's seen now.

Companies don't care about QA, so of course you don't see any QA wizards anymore.


I agree with everything you're saying here. The only thing I would add is

Before I hand a ticket off to QA, I write up

1. What I understood the requirements to be,

2. What I implemented,

3. How to interact with it (where it is in the tool, etc), and

4. What _other_ areas of the code, besides the obvious, are touched; so they can regression test any areas that they feel are important

Doing that writeup makes sure I understand everything about my implementation and that I didn't miss anything. I find it extremely valuable both for QA and myself.


"What I understood the requirements to be"

That is a way to get your change approved quickly, so it is good for you. It is terrible for a project that values quality.

A tremendous value of a QA team is that they interpret the requirements independently, and if in the end they approve, you can be pretty confident you implemented something that conforms to the commonly understood meaning of the requirements and not your developer-biased view.


Challenge. I use it as a way to double check, "Did the way I understood the ticket/requirement match the way that QA did?". They're testing what the ticket says is required. But part of that is testing that I understood the ticket correctly.

Rubber-ducky in human form.

Like all other job functions tangential to development- it can be difficult to organize the labor needed to accomplish this within a single team, and it can be difficult to align incentives when the labor is spread across multiple teams.

This gets more and more difficult with modern development practices. Development benefits greatly from fast release cycles and quick iteration- the other job functions do not! QA is certainly included there.

I think that inherent conflict is what is causing developers to increasingly manage their own operations, technical writing, testing, etc.


I can’t imagine any role in software that gets better delivering more work in longer cycles than less work in shorter cycles.

And I can’t speak for technical writing, but developers do operations and testing because automation makes it possible and better than living in a dysfunctional world of fire and forget coding.


In my experience, what works best is having QA and a tech writer assigned to a few teams. That way there is an ongoing, close relationship that makes interactions go smoothly.

In a larger org, it may also make sense to have a separate QA group that handles tasks across all teams and focuses on the product as a unified whole.


Yeah. As a dev, it is simply not always a great idea that the same person who built the feature is the one testing it. Sometimes I already tested it 100 times, and by the 110th time I basically become blind to it because I know it too well. Then it's great to have someone with fresh eyes and without the detailed knowledge do the testing to see if it works and if it works for our customers.

dev tests - whitebox tests

qa tests - blackbox tests

there is a place for both.

I think the problem is that usually the business doesn't care enough about that level of quality (unless it's NASA or aviation)


> I believe dev and QA are separate skillset.

I'm not sure it's a separate skillset. You need the other side's skills all the time in each of those positions.

But it's certainly a separate mindset. People must hold different values in each of them. One just can't competently do both at the same time. And "time" is quantized here in months-long intervals, because it takes many days to internalize a mindset, if one is flexible enough to ever do it.


No QA is better than bad QA. I've had great QA teams and just awful QA teams. Most of them are somewhere in the middle. I'll take no QA over bad QA every time. Filing bugs that aren't bugs, not understanding the most basic things like what a Jpeg file is for a product centered around images, etc. QA for the sake of QA doesn't always yield results, and can cause a lot of distraction for a competent dev team.

I've worked in enterprise software development with the full lifecycle for over 30 years.

I have found QA to be mostly unnecessary friction throughout my career, and I've never been more productive than when QA and writing tests became my responsibility.

This is usually what has happened during a release cycle.

1) Devs come up with a list of features and a timeline.

2) QA will go through the list and about 1/2 of the features will get cut because they claim they don't have time to test everything based on their timeline.

3) The cycle begins and devs will start adding features into the codebase and it's radio silence from the QA.

4) Throughout the release QA will force more features to get dropped. By the end of the release cycle, another 1/4 of the original number of features get dropped leaving about 1/4 of the original features that were planned. "It will get done in a dot release."

5) Near the end of the release, everything gets tested and a mountain of bugs come in near the deadline and everyone is forced to scramble. The deadline gets pushed back and QA pushes the blame onto the devs.

6) After everything gets resolved, the next release cycle begins.

This is at quite a few enterprise software companies that most people in Silicon Valley have heard of if you've been working for more than 10 years.


Release cycles are the problem

Oh man do I have opinions.

First of all, I've seen all types of teams be successful, ranging from zero QA at all to massive QA teams with incredible power (e.g. Format QA at Sony in Europe). I have absolutely seen teams with no QA deliver high quality, full stop; the title is nonsense.

My firm belief is that QA can raise the ceiling of quality significantly if you know what you're doing, but there is also a huge moral hazard of engineers dropping the ball on quality at implementation time and creating a situation where adding more QA resources doesn't actually improve quality, just communication churn and ticket activity. By the way, the same phenomenon can happen with product people as well (and I've also seen teams without product managers do better than teams with them in certain circumstances).

The most important anchor point for me is that engineering must fundamentally own quality. This is because we are closer to the implementation and can anticipate more failure modes than anyone else. That doesn't mean other roles don't contribute significantly to quality (product, design, QA, ops absolutely do), but it means we can't abdicate our responsibility to deliver high quality code and systems by leaning on some other function and getting lazy about how we ensure we are building right.

What level of testing is appropriate for engineers to do is quite project and product specific, but it is definitely greater than zero. This goes double in the age of AI.


> The most important anchor point for me is that engineering must fundamentally own quality.

This is huge. I was selling software to help QA. I saw a CEO demand a Head of QA guarantee their super buggy app be free of bugs by a certain date.

This is terrible. She didn’t write the thing. Total responsibility without authority trap. She was, not at all to my surprise, fired.

I think the deal fell through and I don’t know how else things ended up with them.

QA’s job is signal. If you’re getting clear signal, they’re doing their job.


yes - testing gives information, what is done with that information is a business decision

This is my favorite "high-powered individual at a company" trope.

Hey you, yeah you with no power to change or order people around, yeah, you go ahead and do my job with no new tools or authority.

I have had so many CTOs tell me that I should go tell an entire department to change their entire goal system because of something they want done, but they don't want to offer any thought process for how that's going to happen, or put in any effort to, idk, move the gigantic ship they have in motion.

And of course, the blame only goes one direction in such a bad org, tbqh the QA person probably is happier literally anywhere else.


I have limited experience working in orgs with a QA apparatus. Just my anecdotes:

The one time I got to work with a QA person, he was worse than useless. He was not technical enough to even use cURL, much less do anything like automated e2e testing, so he'd have to manually test every single thing we wanted to deploy. I had to write up extremely detailed test plans to help him understand exactly what buttons he had to press in the app to test a feature. Sometimes he'd modify the code to try and make testing it easier, break the feature in doing so, and then report that it didn't work. In nearly all cases it would have been faster for me to just test the code myself.

The majority of the time I've worked in orgs where there is no QA team, the devs are expected to own the quality of their output. This works okay when you're in a group of conscientious and talented engineers, but you very quickly find out who really cares about quality and who either doesn't know any better or doesn't care. You will constantly battle management to have enough time to adequately test anything. Every bit of test automation you want to build has to be smuggled in with a new feature or a bugfix.

So really, they both suck, pick your poison. I prefer the latter, but I'm open to experiencing what good looks like in terms of dedicated QA.


That sucks; hope you get to work with a good tester/QA person at some point so you get to see what they can do.

Most orgs I've worked for are so growth and product-focused that if you try adjusting your estimates to include proper testing, you get push back, and you have to ARGUE your case as to why a feature will take two weeks instead of one.

This is the thing I hate the most about work: having to ARGUE with PMs because they can't accept an estimate, and there's often some back-and-forth. "What if you do X instead?" "Team Y (always uses hacks and adds technical debt with every single feature they touch) did something similar in two days." But we're just communicating and adding transparency, so that's good, and it certainly doesn't matter that it starts taking up 4+ hours of your time in Slack conversations and meetings of people 'level setting', 'getting on the same page', trying to help you 'figure out' how to 'reduce scope', etc. etc.

Also, I think testing via unit or integration tests should be standard regardless, and that isn't what I am thinking about here. I'm thinking about QA, the way QA does it. You hammer your feature with a bunch of weird bullshit like false and unexpected inputs, what happens if I refresh the page in strange ways, what happens if I make an update and force the cache to NOT clear, what happens if I drop my laptop in a dumpster while making the request from Firefox and Safari at the same time, logged in as the same user, what happens if I turn off my internet in the middle of a file upload, and so on. When devs say that devs should be responsible for testing, they usually mean the former (unit and integration tests), and not this separate skillset of coming up with a bunch of weird edge cases for your code. And yes, unit tests SHOULD hit the edge cases, but QA is just better at it. You usually don't have engineers testing what happens when you try sending in Mandarin characters as input (unless they live in China, I guess). All of that effort should bring up your estimates because it is non-trivial. This is what getting rid of QA means, not happy-path end-to-end testing plus some unit and integration tests.
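
A hedged sketch of what covering some of those weird inputs on the dev side might look like as a parametrized test; the create_user function and its contract are invented purely for illustration.

    import pytest

    from myapp.users import create_user  # hypothetical function under test

    WEIRD_INPUTS = [
        "",                               # empty
        " " * 10_000,                     # huge, whitespace-only
        "王小明",                          # CJK characters
        "Zoë 😀",                         # combining marks and emoji
        "Robert'); DROP TABLE users;--",  # classic injection probe
    ]

    @pytest.mark.parametrize("name", WEIRD_INPUTS)
    def test_create_user_survives_unusual_names(name):
        # The contract here is invented: the call should either accept the value
        # and round-trip it, or reject it cleanly - never crash or corrupt state.
        result = create_user(name=name)
        assert result.status in ("created", "rejected")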


PMs are generally the most irritating people to deal with in any organization. This is coming from someone who has been one - effective ones are very obviously effective, but the vast majority are glorified note-takers and ticket-pushers with very little ability to get anything done, whether due to lack of talent or lack of empowerment in the organization or both. I find arguing with them pointless.

It sounds like you've had a lot of bad PMs (or been one yourself).

Almost every PM I've had has been awesome. They understand the product and the customer far better than me, and help communicate exactly what I need to understand about both, so I can do my job (as a coder).

Also they have zero feedback on how long something should take. They are hyper-focused on what the business needs and when, but that doesn't make them pressure engineers to do anything faster unless it's a case of "hey this important thing just popped up; how quickly can we solve it?"


The ticket pushers also range from annoying or even threatening, to "helpful ticket pushers" who maybe don't understand everything, but they keep track of tickets, documents, links to other projects and so on and make sure nothing is forgotten.

> Most orgs I've worked for are so growth and product-focused that if you try adjusting your estimates to include proper testing, you get push back, and you have to ARGUE your case as to why a feature will take two weeks instead of one.

Yeah this one pisses me off too. No, PM, you do not know how long it should take to implement a feature I get paid to work on and you don't.

Good PMs take your feedback and believe you. Bad PMs do the opposite.


If most of your time is spent haggling, it's time to have a frank conversation with upper leadership; or leave. Some portion of your organization is not working in good faith.

(In my case, the person in upper management driving haggling was pushed out about a week or two after the incident.)


The stupid fast tempo of our industry grinds my gears.

When I worked defense we moved slowly and methodically. It almost felt too slow. Now in the private sector I move like triple the speed but we often need to go back and redo and refactor things. I think it averages out to a similar rate of progress but in defense at least I had my sanity.


While good points are made, I worry this gives the wrong impression. The paper doesn't say it is impossible, just hard. I have, very successfully, worked with dev-owned testing.

Why it worked: the team set the timelines for delivery of software, the team built their acceptance and integration tests around system inputs and outputs at the edges of their systems, the team owned being on-call, and the team automated as much as possible (no repeatable manual testing aside from sanity checks on first release).

There was no QA person or team, but there was a quality focused dev on the team whose role was to ensure others kept the testing bar high. They ensured logs, metrics, and tests met the team bar. This role rotated.

There was a CI/CD team. They made sure the test system worked, but teams maintained their own CI configuration. We used Buildkite, so each project had its own buildkite.yml.

The team was expected by eng leaders to set up basic testing before development. In one case, our team had to spend several sprints setting up generators to make the expected inputs and sinks to capture output. This was a flagship project and lots of future development was expected. It very much paid off.

Our test approach was very much "slow is smooth and smooth is fast." We would deploy multiple times a day. Tests were 10 or so minutes and very comprehensive. If a bug got out, tests were updated. The tests were very reliable because the team prioritized them. Eventually people stopped even manually verifying their code because if the tests were green, you _knew_ it worked.

Beyond our team, out into the wider system, there was a lightweight acceptance test setup, and the team registered tests there, usually one per feature. This was the most brittle part, because a failed test could be caused by another team or by a system failure. But guess what? That is the same as production, if not more noisy. So we had the same level of logging, metrics, and alerts (limited to business hours). Good logs would tell you immediately what was wrong. Automated alerts generally alerted the right team, and that team was responsible for a quick response.

If a team was dropping the ball on system stability, that reflected badly on the team and they were to prioritize stability. It worked.

Hands down the best dev org I have been part of.


I've worked in a strong dev-owned testing team too. The culture was a sort of positive can-I-catch-you-out competitiveness that can be quite hard to replicate, and there was no concept of any one person taking point on quality.

If as a developer you want to be seen as someone advancing and taking ownership and responsibility, testing must be part of the process. Shipping an untested product, or a product that you as a software engineer do not monitor, essentially means you can never be sure you created an actually correct product. That is not engineering. If the org guidelines prevent it, then some cultural piece is preventing it.

Adding QA outside, which tests software regularly using different approaches, finding intersections etc. is a different topic. Both are necessary.


The problem in big companies is that as a developer, you are usually several layers of people removed from the people actually using the product. Yes you can take ownership and implement unit tests and integration tests and e2e tests in your pipeline, to ensure the product works exactly as you intended. But that doesn't mean it works as management or marketing or the actual user intended.

Developers want things to work.

QA wants things to break.

What worked for me, devs write ALL the tests, QA does selective code reviews of those tests making devs write better tests.

I also wrote about the failure of dev-owned testing: "Tests are bad for developers" https://www.amazingcto.com/tests-are-bad-for-developers/


A nice piece that outlines all the challenges, the opportunities, and the cultural and social adjustments that need to be made within organizations to maximize the chance of left-shifted testing being successful.

IMPO, as a developer, I see QA's role as being "auditors" with a mandate to set the guidelines, understand the process, and assess the outcomes. I'm wary of the foxes being completely responsible for guarding the hen-house unless the processes are structured and audited in a fundamentally different way. That takes fundamental organizational change.


Having been at Microsoft when we had SDETs for everything (and I miss it greatly, though the way we could write a feature and then just toss it to test was ridiculous), I think things have swung too far away.

On one hand, engineers needed to take more ownership of writing things other than low-level unit tests.

On the other, the SDETs added immense value a ton of ways, like writing thorough test specs based off of the feature spec (rather than the design spec), testing without blind spots due to knowledge of the implementation, implementation of proper test libraries and frameworks to make tests better and easier to write, and an adversarial approach to trying to break things that makes things more robust.

I've also worked with manual QA for product facing flows, and while they added value with some of their approaches to ensuring quality - poking at our scenarios and tests, and looking more closely at subjective things - they often seemed to work as a crutch for the parts of code paths that engineers had made too difficult to test.

I've never seen anywhere attempt to replace the value that SDETs delivered with what engineers were tasked with. I'd argue it's not necessarily possible to fully replicate that when you're testing your own things. But with services now, it also seems like product/management are more willing to accept slightly fewer assurances around quality and just count on catching some issues in production, in favor of velocity.

I've never seen places that got rid of QA


Writing test cases is part of development. Programmers simply need to do it. It helps them immensely.

Product acceptance and user acceptance testing is entirely different and often conflated. That's where people go wrong and are too simple minded (or cheap) to understand or invest in both.


This paper has 7 references and 4 of them are to a single google blog post that treats test flakiness as an unavoidable fact of life rather than a class of bug which can and should be fixed.

Aside from the red flag of one blog post being >50% of all citations it is also the saddest blog post google ever put their name to.

There is very little of interest in this paper.


The article argues that dev-owned testing isn't wrong, but all the arguments it presents support that it is.

I always understood shift-left as doing more tests earlier. That is pretty uncontroversial and where the article is still on the right track. It derails at the moment it equates shift-left with dev-owned testing - a common mistake.

You can have quality owned by QA specialists in every development cycle and it is something that consistently works.


I'm interested, as I've never been in an org with QA specialists. What does that look like?

You do everything the same as today. Then you turn it over to QA, who keep finding weird things that you never thought of. QA finds more than half your written bugs (of course I don't write up a bug every time a unit test fails when doing TDD, but sometimes I find a bug in code I wrote a few weeks ago, and I write that up so I can focus on the story I'm doing today and not forget about the bug).

QA should not be replacing anything a developer does, it should be a supplement because you can't think of everything.

We also use QA because we are making multi-million dollar embedded machines. One QA can put the code of 10 different developers on the machine and verify it works as well in the real world as it does in software simulation.


They are breakers. Good devs are makers.

You can be both but I have yet to meet someone who is equally good in both mindsets.


They find all the things the devs and their automated tests missed, then they mentor the devs in how to test for these, and they work out how the bug could have been found earlier. Rinse and repeat until the tester is struggling to find issues and has coached the devs out of his job.

My experience with this was great. It went really well. We also did our own ops within a small boundary of systems organized by domain. I felt total ownership of it, could fix anything in it, deploy anything with any release strategy, monitor anything, and because of that had very little anxiety about being on call for it. Best environment I ever worked in.

I cannot believe the excuse for why shift-left QA is “not working” is that Amazon hires developers who can’t learn basic testing skills that QA engineers picked up in three months. If developers can’t write valid code for tests, that’s on the organization, not on the practice.

The author forgot to mention the costs of handoffs, the elimination of which paid off all those tiny learning investments.

Shift-left has over 30 years of proof as one of the most effective ways to build reliable software.

P.S. This isn’t an ACM article; it’s a strongly opinionated post based on personal experience.

P.P.S. I'm not against QA, but make them bug/quality hunters instead of assigning them toil.


This paper is really good and accurate. Many people would benefit from reading it.

> Why does dev-owned testing look so compelling in theory, yet fall short in almost every real-world implementation

Seems like a weird assertion. Plenty of startups do “dev owned testing”, i.e. not hiring dedicated QA. Lots of big tech does too. Indeed I’d venture it’s by far the most common approach on a market-valuation-weighted basis.


It's very common and most software is extremely buggy, so that adds up.

I think most consumer software just would not stand the scrutiny of enterprise or professional applications. It just does not work a lot of the time, which is truly insane to think about.

I mean, imagine if your compiler just 10% of the time did not work and output a wrong program. But when I log into website XYZ, that's about the rate of failure.


As a developer, I frequently tell higher ups that "I have a conflict of interest" when it comes to testing. Even though I fully attempt to make perfect software, often I have blind spots or assumptions that an independent tester finds.

That being said: Depending on what you're making and what platform(s) you target, developer-owned testing either is feasible or not. For example, if you're making a cross-platform product, it's not feasible for a developer to regression test on Windows 10, 11, MacOS, 10 distros of Linux. In contrast, if you're targeting a web API, it's feasible for a developer to write tests at the HTTP layer against a real database.
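
A minimal sketch of such an HTTP-layer test, assuming a FastAPI app backed by a real test database; the module path, routes, and response shapes are illustrative, not a prescription.

    from fastapi.testclient import TestClient

    from myservice.main import app  # hypothetical app, assumed to be configured
                                    # to point at a real (test) database

    client = TestClient(app)

    def test_create_and_fetch_widget():
        # Exercise the full HTTP + persistence path, not a mocked repository.
        created = client.post("/widgets", json={"name": "sprocket"})
        assert created.status_code == 201

        widget_id = created.json()["id"]
        fetched = client.get(f"/widgets/{widget_id}")
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "sprocket"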


> At Google, for example, 16% of tests exhibited flakiness

This really surprised me. In my experience, usually a flaky test indicates some kind of race condition, and often a difficult-to-reproduce bug.

In the past year, we had a flaky unit test that caused about 1-2% of builds to fail. Upon fixing it, we learned it was what caused a deadlock in a production service every 5-6 months. As a result of fixing this one "flaky" test, we eliminated our biggest cause of manual intervention in our production environments.
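
To illustrate the kind of latent concurrency bug a "flaky" test can surface (a toy example, not the commenter's actual bug): two locks taken in opposite order will occasionally deadlock, so the test only fails on unlucky interleavings.

    import threading

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def worker_1():
        with lock_a:
            with lock_b:
                pass

    def worker_2():
        with lock_b:
            with lock_a:  # opposite acquisition order: deadlocks on bad interleavings
                pass

    def test_no_deadlock():
        t1 = threading.Thread(target=worker_1, daemon=True)
        t2 = threading.Thread(target=worker_2, daemon=True)
        t1.start(); t2.start()
        t1.join(timeout=5); t2.join(timeout=5)
        # On most runs the interleaving is benign and this passes; when the threads
        # interleave badly they deadlock, the joins time out, and the test "flakes".
        assert not t1.is_alive() and not t2.is_alive()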


At scale, every test is flaky.

> > At Google, for example, 16% of tests exhibited flakiness

> This really surprised me.

It doesn't surprise me. Close to 80% of Google code is actually quite terrible.

> In my experience, usually a flaky test indicates some kind of race condition, and often a difficult-to-reproduce bug.

Yup, that's my experience too. I flip shit when I start seeing flaky tests because it means a lot of actual work. Luckily there's generally a lot of tools available (sanitizers are a godsend) to fix it, and a lot of experience I rely on to look for smells when those tools come up empty.

It makes me really sad when I work on a project with few (or, gasp, zero) tests. I've spent the last 5 months just adding tests to my current project at $employer and... those tests have revealed all kinds of problems in dependencies created by other teams at $employer. Ugh.


Dev-owned testing is great when it succeeds. It's definitely not the only way. But you really do need the whole team to be on-board with the concept, or willing to train the team to do it. If that doesn't line up then you're not going to have a good time.

I've been on teams that own the whole pipeline, and have led teams where I made it mandatory that the engineer writing a feature must also write tests to exercise that feature, and must also write sad-path tests too. That gets enforced during review of the pull request.

It works. But it takes a lot of effort to teach the whole team how to write testable code, how to write good tests, how to write sad-path tests, and how to even identify what sad paths might exist that they didn't think about.
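
A small sketch of what a happy-path test paired with a sad-path test can look like; the transfer function and InsufficientFunds exception are hypothetical names used only to illustrate the idea.

    import pytest

    from bank.ledger import transfer, InsufficientFunds  # hypothetical module

    def test_transfer_moves_money():
        # Happy path: the amount leaves one account and arrives in the other.
        assert transfer(src="acct-1", dst="acct-2", amount=50) == "ok"

    def test_transfer_rejects_overdraft():
        # Sad path: the failure mode is explicit, not a silent partial write.
        with pytest.raises(InsufficientFunds):
            transfer(src="acct-1", dst="acct-2", amount=10**9)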

I can tell you from experience that, when it does succeed and the whole team has high collaboration, individual developers' work output is high, and new features can be introduced very rapidly with far fewer bugs making it to production than relying on a whole QA team to find all the problems.

It fails in practice because most (not all!) devs don't want to "waste time" doing that, and instead rely on QA cycles to tell them that something is wrong. Alas, QA cycles are a hell of a lot slower than the developer writing the tests. QA teams often don't have access to (or perhaps don't understand) the source code, and so they're left trying to find bugs through a user interface. That's valuable, but takes a completely different skillset and is a poor time to find a lot of the basic bugs that can show up.

On the other hand, the teams I've been on that failed (especially hard) often had huge (!) QA teams and budgets. Despite the size of team and budget, multiple projects would fall over from inertia and bickering between teams about who owns which bug, or which bug needs priority fixing.


(Disclaimer: I work for shipyard.build so I am biased here.)

Dev-owned testing (or even dev-involved testing) is much more realistic when devs have shorter feedback loops between their code and its deployment. So often I've seen momentum get lost when devs have a wait period before they can run tests or do basic manual testing. Then the test aspect becomes the thing that "slows down" devs before they ship a feature, so they might tend towards shortcuts.


Something I've always believed, and my experience with shipping multinational software on a schedule that has severe drop-dead dates confirmed: If you are contractually obligated to deliver a product that does x, y, and z correctly? QA is the only way to do that seriously. If you don't have QA, you don't care about testing full stop.

This only compounds when you have to comply with safety regulations in every country, completely setting aside the strong moral obligation you should feel to ensure you go far above & beyond mere compliance given the potential for harm. This compounds again when you are reliant upon deliverables from multiple tiers of hardware and software suppliers, each contract with its own drop-dead dates you must enforce. When one of them misfires, and that is a "when, not if", they are going to lie through their teeth and you will need hard proof.

These are not small fines, they are company-killing amounts of money. Nobody profits in this situation. I've been through it twice, both times it was a herculean effort to break even. Hell, even a single near-miss handled poorly is enough to lose out on millions in potential future work. The upsides are quite nice, though. I didn't know it was possible to get more than 100% of your salary as a bonus until then.

Don't take my word for it, though. Ask your insurance agent about the premiums for contractual liability insurance with and without a QA team. If you can provide metrics on their performance, -10-15% is not uncommon, this discount increases over time. Without one? +15-50% depending.


I'm surprised that with this many comments about the relationship between testing, development, and QA there is so little mention of environment and deploy process.

The usability of your test environment (and associated tooling) has a massive impact on quality assurance.

Every small difference between Production and Production-Plus-Feature creates friction and, even in systems of only moderate complexity, that friction adds up fast.


Conflict of interest that the author fails to mention: he's a QA manager at Amazon, and has a vested interest in QA being seen as a necessary role. It may well be, but this is definitely a conflict of interest.

Aside from that, this article is incredibly heavy on theory and very light on empirical fact. Its bibliography consists of a very narrow selection of blogs (4 of the articles he quotes are somehow one and the same blog post), which talk about a very narrow subset of the industry. This article not referencing the serious and well-established research that has been done on the effectiveness of dev-owned tests by, for example, the DORA folks almost seems dishonest.

The clickbait title, when compared to the content of the article, is outright dishonest. The author theorizes about some warts dev-owned testing may have at some specific companies, but this is a very far cry from it failing in practice, especially when you compare them to the warts of offloading quality to a different team.

It's probably a bit harsh, but I feel like, as an industry, we should have a higher standard of empiricism when it comes to evaluating our ways of working.


> The problem is not that dev-owned testing is a flawed idea, but that it is usually poorly planned

In our case there was zero plan. One day they just let our entire QA team go. Literally no direction at all on how to deal with not having QA.

It's been close to a year and we're still trying to figure out how to keep things from going on fire.

For a while we were all testing each other's work. They're mad that this is slowing down our "velocity", and now they're pushing us to test our own work instead...

Testing your own work is the kind of thing an imbecile recommends. I tested it while I wrote it. I thought it was good. Done. When I "test it" after the fact, I have all the same blind spots I had when I wrote it.


Frustrating to see.

Dev-led testing is fundamentally different from a QA function, just as no amount of E2E tests can replace manual testing. Each solves a different type of problem. Is it possible to do effective dev peer "QA" without essentially duplicating the QA role? And forget about testing one's own work.


The abstract says it really:

"It was clearly a top-down decision"

Many many things that are imposed like this will fail.

It's not even willful non-compliance; it's just that it's hard for people to do things differently while still being the same people in the same teams, making the same products, with the same timelines...

Context is key here. Lots of people see a thing that works well and think they can copy the activities of the successful team, without realising they need to align the mindset first... and then the activities will follow. The activities might be different, and that's OK! In a different context, you'd expect that.

I'd argue that in most contexts you don't need a QA team at all, and if you do have one, it will look a lot different from what you might think. For example, it would sit after a release, not before it. QA teams are too slow to deal with 2000+ releases a year - not their fault, they are human - so you need to reframe the value statement.


It pretty much comes down to whether QA is just doing what dev tells them (in which case they're not applying any scrutiny to dev's decisions) or deciding for themselves what constitutes appropriate validation for the dev work at hand.

Do people actually send PRs with no tests? That is so bizarre to me

If your review was based on features shipped, and your bosses let you send PRs with no tests, would you? And before you say "no" - would you still do that if your company used stack ranking, and you were worried about being at the bottom of the stack?

Developers may understand that "XYZ is better", but if management provides enough incentives for "not XYZ", they're going to get "not XYZ".


That actually wasn't why I didn't write tests a lot of the time.

What stopped me was that after a year of writing tests, I was moved to a higher priority project, and the person who followed me didn't write tests.

So when I came back, many of the tests were broken. I had to fix all those in order to get new ones to not be a bother.

Repeat again, but this time I came back and the unit testing suite had fundamentally altered its nature. None of the tests worked and they all needed to be rewritten for a new paradigm.

I gave up on tests for that system at that point. It simply wasn't worthwhile. Management didn't care at all, despite how many times I told them how much more reliable it made that system, and it was the only system that survived the first giant penetration test with no problems.

That doesn't mean I quit testing. I still wrote tests whenever I thought they would help with what I was currently working on. And that was quite often. But I absolutely didn't worry about old tests, and I didn't worry about making sure others could use my tests. They were never going to try.

The final straw, less than a year before I was laid off, was when they decided my "storybook" tests weren't worth keeping in the repo and deleted them. That made me realize exactly how much they valued unit tests.

That isn't to say they had no tests. There was a suite of tests written by the boss that we were required to run. They were all run against live or dev servers with a browser-control framework, and they were shaky for years. But they were required, so they were actually kept working. Nobody wrote new tests for it until something failed and caused a problem, though.

tl;dr - There are a lot of reasons that people choose not to write tests, and not just for job security.


Well, this wasn't really aimed at individual devs, but at team/company standards.

I've worked on several teams where it was the norm that all PRs come with tests. There was never a dedicated QA person (sometimes there would be an eng responsible for the test infra, but you would write your own tests).

I would never accept a PR without tests unless it was totally trivial (e.g. someone mentioned fixing a typo).


A broken environment engenders broken behavior; that explains the behavior, it doesn't make it any less bizarre.

Breaking prod repeatedly probably impacts your stack ranking too.

Depends on how easily the failure can be connected back to you personally. If you introduce a flaw this year and it breaks the system in two years, it won't fall on you but on the poor sap who triggered your bug.

So can "heroically" save prod ... anti patterns.

> "Do people actually send PRs with no tests?"

Rarely

Do people send PRs with just enough mostly useless tests to tick the DoD boxes?

All the time.
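
For illustration, a hedged sketch of the difference (the function and test names are hypothetical, not from any real codebase):

    # A test that merely ticks the "has tests" box vs. one that checks behavior.
    def apply_discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    def test_box_ticking():
        # Proves only that the function returns *something*.
        assert apply_discount(100, 10) is not None

    def test_actual_behavior():
        # Pins down what the function is supposed to do.
        assert apply_discount(100, 10) == 90.0
        assert apply_discount(100, 0) == 100.0
        assert apply_discount(50, 100) == 0.0

Both count as "a test" for the Definition of Done; only one would ever catch a regression.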


It depends on the application, but there are lots of situations where a proper test suite is 10x or more the development work of the feature itself. I've seen this most commonly with "heavy" integrations.

A concrete example would be adding, say, SAML+SCIM to a product; you can add a library, do a happy-path test, and call it a day. Maybe add a test against a captive IdP in a container.

But testing all the supported flows against each supported vendor becomes a major project in and of itself if you want to do it properly. The number of possible edge cases is extreme, and automating the deployment, updates, and configuration of the peer products under test is a huge drag, especially if they are hostile to automation.
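
For a rough sense of what that captive-IdP happy path can look like, here's a sketch assuming Keycloak as the containerized IdP, with testcontainers-python and requests; the image tag, env vars, and SAML metadata path are assumptions and may need adjusting for your versions:

    import time
    import requests
    from testcontainers.core.container import DockerContainer

    def test_captive_idp_serves_saml_metadata():
        idp = (
            DockerContainer("quay.io/keycloak/keycloak:latest")
            .with_env("KEYCLOAK_ADMIN", "admin")
            .with_env("KEYCLOAK_ADMIN_PASSWORD", "admin")
            .with_command("start-dev")
            .with_exposed_ports(8080)
        )
        with idp:
            base = f"http://{idp.get_container_host_ip()}:{idp.get_exposed_port(8080)}"
            metadata_url = f"{base}/realms/master/protocol/saml/descriptor"

            # Keycloak takes a while to boot, so poll until it answers.
            for _ in range(90):
                try:
                    resp = requests.get(metadata_url, timeout=2)
                    if resp.status_code == 200:
                        break
                except requests.RequestException:
                    pass
                time.sleep(2)
            else:
                raise AssertionError("IdP never became ready")

            # Happy path only: the IdP publishes SAML metadata an SP could consume.
            assert "EntityDescriptor" in resp.text

That covers the easy part; the multi-vendor, multi-flow matrix described above is where the real cost lives.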


Once, for a very very critical part of our product, apart from the usual tests, I ended up writing another implementation of the thing, completely separately from the original dev, before looking at his code. We then ran them side by side and ensured that all of their outputs matched perfectly.

The "test implementation" ended up being more performant, and eventually the two implementations switched roles.


Yes, it depends on what you're building. Is it just a prototype? No tests needed. Are you trying to move fast and break things? No tests needed. Are tests just not feasible for this piece of code (e.g., all UI, not unit-testable)? No tests needed.

When I spell some text wrong! Or want to add a log line. There are lots of reasons a change is too trivial to need a test.

Yes

To put things in context, it both depends on organization standards, and what the change actually is.

Where I work, there are areas that, if you change, you must update the tests. There are also development helper scripts and internal web sites where "it compiles" is good enough.

Likewise, I've done quite a few style-cleanup PRs where the existing tests are appropriate.


I've seen it many times. I think it often arises in businesses that are not very technical at their core.

Simple. Because all theories are approximations. And our highest-resolution theory of reality assumes reality is intrinsically random, so even the most accurate theory of reality can't predict anything with certainty.

First they came with the NoOps movement, and you were happy cause those damned ops people were always complaining and slowing you down. I can manage my infra!

Then, they came with the dev-owned testing and fired all the QAs, and you were happy because they were always breaking your app and slowing you down. I can write my tests!

Now, they are coming with LLM agents and you don't own the product...


> you were happy ... you were happy

Like heck I was.


I have worked with bad ops people who didn't let anything get done, and good ops people who knew how to do tricky things and kept the system working so I didn't have to care. I have worked with good and bad QA testers. Guess who I'm glad are gone.

I think devs owning testing only works where they’re consumers of the product.

So a developer productivity tool - perfect.

A fully fledged engineering application for monitoring assets? Not so much.


Note: the references in the article seem to be incorrect; [4] through [7] are all the same article. I do not think that was intentional.

This just seems to be basically a blog post that somehow got published in ACM?

No, they got it published in ACM SIGSOFT Software Engineering Notes.

That's one of the things that publication is for.

The paper is a well-supported (if not well-proofread) position paper, synthesizing the author's thoughts and others' prior work but not reporting any new experimental results or artifacts. The author isn't an academic, but someone at Amazon who has written nearly 20 articles like this, many reporting on the intersection of academic theory and the real world, all published in Software Engineering Notes.

As an academic (in systems, not software engineering) who spent 15 years in industry before grad school, I think this perspective is valuable. In addition, academics don't get much credit for this sort of article, so there are a lot fewer of them than there ought to be.


Author-owned proofreading is next.

If your product's interface is with humans, you test it first with devs, then QA, then your customers.

Devs are a bit leaky for bugs/non-conformance, so if you skip the QA, then your customer is exposed. For some industries this is fine - for others, not so much.


Another vital quality of good QA teams is that they often serve as one of the last/main repositories of tribal knowledge about how an org's entire software system actually behaves/works together. As businesses grow and products get more complex and teams get more siloed, this is really important.

One purpose of QA testing is compliance assurance, including with applicable policies, industry regulations, and laws. While devs are (usually) good at functional testing, QA (usually) does non-functional testing better. I have not known any devs who test for GDPR compliance, for example. (I am certain many devs do test for that; I'm just stating my personal experience.)

The paper highlights the problem in two words of the first sentence of the abstract: "shrink QA".

Corporations do it to save money, and accept the loss of quality as the cost of doing business. Therein lies part of the reason for the sad state of software today.


I suspect they underestimate the loss of quality, or underestimate the consequences.

I don't think they underestimate the loss of quality. As for the consequences, "No one in this world, so far as I know ... has ever lost money by underestimating the intelligence of the great masses of the plain people" - H. L. Mencken

I feel the need to point out a phrase that was very popular among my dev peers:

The difference between theory and reality is that in theory they're the same, but in reality they're not...

While any new feature or bug fix introduced by a dev should certainly be tested at that dev's desk to confirm to themselves that it's correct, it should also (of course) be tested by a product test group (call it QA if you must) to ensure that all functional features of the product are still fully and correctly implemented.

I would aim a big fat finger at "agile", "scrum", "standup" culture for encouraging the violation of this very obvious testing requirement.

"What have you accomplished in the last 4 hours", type of management interface to development, fully and completely misses the primacy of confirming the functionality of updates before release.

This is really due to management, especially C-suite management of startups, living in a make-believe world of deadlines and feature requirements pulled arbitrarily out of their ass, while refusing (or not having the capacity) to understand the technical issues involved.


I think they've got that the other way around.

(Joke)

Can't AI just replace QA?


Honestly very pleased that AI is not dominating the discussion. Nature is healing.


