WHY DO QA PROCESSES SUCK?
Sometimes, the answer to product scale is not innovation but process improvement. In this article, I describe how I improved QA processes at early-stage startups to drive scale.
Key takeaways:
Functionality testing assesses the outcomes of UI-driven actions.
Manual functionality testing is flexible since there’s no fixed approach.
Any process can be reimagined with a clear "why."
From the outset of my career, I've carefully planned my daily tasks, breaking them down into organized steps. This approach has significantly benefited my role as a Product Manager.
Most of my career in tech has been spent at startups, which means wearing multiple hats. One of those hats is often Quality Assurance (QA), and over the last two years I've been focused on developing a more reliable and effective way to do it. That effort has proven especially meaningful at early-stage startups, where the usual framework is “Build, Individual/Comprehensive Testing, Deploy.”
During QA, you can test at the API level or at the functionality level. A Product Manager usually checks functionality, but a tech-savvy one might also look at the software's inner workings, especially the backend, using tools like Postman.
Functional testing, by contrast, focuses on results rather than how things work under the hood: it checks whether the app meets user expectations. This type of testing can be performed either manually or through automated means.
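To illustrate the API-level checks mentioned above, here is a rough sketch of the kind of request a tech-savvy PM might run, whether in Postman or as a small script. The endpoint, payload, and fields below are made up purely for illustration; they are not from any real product.

```python
import requests

# Hypothetical staging endpoint, purely illustrative.
BASE_URL = "https://staging.example.com/api"

def check_create_order():
    """Backend-level check: inspect the raw API response rather than the UI."""
    payload = {"product_id": "sku-123", "quantity": 2}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    # API-level assertions: status code and response shape.
    assert response.status_code == 201, f"unexpected status {response.status_code}"
    body = response.json()
    assert body.get("status") == "created"
    assert body.get("quantity") == 2

if __name__ == "__main__":
    check_create_order()
    print("API check passed")
```

The same behaviour tested at the functionality level would instead be exercised through the UI (clicking through the order flow) and judged against what the user expects to see.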
At early-stage startups, two approaches are prevalent: solitary and collective testing. In solitary testing, the Product Manager alone tests the feature comprehensively to confirm it meets the stipulated requirements. Collective testing involves the wider team: in a startup with 15 employees, for example, the Product Manager could ask the remaining 14 to test the product or feature, then share their observations on a designated sheet or report errors through a bug-tracking system.
Although these methods do produce feedback and get bugs fixed, they have inherent shortcomings. The process can be time-intensive, and team members may not be able to offer feedback promptly or in a meaningful way. Feedback often stays at the surface level; with solitary testing, for instance, errors in certain elements of the feature's design can go unnoticed entirely.
These insights come from personal experience and conversations with fellow product managers. Talking with stakeholders and other product teams led me to a pivotal question:
Do you think the current approach to testing features is flawed, and if so, why?
I put this question to the relevant designers and engineers. Their feedback prompted refinements to the QA process, including a trial run on a sub-feature during one sprint. Notably, the feedback attested to the effectiveness of the revised testing procedure, particularly its efficiency. A noteworthy outcome was streamlining the process around a pre-planned call that covered tests on usability, performance, the user interface, and so on.
I documented test cases (TC) 1, 2, and 3, providing descriptions, current outcomes, assumptions, KPIs, and the necessary preconditions. I also created the format for TC 4, which is to be completed by the product designer and includes the expected outcomes and descriptions for execution.
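To make this concrete, here is a minimal sketch of what a single test case entry might look like. The field names and values are hypothetical, not the exact columns from my document, but they capture the kinds of information described above.

```python
from dataclasses import dataclass

# Hypothetical structure for one test case entry; field names are illustrative.
@dataclass
class TestCase:
    case_id: str              # e.g. "TC 1"
    description: str          # the steps the tester carries out
    preconditions: str        # state the app must be in before the test
    expected_outcome: str     # what a "pass" looks like
    kpi: str                  # the metric the case is meant to protect
    status: str = "Not run"   # updated during or after the QA call

example = TestCase(
    case_id="TC 1",
    description="Log in with a valid email and password, then land on the dashboard.",
    preconditions="A verified test account exists on the staging environment.",
    expected_outcome="User reaches the dashboard with no errors.",
    kpi="Login success rate",
)
print(example)
```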
Despite the positive feedback, I saw one clear avenue for improvement: proactively sharing the test case document with all relevant team members (including anyone directly tied to the product or feature) a day before the QA call.
The QA call itself is a synchronous virtual meeting, typically 45 minutes to an hour, in which we share a screen and check the feature against the acceptance criteria or step descriptions. Any issues that surface during the call are documented immediately, and the designated engineer is notified. After the meeting, the engineer addresses the identified issues and updates their status on the shared document.
After that, other stakeholders and colleagues take part in testing, which gives them a preview of the product before it reaches production. A designated document collects their feedback: errors found, the context around them, and general impressions of the feature. Keeping this open-ended encourages candid responses.
This new approach resulted in a marked increase in our velocity, a product that was close to 99% bug-free, and a well-defined feedback mechanism between teams.
I'm not claiming that my approach is the definitive manual QA procedure; rather, the point is that “an idea's worth lies in its application.” It illustrates how much room there is for continual refinement and optimization within established practices.