Quality Assurance Engineering at GitGuardian
Dinkar Singh Karanvanshi, QA Engineer at GitGuardian. Dinkar ensures quality through rigorous testing and attention to detail.
Nathan Rivière, QA Engineer at GitGuardian.
Quality Assurance (QA) engineering is a crucial aspect of any software development process, especially for a fast-growing scale-up like GitGuardian. Delivering high-quality products is not just a matter of meeting customer expectations; it can make or break the company's success.
But let’s start from the beginning: what exactly is QA engineering?
At its core, QA engineering ensures that the software being developed meets the desired quality standards. This involves various activities, including testing, bug reporting, and verification.
QA engineering is often a multifaceted role that requires individuals to be highly adaptable and capable of working across different teams.
This blog post will present how the QA engineering team at GitGuardian works. We’ll explore the different techniques and processes we use and discuss some challenges we encounter.
Whether you’re a startup founder, developer, or QA engineer, this post will give you valuable insights into how QA engineering works in a startup and how it can help you deliver high-quality products to your customers.
What We Test
GitGuardian develops very technical products aimed at security teams and software developers. This means QA Engineers must have a deep understanding of development concepts such as version control systems, authentication mechanisms, and REST APIs. In addition, some of our products can be delivered self-hosted (meaning our users manage the deployment of the product themselves), so we must also have infrastructure knowledge.
Another specificity is that our products only work through integrations: you must connect at least one git hosting platform to use GitGuardian. In other words, our data comes from external systems rather than from user input.
Team Organization
We are currently four QA engineers at GitGuardian. As in many other companies, we dedicate one QA Engineer per team. However, we can't cover all the teams, so our resources are devoted to the teams facing the highest criticality. In each team, the QA Engineer is responsible for helping developers ensure new features work as expected and bugs are properly fixed.
In addition to the work performed in each team, every QA Engineer contributes to QA-specific tasks. The most common duties include implementing and maintaining end-to-end tests, performing exploratory testing, and building dedicated testing tools.
For now, unit tests and integration tests are implemented by development teams. We are lucky that GitGuardian developers have a strong testing culture and produce well-tested code. This allows us to focus on the tests we are responsible for. Currently, end-to-end tests are fully maintained by the QA team.
We also allocate half a day per week to training or to working on specific topics we do not have time to tackle in our daily work. For example, we currently focus on visual testing (testing the visual appearance and behavior of the user interface) and on infrastructure as code.
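To give a concrete idea, here is a minimal sketch of what a visual test can look like with Playwright's built-in screenshot assertions (the URL, test name, and threshold are placeholders, not our actual setup):

```typescript
import { test, expect } from '@playwright/test';

// Minimal visual test: compare the rendered page against a committed baseline
// screenshot. Assumes a baseURL is configured; the route is a placeholder.
test('dashboard has no visual regressions', async ({ page }) => {
  await page.goto('/dashboard');
  // Fails if the page differs from the baseline beyond the allowed ratio;
  // baselines are refreshed with `--update-snapshots`.
  await expect(page).toHaveScreenshot('dashboard.png', { maxDiffPixelRatio: 0.01 });
});
```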
Regarding infrastructure as code, we must find new ways to test the numerous configurations we support. Self-hosting GitGuardian can be done on an existing or an embedded Kubernetes cluster, and each cluster type can also be deployed in an air-gapped network. Deploying our infrastructure in an automated fashion can considerably reduce the time required to test our self-hosted product.
Day-to-Day Tasks
End-to-end tests
As QA Engineers, we are responsible for developing and maintaining end-to-end tests for our products. These tests give us an accurate picture of the products' current state. Our end-to-end tests focus only on the happy and unhappy paths. Anything that can be tested on the frontend or backend in isolation is not implemented as an end-to-end test. For example, we do not include form validation in end-to-end tests, as this can be tested separately on the frontend and backend.
Features fall into two categories: internal features and features depending on third-party services. The first category covers features we develop on our own and that are self-contained, such as renaming your workspace or creating an API token. The second category covers everything that depends on a third party, such as how GitGuardian interacts with GitHub, GitLab, Slack, or PagerDuty.
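As an illustration, a happy-path test for an internal feature such as API token creation could look like the sketch below (routes, labels, and messages are placeholders, not our actual selectors):

```typescript
import { test, expect } from '@playwright/test';

// Happy-path end-to-end test for an internal feature: creating an API token.
// The unhappy path (e.g. a duplicate token name) would be a separate,
// equally small test.
test('a member can create an API token', async ({ page }) => {
  await page.goto('/api/personal-access-tokens');
  await page.getByRole('button', { name: 'Create token' }).click();
  await page.getByLabel('Token name').fill('e2e-smoke-token');
  await page.getByRole('button', { name: 'Create' }).click();
  // The new token must appear in the list once created.
  await expect(page.getByText('e2e-smoke-token')).toBeVisible();
});
```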
To test these integrations, we implement clients to communicate with the services. For example, to test our GitLab integration, we have implemented a GitLab client that allows us to create a new personal access token in our beforeEach block, use this token to verify we can add a new GitLab integration on GitGuardian, and finally revoke the GitLab personal access token in the afterEach block. Whenever we want to implement new tests for the integration features we provide, we first check whether such a client already exists. If not, we implement one specific to the required service.
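In Playwright terms, the pattern looks roughly like this (GitLabClient is a stand-in for our internal helper, and the selectors are placeholders):

```typescript
import { test, expect } from '@playwright/test';
// Stand-in for our internal helper wrapping the GitLab REST API.
import { GitLabClient } from './clients/gitlab';

const gitlab = new GitLabClient(process.env.GITLAB_URL!, process.env.GITLAB_ADMIN_TOKEN!);
let personalAccessToken: string;

test.beforeEach(async () => {
  // Create a short-lived personal access token on the GitLab side.
  personalAccessToken = await gitlab.createPersonalAccessToken({ scopes: ['api'] });
});

test.afterEach(async () => {
  // Always clean up so test runs never leave credentials behind.
  await gitlab.revokePersonalAccessToken(personalAccessToken);
});

test('a GitLab integration can be added with a valid token', async ({ page }) => {
  await page.goto('/settings/integrations/gitlab');
  await page.getByLabel('Personal access token').fill(personalAccessToken);
  await page.getByRole('button', { name: 'Connect' }).click();
  await expect(page.getByText('GitLab integration is active')).toBeVisible();
});
```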
Another very important thing we do is make sure each end-to-end test verifies one thing and verifies it right. We want a test to cover as few features as possible, to avoid a cascade of failing tests when a single feature is broken. For example, we have a few tests dedicated to verifying that authentication works as expected. All the other tests rely on a state saved on the filesystem, so they do not need to sign in or handle the cookie consent banner.
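With Playwright, this typically relies on a saved storage state: one setup project signs in (and dismisses the cookie banner) once, and every other test reuses the resulting browser state. A minimal sketch of the configuration, with illustrative paths and project names:

```typescript
// playwright.config.ts (excerpt)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    // auth.setup.ts signs in, dismisses the cookie consent banner, and saves
    // the browser state with page.context().storageState({ path: ... }).
    { name: 'setup', testMatch: /auth\.setup\.ts/ },
    {
      name: 'e2e',
      dependencies: ['setup'],
      // Every other test starts already authenticated from this file.
      use: { storageState: 'playwright/.auth/user.json' },
    },
  ],
});
```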
We use Playwright with TypeScript to implement our end-to-end tests. We also use lots of static tests to ensure our codebase meets a number of standards: we use linters, formatters, spell checkers, and (of course) secrets detection!
We make sure the selectors we use to locate DOM elements are robust, and we try to rely on accessibility selectors as much as possible. Bad selectors tend to be a major source of test flakiness.
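Concretely, we prefer locators based on roles and labels over CSS or class chains; a small sketch (with placeholder selectors):

```typescript
import { test, expect } from '@playwright/test';

test('selectors rely on accessible roles and labels', async ({ page }) => {
  await page.goto('/settings');

  // Preferred: role- and label-based locators survive markup refactors.
  await page.getByLabel('Workspace name').fill('my-workspace');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('Workspace updated')).toBeVisible();

  // Avoided: brittle CSS/class chains such as
  //   page.locator('div.settings > form .btn-primary')
  // which are a classic cause of the flakiness mentioned above.
});
```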
Dedicated Tools Implementation
Sometimes, we struggle to find existing tools that suit our needs. A typical example is tooling to generate git repositories. In order to properly test some of our repository scanning features, we need a way to generate repositories with specific characteristics: number of commits, size of commits…
For a long time, we relied on famous open-source repositories to verify our scanning feature worked (the Linux kernel or the Chromium repositories, for example). However, this did not allow us to generate repositories from scratch with the desired characteristics, in particular one that is crucial for our domain of expertise: the probability that a secret is present.
This led to the development of a homegrown CLI for generating such repositories. This tool is still maintained and regularly updated with new features.
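The CLI itself is internal, but the core idea can be sketched in a few lines: build a git history of a given size where each commit has a configurable probability of containing a planted secret. Everything below (paths, file names, the fake secret format) is illustrative, not the actual tool:

```typescript
import { execSync } from 'node:child_process';
import { mkdirSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

// Generate a throwaway repository with a given number of commits, where each
// commit has `secretProbability` chances of containing a planted secret.
function generateRepo(path: string, commits: number, secretProbability: number): void {
  mkdirSync(path, { recursive: true });
  execSync('git init && git config user.email "qa@example.com" && git config user.name "QA Bot"', {
    cwd: path,
  });
  for (let i = 0; i < commits; i++) {
    const hasSecret = Math.random() < secretProbability;
    const content = hasSecret
      ? `API_KEY = "fake-${Math.random().toString(36).slice(2)}"\n` // planted "secret"
      : `console.log("commit ${i}");\n`;
    writeFileSync(join(path, `file_${i}.js`), content);
    execSync(`git add . && git commit -m "commit ${i}"`, { cwd: path });
  }
}

// Example: 500 commits, ~5% of them containing a planted secret.
generateRepo('/tmp/generated-repo', 500, 0.05);
```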
Processes and Exploratory Testing
When GitGuardian develops new features, the product team writes a product requirements document (PRD). Our first step as QA engineers is to read the PRD and make sure we understand the whole feature. A PRD will include some user stories. We need to write the test cases for each user story. During that time, we work closely with the product team and the development team to ensure we do not have a blind spot and we do not miss something important.
Even if we want to automate things as much as possible, we must take a pragmatic approach when testing our products. Certain elements still require manual testing. If a feature requires further improvement, it is not a good idea to implement end-to-end tests straight away, as this could lead to lots of refactoring in the near future. Also, some of our features can be very hard to cover with an end-to-end test whereas manual testing can be a more efficient option.
An example of this is our Check Run feature. It allows our users to have their pull requests scanned on GitHub by a check run and to see whether GitGuardian has uncovered any secrets in the pull request. This is something we still test manually, as it's very quick to do by hand and would be hard to automate.
Keeping some manual tests and doing exploratory testing also helps us stay close to the product itself. We must know it very well: QA Engineers at GitGuardian are power users of the products.
One key characteristic of our product is that it can be self-hosted. For the QA team, this means the installation from scratch needs to be tested for each release, and so do upgrades from previous versions.
This is one of the biggest challenges we face: testing all the different infrastructure configurations our users can choose from. As mentioned earlier, GitGuardian can be installed as a Kubernetes application, so we use basic Kubernetes commands to install our app and verify everything is working properly.
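To give a flavor of such a check, here is a small post-install sketch that waits for every deployment in a namespace to finish rolling out; the namespace and the overall approach are illustrative, not our actual release checklist:

```typescript
import { execSync } from 'node:child_process';

// Illustrative post-install smoke check: once the application is installed in
// the cluster, wait for every deployment in its namespace to be fully rolled out.
const namespace = 'gitguardian';

const deployments = execSync(
  `kubectl get deployments -n ${namespace} -o jsonpath='{.items[*].metadata.name}'`,
)
  .toString()
  .trim()
  .split(/\s+/)
  .filter(Boolean);

for (const deployment of deployments) {
  // Exits with an error if the rollout does not complete within the timeout.
  execSync(`kubectl rollout status deployment/${deployment} -n ${namespace} --timeout=300s`, {
    stdio: 'inherit',
  });
}
```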
Also, some features are specific to our SaaS product, and some are specific to our self-hosted product. We use a system of test flags (similar to feature flags) to decide which tests run, depending on whether a feature is specific to SaaS or to self-hosted.
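In practice, this boils down to exposing the deployment type to the test run and skipping the tests that do not apply. A minimal sketch (the environment variable, route, and feature are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

// Test flag: the target deployment type is passed to the run as an
// environment variable (hypothetical name).
const isSelfHosted = process.env.DEPLOYMENT_TYPE === 'self-hosted';

test('SMTP settings are available on self-hosted instances', async ({ page }) => {
  // Skipped entirely when running against the SaaS product.
  test.skip(!isSelfHosted, 'This feature only exists on the self-hosted product');
  await page.goto('/admin/smtp');
  await expect(page.getByRole('heading', { name: 'SMTP settings' })).toBeVisible();
});
```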
Conclusion
QA engineering is a critical aspect of software development that ensures the desired quality standards are met. At GitGuardian, our QA engineering team puts lots of effort into developing and maintaining end-to-end tests and dedicated testing tools to ensure our products meet the highest quality standards.
We face unique challenges, such as testing products that are tightly integrated with other platforms and testing self-hosted installations of our application on diverse types of Kubernetes clusters.
As we continue to grow and expand, we are constantly learning and improving. Our QA engineering team is still small enough that each of us gets to work on a wide variety of topics and learn a lot.
There's always room for improvement, and we are ready to face our next challenges!