QA Continues – Part II – Whitebox


What is white-box testing?

It means testing the internal workings of the software. Your focus is on broken or poorly structured code paths, internal security holes, the flow of specific inputs through the code, and the behaviour of loops, functions and statements.

White-box testing involves checking predefined inputs against expected outputs.

What is expected from the testers?

  • Understand the source code.
  • Be proficient in the programming language used.
  • Know how to create test cases.

How to measure?

The key performance metric is code coverage; 80% to 90% is what you should aim for.

Let me explain how we did it on two projects at Better Collective.


We can break white-box testing into three categories:

  • 1. Unit testing:
    • Unit testing ensures a piece of code works as intended, in isolation, before it is integrated with previously tested code.
  • 2. Integration testing:
    • This is the part where we test the interactions of the interfaces with each other, in an open environment.
  • 3. Regression testing:
    • This is where you put everything into a continuous cycle. Decide which tests to run and when, based on their importance, the time they take to execute, etc.


Let’s hear it from our software architect, Henrik Schlichter.
 High-level testing has been a requirement from the start. We wanted coverage of our entire API, so we could be sure that everything was working. The inner workings of the API have secondary priority: as long as the actual endpoint works according to the spec, everything is acceptable. This was the minimal amount of testing we could afford, but it also facilitated our wish for a holistic rather than an atomistic approach.
Using the RAML specification as a basis for the testing serves two purposes: first documentation, second test specification. RAML is first and foremost a documentation tool for your API, so having the specification tested also means that we test the documentation. Hence, it will catch cases where the code and documentation have gone out of sync.
To accomplish this, we have written a tool that generates Jasmine test specs covering the whole specification. This requires us to be very specific about the execution order, because some of the tests depend on each other (creating resources and deleting them again). So you want to specify the POST endpoint first, so you can benefit from the data it creates, and the DELETE endpoint last, so it cleans up after itself and leaves a clean footprint.
To facilitate the RAML tests we needed some mock data in the database, to give us consistent, deterministic results. Because of the duality of the tests, being both documentation and spec, we couldn’t make mock data part of the generated tests. There are normally two approaches to handling this: either you create the mock data during a prerequisite phase when writing the test spec, or you introduce some sort of fixture phase where you install mock data before running the tests. We chose the fixture phase because we wanted as little “pollution” in the RAML files as possible, so they would be kept clean and to the point; readability shouldn’t be sacrificed on that account.
Alongside the RAML tests, we run the PhpUnit suite to support the TDD/BDD development styles. Developers are highly encouraged to use them, but it is not required. The main reason for not enforcing it is that it can be notoriously hard to get into the mindset this style of programming requires; it can easily take years to accumulate enough experience to feel comfortable with it. So a middle ground is usually your best approach if you also want your programmers to be productive.
It is close to impossible, when creating a new system from a clean slate, to practice full TDD/BDD. Often you don’t know precisely how the system should be structured, and there will always be a fair bit of boilerplate or bootstrap code to write before you can do any testing at all. In this case systems testing, or what you might consider black-box testing, can help you get started.
Having tests also means we have to run them, and running them manually is not an option if you want consistency and trust across the entire codebase. Like any self-respecting developers, we have a continuous integration process around every commit to our git repository. It is absolutely crucial that this is in place: you want to know the current state of your code at any point in time.


We think a good, natural workflow is the key to productivity: it should feel natural for programmers to follow the processes, and the purpose should be clear at all times.
So if you want to start with QA: start with black box, master regression, then implement white box. Good luck.
