We've had QA, yes. What about automatic QA?

We want to deliver the best LMS possible and use many techniques to achieve this. Automated tests are one of the ways we improve Easy LMS, and we cover them here.

Doing automatic QA
Bram
Back-End Developer
Reading time 6 minutes

Whenever we want to introduce a new feature or improve an existing one, it's easy to fall into the "it works on my machine" trap: you wrote some code, opened the corresponding page, and saw that it worked. You might think that's all you need, but having a working feature is only a small part of actually finishing that feature.

We want to stay calm whenever we release a feature and have some certainty that we haven't accidentally broken anything. There are two main processes in place that prevent this from happening: manual testing and automated testing. We thoroughly explain our manual QA process in a separate blog post, so we will only go into the automatic parts here, because sometimes you just need robots instead of humans 😉!

What do we mean by automated testing?

Automatic tests are code as well

If we change something on one page, it should not suddenly break a different page. If we were to test every existing feature whenever we introduce something new to make sure it still works, we would never be able to release anything! To still ensure a degree of safety, we have automated tests. The concept of automated tests isn't anything new, and we hope every other software company uses them as well! However, we thought it would be valuable to explain why and how we use them.

The automated tests run whenever we change something in the code. To go even further than that, the tests themselves are code as well! Even though a feature works when we manually test it, it is not complete unless there is also a test for it in the code. The fact that it works at this moment, in your specific test case, on your machine, doesn't mean that it will continue to work indefinitely.
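To give a flavor of what such a test looks like, here is a minimal sketch in Python with pytest. The function is a made-up stand-in, not actual Easy LMS code:

```python
# test_scoring.py - a hypothetical example: the test is just more code
# that lives alongside the feature and runs automatically on every change.

def percentage_score(points: int, max_points: int) -> float:
    """Convert a raw score into a percentage."""
    return points / max_points * 100

def test_percentage_score():
    # Pins down what the code should do, so a future change
    # that breaks this behavior is caught immediately.
    assert percentage_score(9, 10) == 90.0
```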

We believe that tests are as necessary as the actual code itself, if not slightly more so. They give us a clear-cut definition of what our code should do and automatically make sure that it does! We can freely make changes to the code without worrying about things breaking. If the tests pass, everything should work as expected!

Why do we use automated testing?

Our automated tests allow us to move faster without breaking things. Our manual QA is a lot faster because we do not have to, for example, try to make Exams with many different names. We know from our automated tests that this just works, so we don't need to try it again ourselves!
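As a sketch of how such a check can be automated, here is a hypothetical parametrized test in Python with pytest; `create_exam` is an invented stand-in for the real creation logic:

```python
import pytest

def create_exam(name: str) -> dict:
    # Hypothetical stand-in for the real creation logic.
    if not name.strip():
        raise ValueError("Exam name must not be empty")
    return {"name": name.strip()}

@pytest.mark.parametrize("name", [
    "Final Exam",
    "Examen français",   # non-ASCII characters
    "A" * 255,           # a very long name
    "  padded  ",        # surrounding whitespace
])
def test_exam_can_be_created_with_many_names(name):
    exam = create_exam(name)
    assert exam["name"] == name.strip()
```

One parametrized test covers far more names than anyone would reasonably try by hand.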

Automated tests allow us to move faster without breaking things

Because of this, we can also be a lot more thorough than would be reasonable to be manually. We can test all the possible paths the code could take, even those that assume part of the site is down. Naturally, this should never occur, and testing it manually would be cumbersome. In our automated tests, however, we can simply pretend the site is down and check that everything continues as expected!
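Here is a hedged sketch of how an outage can be simulated in a test, using Python's unittest.mock; the mail service and function names are invented for illustration:

```python
from unittest import mock

class MailServiceDown(Exception):
    pass

def send_results_email(mail_client, results):
    # Hypothetical code under test: it should survive a mail outage
    # and report failure instead of crashing.
    try:
        mail_client.send(results)
    except MailServiceDown:
        return False
    return True

def test_results_survive_a_mail_outage():
    # Pretend the mail service is down without touching any real server.
    broken_client = mock.Mock()
    broken_client.send.side_effect = MailServiceDown()

    assert send_results_email(broken_client, {"score": 9}) is False
```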

An aside: Metrics don't matter

Automated tests always spark the debate of how much test coverage is needed. Coverage is a metric of how much of your code is actually executed by your tests, given as a percentage. Coverage is something you can track automatically, which makes it very tempting to decide it should be as high as possible. If we have 100% test coverage, nothing should accidentally break, right?

Well…

Let's say we have a function that takes a number. It does a highly complex calculation and then gives us back the result. What we could do as a test is call the function and then check whether the result is a number. Naturally, this is not a sensible test for this code: we have no idea if the function performs the correct calculation! However, this test does let us achieve 100% test coverage: the entire piece of code has been covered, just not in a way that means anything.
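To make this concrete, here is a hypothetical version of that function and both tests in Python. Both tests execute every line, so a coverage tool such as pytest-cov would report 100% for either, but only the second one pins down the behavior:

```python
import pytest

def compound_interest(principal: float, rate: float, years: int) -> float:
    # Stand-in for the "highly complex calculation".
    return principal * (1 + rate) ** years

def test_result_is_a_number():
    # Executes every line, so coverage reports 100%...
    assert isinstance(compound_interest(100.0, 0.05, 10), float)

def test_result_is_correct():
    # ...but only this assertion would catch a wrong formula.
    assert compound_interest(100.0, 0.05, 10) == pytest.approx(162.889, abs=0.001)
```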

Metrics are a nice sanity check to see if we aren't doing anything really strange, and as a result, we do use them. They are, however, not the measure of whether we're writing good quality code!

Is everything important?

We always write tests for everything, but we differentiate between scenarios

We believe that every part of Easy LMS is essential, as it wouldn't be the same without it! But, unfortunately, we are still bound by a finite amount of time, so it is not reasonable to test everything exhaustively. We always need to make a trade-off about how in-depth we want to test something. We write tests for everything, but we differentiate between scenarios. In some cases, simple tests like the ones we mentioned before are enough, but if something is mission-critical, we go even further!

But what if something is mission-critical?

Certain parts of the system are mission-critical. For example, if no one can log in anymore, there's not much you would be able to do with Easy LMS. Even though our automated tests cover the login code, we would still like to try logging in every time before we release a new feature. Performing this validation by hand would only take a couple of minutes per release. Of course, we could do that, but the more things we deem necessary, the more minutes we would need to spend, and this adds up.

Fortunately, we can go one step further. The tests we have mentioned so far test the code, but we also have acceptance tests. These interact with the website the same way you would in your browser, allowing us to automate away some of the QA steps we would otherwise need to take! This means we can use them to test an entire feature end-to-end. To refer back to our previous example, we no longer need to log in by hand every time to make sure it works. If it doesn't, the acceptance test will fail!
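To give a flavor of what such an acceptance test can look like, here is a minimal sketch using Playwright for Python, one of several browser-automation tools (not necessarily the one we use); the URL, selectors, and credentials are invented placeholders:

```python
from playwright.sync_api import sync_playwright

def test_user_can_log_in():
    # Drives a real browser through the login flow, just as a user would.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.test/login")
        page.fill("#email", "qa@example.test")
        page.fill("#password", "a-test-password")
        page.click("button[type=submit]")
        # The dashboard heading only appears after a successful login.
        page.wait_for_selector("h1:has-text('Dashboard')")
        browser.close()
```

All these methods help us catch many bugs before they reach you, but we're all human after all 😉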