
We’ve already discussed what has been going on in the back-end of the Labster platform, an exciting project that glimpses into the future of education. Rollout has been helping Labster with several senior back-end and front-end engineers, as well as QA testers (11 IT professionals altogether). This time we share the experience of Judit, a member of the testing team who played a crucial role in safeguarding the complex transition of the Labster web portal.

This article offers insight into how you should think about software testing, what hidden value a good testing setup can generate for your business, and what makes a great working environment for software developers.

Judit joined the Labster team when the development of the portal started, and stayed all the way through the release.

She is a double major in mechanical and transportation engineering who got into software testing right after college. She was hired to test the software of automated railway control systems, so she learned the basics in an environment where hardcore, bulletproof safety is the lowest acceptable standard. After that she went to work at a Hungarian unicorn startup that produced navigation software that became a global hit.

She got into the Rollout fold with almost 10 years of experience in software testing.

She joined the Labster project in Q3 of 2021, when they were preparing for a switch to a new web portal. A process like this requires long testing procedures before and after. This wasn’t only a design change: Labster changed the way they handle data as well.

Judit’s main responsibilities were:

– building test cases

– picking the testing scenarios that could be automated

– writing and managing the documentation

– manual testing, based on the tickets in an Agile system

– working together with automation testers to identify edge cases and execute the test plans.

When should you test manually?

Automation is something every company is chasing now. Processes need to be digitalised, and once they are online, they should be automated as much as possible. In the case of testing, this is not so straightforward. Ultimately you want to automate as much as you can, but you often can’t get away without at least some manual testing.

Labster was the first project in Judit’s career where testing in Agile really felt well managed. But how come? Why is this so hard?

Manual testing, in her experience, usually devolves into a biweekly waterfall model. The main challenge of manual testing in Agile is dividing the project into subtasks that are small enough. Usually, new developments arrive in batches that are too big to test at once. Slowly but surely, testing almost always falls behind.

In the Labster web project, front-end and back-end development were separated, so the testers were able to prepare in time. The sprint only featured truly doable testing tasks.

In Agile, tickets are generated in every sprint. Judit thinks that in an ideal world you could set up automated testing systems as soon as the new code is done, but usually this is not possible. The new iterations need to be tested manually first. And even before that, you need to know exactly what you want to test.

Sometimes it is evident: here is a new button, you push it. It’s also clear that in the case of a web UI, you need to take a look at it with several devices and browsers, before you can automate the process for further testing.

Sometimes though, it’s not so clear. Even during a simple registration process, lots of test cases can occur. A few examples:

– the email address does not exist

– the email address has a bad format

– the email address is too long

– unsupported special characters in the form

It’s impossible to know everything in advance; that’s why you need preparation before automation.
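Once cases like the ones above have been identified manually, they become easy to automate. Here is a minimal sketch of how that checklist could turn into an automated test; the validator, the regex, and the length limit are hypothetical stand-ins for whatever the registration form actually enforces.

```python
import re

# Hypothetical stand-in for the registration form's email check.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")
MAX_LENGTH = 254  # a common practical limit for email addresses

def is_valid_email(address: str) -> bool:
    """Reject overlong, malformed, or unsupported-character addresses."""
    if len(address) > MAX_LENGTH:
        return False
    return EMAIL_RE.fullmatch(address) is not None

# One automated case per item on the checklist above
cases = {
    "user@example.com": True,            # well-formed baseline
    "not-an-address": False,             # bad format
    "a" * 250 + "@example.com": False,   # too long
    "sp ace@example.com": False,         # unsupported character
}
for address, expected in cases.items():
    assert is_valid_email(address) == expected
```

The point is not the validator itself, but that each row of the manually prepared checklist maps directly to one automated case.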

A clear example of when manual testing is superior is identifying fringe cases: situations that have a very low chance of occurring. You don’t want to automate the tests for these, because it would take more work than testing them manually, case by case.

Another situation where automation is not feasible comes down to timing: when changes will come too quickly, there is no reason to automate testing on the current build.

What are the major types of manual software testing?

It’s almost always worth it to include a manual testing phase. Manual testers have usually studied testing theory, which is not always the case with test automation experts. Experienced manual testers can have a deeper and more well-rounded view of the project.

Testing tasks are gathered into ‘sets’, and these sets are defined by which development phase they are used in.

The smoke set is a bundle of very minimal, very fast tests that perform the most important system checks. The outcome of these tests decides whether the current build is even usable. This can save you from situations like scheduling a 3-day test, encountering a major issue in its first hour, and being forced to reschedule completely.
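The gating logic of a smoke set can be sketched in a few lines: fast checks run in order, and the first failure rejects the build immediately. The check names and the lambda probes below are hypothetical placeholders for real HTTP or UI probes.

```python
def run_smoke_set(checks):
    """Run fast checks in order; return (True, None) if all pass,
    otherwise (False, name_of_first_failing_check)."""
    for name, check in checks:
        if not check():
            return False, name  # build unusable; skip the long test plan
    return True, None

# Hypothetical probes standing in for real system checks
checks = [
    ("login page responds", lambda: True),
    ("database reachable", lambda: True),
    ("simulation list loads", lambda: False),  # imagine this probe failing
]
usable, failed = run_smoke_set(checks)
# usable is False and failed names the broken check,
# so the 3-day test plan never gets scheduled on this build
```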

The regression set is bigger. It is used to check how different elements cooperate, and to identify the unexpected ripple effects that can occur when you change anything in a complex system. Sometimes these effects can be beneficial, but more often than not, they present a new challenge.

There is the performance set, which gathers non-functional tests together. A typical one of these is checking load times.
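A load-time check boils down to timing an operation against a budget. A minimal sketch, where the fetch function and the 2-second budget are hypothetical:

```python
import time

def check_load_time(fetch, budget_seconds=2.0):
    """Time `fetch` and report whether it stayed within budget."""
    start = time.perf_counter()
    fetch()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget_seconds

# Stand-in for a real page load that takes roughly 50 ms
elapsed, within_budget = check_load_time(lambda: time.sleep(0.05))
```

In a real suite, `fetch` would be a browser navigation or an HTTP request, and the budget would come from the project’s non-functional requirements.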

And Judit performed acceptance sets, which emulate longer, complex user stories. You choose an entry point where a typical user would start using Labster and go through the whole user experience with different roles and levels of access, such as teachers, students, or outsiders, whose experiences can vary greatly.
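One way to picture such role-driven scenarios is the same journey replayed per role. The role names, permissions, and steps below are hypothetical illustrations, not Labster’s actual access model.

```python
# Hypothetical roles and permissions for an acceptance scenario
ROLE_PERMISSIONS = {
    "teacher": {"can_assign": True},
    "student": {"can_assign": False},
    "outsider": None,  # no account: the journey ends at the login wall
}

def user_journey(role):
    """Return the ordered steps this role walks through."""
    perms = ROLE_PERMISSIONS[role]
    if perms is None:
        return ["open portal", "hit login wall"]
    steps = ["log in", "open course", "launch simulation"]
    if perms["can_assign"]:
        steps.append("assign simulation to class")
    steps.append("log out")
    return steps
```

An acceptance set would then walk each role’s journey end to end and verify every step, rather than testing features in isolation.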

As more and more functions are added, test cases need to be modified as well; we call this ‘maintenance’. In both automated and manual tests, the tester must recognize when a scenario should be changed. Smoke tests have a tendency to grow too big, taking up too much time if left unchecked.

How to make sure the tests run well?

When a tester joins a project, one of their first tasks will be to familiarize themselves with the currently automated tests, because they will need to keep those in good condition and up-to-date. According to Judit, the tester should be able to understand what the test does just by its title, without looking at any code.

Another important thing is harmonising cooperation between the developer and the tester. One of the key responsibilities of the tester is to identify what to test and to create checklists and test scenarios. If project management supports this process, developers can also weigh in and suggest test cases to the tester.

Team spirit can and should be kindled in remote teams as well

We already mentioned that the Labster team showed very advanced project management skills, using some interesting methods.

Judit felt that it’s important to talk about the human side of this project. Small things, which, in her opinion, push forward the cooperation between team members a lot:

The team leaders paid close attention to how overtime hours were trending. If hours started ramping up, they warned the developers themselves and stepped in to change the schedule and lower the burden on individual team members.

They organised online team chats, not just meetings, especially for team building purposes. The goal was to help team members get to know each other more, personally, to overcome the distance of online work.

It was also a nice touch that after the launch, Labster sent small physical presents to everyone as a token of appreciation.

Things like this build the team spirit and commitment immensely, according to Judit.

This is just one more reason we are happy to take part in the Labster story.

If you are interested in more stories from the strange world of IT, follow our Medium!

We share everything we learn about the latest industry insights, the many aspects of the remote lifestyle, the key challenges of project management, and we dive deep into the diaries of our developers!

You keep your software up to date, right? Stay informed about IT & remote work news by clicking the link: https://email.rolloutit.net

Check Rollout IT among the best software testing companies here: https://www.designrush.com/agency/software-development/software-testing

