Web front-end testing: here’s how to do it well!

We’ve already discussed what has been going on in the back end of the Labster platform, an exciting project that offers a glimpse into the future of education. Rollout has been helping Labster with several senior back-end and front-end engineers and QA testers (11 IT professionals altogether). This time we share the experience of Judit, a member of the testing team who played a crucial role in safeguarding the complex transition of the Labster web portal.

This article offers insight into how you should think about software testing, what hidden value a good testing setup can generate for your business, and what makes a great working environment for software developers.

Judit joined the Labster team when development of the portal started and stayed all the way through the release.

She holds a double degree in mechanical and transportation engineering and got into software testing right after college. Her first job was testing the software of automated railway control systems, so she learned the basics in an environment where hardcore, bulletproof safety is the minimum standard. After that she worked at a Hungarian unicorn startup that produced navigation software which became a global hit.

She came into the Rollout fold with almost 10 years of software testing experience.

She joined the Labster project in Q3 2021, when the company was preparing to switch to a new web portal. A process like this requires long testing procedures both before and after the switch. And it wasn’t only a design change: Labster changed the way it handles data as well.

Judit’s main responsibilities were:

– building test cases

– picking the testing scenarios that could be automated

– writing and managing the documentation

– manual testing, based on the tickets in an Agile system

– working together with test automation engineers to identify edge cases and execute the test plans.

When should you test manually?

Automation is something every company is chasing now. Processes need to be digitalised, and once they are online, they should be automated as much as possible. In the case of testing, this is not so straightforward. Ultimately you want to automate as much as you can, but you can rarely get away without at least some manual testing.

Labster was Judit’s first project where testing in Agile truly felt well managed. But how come? Why is this so hard to get right?

In her experience, manual testing usually devolves into a biweekly waterfall model. The main challenge of manual testing in Agile is dividing the project into subtasks that are small enough. New developments usually arrive in batches that are too big to test at once, so slowly but surely, testing almost always falls behind.

In the Labster web project, front-end and back-end development were separated, so the testers were able to prepare in time. Each sprint only featured testing tasks that were truly doable.

In Agile, tickets are generated in every sprint. Judit thinks that in an ideal world you could set up automated tests right when the new code is done, but usually this is not possible. New iterations need to be tested manually first. And even before that, you need to know exactly what you want to test.

Sometimes it is evident: here is a new button, you push it. It is also clear that in the case of a web UI, you need to look at it on several devices and browsers before you can automate the process for further testing.
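
Once that manual pass is done, the same check is a natural candidate for scripting. As a rough sketch of what the automated version could look like, using Playwright for Python (an assumed tool; the article does not name Labster’s framework, and the URL is a placeholder):

```python
# A minimal cross-browser sketch using Playwright for Python (assumed
# tooling; the article does not say which framework Labster used).
from playwright.sync_api import sync_playwright

def check_page_across_browsers(url: str) -> None:
    with sync_playwright() as p:
        # Repeat the same basic check in all three engines Playwright bundles.
        for browser_type in (p.chromium, p.firefox, p.webkit):
            browser = browser_type.launch()
            page = browser.new_page()
            page.goto(url)
            # The assertion mirrors what a manual tester would eyeball first.
            assert page.title() != "", f"Empty title in {browser_type.name}"
            browser.close()

check_page_across_browsers("https://portal.example.com")  # placeholder URL
```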

Sometimes, though, it is not so clear. Even a simple registration process can produce lots of test cases. A few examples:

– email address does not exist

– email address is in an invalid format

– email address is too long

– unsupported special characters in the form

It is impossible to know everything in advance; that is why you need preparation before automation.
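
Once cases like these are mapped out, many of them translate naturally into a single parametrized test. A minimal sketch in Python with pytest; `validate_email` here is a hypothetical stand-in for the application’s real validation logic:

```python
# Sketch of parametrized registration checks with pytest. The
# validate_email helper below is a hypothetical stand-in, included
# only so the sketch runs on its own.
import re

import pytest

ALLOWED = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def validate_email(address: str) -> bool:
    return len(address) <= 254 and ALLOWED.fullmatch(address) is not None

@pytest.mark.parametrize("address,expected", [
    ("teacher@school.edu", True),         # happy path
    ("no-at-sign.example.com", False),    # invalid format
    ("a" * 250 + "@example.com", False),  # too long
    ("weird<chars>@example.com", False),  # unsupported special characters
])
def test_registration_email_validation(address, expected):
    assert validate_email(address) == expected
```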

A clear example of where manual testing is superior is fringe cases: situations that have a very low chance of occurring. You don’t want to automate tests for these, because automating them would take more work than testing them manually, case by case.

Another situation where automation is not feasible comes down to timing: there is no reason to automate tests against a build when changes will come too quickly for the automation to pay off.

What are the major types of manual software testing?

It is almost always worth including a manual testing phase. Manual testers have usually studied testing theory, which is not always the case with test automation experts, so experienced manual testers often have a deeper, more well-rounded view of the project.

Testing tasks are gathered into ‘sets’, and these sets are defined by the development phase in which they are used.

The smoke set is a bundle of minimal, very fast tests that perform the most important system checks. Their outcome decides whether the current build is even usable. This can save you from situations like scheduling a three-day test run, then hitting a major issue in the first hour and having to reschedule completely.
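
For a web portal, a smoke set can be as small as a handful of HTTP checks against the most important pages. A minimal sketch in Python with requests and pytest; the URL is a placeholder, not Labster’s:

```python
# Sketch of a tiny smoke set: fast checks that decide whether the
# build is worth testing further at all. BASE_URL is a placeholder.
import requests

BASE_URL = "https://portal.example.com"

def test_homepage_responds():
    response = requests.get(BASE_URL, timeout=10)
    assert response.status_code == 200

def test_login_page_responds():
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200
```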

The regression set is bigger. It checks how different elements work together, to catch the unexpected ripple effects that can occur when you change anything in a complex system. Sometimes these effects are beneficial, but more often than not, they present a new challenge.

Then there is the performance set, which gathers the non-functional tests. A typical example is checking load times.
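
A load-time check can be sketched in the same style. The two-second budget below is an invented threshold, and response time is only a rough proxy for full page-load time:

```python
# Sketch of a simple non-functional check: fail the build if the page
# takes longer than a budgeted time to respond. The 2.0 s threshold is
# an invented example, not a number from the project.
import time

import requests

def test_homepage_response_time():
    start = time.perf_counter()
    response = requests.get("https://portal.example.com", timeout=10)
    elapsed = time.perf_counter() - start
    assert response.status_code == 200
    assert elapsed < 2.0, f"Response took {elapsed:.2f}s, budget is 2.0s"
```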

Finally, Judit performed acceptance sets, which emulate longer, complex user stories. You choose an entry point where a typical user would start using Labster and go through the whole user experience with different roles and levels of access, like teachers, students, or outsiders, since their experiences can vary greatly.
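
Once such a flow has stabilized, it is the kind of scenario browser automation handles well. A hedged sketch of one journey, a teacher logging in and opening their courses, again with Playwright for Python; every URL and selector here is an invented placeholder:

```python
# Sketch of one acceptance journey: a teacher logs in and opens a
# course list. All URLs and selectors are invented placeholders,
# not Labster's real ones.
from playwright.sync_api import sync_playwright

def run_teacher_journey() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://portal.example.com/login")
        page.fill("#email", "teacher@example.com")
        page.fill("#password", "example-password")
        page.click("button[type=submit]")
        # A teacher should land on a dashboard that lists their courses.
        page.wait_for_selector("text=My Courses")
        page.click("text=My Courses")
        assert page.url.endswith("/courses")
        browser.close()

run_teacher_journey()
```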

As more and more functions are added, test cases need to be modified as well; we call this ‘maintenance’. In both automated and manual tests, the tester must recognize when a scenario should change. Smoke sets in particular have a tendency to grow too big, taking up too much time if left unchecked.

How do you make sure the tests run well?

When a tester joins a project, one of their first tasks is to familiarize themselves with the existing automated tests, because they will need to keep those in good condition and up to date. According to Judit, a tester should be able to understand what a test does from its title alone, without looking at any code.
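
A contrived pair of names illustrates the principle; both are invented examples, not tests from the project:

```python
# Invented examples of the naming principle: the intent should be
# readable from the title alone, without opening the body.

# Opaque: says nothing about the scenario or the expected outcome.
def test_login_2():
    ...

# Self-explanatory: scenario and expected outcome are both in the name.
def test_login_with_expired_password_shows_reset_link():
    ...
```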

Another important thing is to harmonise cooperation between developers and testers. One of the tester’s key responsibilities is to identify what to test and create checklists and test scenarios. With project management supporting this process, developers should be able to weigh in and suggest test cases to the tester.

Team spirit can and should be kindled in remote teams as well

We already mentioned that the Labster team showed very advanced project management skills, using some interesting methods.

Judit felt it was important to talk about the human side of this project, too: small things which, in her opinion, do a lot to push cooperation between team members forward.

The team leaders paid close attention to how overtime hours were looking. If hours started ramping up, they warned the developers themselves and stepped in to change the schedule and lower the burden on individual team members.

They organised online team chats, not just meetings, specifically for team-building purposes. The goal was to help team members get to know each other personally and overcome the distance of remote work.

It was also a nice touch that after the launch, Labster sent small physical presents to everyone as a token of appreciation.

Things like this build team spirit and commitment immensely, according to Judit.

This is just one more reason we are happy to take part in the Labster story.

If you are interested in more stories from the strange world of IT, follow our Medium!

We share everything we learn about the latest industry insights, the many aspects of the remote lifestyle, the key challenges of project management, and we dive deep into the diaries of our developers!

You always keep your software up to date, right? Stay informed about IT and remote-work news by clicking the link: https://email.rolloutit.net

Check Rollout IT among the best software testing companies here: https://www.designrush.com/agency/software-development/software-testing

