Web front-end testing: here’s how to do it well!

We’ve already discussed what has been going on in the back-end of the Labster platform, an exciting project that offers a glimpse into the future of education. Rollout has been helping Labster with several senior back-end and front-end engineers and QA testers (11 IT professionals in total). This time we share the experience of Judit, a member of the testing team who played a crucial role in safeguarding the complex transition of the Labster web portal.

This article offers insight into how you should think about software testing, what kind of hidden values could be generated for your business with a good testing setup in place, and what makes a great working environment for software developers.

Judit joined the Labster team when the development of the portal started, and stayed all the way through the release.

She holds a double major in mechanical and transportation engineering and got into software testing right after college. Her first job was testing the software of automated railway control systems, so she learned the basics in an environment where hardcore, bulletproof safety is the baseline. After that she went to work at a Hungarian unicorn startup that produced navigation software that became a global hit.

She got into the Rollout fold with almost 10 years of experience in software testing.

She joined the Labster project in Q3 of 2021, when the team was preparing to switch to a new web portal. A process like this requires long testing procedures before and after the transition. This wasn’t only a design change: Labster changed the way they handle data as well.

Judit’s main responsibilities were:

– building test cases

– picking the testing scenarios that could be automated

– writing and managing the documentation

– manual testing, based on the tickets in an AGILE system

– working together with automated testers to identify edge cases, execute the test plans.

When should you test manually?

Automation is something every company is chasing now. Processes need to be digitalised, and once they are online, they should be automated as much as possible. In the case of testing, this is not so straightforward. Ultimately you want to automate as much as you can, but you often can’t get away without at least some manual testing.

Labster was Judit’s first project where testing in AGILE really felt well managed. But why? Why is this so hard to get right?

Manual testing, in her experience, usually devolves into a biweekly waterfall model. The main challenge of manual testing in AGILE is dividing the project into subtasks that are small enough. Usually, new developments arrive in batches too big to test at once. Slowly but surely, testing almost always falls behind.

In the Labster web project, front-end and back-end development were separated, so the testers were able to prepare in time. The Sprint only featured testing tasks that were truly doable.

In AGILE, tickets are generated in every Sprint. Judit thinks that in an ideal world you could set up automated testing right when the new code is done, but usually this is not possible. New iterations need to be tested manually first. And even before that, you need to know exactly what you want to test.

Sometimes it is evident: here is a new button, you push it. It’s also clear that in the case of a web UI, you need to take a look at it with several devices and browsers, before you can automate the process for further testing.
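Enumerating that coverage explicitly can help. As an illustrative sketch (the device and browser names below are assumptions, not Labster’s actual matrix), the combinations a manual first pass should cover can be generated like this:

```python
from itertools import product

# Hypothetical coverage matrix for a manual UI pass before automation.
DEVICES = ["desktop", "tablet", "phone"]
BROWSERS = ["chrome", "firefox", "safari"]

def build_test_matrix(devices, browsers):
    """Enumerate every device/browser combination the manual pass should cover."""
    return [(d, b) for d, b in product(devices, browsers)]

matrix = build_test_matrix(DEVICES, BROWSERS)
print(len(matrix))  # 3 devices x 3 browsers = 9 combinations
```

Once each combination has been checked by hand and the flaky ones are understood, the same matrix can drive an automated cross-browser run.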

Sometimes, though, it’s not so clear. Even a simple registration process can produce lots of test cases. A few examples:

– email address does not exist

– email address has bad format

– email address too long

– unsupported special characters in the form

It’s impossible to know everything in advance; that’s why you need preparation before automation.
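Once cases like these have been identified manually, the format-related ones can be folded into an automated check. Below is a minimal sketch; the `is_acceptable_email` helper and the simplified regex are hypothetical, and note that the “does not exist” case cannot be caught by a format check at all (it needs a confirmation email or a lookup):

```python
import re

# Deliberately simple email check -- illustrative only; real validation is
# usually delegated to a library or the back-end.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
MAX_EMAIL_LENGTH = 254  # common practical limit from the email RFCs

def is_acceptable_email(email: str) -> bool:
    return len(email) <= MAX_EMAIL_LENGTH and EMAIL_RE.match(email) is not None

# The manually identified cases from the list above, captured as checks:
invalid_cases = {
    "bad format": "judit-at-example.com",
    "too long": "a" * 250 + "@example.com",
    "unsupported characters": "judit @example.com",
}

for name, email in invalid_cases.items():
    assert not is_acceptable_email(email), name
assert is_acceptable_email("judit@example.com")
print("all registration format cases pass")
```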

A clear example of when manual testing is superior is identifying fringe cases: situations that have a very low chance of occurring. You don’t want to automate tests for these, because automation would take more work than testing them manually, case by case.

Another situation where automation is not feasible comes down to timing: there is no reason to automate testing on a build when changes will come too quickly.

What are the major types of manual software testing?

It’s almost always worth including a manual testing phase. Manual testers have usually studied testing theory, which is not always the case for test automation experts. Experienced manual testers can have a deeper and more well-rounded view of the project.

Testing tasks are gathered into ‘sets’, and these sets are defined by which development phase they are used in.

The smoke set is a bundle of very minimal, very fast tests that perform the most important system checks. The outcome of these tests decides whether the current build is even usable. This can save you from situations like scheduling a 3-day test, encountering a major issue in its first hour, and being forced to reschedule completely.
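The fail-fast idea behind a smoke set can be sketched like this; the individual checks are hypothetical stand-ins for real probes against a live build:

```python
# Hypothetical quick checks; in a real suite each would hit the live build.
def login_page_loads():
    return True

def api_health_endpoint_ok():
    return True

SMOKE_SET = [
    ("login page loads", login_page_loads),
    ("API health endpoint responds", api_health_endpoint_ok),
]

def run_smoke_set(checks):
    """Run the minimal checks, stopping at the first failure -- if a smoke
    test fails, the build is not worth testing any further."""
    for name, check in checks:
        if not check():
            return False, name
    return True, None

ok, failed = run_smoke_set(SMOKE_SET)
print("build usable" if ok else f"smoke failure: {failed}")
```

The early return is the whole point: a failing smoke check aborts the run before the longer regression and acceptance sets are even scheduled.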

The regression set is bigger. It is used to check how different elements cooperate, and to identify the unexpected knock-on effects that can occur when you change anything in a complex system. Sometimes these effects can be beneficial, but more often than not, they present a new challenge.

There is the performance set, which gathers non-functional tests together. A typical one of these is checking load times.
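A load-time check of this kind boils down to timing a request against a performance budget. Here is a minimal sketch, with a stubbed fetch standing in for a real HTTP request (the 2-second budget is an assumed example, not Labster’s actual target):

```python
import time

def fetch_page():
    """Stand-in for fetching a page; a real test would issue an HTTP request."""
    time.sleep(0.05)  # simulate 50 ms of work
    return "<html>...</html>"

def measure_load_time(fetch, budget_seconds=2.0):
    """Time a single fetch and compare it against a performance budget."""
    start = time.perf_counter()
    fetch()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget_seconds

elapsed, within_budget = measure_load_time(fetch_page)
print(f"load took {elapsed:.3f}s, within budget: {within_budget}")
```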

Judit also performed acceptance sets, which emulate longer, complex user stories. You choose an entry point where a typical user would start using Labster and go through the whole user experience with different roles and levels of access: teachers, students, or outsiders, whose experiences can vary greatly.
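An acceptance run like this can be modeled as an ordered list of steps that must all succeed. The journey below is a hypothetical teacher story, sketched with stub steps rather than real portal interactions:

```python
# Hypothetical user-journey steps; each returns True when the step succeeds.
def step(name):
    def run():
        return True
    run.step_name = name
    return run

TEACHER_JOURNEY = [
    step("open landing page"),
    step("log in as teacher"),
    step("assign a simulation to a class"),
    step("review student results"),
]

def run_acceptance(journey):
    """Walk a full user story end to end, reporting the first failing step."""
    for s in journey:
        if not s():
            return s.step_name
    return None

failed_step = run_acceptance(TEACHER_JOURNEY)
print("journey completed" if failed_step is None else f"failed at: {failed_step}")
```

A second journey with a student or outsider role would reuse `run_acceptance` with a different step list, which mirrors how the same flow varies by level of access.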

As more and more functions are added, test cases need to be modified as well; we call this ‘maintenance’. In both automated and manual tests, the tester must recognize when a scenario should be changed. Smoke tests have a tendency to grow too big, taking up too much time if left unchecked.

How to make sure the tests run well?

When a tester joins a project, one of their first tasks will be to familiarize themselves with the currently automated tests, because they will need to keep those in good condition and up-to-date. According to Judit, the tester should be able to understand what the test does just by its title, without looking at any code.

Another important thing is to harmonise cooperation between developers and testers. One of the key responsibilities of the tester is to identify what to test and to create checklists and test scenarios. If project management supports this, the developer can also weigh in and suggest test cases to the tester.

Team spirit can and should be kindled in remote teams as well

We already mentioned that the Labster team showed very advanced project management skills, using some interesting methods.

Judit felt it was important to talk about the human side of this project. Small things which, in her opinion, did a lot to push cooperation between team members forward:

The team leaders paid close attention to how overtime hours were trending. If hours started ramping up, they warned the developers themselves and stepped in to change the schedule and lower the burden on individual team members.

They organised online team chats, not just meetings, especially for team-building purposes. The goal was to help team members get to know each other personally, to overcome the distance of remote work.

It was also a nice touch that after the launch, Labster sent small physical presents to everyone as a token of appreciation.

Things like this build the team spirit and commitment immensely, according to Judit.

This is just one more reason we are happy to take part in the Labster story.

If you are interested in more stories from the strange world of IT, follow our Medium!

We share everything we learn about the latest industry insights, the many aspects of the remote lifestyle, the key challenges of project management, and we dive deep into the diaries of our developers!

You keep your software up to date, right? Stay informed about IT and remote work news by clicking the link: https://email.rolloutit.net

Check Rollout IT among the best software testing companies here: https://www.designrush.com/agency/software-development/software-testing

