Real Life Guidelines that Deliver Results

You’re reading the fifth post in a series that’s intended to get you and your teams started on the path to success with your UI test automation projects:

1. Introduction
2. Before You Start
3. People
4. Resources
5. Look Before You Jump (you are here)
6. Automation in the Real World
7. Improving Your Success

Important note: After its completion, this series will be gathered up, updated/polished and published as an eBook. We’ll also have a follow-on webinar to continue the discussion. Interested?


Chapter Five: Look Before You Jump

By this point you should have a clear picture of the business-level problems you're hoping to solve, the team you'll need to build, and the tools and infrastructure you'll need in place. Now it's time to stitch everything together and build some tests.


How's Your Process?

Before you jump, think carefully about your delivery process. I've already laid out a number of things about communication, tooling and infrastructure. Now's the time for you to sit back, as a team, and get serious about how your workflow will run.

Take the time to gather as a group and actually walk a feature through your flow. Figure out where you will include discussions about testing. Use a whiteboard, Post-Its, notepads—whatever it takes to diagram the flow and all the discussions that should happen.


Clarify Expectations

As you walk through those discussions, see if there are any clarifications or modifications to expectations you've already worked on. Are there any new conversations you need to have with stakeholders and sponsors? Does your team have a good grasp on what's expected of them?

Make sure everyone's on the same page regarding initial expectations. Also, ensure everyone knows the expectations will likely be tweaked as you actually roll into your work.


Start With a Pilot

If at all possible, teams should start automation efforts with a pilot, prototype or spike. Regardless of what you call it, carve out an area you can focus on for two weeks of full-time work. The goal of this pilot is to identify and resolve any problems your team may run into during the "regular" work. You'll be getting a feel for where you need to have conversations, what the initial impact on velocity might be, and where to tweak your infrastructure.

As with all pilots, setting the scope is critical. You can't bite off a huge amount of work. The pilot has to be small enough in scope that you can get work done, while discovering whether or not your tooling, process and communication will be effective.

Let's take a proposed feature addition as our pilot. We'll work it through our process and see where things work out or don't. The feature recommends additional related or interesting products to shoppers as they check out with their shopping carts. The cart's built already; we're extending it to include the recommendations as part of the checkout flow.

Disclaimer: Of course this example will have lots of holes in it. It's not fleshed out; it's simplistic. Please cut me some slack here and focus on the conversations around testing.


1. Envisioning: Stakeholders need to understand the costs around testing, both manual and automated. Testers should be giving a high-level, rough idea of what the impacts of testing would be, so that stakeholders can make an informed decision about the total cost of the feature.

"We'll need additional datasets, we'll have to modify the build flow to handle those, we'll need a modest number of new automated scripts, and we'll need to do quite a bit of exploratory testing. I'd guess we'll need two weeks' time, plus support from the DevOps folks for the data work."

Things you might discover at this point:

  • You don't have the right people in the conversations
  • You're spending too much time getting too much detail and too many edge cases
  • You may not be able to get as much support from your infrastructure folks as you thought

2. Early UX Design: Testers should work with the UI/UX members to ensure everyone knows how to keep the UI as testable as possible.

"I'll need good IDs on these four areas of this screen so I can write good locators. What asynchronous actions will be happening on this page so that I can figure out the right conditional waits?"

Things you might discover at this point:

  • Your UI may have technical limitations on what you can do to make it testable
  • You may not have access to all the UI (think legacy or third-party systems)
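
To make that UX conversation concrete, here's a minimal sketch of the kind of locator and wait code those agreements enable. It assumes Python with Selenium; the element IDs and URL are hypothetical stand-ins, not anything from the real feature.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.test/cart/checkout")  # placeholder URL

# A stable ID agreed on with the UX team makes for a simple, resilient locator.
checkout_button = driver.find_element(By.ID, "checkout-button")

# Recommendations load asynchronously, so wait on a condition
# instead of sleeping for a fixed interval.
recommendations = WebDriverWait(driver, timeout=10).until(
    EC.visibility_of_element_located((By.ID, "recommendations-panel"))
)
```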

3. Early Architecture Design: Testers need to understand the high-level architecture to test effectively. This is also a great time to start discussions with developers around what testing will happen at the unit or integration level, and what tests will happen at the UI level. Discussions are generally still high-level and conceptual.

"The recommendation engine is a separate component that handles determining recommendations. It is sending results back to the cart via web services, right? Will you be testing the recommendation logic via those services? If so, then I can just write tests to ensure we're getting recommendations pulled back and rendered on the UI. I won't have to write tests to check that all the various combinations are creating the proper recommendations. I can also help you with those web service tests!"

Things you might discover at this point:

  • Parts of the system may not adapt easily to automated testing. You may be relying on more UI tests than you would prefer—but you do what you can with the tools you have at hand
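
As a sketch of what "testing via those services" might look like, here's a hypothetical service-level check. It assumes Python with requests and pytest, plus an invented /api/recommendations endpoint and product ID:

```python
import requests

BASE_URL = "https://example.test"  # placeholder environment URL

def test_recommendations_returned_for_known_product():
    # Exercise the recommendation logic below the UI; the UI tests
    # then only need to confirm results get rendered.
    response = requests.get(
        f"{BASE_URL}/api/recommendations",
        params={"product_id": "stir-fry-kit"},  # invented product ID
    )
    assert response.status_code == 200
    assert len(response.json()) > 0  # some recommendation came back
```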

4. Starting Work on the Feature: Now's the time for the team to get into details. Discuss the specifics of the data you'll need. Talk about any backing APIs you will require. What edge cases and combinations get tested at which levels? This is also when you can start writing the scaffolding for your tests, even before the UI is built—you can do that because you talked with the UX/UI people early in the game, remember?

"Can we work together to build a method to help me set parameters for recommendations? I could then use that as a setup for UI tests. I'll need some way to load these specific products into the recommendation database. Can I help you build up combinatorial/pairwise tests to run through the web service tests you're building? That way, we could cut the number of iterations you'd need to write, and I'd be able to focus on the main flows with the UI tests."

Things you might discover at this point:

  • Time constraints may force you to make hard choices about what parts of the feature to work on
  • Code and architecture constraints may force you to do more UI testing than you'd prefer
  • You may not have easy access to helper APIs, which may mean more UI code than you'd prefer
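
Here's a rough sketch of the setup helper and combinatorial cases discussed above. The seeding endpoint is purely hypothetical, and I'm generating a full cartesian product for brevity; a true pairwise tool would trim the list further.

```python
import itertools
import requests

BASE_URL = "https://example.test"  # placeholder

def seed_recommendations(product_id, recommended_ids):
    """Load specific products into the recommendation store ahead of a UI test."""
    response = requests.post(
        f"{BASE_URL}/api/test-support/recommendations",  # hypothetical endpoint
        json={"product": product_id, "recommend": recommended_ids},
    )
    response.raise_for_status()

# Enumerate category/price-band combinations for the service-level tests
# rather than hand-writing every case.
categories = ["cooking", "automotive", "outdoors"]
price_bands = ["budget", "mid", "premium"]
service_test_cases = list(itertools.product(categories, price_bands))
```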

5. Working the Feature, Iterating on Feedback: Hopefully you're at a point where development and testing are happening nearly concurrently. This means testers can get feedback to the developers very, very quickly, which enables teams to fix issues EARLY in the game. This feedback happens best when teams are communicating directly with each other, not just relying on bug reports to percolate through the process.

"Hey, I realized we'd missed a couple edge cases with our test data. When I added them in, I found we're recommending motor oil instead of cooking oil when someone's buying a stir fry kit. I don't think that's what we meant. We need to modify the recommendation logic to pay better attention to an item's category."

Things you might discover at this point:

  • Your team's skills may not be at the level you need for effective automation. Take a tangent to shore up skills as needed, and make sure you're adjusting schedule/velocity predictions to account for the slower pace while the team learns.
  • It's often hard for testers to keep pace with developers. You may need more testers, or you may need more collaboration between your developers and testers.
  • Some members of the team may not value the fast feedback and increased collaboration. It's a culture change that's hard to adapt to, but it's worth the effort.
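
One way to lock in an edge case like the motor oil/cooking oil find is a small data-driven test. This sketch assumes pytest and the invented recommendations endpoint from earlier:

```python
import pytest
import requests

BASE_URL = "https://example.test"  # placeholder

@pytest.mark.parametrize("product_id, expected_category", [
    ("stir-fry-kit", "cooking"),   # the case that surfaced motor oil
    ("oil-filter", "automotive"),
])
def test_recommendations_stay_in_category(product_id, expected_category):
    response = requests.get(
        f"{BASE_URL}/api/recommendations",
        params={"product_id": product_id},
    )
    response.raise_for_status()
    # Every recommendation should match the source product's category.
    for item in response.json():
        assert item["category"] == expected_category
```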

6. Rolling Test Suites into the Build Process: You should be adding all your automated tests (unit, integration, UI) into your build process. This means configuring your CI builds and scheduled jobs to run your tests, as appropriate. Ensure your team has access to the reports they need.

"OK, so we're ready to add our integration and UI tests to the regularly scheduled jobs. All the UI and most of the integration tests are too slow to add to the CI build, but I think we should add these two relatively fast integration tests to the CI build to ensure we've got this part locked down. These five UI tests should go in the hourly UI job, and the remainder need to get added to the nightly job. Jane the stakeholder will see them showing up in her overall trend report, and we team members will see them in our detailed reports."

Things you might discover at this point:

  • You may need to expand your infrastructure to support the additional testing. More VMs, bigger build servers and so on
  • You will likely need to adapt reports for all your data consumers—stakeholders, business sponsors, PMs and others
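
If you happen to be in pytest, one way to carve suites up like that is with markers, so each job selects only its own subset. The marker names here are invented for illustration:

```python
import pytest

@pytest.mark.ci       # fast enough for the per-commit CI build
def test_recommendation_service_smoke():
    ...

@pytest.mark.hourly   # one of the handful of key UI flows
def test_checkout_shows_recommendations():
    ...

@pytest.mark.nightly  # the slower remainder
def test_recommendations_full_regression():
    ...
```

After registering those markers in your pytest configuration, the CI build would run `pytest -m ci`, the hourly job `pytest -m hourly`, and the nightly job `pytest -m nightly`.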

7. Maintaining the Suites: Your automated test suites are a living, breathing creation, metaphorically speaking. You're going to have to spend time on their care and feeding. You'll need to fix tests when they break (and take away lessons learned), update them as the systems change, and refactor or outright re-architect them on occasion.

"We had two tests break last night due to changes in the helper APIs. That's roughly four hours to fix. We also had another four tests break due to changes in the checkout workflow. We think that's a day's work. Finally, we think we've got some duplication in a number of tests around the cart and recommendation engine. We want to take a half-day to pore over the tests and weed out any that are unnecessary."

Things you might discover at this point:

  • Your system may be more brittle than you expected—small changes may break multiple tests
  • Your test suites may be brittle; learn to keep them as flexible and concise as possible
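
A common way to keep UI suites flexible is the Page Object pattern: when the checkout workflow changes, you update one class instead of every test that touches the page. A minimal sketch, reusing the hypothetical IDs and selectors from earlier:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class CheckoutPage:
    """Wraps the checkout screen so locator changes happen in one place."""

    def __init__(self, driver):
        self.driver = driver

    def recommendations(self):
        # Wait for the async panel, then return the rendered items.
        panel = WebDriverWait(self.driver, timeout=10).until(
            EC.visibility_of_element_located((By.ID, "recommendations-panel"))
        )
        return panel.find_elements(By.CSS_SELECTOR, ".recommendation-item")
```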

Learning from the Pilot

A pilot project is a gift—seriously! Dedicating time to trying out significant changes gives you the best possible chance to succeed.

Step back and have the entire team evaluate how things went. Some questions you might consider:

  • Did the pilot appear useful in helping solve the business problems you identified up-front?
  • Does the team have the right skills and ability to become adept at the technical aspects of the tools?
  • Can your build/deploy process support the automation toolset?
  • Does the increased communication help with the process, or is it too much of a cultural hindrance?

These are hard questions, but they're critical. Asking them gives your team the best chance to figure out whether you're on the wrong track and need to completely re-assess, or whether you're on the right track and just need adjustments to make everyone successful.

What do you think of the pilot evaluation process we've laid out here? Have you tried something similar yourselves? Let us know in the comments.

Next up: Chapter Six


About the Author

Jim Holmes

Jim is an Executive Consultant at Pillar Technology. He is also the owner of Guidepost Systems. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from startups to Fortune 100 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.
