(or how we got our CI builds green again)
Everyone (or at least almost everyone I know) writing Selenium or WebDriver tests knows it's not easy to keep all of these tests stable. Most of the time they run perfectly fine on your development machine, only to start failing after a while in your CI build (and I'm not talking about regressions here).
I've felt this pain in quite a few projects already, and it's not always easy to find out what's causing your tests to fail. On my current project we are once again dealing with pingpong-playing WebDriver tests (for those unfamiliar with the term: tests that sometimes pass and sometimes don't).
Of course your tests should be reproducible and should consistently pass or fail, and that should be your first step in solving the problem: find the cause of the instability in your test and fix it. Unfortunately, when WebDriver tests are involved this can sometimes be hard to achieve. The WebDriver framework relies heavily on references to elements that can become stale, and when AJAX calls are involved this can become a real headache. I won't go too deep into all the possible causes of instability, since that is not the main subject of this blog post.
So how do you make your build less dependent on some hiccup in your CI system? On our current project we didn't have an auto-commit for about three months because of pingponging WebDriver tests. Our 'solution' was: when a build had run and some tests failed, run them locally, and if they pass, tag the build manually. It cost us a lot of time that could have been better spent on more fun stuff.
While trying to find a solution for this problem, we had an idea: maybe we could just ignore the pingpong-playing WebDriver tests by excluding them from the main build and running them separately. That way our main build would no longer depend on the vagaries of those tests and we would get auto-commits again, but we would introduce the risk of missing a pingpong test that, this time, fails for the right reasons. If you deploy this strategy, you could ask yourself whether you shouldn't just throw the pingpong tests away entirely, since you would be ignoring them completely anyway.
Then we came up with another solution, which turned out to be our salvation. What is this magic drug we're using? It's quite simple, actually: we created our own (extended) version of the JUnit runner we used before and let it retry pingponging tests. To accomplish this, we mark pingpong tests with a @PingPong annotation, which has a maxNumberOfRetries property defining how many times a test should be retried (the default is one). The @PingPong annotation can be used at both method and class level, to mark a single test or all tests in a test class as playing pingpong.
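The annotation itself is nothing more than a marker with a single property. A minimal sketch of what it could look like (the retention and target declarations here are assumptions, based on how we use it from the runner):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)             // the runner reads it via reflection at runtime
@Target({ElementType.METHOD, ElementType.TYPE}) // usable on a single test or on a whole test class
public @interface PingPong {

    // How many times a failing test may be retried before it is reported as failed.
    int maxNumberOfRetries() default 1;
}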
An example of a test class using the @PingPong annotation looks like this.
@RunWith(MyVeryOwnTestRunnerRetryingPingPongers.class)
public class MyTest {

    @PingPong(maxNumberOfRetries = 2)
    @Test
    public void iWillSucceedWhenSuccessWithinFirstThreeTries() {
        // ...
    }
}
With MyVeryOwnTestRunnerRetryingPingPongers defined like this.
public class MyVeryOwnTestRunnerRetryingPingPongers extends SomeTestRunner implements StatementFactory {

    public MyVeryOwnTestRunnerRetryingPingPongers(Class aClass) throws InitializationError {
        super(aClass);
    }

    @Override
    protected Statement methodBlock(FrameworkMethod frameworkMethod) {
        return new StatementWrapperForRetryingPingPongers(frameworkMethod, this);
    }

    @Override
    public Statement createStatement(FrameworkMethod frameworkMethod) {
        return super.methodBlock(frameworkMethod);
    }
}
You still need the implementation of StatementWrapperForRetryingPingPongers. You can find this one here.
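If you just want the gist, here is a minimal sketch of what such a wrapper could look like (the actual implementation linked above may differ in the details). It looks up the @PingPong annotation on the method or, failing that, on the class, and keeps evaluating a freshly created statement until the test passes or the retries are used up:

import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

// Assumes a small StatementFactory interface, implemented by the runner above:
// public interface StatementFactory {
//     Statement createStatement(FrameworkMethod frameworkMethod);
// }

public class StatementWrapperForRetryingPingPongers extends Statement {

    private final FrameworkMethod frameworkMethod;
    private final StatementFactory statementFactory;

    public StatementWrapperForRetryingPingPongers(FrameworkMethod frameworkMethod,
                                                  StatementFactory statementFactory) {
        this.frameworkMethod = frameworkMethod;
        this.statementFactory = statementFactory;
    }

    @Override
    public void evaluate() throws Throwable {
        int attempts = 1 + maxNumberOfRetries();
        Throwable lastFailure = null;
        for (int i = 0; i < attempts; i++) {
            try {
                // Build a fresh statement for every attempt, so @Before/@After run again.
                statementFactory.createStatement(frameworkMethod).evaluate();
                return; // the test passed, no retry needed
            } catch (Throwable failure) {
                lastFailure = failure;
            }
        }
        throw lastFailure; // every attempt failed: report the last failure
    }

    private int maxNumberOfRetries() {
        // A method-level @PingPong wins over a class-level one; no annotation means no retries.
        PingPong pingPong = frameworkMethod.getAnnotation(PingPong.class);
        if (pingPong == null) {
            pingPong = frameworkMethod.getMethod().getDeclaringClass().getAnnotation(PingPong.class);
        }
        return pingPong == null ? 0 : pingPong.maxNumberOfRetries();
    }
}

Creating a new statement for every attempt means each retry gets a fresh test instance and a full @Before/@After cycle, which is exactly what you want for WebDriver tests.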
I am aware this is just a painkiller, but it's one that helped us get our builds green again and buys us extra time to fix our unstable pingpong tests more thoroughly.
Please let me know if this post was helpful to you. What do you think about our solution to our instability problem?