Analysing Formula 1 testing performance can be a little like reading tea leaves, only even less accurate.
Last year the consensus among pundits and teams – even the ones involved – was that Ferrari was on top in testing and would go to the season-opening Australian Grand Prix as favourite.
That proved to be misleading, as Mercedes dominated and the top Ferrari finished almost a minute adrift in fourth, while Max Verstappen’s Red Bull proved the leading challenger in the race, finishing third.
So, were the predictions completely wrong?
Well, yes and no. Ferrari’s testing advantage pre-season last year was real. Despite what some claim, this wasn’t a case of just looking at who set the fastest time – Sebastian Vettel – because Lewis Hamilton set a virtually identical time for Mercedes, just 0.003s slower. So a fastest-time reading of testing would have put them neck-and-neck.
Headline times aren’t what matters. Instead, evaluating testing pace is a blend of the real, tangible data combined with a multitude of assumptions concerning run programmes and fuel loads, trackside observations, tyre use and snippets of information gathered from inside the teams. It’s a similar exercise for the teams themselves to evaluate pace, although they have far more data and the number-crunching capacity to do it even more deeply.
The conclusion Ferrari was fastest last year was down to several key things – long-run pace, the pattern of lap times over testing as a whole, the fact its theoretical best lap time was actually faster than the one Vettel posted and the struggles of Mercedes. Technical director James Allison admitted late last season that his team expected to go to Australia and be behind Ferrari – just as Ferrari confessed to being confident it was ahead.
This wasn’t a question of ‘sandbagging’ as some claim – because this isn’t really something teams do in testing. Sure, they don’t all conduct flat-out, ultra-low-fuel qualifying simulations but simply eschewing that is not what’s conventionally called sandbagging.
With testing so heavily restricted, teams have better things to do than indulge in a shady game of cat-and-mouse. Instead, they are following their own programmes, which are not well served by going out and hammering their fastest time ever lower. The objective is to get the car to work as well as it can and no team will suddenly reach into a magic bag of extra performance to bolt onto the car simply because somebody posts a fast lap time.
So what was the reason for the 2019 testing picture being ‘wrong’?
Simple. Not only did Mercedes run the first test with a very basic car aerodynamically, one signed off several months earlier in order to allow the ‘real’ package to be produced with the maximum research and development time, but it also struggled to make it work.
Much of the second test was trying to get this package to work, which Mercedes finally started to crack on the final morning. That led to improved lap times, but even then Mercedes didn’t believe it was ahead. In fact, all of the teams headed to Australia expecting Ferrari to be in front, while Mercedes was quietly confident that any ground lost early in the season would be made up – and then some – over the balance of the campaign.
The feeling that Mercedes wasn’t really in front was kept alive by a troubled weekend for Ferrari in Australia, where it struggled to get the best out of its car. But still the Melbourne result was an apparent reversal of testing and, seemingly, a victory for Mercedes ‘sandbagging’.
The reality was more complicated than that. Mercedes had understood its car, gained on the last day of testing, gained again through data analysis before Australia then got everything right on the weekend. It’s what great teams do.
Testing, by its very nature, is a limited data set, but it’s the best data set available until we get to Melbourne. All analysis is caveated and presented as what might be termed a ‘provisional’ model, a model that will be refined dramatically in Melbourne and continue to be modified and evolve over the rest of the season.
What’s more, sometimes there are things that conditions don’t reveal. The 2019 Haas, for example, had a strong front end and worked very well in cooler conditions. That’s why it went well pre-season and in Melbourne, then things started to go badly wrong in the race in Bahrain.
It will be the same this year, and all performance interpretation should be treated accordingly. But at this stage of the year, it’s the only way to get a feel for the season.
The important thing is to take it for what it is, a signpost, a map of a landscape that’s ever-changing and that doesn’t offer direct observation. Testing is not supposed to be an infallible predictor of what is to come, but it does point you in a direction. And sometimes the deviation from that direction is part of the story of the season.
That’s the fun of testing. And just because it isn’t – and cannot be – definitively accurate doesn’t make it any less worth doing.