IPF (Individual Performance Factor) is the output from Shell’s performance ranking system.
GPA and IDP are the documents that record tasks, targets and development goals.
Most things in the world of work have changed radically since I joined the workforce. Computers and communications have changed so much, and so have social norms such as deference to authority or experience, not to mention the multi-cultural revolution. We even used to go down the pub on Friday lunchtimes and get properly sozzled - those were the days!
Strangely, some of the things that have changed least in 25 years concern HR systems. The acronyms may have changed, but we always had GPAs, IDPs and staff ranking. The latter is one of the most abhorred of all Shell processes - I have listened to rant after rant from the lowly and the mighty, from the victims and the perpetrators - but it is still there, and largely unchanged.
Perhaps that is because it is a good system. It is obvious that the company has to put limits around the amount it spends on salaries and bonuses. In dividing up the pot, it is also pretty clear that not everyone should get the same, and that individual performance should have some bearing. Furthermore, Jeroen van der Veer can't be expected to have personal knowledge of 80,000 people. Finally, targets cannot always be hard, deterministic, measurable.
Think about it. Given all that, what else can we do but have some sort of relative ranking sessions? We have to divide people into blocks big enough to allow enough difference to emerge, yet small enough that decision makers can have oversight of the population to be ranked. And, basically, you have to limit the pot available to each of those populations by imposing an average outcome. Finally, you need a few rules about the spread, and benchmarks to ensure some consistency.
Sounds just like the system we have. And, with tweaks, the system we have had since I first attended a ranking session in 1990.
In case you don't know, a typical ranked population is between 30 and 80 people. Their respective bosses and bosses' bosses, with an HR facilitator and maybe a few others, sit down for 2-3 hours and have an almighty row about who should get what score. The mean score is controlled at just over 1. While scores down to zero are allowed, in practice anything below 0.7 means disaster, and 0.7 itself is usually a tough "message". Scores up to 1.5 are permitted, and a good ranking session will reward the top 5% or so with 1.3s, 1.4s and 1.5s. The rest have to fit between 0.8 (a small "message") and 1.2. The skew towards the higher numbers means that the mode of the distribution should be 0.9, and, even though it is below "average", 0.9 should mean a good performance hitting all targets and even achieving more, and should never be associated with anything negative...apart from the simple fact that 0.9 is not 1.1 or 1.2.
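To make the arithmetic concrete, here is a minimal sketch in Python of the constraints just described. It is my own illustration, not any real HR tool, and the 1.05 cap and the 5% top slice are assumptions standing in for the "just over 1" mean and the "top 5% or so".

```python
# A minimal sketch of the spread rules described above; the 1.05 cap and the
# 5% top slice are my own illustrative numbers, not official figures.

def check_ranking(scores, cap=1.05, top_share=0.05):
    """Flag obvious problems with a proposed set of IPF scores."""
    problems = []
    if any(s < 0 or s > 1.5 for s in scores):
        problems.append("score outside the allowed 0-1.5 range")
    mean = sum(scores) / len(scores)
    if mean > cap:
        problems.append(f"mean {mean:.2f} exceeds the cap of {cap}")
    top = sum(1 for s in scores if s >= 1.3)
    if top > max(1, round(top_share * len(scores))):
        problems.append("too many scores of 1.3 and above")
    return mean, problems

# A 40-person population with a mode of 0.9 and a couple of high flyers.
proposed = [0.9] * 18 + [1.0] * 10 + [1.1] * 6 + [1.2] * 4 + [1.4, 1.5]
mean, problems = check_ranking(proposed)
print(f"mean = {mean:.3f};", problems or "consistent with the rules")
```

Run it and the mean comes out just over 1, which is exactly the shape of distribution a ranking session is steering towards.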
There is some mythology about all of the above and, sadly, some inconsistency in application. Because there can be good reasons for a different type of spread, HR don't impose it strongly, but the overall result is too much inconsistency. There are sessions I've attended where it is almost impossible to get above 1.2 or even 1.1, and others where it is far too easy to get 1.0. Occasionally, I've seen sessions where the more senior people get too many of the higher scores, even though HR track this and try to flag it. It is true too that those doing the sexier or more visible jobs tend to do a bit better.
There is a whole art to managing a ranking session. I've always thought it was one of the most important meetings of the year, both because a lot rides on it for people, so we need to put in the effort to achieve fairness, and because I need to make sure my own team does well enough. Yes, we are all enterprise first and try to get a fair outcome, but it is inevitable (and right) that individual line managers tend to think fair means a bit higher for their staff and a bit lower for others. Think about it: a typical subordinate will rank their own performance at 1.2. Your own assessment in an unconstrained world might be 1.1. But you think that for others too, so in order to meet your salary cap, you probably have to go into the meeting at 1.0. If you have a "good" meeting, you might end up with an outcome of 1.1, but a "bad" meeting might finish at 0.9. That is one heck of a disappointed employee you have just created there! And, by the way, you then have to go back with a message that is probably inconsistent with what you have been saying all year. No wonder we all think that, on balance, a "good" meeting is one where our own staff get slightly better scores.
In my view, a well run ranking session has various features. Managers should have submitted pre-rankings, and these should be close on average to the salary cap. A pre-meet with HR can help with this. Then, in the meeting, the discussion goes JG by JG (not team by team), starting at the junior end. There should be quite a few changes to the submitted starting points during this, as evidence is sought to compare people at the same grade from different teams. As a result, some teams will end up higher than others, but with solid supporting evidence. The line manager's support is critical, but supporting evidence from others is very important.
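As a rough picture of that pre-meet check, the sketch below (team names, cap and tolerance all invented for the example) simply compares each manager's submitted average against the cap:

```python
# An illustrative check of pre-rankings against the salary cap; the teams,
# cap and tolerance are made up for the example, not taken from any real system.

pre_rankings = {
    "Team A": [0.9, 1.0, 1.0, 1.1, 1.2],
    "Team B": [1.1, 1.2, 1.2, 1.3],       # an optimistic manager
    "Team C": [0.8, 0.9, 0.9, 1.0, 1.0],  # a cautious one
}

CAP = 1.05        # assumed target mean for the whole ranked population
TOLERANCE = 0.05  # how far a team's average can drift before it needs discussion

for team, scores in pre_rankings.items():
    avg = sum(scores) / len(scores)
    note = "" if abs(avg - CAP) <= TOLERANCE else "  <- raise in the pre-meet"
    print(f"{team}: pre-ranking average {avg:.2f}{note}")
```

The optimistic and cautious managers both get flagged, which is the point of the pre-meet: nobody should walk into the room hopelessly above or below the cap.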
I was lucky in my first ranking. I was the raw boss of the West Midlands region in the UK, and before the session I was called by my opposite number in the East Midlands. We would both deny any collusion, but somehow by the end of the call we had some shared tactics about who from each other's team we could support, and with what evidence. Wow, this worked a treat. Thanks Arthur! Over the years, I have gradually taken the senior partner role in these discussions. Is it cheating or just smart management? You decide. All I know is it works, though of course you have to set your own ethical compass around it.
There are other tricks I have learned. Knowing when to shut up, and when to let someone else be derogatory about others - because you know you will pay for it when your own staff get discussed! How to shape the order of discussion so that your "tough" cases are out of the way first. Knowing which of your staff are likely to gravitate up during the discussion and which are at risk of downward pressure, and amending your pre-ranking accordingly.
It is an art all right. Maybe even a dark art. And personally I love it. Except when I lose!
Is it fair? Overwhelmingly yes. Shell managers take these things seriously and are fair people. HR generally moderate well. Even though you never have perfect evidence and we all have plenty of biases, usually we have enough to make a judgement. Nearly always, I can look at the overall ranking at the end and feel good that we had a good process and a fair outcome - albeit with a few lucky ones and a few casualties. It is not perfect - but I don't think we can do much better.
Also be sure that your IPF track record matters. When I'm recruiting, I always look at an applicant's three-year record. A row of 0.8s and 0.9s needs to be checked out, because people can change jobs or be unlucky. But a row of 1.2s doesn't generally lie.
So, finally, a couple of tips. Perform. Perform especially well in December or in the week before the ranking. Make sure you impress a couple of peers of the boss or the boss of the boss. And make sure your boss has been round the block a few times, or at least has a good mate in the East Midlands!