How to Prevent Benchmarking From Going Wrong

Earlier this month, I had the pleasure of attending the Global Diversity Leadership Exchange at the New York Stock Exchange in New York City. During the event, I had an enlightening exchange with a conference attendee from a prominent service organization on the topic of benchmarking.

He shared that his organization focuses only on diversity and inclusion scores that meet or exceed industry benchmarks or norms. Any scores below that mark are viewed as irrelevant, since even the competition failed to reach it.

I found this view troubling. The central problem with using norms as the focus is that the organization ends up striving merely to be “as good as the others.” When this is the case, benchmark data becomes an excuse for inaction and may breed mediocrity over time. Diversity initiatives may be particularly vulnerable to this trap, since they are often set in contentious organizational environments where creating positive change is already challenging.

That said, if an organization truly wants to outdo its competitors, knowing that its score is average or only slightly above average can be a powerful motivator for change. The caveat is that the benchmark data itself must be relevant and of high quality.

Based on a review of current best practices for evaluating benchmarking sources, I offer the following tips to ensure that you use benchmark data wisely. Regardless of what kind of benchmark source is used — consortium (Mayflower Group), population samples (U.S. workforce), convenience sample (company X’s consulting database), or internal norms (department, workgroup, job level, etc.) — the following should be considered:

• The wording between your items and the benchmark items must be nearly identical. For example, “I trust in management” is not equivalent to “My manager is trustworthy.”

• The ordering of items in your survey should be similar to the ordering of items in the benchmark survey. For example, a job satisfaction item placed at the beginning of a survey will score higher than the same item placed at the end, even with the same set of employees.

• To ensure accurate comparison to a benchmark database, request information on its composition — for instance, company type or industry. Other good questions to ask: What criteria were used to select organizations? Are all items based on entire organizations or just one department or location? Is the data for each company a one-time survey or part of an ongoing survey program?

Keep in mind that organizations participating multiple times also tend to have increasing scores over time as they take action to improve their baseline.

• Ensure that the benchmark database consists of current data. Economic trends and major events may account for differences between your data and the benchmarks. However, there are no hard rules for what counts as “current.” For example, one study found no difference in job attitudes between pre-9/11 and post-9/11 responses.

• Make sure that the benchmark data is weighted correctly. For example, if you are comparing your company’s aggregate standing on an overall organizational factor — for instance, service climate — to that of other companies, be sure each benchmarked company is represented by one overall average score. If it isn’t, your employees are effectively being compared to a pooled sample of individual employees from all companies.

This is a problem, because larger companies, contributing more employee data, will unfairly influence the overall average score.
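To make the weighting point concrete, here is a minimal sketch with made-up numbers (a hypothetical large Company A and small Company B) contrasting a pooled employee average with a company-weighted average:

```python
# Hypothetical service-climate scores on a 1-5 scale (illustrative only).
company_a = [4.5] * 900   # 900 employees at a large company
company_b = [2.0] * 100   # 100 employees at a small company

# Pooled average: every employee counts equally, so the larger
# company's responses dominate the benchmark.
pooled = sum(company_a + company_b) / (len(company_a) + len(company_b))

# Company-weighted average: each company contributes exactly one
# mean score, regardless of headcount.
mean_a = sum(company_a) / len(company_a)
mean_b = sum(company_b) / len(company_b)
company_weighted = (mean_a + mean_b) / 2

print(pooled)            # 4.25 -- skewed toward the larger company
print(company_weighted)  # 3.25 -- each company counts once
```

The gap between the two figures is exactly the distortion described above: in the pooled version, the large company’s 900 responses swamp the small company’s 100.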

• Realize that employee motivation differs when responding to different types of surveys, as differences in motivation can impact the results. For example, employees responding to a company survey may have personal agendas, whereas employees responding to a national survey probably do not.

It is no surprise that organizations want to apply benchmarking data to their survey results, and for good reason: norms are essential for determining how “good” or how “bad” one’s scores are on a particular set of items.

Incorporating these suggestions will jump-start current or new benchmarking efforts and ultimately yield success.