This Assessment Pudding Lacks The Proof I Was Promised
I spent an hour on the phone with my buddy Tex yesterday (names have been changed to protect the “innocent”). His employer recently asked him to complete the DISC personality assessment and he really wasn’t sure what the score report was attempting to communicate. Given my background, he asked me to explain a few things.
I am pretty familiar with personality assessments (like the MBTI and the NEO 5, along with a bunch of other non-clinical psychological assessments), but the DISC report threw me for a loop. I dug around the web for a little bit and I must say I was underwhelmed with what I found. Please note, I am not an expert on the DISC; I had never heard of it prior to yesterday, but given the volume of proprietary psychological assessments out there, I can’t say I’m surprised. I will also acknowledge that I did not take the time to engage in a literature review. However, what little I was able to find raised some red flags. Here is what my admittedly non-comprehensive search revealed.
- Very limited psychometric evidence. One website reports reliability data on one sample. That’s it. No evidence for a factor structure is presented. In this case, factor-structure evidence would tell DISC users that there are in fact 4 relatively independent personality variables. Without this evidence, we have no way of knowing if the test assesses a single personality dimension or 12.
- No validity evidence. Is this test appropriate for use in personnel selection, retention or development? I don’t know. The “validity” evidence reported was actually test-retest reliability. Whoops! Elementary mistake, folks. Validity data tell us that the test in question actually measures what it says it does. At minimum, results of the assessment should be correlated with some measure of job performance.
- Get the theory straight. Every assessment is a function of an underlying theory. This theory tells us a number of things, but at minimum it describes the variables involved and how they relate to one another. It is the framework that gives meaning to the numbers reported. In this case, the DISC claims to be a 4-factor model of personality. This is not necessarily a bad thing, though a tremendous amount of research suggests human personality structure is composed of 5, near-universal dimensions (I say near-universal because I am pretty sure we haven’t studied Wookiees or Klingons, though it apparently holds in dogs too). Sadly, some of the marketing materials on the site manage to plot these 4 dimensions onto a single, two-dimensional graph. It would be an impressive feat of mathematical genius, if it weren’t wrong.
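For the non-psychometricians in the room, here is a rough sketch of the kind of evidence I’m asking for. All the data below are simulated (no real DISC scores were harmed), and the variable names are my own invention, but it shows the three checks from the list above: internal-consistency reliability (Cronbach’s alpha), factor structure (eigenvalues of the item correlation matrix), and criterion validity (correlating the scale score with a job-performance measure).

```python
import numpy as np

# Simulated example only -- nothing here is real assessment data.
rng = np.random.default_rng(42)
n_people, n_items = 500, 8

# A single latent trait drives both item responses and job performance.
trait = rng.normal(size=n_people)
items = trait[:, None] + rng.normal(scale=1.0, size=(n_people, n_items))
performance = 0.5 * trait + rng.normal(scale=1.0, size=n_people)

# 1. Internal-consistency reliability: Cronbach's alpha.
total = items.sum(axis=1)
alpha = (n_items / (n_items - 1)) * (
    1 - items.var(axis=0, ddof=1).sum() / total.var(ddof=1)
)

# 2. Factor structure: eigenvalues of the item correlation matrix.
#    One dominant eigenvalue suggests one factor, not four.
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(items.T)))[::-1]

# 3. Criterion validity: does the scale score predict performance?
validity_r = np.corrcoef(total, performance)[0, 1]

print(f"Cronbach's alpha: {alpha:.2f}")
print(f"Top three eigenvalues: {np.round(eigvals[:3], 2)}")
print(f"Validity correlation with job performance: {validity_r:.2f}")
```

A vendor worth your money should be able to hand you numbers like these, computed on a real sample, for every scale they sell, along with the sample sizes and the criterion they validated against.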
Why am I fretting about this test and its flaws? It’s not like this is the first (or sadly, the last) assessment that needs work. It’s probably not even particularly high-stakes. What is really under my skin is that this test may have been used for personnel selection. The best-case scenario here is that this organization is spending a few thousand dollars on a test that yields no value. The CEO probably spent more on an executive-grade mouse pad. Of course, the worst-case scenario is that using this test made this organization vulnerable to pricey litigation. I don’t work for this company, nor am I a shareholder, so what do I care, right?
I care because this poor decision is a symptom of a larger, more serious problem. I can’t begin to count the number of vendors selling assessments and other personnel-related services purported to enhance their client organizations. I am confident my HR buddies will back me up on this one; you all probably shoo us away on a daily basis. We pretty much all market our services on the basis of research and scientific evidence. Sadly, increased exposure to this industry has led me to conclude that it is much easier to say our claims are backed by research than to actually perform said research. Apparently few bother to check, and those that do are confused by impressive-sounding jargon.
It’s a problem, seriously. You wouldn’t buy a microwave that didn’t heat your food, nor would you buy a car that won’t start (unless you like GM products… LOL! I keed!). Why then would you buy a survey, analysis or any other assessment that may not work? That, my friends, is a rhetorical question. You should not.
Poorly researched assessments and interventions are a burden to both clients and vendors. They put our credibility and hence our industry at risk. I feel that every shoddy piece of work reflects on me personally, which is why I get so worked up about this.
So what do we do about this?
- HR clients, demand proof. Don’t be satisfied with claims of research support. Ask to see the research. More importantly, put yourself in a position to understand that information. I am sorry to tell you, but neither a degree in HR or business, nor years of experience in the field, will give you the tools to understand research results. You either have to obtain some decent training in psychological assessment, or you must seek the advice of someone with this background on your decision team. The latter is much easier (my advice is free to my friends and cheap to people I like).
- Service providers, you have to step up your game. You have an ethical and professional responsibility to back up your claims. If any of your marketing materials or sales people refer to research evidence, make sure you actually have it. It is not that hard to do. If you don’t have the expertise on hand, get it. The employment market is still soft and you can probably pick someone up on the cheap. Once you have them, set them on the task and get out of their hair; I have seen too many managers and marketing people constrain the science of their experts.