(The US Department of Labor has published an excellent guide for employers buying tests. It is accessible at
http://www.ipmaac.org/files/ONetasmtguide.pdf#search=%22US%20Department%20of%20Labor%20Assessment%20%22.
But the guide is long and technical and does not include actual examples. Here is a simpler interpretation.)
Step 1: Ensure the Test is relevant to the job
If you are buying a Test to test job applicants, establish that the Test measures a skill or ability required for the job.
To establish whether a Test measures a required skill or ability, study the Job Description and list the skills and abilities it mentions. Then read the Test documentation to see what skills and abilities the Test measures. Choose the Test ONLY if the skill or ability it measures is listed in the Job Description. The assumption here is that the Job Description is well written. If it is not, a Job Analysis will have to be conducted, which is covered in a separate section on Job Analysis.
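The matching exercise above can be sketched as a simple set comparison; the skill names below are hypothetical, not from any real Job Description or Test Manual:

```python
# Sketch: matching the skills a Test claims to measure against the skills
# listed in the Job Description. All skill names here are hypothetical.
job_description_skills = {"c programming", "debugging", "problem solving"}
test_measures = {"c programming", "problem solving"}

relevant = test_measures & job_description_skills    # justifies using the test
irrelevant = test_measures - job_description_skills  # red flags, if any

print("Relevant to the job:", sorted(relevant))
print("Not in the Job Description:", sorted(irrelevant))
```

If `irrelevant` is non-empty, the Test is measuring something the job does not require, and you should question why.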
Step 2: Ask the Test publisher to explain sections of the Test Manual
Test publishers are required to publish a Test Manual. This is a document with sections on what the test measures, how it was created, who the appropriate audience for the test is, the scoring patterns of test takers with different educational or work backgrounds, and the reliability and validity of the test. Ask the test publisher to present the information in the Test Manual in a conversation or in a PPT, and insist on simple language, real-world examples and the business impact of the information provided. Some of the things to watch out for:
- What does the Test measure - a Test of spoken English skills, for instance, can measure grammar, accent, fluency, articulation, rate of speech and so on. It's very important to know whether the test measures only grammar, only accent, or both. This affects Step 1 - whether the test is relevant to the job. For instance, choosing a spoken English test that does not measure accent may be disastrous for a voice BPO where employees are expected to interact with international customers throughout the work day.
- How was it created - there are several technical details that a Test publisher has to keep in mind while creating tests. In general a good Test publisher creates Tests against defined competencies (skills or abilities), uses experts in the subject that is being tested, uses actual job samples to create questions in the test and runs trials before releasing a Test.
- Who is the audience for the Test - this defines whether the test was created for specific audiences based on parameters such as education, geography, age, job context etc. For instance a C Programming Test created for people passing out of Engineering college will be different from that created for Tech Architects at a large software company. As the Test buyer, ensure that the Test is appropriate for the audience you are hiring.
- Scoring patterns - this defines the historical performance of Test takers on this test. This is critical for defining the selection criteria or cut-off mark on the Test. You can refer to the scoring patterns to make sense of the score that a Test taker obtains. For instance, if a job applicant scores 34% on the C Programming Test, it is hard to draw any conclusion from that number alone. But if you knew that 90% of all test takers scored 33% or less on the Test, it tells you that this applicant is probably good at C Programming. An even better indicator: if 90% of all software programmers from 5 of the best software companies scored less than 33% on this Test, it would tell you that the test taker is almost certainly a very good C Programmer.
- Reliability - this is a measure of whether the test is consistent. If you used a weighing scale to weigh yourself, you would expect it to show the same weight if you measured yourself 5 times in a row during a single day (assuming you don't play football!). Similarly, a Test that measures competence is expected to be consistent. While most Tests cannot be as consistent as a weighing scale, they are expected to be reasonably consistent. Read the reliability measure reported in the Test Manual and ask the Test publisher to explain what it means and why it is acceptable. A reliability coefficient of 0.7 or above is generally considered acceptable, though the acceptable range varies widely depending on the test and the circumstances in which it is administered. A simple rule of thumb: reliability is higher if the test is longer. And every Test trades off reliability against time, since we can't expect Test takers to take an 8-hour Test.
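The 34%-versus-90th-percentile reasoning in the scoring-patterns point above can be sketched as a percentile-rank calculation; the norms below are made-up illustrative data, not real test statistics:

```python
# Sketch: interpreting a candidate's raw score against historical scoring
# patterns (norms). The norms below are hypothetical illustrative data.

def percentile_rank(score, historical_scores):
    """Percentage of historical test takers who scored at or below `score`."""
    at_or_below = sum(1 for s in historical_scores if s <= score)
    return 100.0 * at_or_below / len(historical_scores)

# Hypothetical norms: raw percentage scores of 10 past test takers
norms = [12, 18, 21, 25, 28, 30, 31, 33, 40, 55]

candidate_score = 34
rank = percentile_rank(candidate_score, norms)
print(f"Candidate is at the {rank:.0f}th percentile")  # prints 80th here
```

The same raw score of 34% means very different things depending on what the norms look like, which is exactly why the Test Manual's scoring patterns matter.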
Step 3: Insist on a Trial Run
While it is possible to prove through Test Manuals that a Test is appropriate for your context, it is always best to do a trial run of the Test by administering it to people currently in the job role that you are hiring for and analyzing their performance. This is because a Test is ultimately only as good as the questions in it. For instance, an Analytical Ability Test that has been administered to 20,000 fresh Engineers across the country may seem appropriate for your use considering the track record of the Test. But if your company hires only from the IITs, and the 20,000 engineers who took it in the past are not from IITs, you may have a situation where all the IITians score 100% on the Test. That would embarrass you on the IIT campus and, more importantly, would make it hard to decide which candidates to interview since the scores are all the same. This is why a Trial Run is critical.
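One thing a trial run can catch is exactly the ceiling effect described above, where everyone scores near the top. A minimal sketch of that sanity check, using made-up trial scores from current employees:

```python
# Sketch: a quick sanity check on trial-run scores from current employees.
# If everyone clusters near the top, the test cannot differentiate candidates.
from statistics import mean, stdev

trial_scores = [98, 100, 97, 100, 99, 100]  # hypothetical trial data

avg, spread = mean(trial_scores), stdev(trial_scores)
if avg > 90 and spread < 5:
    print("Warning: likely ceiling effect - the test may be too easy")
```

The thresholds (90 and 5) are arbitrary illustrations; the point is to look at both the average and the spread of trial scores, not just the average.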
Step 4: Discuss and understand Test Administration
A Test works only when it is administered right. All Tests have clear instructions on how they have to be administered. Tests have to be administered with supervision (with rare exceptions) to ensure there is no copying, and the instructions should be unambiguous and unbiased. For instance, if you are administering a C Programming Test created for an Indian audience in China, it is important to understand that instructions that are clear to an Indian test taker may or may not be clear to Chinese test takers. Test administrators also have to be trained so they administer tests in the right way. A poorly administered test returns unreliable scores.
Step 5: Lastly, ask for several sets of Question Papers
Test takers will do anything to do better on a Test. It's important to have enough sets of equivalent question papers to ensure that the Tests do not become public information. Ask the Test publisher how many questions they have in the question bank and how they generate equivalent sets.
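One common way publishers generate equivalent sets is to draw the same number of items from each difficulty (or topic) bucket of the question bank, so every paper has the same mix. A rough sketch with a hypothetical two-bucket bank:

```python
# Sketch: generating equivalent question sets by drawing the same number of
# items per difficulty bucket. The bank contents here are hypothetical.
import random

bank = {
    "easy": [f"E{i}" for i in range(1, 21)],  # 20 hypothetical easy items
    "hard": [f"H{i}" for i in range(1, 21)],  # 20 hypothetical hard items
}

def make_set(bank, per_bucket=5, seed=None):
    """Draw `per_bucket` random items from every bucket of the bank."""
    rng = random.Random(seed)
    return [q for items in bank.values() for q in rng.sample(items, per_bucket)]

paper_a = make_set(bank, seed=1)
paper_b = make_set(bank, seed=2)
print(len(paper_a), len(paper_b))  # each paper has 10 questions
```

Note this only matches papers on difficulty mix; a real publisher would also verify statistical equivalence of the sets through trials.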
For a more technical discussion on buying Tests refer to http://www.ipmaac.org/files/ONetasmtguide.pdf#search=%22US%20Department%20of%20Labor%20Assessment%20%22
Happy buying!