We’ve become quite interested recently in using our data to make predictions. For context: the test we administer to potential incoming employees has become increasingly expensive to administer and score (the cost of the test itself, the rooms we rent to give it, the staff we pay to proctor against cheating, and so on).
Early in our application process, we get a measure of each candidate’s trait anxiety as part of the application package (it’s administered quickly and inexpensively online). We think this measure is likely to be predictive of two things that are costing us money:
- The candidate’s test performance itself (our hunch is that the more anxious candidates will not perform as well).
- The time each candidate spends revising their test answers (which costs us money in testing staff and room rental fees).
We’d like your help in determining statistically if a candidate’s trait anxiety is predictive of these two outcomes.
More specifically, can you please do the following for us?
- Run whichever statistical analyses make sense and share the relevant test statistics that show how well they work (our boss with the I/O background likes to see them).
- Graph the data so that we can see how anxiety scores relate to these outcomes.
- Provide the actual prediction equations, with an example or two each, to help us understand what each model is suggesting. I know this sounds kind of odd, but we’d like our HR IT team to build any genuinely useful prediction equations into our plans to revise our candidate testing practices and policies.
- Summarize all of this for us in a paragraph or so: what you saw, what you learned, anything you changed, and anything you’d recommend to us for now.
If you can get this back to us in time to review for our standing team meeting next week, that would be awesome.