Pairing research assessment reforms to faculty search procedures

By Omar Quintero (University of Richmond)

When you imagine a search procedure to hire a new colleague, you may picture a stick that is whittled down to a sharp point, and that point is the finest of all the applicants.  However, there are many steps before you end up with a pointy stick, the most important being choosing the appropriate starting material.  The same is true in faculty searches.  Before a decision is made about any individual, it is important to assess the appropriateness of the applicant pool.  Much of the emphasis of DORA’s initiatives has revolved around appropriate metrics and assessments.  Equally important is designing mechanisms that employ those assessments at decision-making steps.

It may not be enough to decide you need to hire a new colleague.  A successful search becomes more likely if there are discussions before the search that establish which types of expertise, experience, and characteristics will serve as evidence of a candidate’s merit for the position.  This kind of discussion helps match the responsibilities of the position being filled with the evaluation of the candidate pool, and can also inform the creation of the job advertisement so that it reflects the position.  Here’s a bit of a silly example.  Suppose that you were a general manager in the National Basketball Association, and your team desperately needed a skilled rebounding center.  Your team president might say “We want the best player out there.”  How do you define a vague term like “best”?  Given this need, it is unlikely that you will try to bring James Harden to your team.  Sure, he scored more points than anyone else in the 2018-2019 regular season, but that attribute is not the appropriate measure of “best” for your team’s need.  Similarly, if your department needs to hire a faculty member who will be primarily responsible for developing undergraduate courses, then focusing on their research publication record might not be the best evaluation of their potential to excel in the tasks they are being hired to complete.

Assessing the appropriateness of the applicant pool does not begin with identifying one person to pursue; it begins with choosing the applicants who show evidence of real potential for the open position.  That selection step is akin to moving from the shortlist to the invite list.  In a pool of applicants it is possible that only a handful of application packets contain evidence that those candidates are appropriate.  Alternatively, the number of candidates appropriate for an on-campus interview might exceed the number of interview slots.  Going back to the basketball example where you are a general manager in need of a strong rebounder, either Andre Drummond (15.6 rebounds/game) or Joel Embiid (13.6 rebounds/game) would be a top choice to pursue based on their 2018-2019 stats.  However, the next 7 players with the most rebounds/game averaged between 12 and 13 rebounds/game.  Additionally, if instead of rebounds/game you compare players based on rebounds/48 minutes played (the length of a complete NBA game), 4 of the top 10 players on that list are not in the top 10 for rebounds/game.  If this were a faculty hiring process, how could a search committee make an informed decision about which candidates to invite to campus?
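
To make the normalization concrete, here is a minimal sketch in Python; the players, rebound totals, and minutes below are invented for illustration, not real NBA statistics.

# Minimal sketch: comparing rebounds/game with rebounds/48 minutes.
# All numbers are invented for illustration, not real NBA statistics.
players = [
    # (name, total rebounds, games played, total minutes played)
    ("Player A", 1200, 79, 2650),  # heavy-minutes starter
    ("Player B", 850, 80, 1750),   # productive reserve with fewer minutes
]

for name, rebounds, games, minutes in players:
    per_game = rebounds / games
    per_48 = rebounds / minutes * 48  # normalize to a full 48-minute game
    print(f"{name}: {per_game:.1f} reb/game, {per_48:.1f} reb/48 min")

Player A wins on the per-game measure while Player B wins on the per-minute measure, which is exactly the kind of disagreement between rankings described above.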

One possibility is to make these decisions about the pool, and not about individual applicants.  This can be accomplished by generating data about the candidate list that do not contain identifying information about the specific candidates themselves.  Suppose that each member of the search committee (or department) reviews the application materials of all candidates, and compares the evidence for each candidate to the description of what a successful hire would look like (the description that was decided on at the beginning of the process).  Each member then votes for 3 candidates to interview out of a slate of 15 candidates on the shortlist.  The vote data are tabulated for each candidate and shared with the department, but with the individual candidates deidentified.  Instead, the data are reported to the department as “Candidate #1” through “Candidate #15,” with the numbers assigned randomly so that they don’t correspond to something like the first letter of the surname or some other identifiable characteristic.  With the candidates deidentified, the data provide a measure of whether any subset of candidates has generated a lot of departmental support.  Suppose that candidates #4, #11, and #13 get noticeably more votes while the others do not.  The department can then decide that those three candidates should get a closer look.  If the department agrees that the data indicate strong support for on-campus interviews for those three candidates, then their names are revealed and the process moves forward.

Of course, the votes could be distributed in many other ways.  The other extreme would be even support for all candidates, revealed as an even distribution of votes across the candidate slate.  This could mean that none of the candidates are particularly well-suited and it was hard to distinguish them from each other.  Alternatively, it could mean that all of the candidates are well-qualified and it is, again, hard to distinguish them from each other.  Either outcome would be a signal for a search committee to hit “pause” and talk about what the data are telling them about the candidates for the position.  Whatever the outcome, the department will have data that indicate whether the search should move forward, or whether more discussion is required first.
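
As a concrete sketch of the bookkeeping, the tallying and deidentification might be implemented as below; the member labels, candidate surnames, and ballots are all hypothetical, and a shared spreadsheet would serve just as well.

import random
from collections import Counter

# Hypothetical ballots: each committee member votes for 3 of the 15
# shortlisted candidates, identified here by placeholder surnames.
ballots = {
    "member_1": ["Alvarez", "Chen", "Okafor"],
    "member_2": ["Chen", "Okafor", "Singh"],
    "member_3": ["Chen", "Ito", "Okafor"],
    "member_4": ["Alvarez", "Okafor", "Chen"],
}

shortlist = ["Alvarez", "Baker", "Chen", "Dube", "Evans",
             "Fong", "Garcia", "Hart", "Ito", "Jules",
             "Khan", "Lopez", "Mbeki", "Okafor", "Singh"]

# Assign each candidate a random number so the reported tallies carry
# no identifying information (e.g., no alphabetical ordering).
labels = list(range(1, len(shortlist) + 1))
random.shuffle(labels)
key = dict(zip(shortlist, labels))

# Tally the votes, then report them under the deidentified labels only;
# the key stays sealed until the department agrees to move forward.
tally = Counter(vote for votes in ballots.values() for vote in votes)
report = {f"Candidate #{key[name]}": tally.get(name, 0) for name in shortlist}
for label, votes in sorted(report.items(), key=lambda kv: -kv[1]):
    print(label, votes)

Sorting the deidentified tallies makes both outcomes described above easy to spot: a few labels pulling well ahead is the signal to unseal those names, while a flat distribution across all fifteen labels is the committee’s cue to pause and discuss.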

By deidentifying the candidates at this decision-making step, a search gains several benefits.  It might be possible to evaluate the strength of the entire pool.  These data might reveal that the job advertisement did not attract applications from the intended applicant pool, or that the search was too narrow in scope to generate an appropriate pool.  Depending on how these data are analyzed, it might be possible to determine whether a particular group of applicants is being unintentionally privileged by the search process.  Additionally, it is worth remembering that we are all human, and that an applicant’s materials will often resonate strongly with a particular reviewer.  Making decisions based on the entire slate of candidates, rather than treating the applicant pool like individuals in a contest, can minimize the influence of one particularly charismatic candidate (or one vocal supporter) on the evaluation of the entire set.

In the future, search committees should continue to be intentional in the design of their search procedures.  Implementing mechanisms carefully designed to recognize the appropriate qualities in the applicant pool increases the likelihood of identifying individuals who will become successful colleagues.
