
Analysis of AAC’s job board data

Updated: Nov 22, 2022



Summary

Animal Advocacy Careers’ (AAC’s) job board is populated primarily through systematic searches of the websites of a select group of effective animal advocacy nonprofits, so the job board provides relatively objective data about which roles organisations most struggle to hire high-quality candidates for.


We analyse various categories of role types via three methods: (1) the average number of weeks that roles are listed on the job board, (2) the average number of times that roles have been clicked on from the job board, and (3) the percentage of roles that have been taken down and put back up.


Some findings corroborate the implications of AAC’s previous research, such as that leadership roles seem to be the most difficult to hire for. However, some findings seem contrary to some previous research, such as that middle or junior management roles and “other technical” roles (e.g. web or software development) are also difficult to hire for.


Additional analyses touch on questions for which we previously had little evidence, suggesting, for example, that it is easier to find high-quality candidates for remote roles than location-specific roles and for roles in research or “meta” charities than in other effective animal advocacy nonprofits.


Introduction

To decide which services to offer and which opportunities to prioritise in order to help animals most cost-effectively, Animal Advocacy Careers (AAC) needs to understand the largest talent bottlenecks in the effective animal advocacy community. That is, we need to understand: which sorts of skills and expertise does the community most urgently need more of in order to achieve its fullest potential positive impact for animals?


An understanding of the largest talent bottlenecks in the effective animal advocacy community is also useful for individuals thinking about how to plan their careers so that they can help animals as much as possible. All else equal, individuals can have more impact by working in role types that tend to be difficult to hire high-quality candidates for.


Our previous research has led AAC to tentatively conclude that the main talent bottlenecks for effective animal advocacy nonprofits were in leadership and senior management, fundraising, and lobbying roles. To improve and update our understanding of the community’s talent bottlenecks, AAC analysed data from our job board (set up in October 2020), which is populated primarily through systematic searches of the websites of a select group of effective animal advocacy nonprofits. Analysis of the roles listed on the job board presents an opportunity for collecting objective data about which roles organisations most struggle to hire and retain high-quality candidates for.


Methodology

Data collection

Every two weeks, we search for advertised roles at 38 animal advocacy organisations, looking at the relevant section of their website. All paid roles identified through this method are added to the job board, as are unpaid roles that seem like fairly substantial and formal opportunities.[1] The analysis below is of all the roles included on the job board between 9th October 2020 and 1st November 2021; of the 500 included roles, 478 were identified via this systematic methodology. An additional 22 roles were included for less systematic reasons, such as being submitted to us for consideration, or seen by AAC elsewhere and included because they seemed to have unusually high impact potential for animals.


Included variables

We collected information for a number of variables. The following table describes those that we report on in the “Results and discussion” section below.

Table with included variables

We first counted the number of roles in each of the categories above. Each category was then analysed using the three analysis methods described below.


First analysis: Average number of weeks on the job board

If an organisation advertises a role for a long time, it suggests that they are struggling to find a candidate who they judge sufficiently well-suited to the role to be worth hiring.


Given that the job board was updated every two weeks and we tracked whether and when each role was added or removed, we were able to calculate the approximate number of weeks that each role was listed for.[4] We then calculated averages for roles that fell into each of the categories described in the section above. If roles in a particular category are, on average, listed on the job board for an unusually large number of weeks, this suggests that it is unusually difficult to hire high-quality staff for that type of role.
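As a concrete sketch of this calculation: assuming each role is stored as a record with `category`, `date_added`, and `date_removed` fields (these names are ours for illustration, not AAC's actual data schema), the per-category average could be computed along these lines:

```python
from datetime import date
from collections import defaultdict

def average_weeks_listed(roles, today=date(2021, 11, 1)):
    """Approximate average listing duration (in weeks) per role category.

    `roles` is a list of dicts with hypothetical keys 'category',
    'date_added', and 'date_removed' (None if the role is still listed,
    in which case the end of the analysis window is used instead).
    """
    durations = defaultdict(list)
    for role in roles:
        end = role["date_removed"] or today
        weeks = (end - role["date_added"]).days / 7
        durations[role["category"]].append(weeks)
    return {cat: sum(ws) / len(ws) for cat, ws in durations.items()}

# Illustrative, made-up records (not AAC's real data):
roles = [
    {"category": "Leadership", "date_added": date(2021, 1, 2), "date_removed": date(2021, 3, 27)},
    {"category": "Leadership", "date_added": date(2021, 5, 1), "date_removed": date(2021, 6, 12)},
    {"category": "Research", "date_added": date(2021, 8, 7), "date_removed": date(2021, 9, 4)},
]
print(average_weeks_listed(roles))
```

Since the board was only checked fortnightly, any real figure computed this way is approximate to within about two weeks of the true listing duration.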


The number of weeks that a role is listed for could partly reflect the perceptions and expectations of the hiring organisations, rather than being a truly “objective” indicator of the quality of candidates.[5] For example, an organisation might leave a particular role up for a longer period to collect more candidates if they think that the role is especially important, even if they are receiving many excellent applicants. Nevertheless, this methodology enables us to evaluate organisations’ hiring behaviour directly, rather than relying on self-report through surveys.


In any case, each of the three analyses used here has its own strengths and weaknesses; we are primarily looking for convergent evidence between multiple analyses (including inputs from surveys and other evidence sources) rather than seeking to treat any one analysis as providing strong evidence.


Second analysis: Number of Bitly clicks

Since 10th August 2021, we have been adding individualised Bitly links for each role, where possible. This enables us to track the number of times that visitors to the job board have clicked for more information about each job role.


Although a click on a Bitly link does not mean that someone actually applied for a role (at present, we have no way of tracking applications from the job board), it is an indicator of interest in the role, and we would expect the number of Bitly clicks to be positively correlated with the number of applications. All else being equal, more applications for a role should mean that a better-suited, better-qualified candidate is hired, since the hiring manager will have a wider range of candidates to select from.


So if roles in a particular category receive, on average, an unusually low number of Bitly clicks, this suggests that the role may have had an unusually low number of applicants, which in turn suggests that it is unusually difficult to hire high-quality staff for that type of role.


There are a number of limitations to this analysis:

  • The evidence is quite indirect, so we should expect there to be quite a lot of random noise in the data.

  • The number of Bitly clicks could reflect who AAC’s job board users currently are, so may not be a very good indicator of interest in the roles.

  • We only recently started tracking Bitly clicks, so we have fewer data points for this analysis than for the average number of weeks on the job board analysis.

  • If an organisation does not have unique URLs for each individual role, then it is not possible for us to create a unique Bitly link for the role. This led to a handful of organisations being arbitrarily excluded from the analysis.

  • This analysis likely has some overlap with the average number of weeks on the job board analysis, since if a role is listed on the board for longer, there is more time for candidates to click on the role. However, the correlation between the average number of weeks on the job board and the number of Bitly clicks is very small (r = 0.09) and nonsignificant (p = 0.289).
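For reference, the correlation check described in the last bullet is a plain Pearson correlation. The sketch below uses only the standard library and made-up illustrative numbers, not AAC's actual week counts and click counts:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical per-role data: weeks listed vs. Bitly clicks
weeks = [2, 4, 6, 8, 10, 3]
clicks = [5, 9, 4, 12, 7, 6]
print(round(pearson_r(weeks, clicks), 2))
```

A value near zero, as in the r = 0.09 reported above, indicates little linear relationship between listing duration and click count, which is why we treat the two analyses as largely independent.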


Third analysis: Percentage of roles that have been taken down and put back up

If an organisation extends an offer to a candidate who seems like a good fit for a role, and that offer is accepted, the organisation will presumably remove the job description from their website. If, however, the organisation subsequently realises that the candidate isn’t a good fit after all and seeks to break ties with the candidate, the organisation might publicise the role on their website again. If this happens, it suggests that it is difficult to hire high-quality candidates for that role. Alternatively, if the candidate themself decides to leave the role, the organisation might publicise the role on their website again, and this would suggest that it is difficult to retain high-quality candidates for that role.


Although we can’t necessarily know the reasons why without speaking to the organisations, we can note when certain roles are taken down and then put back up (perhaps with a slight change to the job title or description) within the period of our analysis.


If an unusually high proportion of roles in a particular category are taken down and then put back up again, this suggests that it is unusually difficult to hire or retain high-quality staff for that type of role.
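As a sketch of this proportion check, assuming each role record carries hypothetical `category` and `relisted` fields (again, illustrative names rather than AAC's actual schema):

```python
from collections import defaultdict

def relist_rate(roles):
    """Percentage of roles in each category that were taken down and put back up.

    `roles` is a list of dicts with hypothetical keys 'category' and
    'relisted' (bool).
    """
    counts = defaultdict(lambda: [0, 0])  # category -> [relisted, total]
    for role in roles:
        counts[role["category"]][1] += 1
        if role["relisted"]:
            counts[role["category"]][0] += 1
    return {cat: 100 * relisted / total for cat, (relisted, total) in counts.items()}

# Reproducing the overall figure from this report: 38 of 500 roles relisted
roles = [{"category": "All roles", "relisted": i < 38} for i in range(500)]
print(relist_rate(roles))  # 7.6%, matching the 38-of-500 figure reported here
```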


There are a number of limitations to this analysis:

  • Like with the average number of weeks on the job board analysis, this analysis could reflect the organisations’ own impressions of the importance of certain roles if they put them back up because they are being unusually picky about candidates.

  • There are very few data points for this analysis, since we only noted that 38 of the 500 included roles (7.6%) were taken down and subsequently put back up.

  • Ideally, we would have supplemented this analysis with an analysis of the number of roles that had their deadlines extended or removed, but we did not monitor these changes.

  • This analysis likely has some overlap with the average number of weeks on the job board analysis, since if a role was put back up again, it would be listed on the job board for longer than if it was not put back up.


Sensitivity analyses

For the “Category of work” variable, we conducted a number of sensitivity analyses. These sensitivity analyses did not change our views much,[6] so we do not report the results in the rest of this post. Because the sensitivity analyses were time-consuming, we did not conduct them for any of the other variables.


Colour-coding

In the tables in the “Results and discussion” section below, we use a five-part colour-coding system to visually represent what each analysis suggests about how difficult it is to hire (or perhaps retain, in the case of the third analysis) candidates for various types of role:

Colour-coding key

These categorisations are subjective, so we encourage you to look at the numbers too.


Results and discussion

Category of work

The table below displays the results of the three main analyses by category of work.

Table of results: Category of work

The distribution of identified roles in each category is fairly similar to the distribution identified in our 2020 spot-check, except that campaigns, corporate engagement, or volunteer management roles look less common than they did in that analysis.[7]


The evidence from the above analyses is consistent in suggesting that leadership roles are the most difficult type of role to hire for. This suggests that it could be promising for AAC to focus some of its services on leadership roles and that individuals might be able to have unusually high impact by applying to leadership roles if they could have a good personal fit with this type of work. One caveat to bear in mind is that the leadership category had a very small number of data points. However, these findings are consistent with our 2020 bottlenecks survey — where respondents rated “leadership or senior managers” as the most difficult to hire high-quality candidates for, with an average score of 3.6 out of 5 — and with some anecdotal evidence available to AAC.[8]


Some of the other findings above are also consistent with other available evidence. The findings suggest that fundraising or development and campaigns, corporate engagement, or volunteer management roles are difficult to hire for, relative to other role types, whereas research and marketing or communications roles are not very difficult.[9]


There are, however, some surprising findings:

  • The analyses above are fairly consistent in suggesting that “other technical” roles, e.g. web or software development, are (very) difficult to hire for. In the survey, this was rated as one of the least difficult role types to hire for (average 2.6 out of 5). To a lesser extent, the same is true for natural sciences roles.

  • At the other end of the spectrum, the analyses above suggest that government, policy, lobbying, or legal roles are only moderately or not very difficult to hire for whereas the survey respondents had rated this category as one of the most difficult.[10]

  • The evidence seems surprisingly mixed for operations, administration, or HR roles, which the survey respondents had rated as the least difficult to hire for.[11]


We wondered whether the surprising findings might partly represent our choice of categories being different from survey respondents’ understandings of the terms. To gain some insight into this, we conducted the analysis again, with a more granular set of categories, as shown in the table below.

Table of results: Category of work (more granular)

This reanalysis does not alter many of the above findings, although the social sciences and strategic research category seems closer to average than the previous “research” category. It also helps to clarify that the “Other technical” roles that seem difficult to hire for are “IT or software” roles.


Management responsibilities

Given the above finding that leadership roles seem to be difficult to hire for, it seems especially important to further analyse roles by management responsibilities.[12]

Table of results: Management responsibilities

Interestingly, these findings suggest that both senior management and middle or junior management roles are somewhat more difficult to hire for than roles with no management responsibilities. This contrasts with the findings of the bottlenecks survey, where only “leadership or senior managers” had a score (3.6 out of 5) that was notably above the average (2.9); “middle or junior managers” had a score (2.6) slightly below average.


It is also surprising that the findings suggest that volunteer management is not very difficult to hire for — and less difficult than roles with no management responsibilities — given that in the “category of work” variable, the wider grouping of campaigns, corporate engagement, or volunteer management seemed to be quite difficult to hire for.[13]


We again conducted the analysis with a more granular set of categories, as shown in the table below.

Table of results: Management responsibilities (more granular)

This reanalysis does not alter many of the above findings, but suggests that there is more of a gap between C-level leadership roles and other management roles than between head/director of department roles and more junior management responsibilities.


Role remote or not

The table below displays the results of the three main analyses by whether the roles are remote or not.

Table of results: Role remote or not

Our spot-check had previously suggested that just under half of available roles were remote. However, the proportion has soared to 68% (74% if optional, partly remote, or unclear roles are counted as 0.5, as they were in the spot-check), perhaps reflecting a wider shift in working patterns in the wake of COVID-19. Note, however, that only 21% of roles were listed under “Anywhere” for the “Country” column, and that many of these roles still had some restrictions on location, e.g. anywhere within Asia, or a preference for candidates from a specific location; only 11% were fully remote from any location with no stated preference for one location or another.


The evidence above is quite consistent in suggesting that it is easier to find high-quality candidates for remote roles than location-specific roles. However, this is quite weak evidence that making a role remote makes it notably easier to attract high-quality candidates. For instance, it may be that the sorts of roles that need to be in a specific location tend to be roles that are more difficult to fill for other reasons.


Other variables

We collected information on a number of other variables for which we aren’t detailing the findings in the main report here. A number of methodological difficulties make the findings for paid versus unpaid roles[14] and role location[15] especially difficult to interpret. The “Organisation mostly targets one country” category (used in comparison to the “International” category) is disproportionately focused in the Global South, so shares some of the same limitations. Additionally, for several organisational variables, most of the categories looked quite similar to each other or had mixed and unclear findings.[16]


However, with regards to the “Organisation work type” variable, the category for research / meta charities stood out, with the findings suggesting that it is relatively easy to find high-quality candidates for this type of work:

Table of results: meta vs others

The analyses for organisation size also yielded surprising findings, suggesting that it is easiest to find high-quality candidates for small charities and hardest to find them for large charities:

Table of results: small vs large charities

Note, however, that there is substantial overlap between the research / meta charity and small charity categories, which might confound this comparison. It seems intuitively more likely that being a meta charity makes it easier to attract high-quality candidates than that being a small charity does so.


Footnotes

[1] This includes internships but excludes less formal volunteering. We suspect that we collect relevant unpaid opportunities less reliably, e.g. if they are only advertised in separate sections of the website.


[2] Around April 2021, we started the “category of work” and “management responsibilities” categorisations. Roles that were taken down prior to this point therefore did not have categorisations. For the purposes of the “category of work” analysis, we retrospectively categorised roles that were taken down prior to this point. We were unable to access full job descriptions for the roles, however; this is usually not a problem for “category of work,” which is often quite obvious from the role title, but prevented categorisation by management responsibilities, which requires close reading of the job description.


[3] Most groups were included in more than one category. Where we had categorised an organisation for analysis of our 2020 bottlenecks survey, we kept the categorisations the same. As we noted in a footnote on the writeup there: “Where we identified financial reports or reviews from Animal Charity Evaluators with clear breakdowns of spending by programme type, we included an organisation in a category if it spent 10% of its budget or more on a particular intervention type. If we did not find this sort of financial data, we relied on descriptions on the organisation’s website.” This involved some subjective judgement calls; certain organisations did not clearly report their spending by intervention type. Categorisations of organisations not included in the bottlenecks survey were added fairly quickly and subjectively, since the affected organisations tended to have relatively few roles each.


[4] If a role seemed to have been taken down and put back up (including perhaps with a small change to the title or details of the role), then it was treated as a single entry, but the intervening weeks where the role was not publicly visible were not counted.


[5] Arguably it is impossible to objectively evaluate how well-suited a candidate is to a role anyway.


[6] In terms of the colour-coding system described below, the only change that we might have made based on these sensitivity analyses would be to change “research” from dark blue (“very low difficulty”) to light blue (“low difficulty”) for the “average number of weeks on the job board” analysis.


[7] Note that the methodologies in this analysis and the spot-check are quite different, so this does not necessarily represent change over time.


[8] Our spot-check did not identify this as a particularly difficult area, but the spot-check used a fairly similar methodology to the analyses in the present report, just with fewer data points available and greater logical leaps required. Hence, we do not discuss the spot-check in the main text of this report when considering which roles are the most difficult to hire for.


[9] The finding that campaigns, corporate engagement, or volunteer management seem quite difficult to hire for is consistent with the survey, although we were surprised by this result at the time of the survey, partly because the spot-check had suggested that this was one of the most oversubscribed role types.


[10] It is interesting to note that roles in this category are unusually well-paid, averaging $75,776 compared to the overall average across all role types of $49,379. Very speculatively, this could suggest that organisations perceive these roles to be important or difficult to hire for, so they offer above-average salaries, which in turn make hiring easier. However, the average salaries for this category are inflated by the unusually low proportion of roles in the Global South and by two outliers with exceptionally high salaries, both at Open Philanthropy. With these two outliers excluded, the average is reduced to $65,127.


[11] Speculatively, the low number of Bitly clicks could reflect the relatively low interest in this type of work that has been identified in the effective altruism community more widely.


[12] Note that there is substantial overlap between the senior management category here and “leadership” in the “category of work” analysis above, so the two tables should not be interpreted as entirely separate pieces of evidence.


[13] Very speculatively, this could be because the roles that have volunteer management responsibilities tend to be junior roles, whereas roles with no management responsibilities may either be quite specialised (e.g. technical IT work, lobbying work) or just working towards more substantial management responsibilities if the team grows.


[14] The limitations were that:

  • A number of internships were left up for unusually long by us on the job board because we just listed a generalised option of “various internships,” rather than take the time to list each internship separately.

  • The number of unpaid roles was very small and there were only three data points for the Bitly clicks, all of which seemed like unusual examples; two were “various internships” options, and one was AAC’s own online course, which is not a job or internship and was promoted more prominently at the top of the job board.


[15] The limitations were that:

  • Countries differ somewhat in whether they tend to advertise roles with fixed deadlines or not. This affects both the first and second analysis methods.

  • Regional differences in hiring practices and culture may affect how likely a role is to be readvertised; this may or may not match up to the idea of how difficult it is to fill the role.

  • AAC may have been more or less successful at building up an audience in some locations than others, which would affect the Bitly clicks analysis. For example, both of AAC’s co-founders are from the United Kingdom, so we have a relatively strong network there, and lots of AAC’s content is more relevant for people in the Global North than the Global South.


[16] Note also that there is substantial overlap between the research / meta charity, not reviewed by ACE, and small charity categories.
