
Abstract
There is limited research on the effects that career advice can have on individuals’ expected impact for altruistic causes, especially for helping animals. In two studies, we evaluate whether individuals who receive a one-to-one careers advice call or participate in an online course provided by Animal Advocacy Careers (a nonprofit organisation) seem to perform better on a number of indirect indicators of likely impact for animals, compared to randomly selected control groups that had not received these services. The one-to-one calls group had significantly higher self-assessed expected impact for altruistic causes and undertook significantly more career-related behaviours than the control group in the six months after their initial application to the programme. There was no significant difference between the two groups’ attitudes related to effective animal advocacy or their career plan changes. In contrast, the online course group made significantly greater career plan changes in the six months after their application than the control group, but there were no significant differences for the other metrics. A number of supplementary analyses were undertaken which support the main conclusion that the one-to-one calls and online course likely each caused meaningful changes on some but not all of the intended outcomes.
A PDF version of this report is available from https://psyarxiv.com/5g4vn
Introduction
Mentoring and coaching have been found to provide a number of psychological benefits[1] as well as to support positive behavioural changes.[2] Such interventions can also have positive effects on career-related outcomes. For example, Luzzo and Taylor (1993) found that a one-to-one career counselling session including persuasive messages focused on increasing college students’ career decision-making self-efficacy (i.e. their belief that they can successfully complete tasks necessary to making significant career decisions) was successful in increasing participants’ self-efficacy at posttest compared to the control group.[3] Tarigan and Wimbarti (2011) similarly found that a career planning program was effective at increasing graduates’ career search self-efficacy.[4] Likewise, longitudinal studies have found evidence that careers courses can increase career decision-making self-efficacy or certainty and decrease career decision-making difficulties or indecision.[5]
However, these measures are not sufficient for individuals seeking “to maximize the good with a given unit of resources,” as people involved in the “effective altruism” community are.[6] For example, if an individual seeks to maximize their positive impact for animals over the course of their career, it is not sufficient for them to have high self-efficacy; they must also apply this to career-related behaviours that enable them to find roles where they can most cost-effectively help animals.
Some studies have evaluated career-related behaviours, but the outcome measures have tended to be generic, rather than impact-focused. For example, Tansley et al. (2007) developed a scale based on questions about career exploration activities like “researching majors, visiting the career development center on campus, completing an interest inventory,” or spending time learning about careers.[7] Reardon et al. (2020) identified 38 studies of college career courses that evaluated “job satisfaction, selecting a major, course satisfaction, time taken to graduation from college, [or] cumulative GPA.”[8] Researchers interested in assessing whether particular career interventions help to “maximize the good” must also assess whether the changes caused by the intervention increase the positive impact on altruistic causes that the participants are expected to have over the course of their career (hereafter abbreviated to “expected impact”). By analogy, experts estimate that, among charities helping the global poor, the most cost-effective charities are 100 times more cost-effective than the average charity;[9] clearly, not all altruistic actions have equal expected value.
80,000 Hours is the primary organisation providing career support and advice to the effective altruism community, the community striving to help others as much as possible using the best evidence available.[10] 80,000 Hours is optimistic about the usefulness of its advising,[11] which it assesses using a system of surveys and interviews to estimate the career plans and trajectory that advisees might have had, were it not for its advising intervention.[12] However, rigorously evaluating the effects of career advice on participants’ expected impact on altruistic causes is difficult. For example, there is evidence that constructing counterfactuals from people’s self-reports does not produce accurate estimates.[13] 80,000 Hours has revised its own evaluation methodology in the light of some of these difficulties,[14] and bringing additional research methodologies to bear on the question seems valuable.
Until recently, 80,000 Hours did not operate an online course, although they are optimistic about the usefulness of their online content.[15] Others in the effective altruism community have offered online courses, including The Good Food Institute and the philosopher Peter Singer, though neither course is explicitly focused on influencing career outcomes.[16] Our impression is that the only available evidence on whether these courses have successfully influenced career outcomes or not is anecdotal.[17]
There is even less evidence on the impact of careers advice interventions on participants’ expected impact for animals specifically.
The main aim of the two studies reported here is to assess whether a one-to-one calls intervention, provided by the nonprofit organisation Animal Advocacy Careers (AAC), or an online course designed by AAC increases participants’ expected impact for animals. The studies also provide evidence for the wider question of whether career advice interventions can improve participants’ expected impact on altruistic causes.
Ideally, in order to assess whether a career advice intervention genuinely increases a participant’s impact for animals, an evaluation would measure:
Whether the intervention causes changes in a participant’s career plans,
Whether the participant’s new intended career plans have higher expected impact for animals than their previous career plans, and
Whether the participant actually follows up on those plans and takes actions that have high impact for animals.
Unfortunately, there are many complications that would make this ideal evaluation very challenging and expensive.
With regards to the first step, 80,000 Hours has found that there are “long delays between when someone interacts with one of our programmes and when they make a high-value change that we are able to verify and track. The median delay in the current sample [for “top plan changes”] is two years, and only one has ever been tracked within one year.”[18] Hence, to capture all plan changes made as a result of a career advice intervention, and the implementation of those plans, measurement would have to occur after a substantial time delay. With regards to the second and third steps, there are very few clear and rigorous evaluations of interventions that affect animals’ wellbeing, even on short timeframes.[19] After taking longer-term effects into consideration, these evaluations become even more uncertain.[20]
To address these issues,[21] the confirmatory analyses in the following studies use a number of outcome measurements that we expect will be correlated with genuine increases in a participant’s impact for animals. That is, rather than confirming that the interventions were or were not effective, the studies are only able to provide indirect indications of whether the interventions seem likely to be effective or not. These quantitative, confirmatory analyses are supplemented by an analysis using participants’ LinkedIn profiles that takes a more all-things-considered approach but is greatly limited by the above difficulties.
Methodology
The methodology, analysis plans, and predicted results for the two studies were pre-registered on the Open Science Framework.[22] A number of small modifications to the methodology and analysis plans were made after the pre-registration which are viewable in the appendices.
Participants and Procedure
The surveys were hosted on Google Forms. Our sample pool was sourced via participants who voluntarily signed up for AAC’s one-to-one calls or online course programmes, which were advertised to the effective altruism and animal advocacy communities, such as via relevant newsletters, talks, and Facebook groups. Participants filled out an application form, after which they were randomly assigned to either the intervention group (one-to-one calls or the online course, depending on which they applied for) or a no-intervention control group on a rolling basis. Participants were assigned to the groups using a randomised block design that ensured that the intervention groups and their respective control groups did not differ substantially in terms of the extent to which they saw impact for animals over the course of their career as an important priority; randomisation was conducted separately for each possible answer to the relevant question on the application form.[23] They were randomised with an equal allocation ratio.
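For illustration, the following is a minimal sketch (in Python) of this kind of stratified block randomisation, assuming a pandas DataFrame of applicants with a hypothetical priority_answer column holding each applicant’s answer to the relevant application-form question; the actual assignment was done on a rolling basis and may have differed in detail.

```python
import numpy as np
import pandas as pd

def block_randomise(applicants: pd.DataFrame,
                    stratum_col: str = "priority_answer",  # hypothetical column name
                    seed: int = 0) -> pd.DataFrame:
    """Randomise applicants to intervention vs. control separately within each
    answer category, so the two groups are balanced on the stratifying question."""
    rng = np.random.default_rng(seed)
    out = applicants.copy()
    out["group"] = "control"
    for _, stratum in out.groupby(stratum_col):
        idx = stratum.index.to_numpy()
        rng.shuffle(idx)
        # Equal allocation ratio: half of each stratum goes to the intervention group
        out.loc[idx[: len(idx) // 2], "group"] = "intervention"
    return out

# Illustrative usage with made-up answers:
# df = pd.DataFrame({"priority_answer": ["high", "high", "medium", "low", "high", "medium"]})
# print(block_randomise(df))
```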
One-to-one calls were provided by both of the co-founders of AAC,[24] with most participants receiving a single session lasting approximately one hour,[25] plus around one hour’s worth of additional time spent on preparation and follow-up support by the advisor. The online course was designed by AAC to focus on what we believed to be the most important content that should affect the decisions that individuals make when seeking to maximise their expected impact for animals over the course of their career. The content draws on research by a number of relevant organisations, most notably 80,000 Hours, Animal Charity Evaluators, and Sentience Institute, as well as AAC’s own original research.[26] The course design was informed by AAC’s research into “The Characteristics of Effective Training Programmes.”[27] For example, the evidence that distance education is often as effective as, or more effective than, face-to-face learning[28] informed the decision to host the course online (on the platform Thinkific).[29] The first cohort of the course culminated in a workshop, which participants were encouraged to attend if they had completed substantial amounts of the online course. The workshop was hosted on Zoom, an online platform supporting group video calls.[30] The second cohort of the course did not include a workshop, though comparable content about career planning was added to the online course. Participants randomised to participate in the online course were invited to start the course at the same time as the rest of their cohort (116 participants in the first cohort, 45 in the second). These participants were encouraged to interact and support each other, such as through various channels on Slack, a digital workspace platform.[31]
The application form contained all the same questions as the follow-up form (with minor adjustments to wording so that they made sense in that context), plus some additional questions that were used to help structure the one-to-one calls themselves or to help introduce the online course participants to one another. Hence, the application form serves as a pretest on each of our key outcome measures (see below) for comparison to the follow-up survey (posttest), which was sent approximately six months after each participant submitted their application.[32] As an incentive to complete the survey, people who applied to either service were told that they would be entered into a lottery for a prize ($100 for the one-to-one calls, $200 for the online course) that would be sent to one randomly selected respondent from each service who completed the follow-up survey.
Meta-analyses tend to find only small effects from interventions intended to modify attitudes or behaviours in general,[33] small effects from advising and mentoring interventions,[34] and small effects from career choice interventions,[35] though career courses specifically may have slightly larger effects.[36] Our calculations suggested that a sample size of 102 would be sufficient to detect medium-sized effects,[37] which we used as our target sample size for practical reasons; we did not expect to be able to recruit enough participants to detect small effects, and an intervention that caused only small effects on our outcome measures would in any case seem less likely to be worth investing in.
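As a rough illustration of where a target of around 102 participants can come from, the sketch below assumes a two-sample comparison of a medium effect (Cohen’s d = 0.5) with α = 0.05, 80% power, and a one-sided test; the pre-registered calculation may have used slightly different inputs.

```python
from statsmodels.stats.power import TTestIndPower

# Sample size needed per group to detect a medium effect (Cohen's d = 0.5)
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,        # "medium" standardised effect size
    alpha=0.05,
    power=0.80,
    alternative="larger",   # one-sided, matching the one-tailed confirmatory tests
)
print(n_per_group, 2 * n_per_group)  # roughly 50 per group, i.e. about 100 in total
```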
There were 134 valid applications to the one-to-one calls intervention; of these applicants, 81 (61%) also completed the follow-up surveys sent approximately six months after their application. There were 321 valid applicants to the online course, 112 (35%) of whom completed follow-up surveys.[38] Although the numbers of valid applicants allocated to the intervention and control groups were very similar,[39] the response rates to the follow-up surveys were different; only 28 of the one-to-ones control group (43%) completed the follow-up survey compared to 53 of the one-to-ones intervention group (78%). For the online course, only 49 of the control group (31%) did so, compared to 63 (39%) of the intervention group. Table 1 shows descriptive statistics (based on questions from the application forms) for participants who completed both surveys.[40]
Table 1: Descriptive statistics

If certain types of people were more likely to have completed the follow-up survey, then the differing response rates could affect the results. Indeed, comparisons of the pretest answers between those members of the intervention groups and control groups who completed both surveys suggest a number of differences.[41] As such, it may be best to interpret the confirmatory analyses as if from longitudinal, observational studies, rather than randomised controlled trials,[42] although several supplementary analyses (described below) were undertaken to account for biases potentially introduced by differential attrition.
Instruments
The appendix contains the full text of the main follow-up survey questions. The metrics used in the confirmatory analysis are summarised in table 2 below.
Table 2: Summary of analysis plans

Where the individual components of a metric are weighted, the weightings are based on AAC’s intuitions about the relative importance of these components and the likelihood that these changes will translate into substantial changes in impact for animals.
The attitudes metric includes component questions about:
The importance to participants that their job has high impact for animals (assessed via a five-point scale from “Irrelevant for you” to “The most important factor for you”).
Participants’ confidence that there are career paths that are viable for them that will enable them to have a high impact for animals (assessed via a five-point Likert-type scale from “Very low confidence” to “Very high confidence”).
Whether the participants are focusing on the causes that seem likely to have the best opportunities to do good (scored as a 1 or 0).[43]
Whether the participants hold beliefs that seem likely to enable them to focus on the best opportunities within those causes, assessed via seven-point Likert-type scales.[44]
To assess career plans, we asked whether participants had changed their study plans, internship plans, job plans, or long-term plans. The options provided were "changed substantially" (scored as 1), "changed somewhat" (scored as 0.5), "no change" (scored as 0) or "I wasn't previously planning to do this and still am not” (scored as 0). These questions are not specific to altruistic causes, but are supplemented by a question about whether, all things considered, the participants expect that their impact for altruistic causes has changed (assessed via a five-point Likert-type scale from “substantially lower impact” to “substantially higher impact”).
Given that we did not expect the intended outcome of the intervention — career changes that increase the participants’ expected impact for animals — to occur within the six-month follow-up period employed in the study, we asked about career-related behaviours that seem likely to be useful intermediary steps. For example, rather than evaluating the direct usefulness of the roles that the participants were working in at the time of the posttest survey, we asked them whether they had secured any new role, applied to one or more positions or programmes, or had two or more in-depth conversations about their career plans (each scored as a 1 if they had or a 0 otherwise). Some roles and actions might be beneficial for animals longer-term, by helping the participant to develop career capital that they can later apply to helping animals or to test their personal fit with certain career paths, rather than being immediately helpful for animals. We also included a number of questions of specific relevance to careers in animal advocacy, such as the time they had spent reading content by AAC or 80,000 Hours (scored as a 1 if their answer indicated that they had spent 8 hours or more doing so, otherwise scored as a 0), whether they had changed which nonprofits they donate to or volunteer for (scored as a 1 if so, or 0 otherwise), and whether they had substantially and intentionally changed the amount of money they donate or the time they spend volunteering (scored as a 1 if so, or 0 otherwise). We also manually checked whether they had joined the “effective animal advocacy community directory” (scored as a 1 if so, or 0 otherwise).
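To make the scoring concrete, here is a minimal sketch of how a composite career plans score could be computed from the response options above; the question names, and any weightings applied (see the note on weighting above), are illustrative rather than AAC’s actual values.

```python
from typing import Dict, Optional

# Mapping from response options to scores, as described above
PLAN_CHANGE_SCORES = {
    "changed substantially": 1.0,
    "changed somewhat": 0.5,
    "no change": 0.0,
    "I wasn't previously planning to do this and still am not": 0.0,
}

def career_plans_score(answers: Dict[str, str],
                       weights: Optional[Dict[str, float]] = None) -> float:
    """Sum the (optionally weighted) plan-change scores across the plan questions,
    e.g. study plans, internship plans, job plans, and long-term plans."""
    weights = weights or {}
    return sum(PLAN_CHANGE_SCORES[answer] * weights.get(question, 1.0)
               for question, answer in answers.items())

# Illustrative usage:
# career_plans_score({"study plans": "changed somewhat",
#                     "job plans": "changed substantially",
#                     "long-term plans": "no change"})   # -> 1.5
```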
For each of the main metrics and their component questions, we made predictions about the mean differences that we expected to see between the intervention and control groups, and recorded these in appendices of each pre-registration.[45]
Results
Confirmatory analyses
This section focuses on the pre-registered analysis plans. However, these are arguably not the most appropriate analyses given the differential attrition between the intervention and control groups. Alternative analyses are discussed in subsequent sections.
Tables 3 and 4 show the pretest and posttest results for participants who completed both surveys for the one-to-one calls and online course interventions, respectively. For full results, including for the components of each metric and for the full set of pretest results (those who did not complete the follow-up survey as well as those who did), see the “Predictions and Results” spreadsheet.
Table 3: Results at pretest and posttest for the one-to-one calls intervention and control groups

Table 4: Results at pretest and posttest for the online course intervention and control groups

The four metrics were tested for normality and homoscedasticity.[48] Three of the four metrics were found not to be normally distributed, so one-tailed Wilcoxon rank tests were used for the confirmatory analysis. In each study, the false discovery rate (FDR) across the four confirmatory tests was controlled at 0.05, meaning that, in expectation, no more than 5% of the differences declared significant would be false discoveries.[49]
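A minimal sketch of this kind of analysis is shown below, assuming independent intervention and control scores for each metric; scipy’s mannwhitneyu implements the Wilcoxon rank-sum test, and Benjamini-Hochberg is one standard way of controlling the FDR (the pre-registered analysis may have used a different implementation).

```python
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def confirmatory_tests(metrics, alpha=0.05):
    """metrics: dict mapping metric name -> (intervention_scores, control_scores).
    Runs one-tailed Wilcoxon rank-sum (Mann-Whitney U) tests and controls the
    false discovery rate across the tests at `alpha` using Benjamini-Hochberg."""
    names = list(metrics)
    pvals = []
    for name in names:
        intervention, control = metrics[name]
        # One-tailed: the intervention group is predicted to score higher
        _, p = mannwhitneyu(intervention, control, alternative="greater")
        pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return {name: {"p": p, "p_adjusted": pa, "significant": bool(r)}
            for name, p, pa, r in zip(names, pvals, p_adj, reject)}

# Illustrative usage with made-up scores:
# results = confirmatory_tests({
#     "career plans": ([2, 3, 1, 4, 2], [1, 2, 1, 1, 2]),
#     "attitudes":    ([3, 2, 4, 3, 3], [3, 3, 2, 4, 3]),
# })
```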
In both the study of one-to-one calls and the online course, the differences in attitudes between the intervention and control groups at six months’ follow-up were not significant (p = 0.33 and 0.75, respectively) and the Mean Difference (MD) on this standardised score was close to zero (0.03 and -0.10, respectively).[50]
The difference in career plans was not significant (p = 0.16) in the study of one-to-one calls at six months’ follow-up. However, career plan changes were found to be significantly greater in the online course intervention group than the online course control group (p = 0.04). The MD was 0.67 on a score from 0 to 6, though this falls short of our predicted MD of 1.35. This is roughly equivalent to one in three participants “substantially” changing “the job that you are applying for / plan to apply for next” or their “long-term career plans (including role types, sector, and cause area focus)” compared to what would have occurred without the online course. The MD in the one-to-one calls study (0.48) is not far short of the MD in the online course study (0.67), and seems potentially meaningful if it is not due to random variation.[51]
Career-related behaviours were found to be significantly greater in the intervention than control groups in the one-to-one calls study (p = 0.03). The MD was 1.05 on a score from 0 to 9, slightly below our predicted MD of 1.25. This is roughly equivalent to all participants making a small change such as in “which nonprofits you donate to or volunteer for”, or half the participants making a more substantial change such as securing “a new role that you see as being able to facilitate your impact for animals”, where such changes would not have occurred without the one-to-one call. There were no significant differences on this metric in the online course study (p = 0.75) and the MD was -0.14.[52]
Self-assessed expected impact for altruistic causes was also found to be significantly greater in the intervention group than the control group in the one-to-one calls study (p = 0.02); here, the MD was 0.48 on a scale from 1 to 5 (similar to our predicted MD of 0.6), equivalent to someone moving about halfway from an answer of “No change or very similar impact” to an answer of “Somewhat higher impact” when asked how their expected impact had changed due to recent career plan changes, relative to what they would have answered without a one-to-one call. For the online course, before adjustment, there appeared to be a significant difference on this metric (p = 0.04), though the difference was not significant after controlling the false discovery rate (p = 0.09). Here, the MD was 0.24 on a scale from 1 to 5.[53]
Exploratory analyses using survey responses
Exploratory linear regressions were carried out with the dependent variables being the participants’ attitudes, career plans, career-related behaviours, and self-assessed expected impact for altruistic causes at follow-up (see the “Predictions and Results” spreadsheet). This was conducted in order to better understand whether differential attrition, rather than the interventions themselves, best explains the observed differences on the outcome measures of interest, since it controlled for imbalances in observable characteristics that might have arisen from differential attrition and confounded the relationships between interventions and outcomes.[54] The 16 predictors included whether the participant was randomised to the intervention or control group and the answers to several questions from the application form, such as prior familiarity with effective altruism.
As in the confirmatory analysis for the one-to-one calls study, randomisation to the intervention group was found to have significant effects on self-assessed expected impact for altruistic causes (p = 0.02) but not on attitudes (p = 0.85) or career plans (p = 0.36).[55] The effect of randomisation to the intervention group on career-related behaviours was no longer significant (p = 0.08), though this could be due to the sample size being insufficient to detect medium effects.[56] For the online course study, the findings were also similar to the confirmatory analysis; randomisation to the intervention group was found to have significant effects on career plans (p = 0.01)[57] but not on attitudes (p = 0.87), career-related behaviours (p = 0.88), or self-assessed expected impact for altruistic causes (p = 0.24).[58]
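The sketch below illustrates the general form of these regressions on synthetic stand-in data; the column names are hypothetical, and the real models used 16 predictors drawn from the application form.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (the real analysis used answers from the application form)
rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),         # 1 = randomised to the intervention group
    "ea_familiarity": rng.integers(1, 6, n),       # e.g. a 1-5 familiarity scale
    "pretest_behaviours": rng.integers(0, 10, n),  # career-related behaviours at pretest
})
df["behaviours_followup"] = (0.5 * df["pretest_behaviours"]
                             + 1.0 * df["intervention"]
                             + rng.normal(0, 1, n))

# OLS with the intervention dummy plus covariates that may be imbalanced by attrition
model = smf.ols("behaviours_followup ~ intervention + ea_familiarity + pretest_behaviours",
                data=df).fit()
print(model.summary())  # the coefficient on 'intervention' is the adjusted effect estimate
```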
For the online course study, further regressions were carried out using the same variables except that the online course variable was replaced with participants’ percentage completion rate of the course and the population was limited to the 62 participants who completed the follow-up form, had complete demographic data, and were randomly allocated to the online course group. As in the previous tests, the completion rate significantly predicted career plans (p = 0.02) but not career-related behaviours (p = 0.11) or self-assessed expected impact for altruistic causes (p = 0.16).[59] Surprisingly, the completion rate also significantly predicted attitudes (p = 0.02).[60]
To check the sensitivity to inclusion criteria, exploratory analysis was undertaken using only the applicants to the first cohort of the online course. None of the differences were significant, though this could be due to the sample size being insufficient to detect medium effects.[61] Additionally, exploratory analysis was undertaken where individuals randomised to the online course who completed less than 50% of the course were excluded from the analysis; the results were similar to the confirmatory analysis, albeit suggesting slightly more positive effects of participation in the course.[62] A comparable analysis was not necessary for the one-to-one calls study, since no one who skipped the call after being invited to participate actually completed the follow-up survey.
As a robustness check, nonparametric two-sample Wilcoxon rank tests were carried out comparing the intervention group’s follow-up survey answers and their answers in the application form itself.[63] None of the primary metrics described above were found to be significantly different.[64]
Rethink Priorities conducted a re-analysis of the data, using methods of their own choosing that they expected to be most appropriate, though they did not expect that this analysis could overcome some of the difficulties with the data, such as differential attrition. They carried out linear mixed modelling regressions that included both pretest and posttest data, allowed for random intercepts for each participant, and used bootstrapping to obtain robust standard errors. Randomisation to the intervention groups was not found to have significant effects on any of the four main metrics for either of the two services.[65]
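The rough sketch below illustrates the kind of model described, assuming long-format data with one row per participant per time point and hypothetical column names (outcome, posttest, intervention, participant); the bootstrap resamples participants to obtain a robust standard error for the intervention-by-time interaction. Rethink Priorities’ exact specification is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_mixed_model(long_df: pd.DataFrame):
    """Random-intercept model: outcome ~ posttest * intervention, with an
    intercept varying by participant (one pretest and one posttest row per person)."""
    return smf.mixedlm("outcome ~ posttest * intervention",
                       data=long_df, groups=long_df["participant"]).fit()

def bootstrap_interaction_se(long_df: pd.DataFrame, n_boot: int = 200, seed: int = 0) -> float:
    """Resample participants (whole clusters) with replacement and refit the model,
    returning a bootstrap standard error for the posttest-by-intervention term."""
    rng = np.random.default_rng(seed)
    ids = long_df["participant"].unique()
    estimates = []
    for _ in range(n_boot):
        pieces = []
        for new_id, pid in enumerate(rng.choice(ids, size=len(ids), replace=True)):
            piece = long_df[long_df["participant"] == pid].copy()
            piece["participant"] = new_id  # relabel so resampled clusters stay distinct
            pieces.append(piece)
        boot_fit = fit_mixed_model(pd.concat(pieces, ignore_index=True))
        estimates.append(boot_fit.params["posttest:intervention"])
    return float(np.std(estimates))
```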
Exploratory analyses using participants’ LinkedIn profiles
Supplementary analyses were conducted, comparing the roles that the participants (regardless of whether or not they completed the follow-up survey) appeared to be in at the time of their application to their roles in late July or early August 2021, i.e. 8 to 13 months after their application to a one-to-one careers call or 7.5 to 10.5 months after their application to the online course. This information was gathered from the participants’ LinkedIn profiles, where available.[66] The main results are reported in Table 5 below.[67]
Table 5: Analysis of role changes according to LinkedIn profiles in the 7.5 to 13 months after application

In the online course study, a smaller proportion of the intervention group than the control group had apparently not changed their role much or at all (54% compared to 65%).[68] Unlike in the control group (6%), none of the intervention group had undergone negative-seeming changes such as becoming unemployed or not making any change apart from stopping EA(A) volunteering. A larger proportion of the intervention group had made some change with unclear implications, such as starting a new position with no clear EA(A) relevance (29% compared to 23% in the control group). There did not seem to be comparable differences between the intervention and control groups for any of these categories in the one-to-one calls study.
Interestingly, in both studies, a larger proportion of the intervention group than the control group had made some sort of positive-seeming change relevant to EA(A): either starting a new EA(A) volunteering position (9% compared to 7% in the one-to-one calls study; 7% compared to 1% in the online course study), starting a new EA(A) internship (4% vs. 5%; 1% vs. 1%), starting a more substantial EA(A) position (11% vs. 6%; 7% vs. 2%), or starting a new role in another path that seemed potentially impactful for animals, e.g. in policy, politics, or academia (9% vs. 7%; 2% vs. 2%). Logistic regression found that the difference in EA(A)-relevant, positive-seeming changes between the intervention and control groups was significant for the online course study (p = 0.018) but not the one-to-one calls study (p = 0.479).[69]
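For illustration, a minimal sketch of such a logistic regression on synthetic stand-in data is given below; in the actual analysis each row would be a participant with a usable LinkedIn profile, coded 1 if an EA(A)-relevant, positive-seeming change was observed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the proportions here are made up for illustration only
rng = np.random.default_rng(1)
linkedin_df = pd.DataFrame({"intervention": rng.integers(0, 2, 250)})
linkedin_df["positive_change"] = rng.binomial(
    1, np.where(linkedin_df["intervention"] == 1, 0.20, 0.07))

# Logistic regression of positive-seeming change on group assignment
logit_fit = smf.logit("positive_change ~ intervention", data=linkedin_df).fit()
print(logit_fit.summary())  # the p-value on 'intervention' tests the group difference
```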
Update, 4th January 2022: A repeat of the above LinkedIn analysis was conducted with LinkedIn information checked exactly one year after each individual’s application to the one-to-one careers call or the online course. The overall pattern was similar, though there were a number of changes within each of the categories given in Table 5, such as an increase in the number of people starting new EA(A) positions, rather than EA(A) internships or volunteering (see the “Predictions and Results” spreadsheet).
Discussion
The findings from comparisons between the intervention groups and the control groups (our pre-registered confirmatory analyses) provide evidence that career advice interventions can increase participants’ expected impact for animals. Although the two studies do not suggest that either one-to-one calls or an online course successfully caused change in all our intended outcomes — attitudes, career plans, career-related behaviours, and self-assessed expected impact for altruistic causes — the studies do suggest that three out of four of these outcomes were significantly altered in the expected direction by one intervention or the other.
The study of one-to-one calls provides evidence that the participants’ own self-assessed expected impact for altruistic causes will tend to be higher than it would be without a one-to-one call and that participants are likely to engage in more career-related behaviours than they otherwise would. Given that previous meta-analyses have found only small effects from advising and mentoring interventions[70] and our study had insufficient sample size to detect small effects, these findings are arguably quite impressive. Indeed, the mean differences (our best guess of the effect sizes of the intervention) suggest changes that seem intuitively meaningful. There is some suggestive evidence that the one-to-one calls caused career plan changes, but the difference between the intervention and control groups was not statistically significant. In contrast, the second study provides evidence that online courses may cause individuals to change their career plans. The identified effects fell short of our predictions for each of the main outcome metrics for both interventions. This is disappointing, but the MDs on several of the outcome metrics still suggest changes that seem intuitively meaningful.
Differences in the design and marketing of the services mean that the average scores from the participants in the two studies cannot easily be compared directly. Nevertheless, the two studies suggest different sorts of effects from the interventions. This weakly suggests that these services do not have equivalent, interchangeable effects and could be mutually complementary.
Quantitative analyses such as these are limited in a number of ways, and one might reasonably believe that, even if the tests we designed failed to find any evidence of positive (or negative) effects, the interventions could still be impactful (or not) for one reason or another. Although we carefully created measures that we expected would provide indirect indications of whether the interventions seem likely to be effective or not, our measures may have been imperfect indicators.[71]
The high attrition rates from pretest to posttest and the differential attrition between the intervention groups and the control groups in both studies also make interpretation of the results difficult; arguably these are fatal flaws in the studies, since they remove many of the benefits of randomisation between intervention and control groups and mean that the uncertainty about the effects of the interventions is actually higher than the uncertainty implied by the 95% confidence intervals reported in the “Predictions and Results” spreadsheet. The mechanism that seems most plausible to us for explaining the differential attrition is that the intervention group participants felt (on average) more obliged to respond to our request for them to fill out a survey, given that we had provided them a one-to-one call or an online course free of charge. The responses we received from the control group might therefore be disproportionately likely to come only from individuals who are especially mission-aligned with AAC, or especially likely to have made changes that they felt proud of (and wanted to report in a survey). This would presumably make the interventions seem less effective than they would have seemed if we had had 100% response rates, and the findings from the confirmatory analyses would be conservative estimates of effects.
One could imagine plausible alternative explanations for the differing response rates which might push in the opposite direction. For example, involvement in the service could have magnified social desirability bias, relative to the control groups.[72] However, our exploratory analyses using linear regression and controlling for other factors mostly had coefficients suggesting similar or more positive effects from the interventions on attitudes, career plans, career-related behaviours, and self-assessed expected impact for altruistic causes than those suggested by a simple comparison of the means of the intervention and control group posttest surveys.
Furthermore, the exploratory analyses using participants’ LinkedIn profiles do not rely on judgments by the participants themselves or suffer from differential attrition rates that could plausibly have been caused by involvement in the interventions themselves,[73] yet they also found evidence (stronger for the online course than for one-to-one advising, and fairly independently of the confirmatory analyses) that the interventions studied here can have positive effects on career outcomes, increasing participants’ expected impact for animals. The LinkedIn analysis suffers from the limitations described in the introduction which encouraged us to use proxy metrics like “career-related behaviours” in the confirmatory analyses. It is also very subjective, and difficult to independently verify without compromising participants’ anonymity.[74]
Finally, there was a small difference in favour of the intervention groups on the only truly “objective” measure of their effects — whether the participant had joined the effective animal advocacy community directory.[75] Unfortunately, this metric is likely not a very accurate indicator of expected impact for animals, since it has very low barriers to entry and someone could in theory join it even if they had no intention of pursuing a career that helped animals.
Each of these analyses has different risks of bias and readers may reasonably have different views about which analysis is most informative, though each of them points towards at least some positive effects from the tested interventions.
At first glance, it seems surprising that comparisons between the intervention group posttest and pretest answers found no significant differences. However, it should be borne in mind that the career plans, career-related behaviours, and self-assessed expected impact for altruistic causes questions (but not the attitudes questions) in both the pretest and posttest surveys asked about changes over the previous six months. Hence, nonsignificant differences between the posttest and pretest do not provide very strong evidence that the interventions did not have positive effects; they simply suggest that, on average, the recipients of the interventions changed their plans, behaviours, and expectations a similar amount both before and after participating. For example, mean scores of 2.08 and 2.30 on the career plans metric indicate that an average participant in the one-to-one calls intervention might have “changed substantially” their “long-term career plans (including role types, sector, and cause area focus)” or “changed somewhat” each of the study programme, volunteering position, and job that they were planning to apply for next — and they might have made changes of this magnitude both in the 6 months prior to the one-to-one call and the 6 months following it.[76] This seemingly high degree of change may be because people tended to apply for the interventions during periods where they were making important career decisions. For example, the proportion of people actively seeking a new role to improve their impact for animals fell from 93% and 71% to 50% and 57% in the one-to-one calls intervention group and control group, respectively, from pretest to posttest; for the online course, the comparable figures are 73% and 78% to 59% and 65%.
If we assume that the confirmatory analysis is correct — that the one-to-one calls successfully encouraged more career-related behaviours and more positive self-assessments of expected impact for altruistic causes, whereas the online course successfully changed participants’ career plans, and neither intervention altered attitudes — which of these proxy outcome metrics is most important for increasing expected impact for animals? All these metrics seem like useful indicators, but our guess is that “career plans” is the only metric where (in most cases) we need to see meaningful changes within six months of the intervention in order to believe that the intervention has had positive effects.[77] One can imagine that an individual would change their career plans after an intervention, but only actually put these plans into action (i.e. undertake more “career-related behaviours”) after some time delay; 80,000 Hours have commonly found this sort of delayed implementation following their services (see introduction). It seems plausible that participants’ assessments of their own expected impact would be more accurate after an intervention, so even if the average score does not change much, it could be that the plan changes they make will be better plan changes than they otherwise would have made. And attitudes do not necessarily need to change for expected impact to increase. An individual might already hold attitudes conducive to careers that are highly impactful for animals, just not have identified the best career pathways. For example, they might not have thought about certain promising options before.
There is some weak evidence for this idea from the exploratory analyses using participants’ LinkedIn profiles. Although the confirmatory analyses in the online course study only provide evidence that that intervention successfully encouraged changes in participants’ career plans, the results from the LinkedIn analysis for this intervention seem more promising than the results from the equivalent analysis of the one-to-one calls intervention.[78] It suggests that a higher proportion of people in the online course intervention group than the control group had made meaningful changes to their current roles and responsibilities, including a larger proportion of individuals taking on new responsibilities directly relating to effective animal advocacy or effective altruism. In fact, the proportion of people making such EA(A)-relevant, positive-seeming changes was more than three times higher in the intervention group than the control group. By comparison, the one-to-one calls intervention group also had a larger proportion of individuals making such EA(A)-relevant, positive-seeming changes than its control group, but the increase in the proportion of people doing so was much smaller.
These two studies have a number of limitations that prevent a simple interpretation of whether the interventions succeeded or failed. Nevertheless, they bring important new evidence to the question of whether career advice interventions such as one-to-one calls and online courses can increase participants’ expected impact for altruistic causes, finding positive (albeit mixed) results.