4.0 Pipeline & Impact Tracking

The goal of this session is to plan and track your 1-1s pipeline. The considerations below can help guide you in creating your plan.

Stages of the 1-1s Pipeline

Try to answer these questions as briefly as possible, and flag any major uncertainties. Refer back to the strategy-planning documents you filled out in Session 1 and the target audience you defined in Session 2. You will now brainstorm the practicalities of actually getting that audience into your 1-1s pipeline.

Outreach & Lead Generation

  • Where and how will you advertise your service?
  • Are there events which will generate many leads (e.g. EAGx) which you need to plan for?

Filtering Applicants

We’ve already defined our target audience in the previous sessions; now we just need to formalise the screening mechanism by which you select candidates. Even if you don’t have many candidates to pick from, a filtering process is still useful: it makes sure you know what their background knowledge of EA is and which topics you should cover during the call.

  • How will you screen candidates and potentially reject them if they don’t bring the right mindset and flexibility?

Running the career 1-1

  • How will you deal with demographic and cultural differences?
    • E.g. student/professional, early/mid/late stage careers.
    • You will need to consider the different constraints people face, and also the different ways they might interact with you.
  • How will you balance catering to the immediate needs of the people you are speaking to with making sure people think through careers carefully, especially regarding their cause prioritisation?
    • See the relevant section of the career consultation guide.
  • When do you advise someone yourself, and when do you forward them to other people or resources?
    • It’s good to familiarise yourself with existing career advice resources such as WANBAM, Animal Advocacy Careers, and EA Student Career Mentoring, as well as other people in the relevant cause areas.

  • When connecting advisees to more-senior people who could help them with their career questions, make sure you have checked those people's availability and preferences:
    • What type of people do they want to advise?
    • How many advisees can they take?
    • Is there something like an open application process where they screen people and decide whom to advise? Or are they open to talking to anyone interested in their field/career path/expertise?
  • Do the organisers know the members well beforehand? Or not at all? (Or a mix?)
    • How much background knowledge do you need to get about the advisee?
      • And how will you elicit this?
    • How does this change the advice you are able to give?
  • How will you balance the short-, mid-, and long-term plans of your advisees?

Follow-up

Metrics and impact tracking

Software for Tracking & Workflows

CRM

If your group already has a CRM (Customer Relationship Management system), think about how you will track 1-1s alongside what you already record there. Keep in mind that a full CRM may not be necessary if your group is at a very early stage, and may add unnecessary overhead. Either way, you will need some reliable data-management strategy.

The links below may help you determine the best strategy. You won’t have time to set up a CRM within this workshop, but this could help with future planning.
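To make “reliable data management” concrete, here is a minimal sketch of the kind of record you might keep per 1-1, whatever tool you end up using. The field names and the CSV approach are only illustrative assumptions, not a recommended schema.

```python
import csv
import os
from datetime import date

# Illustrative field names only; adapt to whatever your group actually tracks.
FIELDS = ["name", "email", "date", "pipeline_stage", "topics_discussed",
          "follow_up_due", "referred_to", "notes"]

def log_one_on_one(path, **record):
    """Append one 1-1 record to a CSV file, writing the header on first use."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({k: record.get(k, "") for k in FIELDS})

log_one_on_one(
    "one_on_ones.csv",
    name="Example Advisee", email="advisee@example.org",
    date=date.today().isoformat(), pipeline_stage="first call",
    topics_discussed="cause prioritisation; next steps",
    follow_up_due="in 6 months", referred_to="Animal Advocacy Careers",
    notes="Send reading list",
)
```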

Airtable

  • Many groups find Airtable a good alternative to a spreadsheet-based CRM
  • Free for up to 1,200 records; $10-12/month per user beyond that. See Airtable’s pricing page for more details. (CEA funding can cover this.)
  • EA Israel. Comments from Edo Arad:
    • We also started working on a different table for managing connections with academic researchers and other potential partners, if that's also relevant.
    • Generally, Airtable feels a bit like a beta version of something amazing. There are many minor issues we had to work around, and I feel especially limited when working with forms. I think we might use the API and write some code to do it better someday, which will be much more likely if it seems like something that could help other local groups as well. (A minimal API sketch is included after this list.)
  • EA Oxford. Notes from James:
    • Oxford also uses Zapier to automate lots of things in the CRM.
    • They used the free tier for a long time, and they think it’s fine
    • They mostly use Airtable forms, instead of Google Forms and importing
    • Our Zapier use cases are kinda niche, so I don’t think there are many useful examples I can share. But just getting familiar with Zapier is a great skill because it opens lots of possibilities to solve problems you have
  • EA Philippines (feel free to duplicate this Airtable). Notes from Brian:
    • Our Airtable is similar to EA Israel’s in some ways
    • In Sept. 2020 I made EA Philippines’ CRM using Airtable. It took me ~8 hours to set it up and paste over all our directory, 1-1, and attendance data, but I now feel we’re in a good spot.
    • I would also be willing to use $12-48/mo of our group funding for this CRM later on, because of how easy it makes tracking how many 1-1s and EA events people have gone to. I also just hadn’t found a way to set up a Google Sheets CRM well enough, so I think Airtable is better.
    • I also spent 1-3 hours trying to use Airtable for task management and budgeting/expense tracking. It could be a good task-management system, but it would be expensive (1,200-record limit and per-user pricing); Asana would be better since it’s free for up to 15 people.
  • EA NYC
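Since Edo mentions possibly using the Airtable API directly, here is a minimal sketch of reading and creating records via Airtable’s REST API with Python’s requests library. The base ID, table name, and field names are placeholders you would substitute with your own; the API key/token is kept outside the code as an environment variable.

```python
import os
import requests

# Placeholders: substitute your own base ID and table name.
BASE_ID = "appXXXXXXXXXXXXXX"
TABLE_NAME = "1-1s"
URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
HEADERS = {"Authorization": f"Bearer {os.environ['AIRTABLE_API_KEY']}"}

# List existing records (the API paginates; this only fetches the first page).
response = requests.get(URL, headers=HEADERS)
response.raise_for_status()
for record in response.json().get("records", []):
    print(record["id"], record.get("fields", {}))

# Create a record; keys under "fields" must match the column names in your table.
new_record = {"fields": {"Name": "Example Advisee", "Stage": "Scheduled"}}
requests.post(URL, headers=HEADERS, json=new_record).raise_for_status()
```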

Spreadsheet-based

Custom

Professional CRMs

  • Salesforce (free for any registered non-profit, for up to 10 users)
    • Salesforce For Good contact: Matthew Poe (Open Philanthropy)
    • Groups using it: EA London (David Nash), EA Geneva (Konrad Seifert)
    • Seems a bit unwieldy for large groups
  • Zoho, or one of the many other paid alternatives on the market, which may offer nonprofit discounts to individual local groups

Other EA groups’ CRM and metrics by Janique (EA Zurich) - Please do not share this document publicly.

Questions

  • What software do organisers use for each stage of the pipeline?
    • E.g. Calendly, Google Forms, etc.
    • Brian (EA PH): I use Calendly to have people schedule 1-1s with me. I also set up a Zapier automation so that a new Google Doc with a minutes template is created every time a meeting is booked through my Calendly. 80,000 Hours has some custom scripts to auto-populate things into the Google Doc, but I haven’t figured out how to do that (or how to do it via Zapier), and I think it’s more work than it’s worth. (A rough sketch of what such a script could look like is included after this list.)
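For context, here is a rough sketch of what a custom script like the one Brian describes could look like (this is not 80,000 Hours’ script, which is not public): a small webhook receiver that copies a minutes template in Google Drive whenever Calendly reports a new booking. It assumes you have set up a Calendly webhook subscription and a Google service account with Drive access; the Calendly payload field names should be verified against the API version you are using. In practice, a no-code Zapier workflow does the same job, which is what Brian uses.

```python
from flask import Flask, request
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumptions: a Calendly webhook subscription pointing at /calendly-webhook,
# and a Google service account with access to the minutes template document.
TEMPLATE_DOC_ID = "your-minutes-template-doc-id"  # placeholder
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=["https://www.googleapis.com/auth/drive"])
drive = build("drive", "v3", credentials=credentials)

app = Flask(__name__)

@app.route("/calendly-webhook", methods=["POST"])
def on_booking():
    event = request.get_json(force=True)
    if event.get("event") == "invitee.created":
        # Field names follow Calendly's webhook payload; verify against their docs.
        invitee = event.get("payload", {})
        name = invitee.get("name", "Unknown advisee")
        start = invitee.get("scheduled_event", {}).get("start_time", "")
        # Copy the minutes template into a new Google Doc named after the booking.
        drive.files().copy(
            fileId=TEMPLATE_DOC_ID,
            body={"name": f"1-1 minutes: {name} {start}"},
        ).execute()
    return "", 200
```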

Impact Tracking

By this point, you should already have a sense of the goals you want to optimise for in your career 1-1s, such as trajectory changes, speed-ups, and more. This section is about how to track those changes, both positive and negative.

DIPYs

[Public] 80k annual review Dec 2019 on Impact Tracking. Read pages 15-18 and 59-53

Summary by Brian Tan (EA PH)

  • I think a key insight I learned here about 80K’s new metric, DIPYs (Discounted, Impact-Adjusted Peak Years), is that it depends on how many years the person you’re advising has left in their career and how many years it will take before they reach their peak. The fewer years they have left in their career, the lower their DIPYs. So maybe EA groups would want to focus career 1-1s and outreach on younger people (maybe 18-25 yrs old?) so they have more ability and time to enter a priority path. (An illustrative sketch of this kind of discounting is included after this list.)
  • I am trying to make a Google Sheet to re-derive / recreate 80K’s top plan change evaluation system that they talk about on page 50, but I’m not done with it and it’s a bit difficult. I wish we could just ask 80K for their model. Sadly they put here: “The technical details are covered in docs we don’t intend to release publicly.”
  • They are tracking top plan changes & criteria-based career plan changes
    • A criteria-based plan change is when someone makes a career change to something which satisfies some set of predetermined criteria. We haven’t yet finalised the list of criteria, but it will likely include changes which we think we’re >20% likely to have caused and which involve things like accepting any role advertised on the job board, accepting any role in policy focused on problem areas we prioritise, receiving >$20,000 for a project from an EA aligned funder, or a range of other changes. The aim of the criteria-based system is to capture a wider range of changes with less evaluation time, using surveys rather than interviews, and with more transparency to external stakeholders. We expect they will be on average 10% as valuable as a top plan change, though this is highly uncertain. This means they very roughly correspond to rated-10 plan changes on our old system.
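As Brian notes, 80K’s actual DIPY model is not public, so the sketch below is only an illustrative guess at how a discount based on years-to-peak and remaining peak years could work. Every parameter name and number here is an assumption, not 80K’s methodology.

```python
def illustrative_dipys(years_to_peak, peak_years_remaining,
                       impact_adjustment=1.0, annual_discount=0.10,
                       counterfactual_credit=0.3):
    """Toy calculation only; NOT 80K's actual (unpublished) DIPY model.

    years_to_peak: years until the advisee reaches their career peak
    peak_years_remaining: productive peak years left once they get there
    impact_adjustment: relative value of the path (1.0 = baseline)
    annual_discount: discount applied for each year until the peak
    counterfactual_credit: share of the change attributed to your 1-1
    """
    time_discount = (1 - annual_discount) ** years_to_peak
    return peak_years_remaining * impact_adjustment * time_discount * counterfactual_credit

# A younger advisee with many peak years left scores higher than an older
# advisee with the same impact adjustment, all else being equal.
print(illustrative_dipys(years_to_peak=10, peak_years_remaining=30))  # ~3.14
print(illustrative_dipys(years_to_peak=10, peak_years_remaining=10))  # ~1.05
```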

Related 80K Readings

Examples

Animal Advocacy Careers (AAC)

  • They started advising around summer 2020
  • They have a pre-registration document to evaluate the effects of advising as an intervention
    • They will do a study of ~100 participants (the sample size they estimate is needed to detect medium-sized effects) to evaluate the effect of their advising on participants, and they will do statistical analysis of the data.
    • They will compare between a control and intervention group.
  • Metrics
    • They are doing immediate, 6-month, and 2-year follow-ups.
    • I recommend reading the planned analysis section to see how they plan on measuring & calculating the impact from each metric. Briefly:
    • The survey (see the full text of their 6-month follow-up survey questions) has been designed to assess whether there have been changes to the following metrics (the application form will contain all the same questions as the follow-up form):
      • participants’ attitudes
      • career plans
      • career-related behaviours
      • self-assessed expected impact for altruistic causes.
    • They break down each key metric into components (e.g. for career plans, the components are study plans, internship plans, job plans, and long-term plans). They ask participants about each of these in their follow-up survey.
    • They will score the applicants using different methods (see the table in the document for details). Some components are weighted unevenly; this is based on AAC’s intuitions about their relative importance and the likelihood that these changes will translate into substantial changes in impact for animals. (A toy example of this kind of weighted scoring is included after this list.)
    • They list several robustness checks at the end of the section which I recommend reading.
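AAC’s actual scoring rules are in the table in their pre-registration document; the toy sketch below only illustrates the idea of weighting components unevenly and scoring the change between application and follow-up. The component names are the ones AAC lists for the career-plans metric, but the weights and scores are made up.

```python
# Made-up weights; AAC's real weights are in their pre-registration document.
WEIGHTS = {
    "study_plans": 1.0,
    "internship_plans": 1.5,
    "job_plans": 2.0,
    "long_term_plans": 2.5,
}

def weighted_change_score(baseline, follow_up):
    """Weighted average of per-component changes (components scored e.g. 0-10)."""
    total_weight = sum(WEIGHTS.values())
    return sum(
        WEIGHTS[component] * (follow_up[component] - baseline[component])
        for component in WEIGHTS
    ) / total_weight

# Example: scores from the application form vs. the 6-month follow-up survey.
application = {"study_plans": 3, "internship_plans": 2, "job_plans": 1, "long_term_plans": 2}
six_month = {"study_plans": 6, "internship_plans": 5, "job_plans": 4, "long_term_plans": 7}
print(weighted_change_score(application, six_month))  # ~3.71
```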

Student Career Mentoring

  • They have been advising since late 2018/early 2019
  • They don’t have anything formal written up, but I asked Huw/Alex for their thoughts.
  • Follow-ups & Feedback: immediate, 6-month, and (expected) 1-2-year follow-ups
    • Immediate Feedback Form: https://airtable.com/shrr8EDXYCxfAqe48
    • 6 month follow-up feedback:
      • Would you mind sending over some quick updates on your progress since our last chat?
      • I'm most interested to hear about any course or study program changes, internships or jobs you've applied to or taken up since we spoke, especially if you think participating in the program played any part in you doing this.
      • I'd also be interested to hear about (again especially if you think the calls had some influence):
        • Have you taken on any smaller projects? (e.g. attending workshops, attending a new class, starting a research project, taking an online course)
        • How many calls with mentors did you end up having? How useful would you say the calls were, from 1-10?
        • How much time would you estimate you spent reading (e.g. research agendas, papers, 80,000 Hours articles) as a result of the calls? Would love to hear about any specific pieces you found particularly insightful, or spent a lot of time on.
    • The primary questions try to get at key outcomes (see below) that have come from the calls, and the advisee’s best guess of the likelihood that they would have done these things anyway (counterfactually)
    • Response rates (estimated): I think above 50%, and closer to 100% than 50% with the people we’ve prioritised getting feedback from (probably in part because these are the most engaged people, and/or the people we’ve impacted most, and so likely the most cooperative)
  • Metrics
    • Participant feedback (entirely a lead metric)
    • Key concrete outcomes
      • Getting a job
      • Starting a project that will help them prepare for a future job, or similar
    • Our best guess of whether this person is on a good trajectory towards an impactful career in 3-7 years
      • Based on eg. EA engagement, level of fit with their current plan, level of detail of their current plan
  • Relative importance of the different metrics:
    • I think the key thing that funders probably will want to see, and the way we 'sanity-check' our own judgement, is seeing people take concrete opportunities.
    • So something like: we think a lot of the impact will come on a longer timescale, and for that we fall back mostly on 'does this person seem to understand EA? Does their career plan seem strong/likely to succeed, and fit their strengths?'. But I'd become concerned if this was the only kind of outcome we were tracking, as it's too dependent on our personal judgement, so we also look at concrete outcomes.
    • We expect most of the impact to come from a few individuals, and we also expect people to be pretty bad at gauging the counterfactual likelihood of outcomes produced by the program
      • In cases where we expect we had a particularly large impact, we'll spend a lot more time talking to them about variables that affected the outcome, and try to make our own best guess estimate of the likelihood we affected them
      • We'd also maintain the raw data from this, as we expect ourselves to be biased, and we'd share this with eg. CEA when applying for CBG rounds and encourage them to make their own estimates
  • Timescale of impact
    • I think we expect to see plan changes on much longer timescales than 80K and AAC. E.g. they can track plan changes within the past year, but a lot of our impact might only be observable 3-10 years after we speak to people. (In some cases it's 0.5-2 years, but those are probably rarer.)
  • Key Uncertainties
    • Gauging counterfactuals: Most of my key uncertainties are around gauging counterfactuals, and deciding how much to weight explicit outcomes (jobs) vs. eg. a person improving in EA engagement and building a more robust long-term plan, which is a common output of our process, particularly given that we work with students.
    • Judging Harm: Another issue is judging harm, e.g. it's difficult to tell if we were the cause of someone counterfactually 'bouncing off' EA, or specific ideas within EA. It's also likely on priors that some of the advice we've given has led to people ending up in worse positions; I think it might be too early to tell just yet, though.
  • Recommended Resources: I think just reading 80k's annual review tends to be a long but very useful step here.

Questions

  • How does our theory of change affect the way we measure impact?
  • General strategies for longer-term impact tracking
    • How long are you planning on being a group organiser? Will you be around long enough to see your impact? If not, how can you ensure your group does know the impact you’ve had?
  • When should you collect feedback? What is the benefit of feedback at each stage?
    • E.g. immediately, 1 month, 6 months, 1 year?
    • How many times should you collect feedback?
  • What are some failure modes of collecting good feedback?
    • Goodharting
    • Vanity Metrics
    • Social desirability bias
    • Measuring only lead metrics, and not lag metrics, e.g. actual changes in career path
    • Not counting any impact if someone doesn’t actually change their career; the correct counterfactual may be that without a career 1-1, long-term retention would be lower
  • It might be useful for groups to coordinate and use similar questions (with additional questions as variations to see what’s useful), so we can start comparing across groups & improve our knowledge of career activities
  • How much of a positive outcome can you claim?
    • RE: 80K’s discounting on their new DIPY metric
  • How long to spend measuring impact vs doing the call itself?

References

  1. How to Measure Anything

What I’d like to see from other groups:

  • A short paragraph on their impact evaluation strategies