Interview with Ryan Carey

Some people have committed a great deal of their lives to trying to make the world a better place. I’m trying to sit down with some of these people and learn more about their thoughts and motivations.

Today, I sit down with Ryan Carey, who has spent a lot of time working on various projects to make the world a better place, from setting up the utilitarian forum Felicifia to co-creating an “effective altruist” meetup group in Melbourne, Australia. He is now collaborating with Leverage Research and the Centre for Effective Altruism on research into the far future and associated risks. Ryan has recently been blogging about his thoughts and experiences on his personal blog.

PH: Hi Ryan, good to talk to you. Let’s start off with your origin story. How did you first get interested in effective altruism?

RC: Ah, I learnt about effective altruism through the idea of utilitarianism. I had been reading philosophers like Daniel Dennett, and neuroscientists like VS Ramachandran. I wanted to know about big questions relating to consciousness and ethics.

I was in high-school, and I had mentioned to my mother some of the things that I had been reading, and it came to her mind that there was an old professor who had once taught her some lessons in bioethics and she had a book for me by that author… That book was Practical Ethics by Peter Singer!

When I read his ideas about animals, poverty, and ethics in general, it just clicked for me - he seemed so reasonable and rational. I went on to learn about this philosophy and gather others who shared that interest online. Many years later when the effective altruist and rationality movements developed, that just crystallized my thinking about how to make the world better and drove me to take more concrete actions.


PH: Cool! I came to effective altruism through utilitarianism myself. But I know a lot of other utilitarians who aren’t effective altruists and are content just to speculate and philosophize without doing anything. It frustrates me. What do you think compelled you to make this jump to action?

RC: If only I knew – then we could just swing by all of the utilitarians and activate them!

What I know is that knowing that suffering is bad, or that you would like people’s experiences to be improved, is very far from feeling empowered to do something about it. I’d speculate that there are personality factors involved - some people are naturally conscientious. A cynic might say naturally compelled by guilt, although that’s not my experience.

Personally, every time I’ve been involved with other people who are motivated, then I become socially engaged with that group, and it drives me to do more things that I think that they will find exciting and impressive. So I’ve passed through Felicifia, my utilitarian forum, and then into LessWrong, and through the effective altruist community as well, and at every step I’ve found a lot of people who I like and enjoy being around.

That would certainly be the Hansonian explanation - that charity is not about doing good, it’s about getting social points. And I guess my experience is pretty consistent with the idea that social points have played a role in driving me to do more at every step of the way. But I’ve just been lucky in that the people I surround myself with and respect tend to care very much about productivity. So I get social points by starting projects that have a large expected impact.

And I guess Robin would agree that we can signal an expectation of effectiveness from others - I’ve mostly been activated by enjoying the company of other EAs - social signalling used for good, and I think there should be more of it!


PH: A brutally honest and self-aware answer, as much as I’d like to hope there’s more to EA than runaway signalling! Even so, do you think there’s a way we could harness this desire for social points to get more people to want to do good?

RC: Yes, well I think it already happens, and it’s really good. Effective altruist groups are popping up all over the world. Some of them carry general EA branding, like my new group in Melbourne, but most of them are broadly under the umbrellas of Giving What We Can and 80,000 Hours.

I think these groups deliver a lot of value to attendees: people learn a lot of fascinating ideas and get to build really close social relationships. But at the same time, these groups challenge people to integrate some demanding ideas into their behaviour, like the idea of earning to give and the idea that researching the future is important and neglected.

I think of LessWrong as an effective altruist group too. It’s nominally a group for rationality, but almost everyone is altruistic when you get down far enough. LessWrong shares EAs’ affinity for the scientific method and for evaluating consequences. In turn, EAs frequently share LW-esque views about biases and epistemology. They’re sister groups in Melbourne, and I think everywhere, so I think all of these communities are tremendously important for encouraging people to do good.


PH: It’s certainly cool that you’ve got to be a part of the effective altruism wave with EA Melbourne. But that’s a pretty big initiative. How did EA Melbourne get its start?

RC: It started when we were contacted by Giving What We Can Central. I was notified that Rangi de Silva was in Melbourne - another doctor in training who was interested in starting a group there. So I contacted absolutely everyone I could think of - the central coordinators of Giving What We Can, 80,000 Hours, THINK, Leverage Research, everyone I knew from the Monash University’s philosophy society, and from the local LessWrong group, as well as some old friends.

A few of us met in a coffee shop to decide what we wanted to do, and the very first social event was well attended right away, which was excellent. I’m glad to see that they’ve continued to go from strength to strength after I left and my collaborator Brayden stepped down, by hosting CFAR, who spoke to an audience of 70.

It takes regular, reliable, timely work to coordinate things, and a network of people who will attend - I think those are the main things. That’s what I would suggest to someone trying to launch an EA group, and I’m glad to see that EA Melbourne certainly still has those going on.


PH: Do you think anything else made EA Melbourne a success, or was it just sheer hard work? What would be some tips you might give to people who want to organize EA groups in their own local communities?

RC: I think that the amount of work required is pretty patchy. You can’t just disappear from the planet and spend two months without organizing anything. In particular, you need to be present to arrange things on a weekly basis at the start. But it’s mostly just basic logistics like setting a venue, a topic, and maybe a speaker, and inviting people.

You just need to give people a reason to come: they need to know somebody who will be there, the group needs to be nice people - as of course most EAs are - and they need to have something interesting to talk about.

To start a group, I recommend contacting the central EA groups to get them to introduce you to anyone interested who lives locally and gathering a few people who are willing to throw their social networks into the project. The Central EA groups can offer a little support remotely, but mostly having a kind, sociable and gender-balanced seed-group seems to often lead to a virtuous cycle that grows itself. It just takes strong existing social networks and a couple of months of good events to bootstrap that. I’ve elaborated on these tips in an article on my blog.


PH: That’s great advice. Thanks! But you’ve moved on from EA Melbourne. What led you to give all that up? What are you doing now?

RC: The thing with EA groups is that people always ask you questions:

  1. What is the purpose of this group?
  2. How can I volunteer for you or others?
  3. Who should I donate to?
  4. What is the most important work to do, apart from recruitment? That is, we can build social networks for only so long, but what should we do afterwards?

These questions are interesting, important and complex, so I set out to develop better answers to them. I feel that the answers lie in the area of improving the far future. However, it’s very unclear what we can do to best prepare society to flourish even 10 or 100 years from now, so I decided that I had to find an answer.

My best plan for doing this was to travel to the hubs of EA and rationality to get up to speed on others’ progress. I began 2013 in San Francisco, where I have collaborated with the Center for Applied Rationality and Leverage Research, and I will do a similar thing in Oxford later this year.

All along, the goal is to learn how to produce a better future, including a major subquestion: what kind of people do we need to do that work? I am currently doing some research on some of those questions while also trying to build a movement of people to help. I’ll probably be split evenly between research and movement-building over the coming six months.

So basically, EA Melbourne was a valuable project, but I’m now running out of new ideas to share with them, and I think I need to meet experts who have thought about these things for longer and in greater detail to keep learning. Also, during the time I was doing EA Melbourne I was working there as a doctor, and now that I have finished my internship year I have the option to take on other responsibilities.


PH: What made you think that producing a better future is more important than, say, working on global poverty or nonhuman animal rights?

RC: The best version that I have heard of the argument is a lecture by Max Tegmark at an old Singularity Summit. His idea is that the future is so vast, in terms of time and space, and yet its value comes from the fact that we are here to see it. Now we can determine whether our civilization will grow across the universe, or conversely whether nothing interesting will ever happen again anywhere.

I value the lives of people in a civilization a million years from now the same as the lives of people now, so it seems so much worse if we muck things up cosmically.

Although it’s grandiose to want to save the world, it is egotistical to think that everything important is going to happen in one time and space, when we live in a universe that is so much larger - to think that our current experiences are the best that the universe has to offer.

How exactly to secure our future and ensure that everyone has better lives, including non-human animals, is a very complex question, but I think that it is also the most important one.


PH: What do you think of the current work being done on this topic?

RC: I think that Eliezer Yudkowsky and Nick Bostrom have done the most in performing and popularizing this research. I think they have robust arguments that artificial intelligence poses the largest extinction risk over the next century. Consider that, one by one, computers have taken on ever more impressive and complicated tasks: chess, Go, Jeopardy!, and now driving. The extent to which we use computers, and the diversity of jobs for which we turn to them in our day-to-day lives, has greatly increased over the last decade. Our computers are networked much more closely, and we spend much more time with them.

If computers continue to get more intelligent in slightly more abstract ways year upon year for another few decades, the world will already look very strange to humans today. If these systems take on ever more of the properties of autonomy and goal-directedness of high-frequency trading systems and military robots, then we can envisage machines overtaking the intelligence of humans.

If we reached such a situation, it would very quickly become important that the robots are embedded with some kind of ethics or human values. I think the folks at MIRI have good arguments that we can expect computing to be increasingly goal-directed, that this could happen quickly, and that it would take some extreme kind of precision to create a machine intelligence whose values create a happy outcome for humans.

Artificial intelligence is one issue that plays a big part in many people’s models of the future, and I think it’s an important one. However it’s only one issue. There are a range of other global catastrophic risks including asteroids, supervolcanoes, bioterrorism, and nuclear war.

Holden has recently argued that we should consider our developmental trajectory as a civilisation to be anomalous by historical standards and that any of a number of global catastrophic risks could cause civilisation to take humanity off its favourable developmental trajectory. I agree with him that research on Global Catastrophic Risks in general is important.

He also advocates that a bunch of strategies, like alleviating poverty, can have positive flow-through effects that will set our civilisation on a better long-run trajectory. I think there are lots of conceivable interventions that Nick Beckstead would describe as broad-based, including improving collaboration and rationality, that could make things go a lot better in the long run. We could also enhance human intelligence or human morality. There are almost endless possibilities.

At this stage, I find that the narrower research on global catastrophic risks and artificial intelligence has the most compelling case for improving the far future, but I’m not sure if I have good reasons for that, and I think that the field could easily be broken wide open by an academic who is informed and able to think more clearly about these things.


PH: Do you think you could be that kind of informed academic who breaks the field open? If not, what do you expect to do?

RC: I doubt I can outdo all of Eliezer, Paul Christiano, Carl Shulman and all of FHI. But I hope that I can understand the field enough to know what kind of people are required, what kind of infrastructure can be provided to support their work, to publicise it and to reach out to gifted young people who might be interested.


PH: Do you have specific plans of action yet? Or are you still thinking?

RC: Both! And I think I always plan to be! Actually, mostly the second one, because my plans mostly involve gathering more people together to help me think.

My two projects are an EA Handbook and an online community called The Most Good.

The EA Handbook is a document that will introduce people to effective altruism and bring them up to speed on these questions. I’m preparing this with Will MacAskill. So far, we have collected many existing essays and a few authors have agreed to write some more to describe how everybody can contribute to existing organisations. This project should be finished by the middle of 2014.

The online community The Most Good will provide a central location for EA blog posts, discussion and meetups. To avoid duplicating work, we’re using Trike Apps, who will build the software using the LessWrong codebase. This website is currently in the design stage, but hopefully it will be completed by mid-year.


PH: If you don’t mind me asking, why are you doing all this yourself? Earlier you said you were training to be a doctor. I’ll bet on a doctor’s salary you could “earn to give” and hire multiple people to work on these projects.

RC: I might end up going back to being a doctor, but there are three reasons that I’m not doing that right now: (1) I don’t know who to fund, (2) I think that many of these future-oriented projects are more time than money constrained, and (3) my uncertainties around my skill and fit are greater for research and activism.

On (1), in global poverty reduction, we think that charities vary by up to 100x in effectiveness. The Centre for the Study of Existential Risk, the Machine Intelligence Research Institute, FHI, the Global Catastrophic Risk Institute and the Centre for Effective Altruism are all working on issues relating to the far future, and I currently have very little idea which is the most effective. In research in particular, we know that some researchers, like Norman Borlaug, can be enormously effective.

In general, in terms of citations, we know that some researchers outperform others by many orders of magnitude. Some futurists clearly have very poor intellectual methodologies, so I expect this to remain the case in future-oriented research. Moreover, there are no good metrics for the impact of future-oriented research. Nick Bostrom and Eliezer Yudkowsky are not merely aiming for publications and citations, and neither should they be, so these metrics can only be a small part of my assessment. I can ask experts that I respect whose research they value the most, and I think that this is an important part of the solution, but it is also somewhat circular: it is hard to decide who to respect without any technical understanding, and this can be biased by halo effects that come from their attractiveness, demeanor or social competence. So I think at this stage, I need greater technical understanding in order to decide who to give to.

On (2), I think that many of these future-oriented projects are more time-constrained than money-constrained. I know fewer than a hundred people who are interested and able to perform research into the far future and build a surrounding movement. By contrast, I know several people who have demonstrated their willingness to support ambitious research agendas, such as Peter Thiel, Jaan Tallinn, Matt Wage and Matt Fallshaw.

One has to ask what we know that Peter Thiel doesn’t that is preventing him from giving all his funding to MIRI, or Jaan to CSER. I think they know that their ability to turn cash into recruiting and employing valuable researchers is limited, or at least that the marginal returns will decline at some point. Another line of reasoning is that many people are currently pursuing earning-to-give careers in finance and entrepreneurship that will take 3-10 years to pay off, so it should soon become easier to get effective projects funded. Also, most thought-leaders and organization-leaders who I have talked to consider their organizations more constrained by talent than by funding. Of course, these statements are misleading or false if taken as absolutes; they make sense only when one weighs how much funding an organization has against how much talent or recruitment ability.

On (3), given this information about the wide range of usefulness of far-future oriented research, I am highly uncertain about my impact there. I am also pretty uncertain about my value as an entrepreneur, but less so. Medicine is by far the most reliable option. Currently, it seems best to narrow my uncertainty on my impact as a researcher or in performing recruitment and movement-building. Then, I might move on to start-ups. If all this falls through, then I’ll happily be a doctor and pool my funding with the rest. Although even then it seems good to live in or visit the EA hubs to stay motivated and inform my donations.


PH: How were you able to quickly integrate yourself with the San Francisco and Oxford EA communities in order to get involved? What could an aspiring EA do to follow in your footsteps and also work on making a better future?

RC: Professionally, I think EAs and rationalists can easily see your value if you are conscientious or have achieved some broad competencies. I recommend asking to be introduced to some of the leaders of these organizations by your best connected EA friends or cold-emailing them.

Socially, I think EAs are wonderful and inclusive people. If you have some social skills - or even if you don’t have many, but you have interesting ideas - people will tend to like you.

My advice regarding integrating into the San Francisco and Oxford communities would be: just do it. I think that travelling to Oxford and the San Francisco Bay Area were key steps in my intellectual development. Interning at the Center for Applied Rationality or the Centre for Effective Altruism is a good step. Or one can attend a CFAR workshop or get career counselling from 80,000 Hours. And I am always happy to be contacted to answer questions and help with introductions.


PH: What is a typical day for you like these days?

RC: Every day I read about topics that I think might improve my strategic understanding, and I try out techniques recommended by people I respect. Recently I’ve been researching intelligence - I take creatine every day and measure its effect with a 20-minute battery of cognitive tests. For the last fortnight I have been practising speed-reading, because I think it is important to front-load the acquisition of skills that will help further productivity and research. I’m also emailing people to prepare the handbook, and I talk with researchers at Leverage and the Center for Applied Rationality about how we can be smarter, more emotionally stable, and less biased. When I learn something new or useful, I write it down on my blog.

Of these activities, I’m most confident in the importance of speed-reading, movement-building, and researching creatine. My ongoing plan is contingent on what works, and I expect I’ll answer very differently two months from now.

Outside of EA, I like to see blues, funk and soul music, and comedy.


PH: What do you think is the most important thing you’re not doing?

RC: Probably what Gaverick Matheny is doing. He is the program manager at IARPA responsible for the Aggregative Contingent Estimation program, in which the Good Judgment Project is competing to aggregate opinions and improve probabilistic forecasts of real events. I think that improving collective prediction and collective decision-making is important for stability and cooperation between governments, and that it’s important to improve the sanity level of populations and large governments; this work seems like a step in the right direction.


PH: Well, I think I’m out of questions. It was great getting a chance to talk with you and learn more about you. Do you have anything else you want to pass on to EAs before we leave?

RC: I would say congrats on being one of the special few who is keen to change the world. By meeting others in your hometown and in EA hubs, you can learn faster and become more productive. Try to learn about the questions that expert EAs are asking themselves, and think about how you might fit in. EA is in a fascinating sociological and intellectual position, and I hope there are great things to come.

And thanks for the interview Peter!

PH: Anytime. It was great to hear more about you and I look forward to talking with you again sometime!