For years, I have wanted to write about what the aspiring rationalist community means to me. Seeing a lot of people criticize it, sometimes with arguments I agree with, often with ones that do not represent the reality I've seen, I made a Twitter thread about it. Threads are helpful because I find it more acceptable there to write whatever comes to mind, which makes them easier to write than a blog post. I'll translate the thread and post it here too.
For a little while now, I have been exploring the notion of Effective Altruism (EA for short). My readings on the topic so far have been very interesting, and I would like to add an idea of my own that I deem important and have yet to read elsewhere. If it has ever been written down somewhere, I can at least attest that it is far too well hidden. Personally, I believe it should be discussed in introductions to EA.
This blog post is cross-posted from LessWrong. It will touch on two related topics:
1. Why and how I applied to MIRI and failed to secure an internship.
2. My experience at the AI Risk for Computer Scientists workshop (AIRCS).
Trigger warning: Death, suicide, and murder. Trolley problem.
This is quite the conventional ethical conundrum: you are standing near train tracks, and a train is rolling down the hill. It is going to run over four people who are tied to the rails of the main track. However, you can divert the train to a secondary track by pulling a lever, so that it runs over only one person, also tied to the rails. Should you pull the lever?
I do believe there is a more interesting way to frame it: what would you choose if you were yourself the one tied to the rails alone, while the train is not yet headed toward you? My own answer is very simple: I want the person deciding where the train goes to have _no doubts_ that they should pull the lever! Because, for lack of any other context, I assume that the other four people are just me, or rather copies of me. That's a bit simplistic; of course they are not perfect clones. But as far as the decision is concerned, they are indistinguishable from me. That is to say, I have 1-in-5 odds of being the one alone on the tracks, and 4-in-5 odds of being in the group. And I'll tell you what: I would rather die with 20% probability because of something someone did than die with 80% probability because no one was ever willing to take on the burden of responsibility.
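Spelled out as a quick expected-risk calculation (a minimal sketch, assuming each of the five positions on the tracks is equally likely to be mine):

$$P(\text{I die} \mid \text{lever pulled}) = P(\text{I am the one alone}) = \frac{1}{5} = 20\%$$

$$P(\text{I die} \mid \text{lever not pulled}) = P(\text{I am among the four}) = \frac{4}{5} = 80\%$$

Under this framing, pulling the lever also minimizes the expected number of deaths: one instead of four.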