Saturday 31 January 2015

Figuring Good Out - January Master Post

For this month's Figuring Good Out on "origin stories", we received 5 submissions.

I wrote about my transition from atheism to pop philosophy & science to LessWrong to EA.

Michelle Hutchinson wrote about how her meeting with Will MacAskill in graduate school led to her joining Giving What We Can.

Bernadette Young argued that the literary character Dorothea Brooke is an alternative example of an EA advocate.

Tom Stocker explained the many different factors that guided him toward EA.

Peter Hurford covered how the connections he made in the EA network influenced major life decisions of his.



February's topic, as suggested by Ben Kuhn & Ruthie, is "writing about explaining effective altruism." This isn't a call for new definitions or explanations of EA - it's a call for thinking about how to explain EA, especially in person.

Monday 26 January 2015

I'm On Gratipay

You can now find me on Gratipay.

I'm hoping to continue looking into the relationships between EA, marketing, social movements, and art. I think I'm relatively non-replaceable here in that nobody else associated with the EA movement seems too interested in personally researching these areas.

Sunday 25 January 2015

Cellphones As Paperweights

I'm skeptical of most art's ability to do much in terms of making the world better. This could easily be interpreted as me being skeptical of art itself as a tool for improving the world. I could easily be accused of overlooking how art provokes discussion, challenges norms, entertains and inspires audiences, and motivates empathy.

But it's not that I don't recognize these benefits. It's that I believe art can be so much more powerful than this.

We're in the early stages of art being used to save lives and reduce suffering in a predictable and scientific way. Compared to this, I don't much care for provoking discussion. Most art probably does more good than harm but using a life-saving tool for the self-actualization of the rich is like using a cellphone as a paperweight.

Paperweights do more good than harm too but if you found out yours could connect you to Google Maps, you'd pick it up off the desk and start keeping it by your side. 

I'm not skeptical of art as much as I am of the artists - those who would rather challenge gender norms in the West than eradicate malaria in the South.

While the artists create on firmly weighted-down paper, the third world lives in poverty and the rich are missing their calls.

Saturday 24 January 2015

Will Effective Altruize For Food

I'm hoping to quit my job and start part-time or full-time paid EA work for an EA organization or private funder(s) in the near future.

If you're reading this post, you likely already know what I have to offer and how much you value those skills. I can write, research, summarize, philosophize, organize, social psychologize, and do various other tasks like Google Grants optimization and data entry. I also have media skills from my BA and MA: videography, editing, sound/radio, etc.

Let me know if you have a project in mind. We can Skype. :)

Potential Projects:


At Ben Kuhn's suggestion, I'm listing some projects I could work on.

One possibility is to continue applying EA-style ideas to artistic production. I'm interested in untangling the messiness of the arts ecosystem and determining how to yield the greatest cultural value from artistic output. My impression is that this question isn't considered to be high-value among EAs.

A second possibility is to write more in-depth book summaries. I was contracted to summarize Cialdini's Influence recently and I could do the same thing with various other books and resources if that's considered useful. I also wrote a summary of Daniel Gilbert's Stumbling on Happiness and David Allen's Getting Things Done. I'm not sure how useful people would find these summaries.

A third possibility is to delve more into marketing, communications, and outreach. I've done some of that on this blog but I definitely don't have professional-level marketing skills & knowledge. I recently began summarizing the textbook Principles of Marketing and I think it might be high-value for me to continue learning marketing. Marketing is one of the things the EA community wants to improve at but nobody is really bothering to learn the subject.

Saturday 10 January 2015

The Choose-your-Charity Tax

Boalt Hall suggests a Choose-Your-Charity Tax. This would provide a tax credit of one dollar for each dollar of charitable giving, up to a certain limit, perhaps 10% of one’s income. So if a taxpayer grossing $100,000 a year donates $10,000 to the charity of his or her choice, he or she would receive a $10,000 tax credit.

“A decision to vote for the Choose-your-Charity Tax expresses a willingness to endure significant personal sacrifice, but only willingness to do so if others match that willingness.  
“And if others match, then the sacrifice is not a drop in the bucket, but a great wave of change – and the taxpayers know that. By matching contributions with over 100 million US taxpayers, it would be possible to solve vast problems. Roughly speaking, supporting such a policy is equivalent to being willing to give $1 when the personal price of giving that dollar is one-hundred-millionth of a dollar. This is the ultimate expansion of the proven “matching” tactic that has already helped increase donations to charities.”

For the average taxpayer, 10% of their income equals roughly $4,000 per year. Even those who agree with Singer’s ideas on charity might struggle to take that $4,000 out of their own pocket and give it to a poor person. But with the Choose-your-Charity Tax, your $4,000 donation is matched by 100 million other American taxpayers, producing $400 billion/year in extra charity.
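The arithmetic behind the proposal can be sketched in a few lines. This is only an illustration of the post's numbers (a dollar-for-dollar credit capped at 10% of income, and 100 million taxpayers each giving about $4,000), not the parameters of any actual policy:

```python
def charity_tax_credit(income, donation, cap_fraction=0.10):
    """Dollar-for-dollar tax credit, capped at cap_fraction of gross income."""
    return min(donation, cap_fraction * income)

# The post's individual example: $100,000 gross income, $10,000 donated.
credit = charity_tax_credit(100_000, 10_000)
print(credit)  # 10000.0 -- the full donation is credited back

# A donation above the 10% cap is only credited up to the cap.
capped = charity_tax_credit(100_000, 15_000)
print(capped)  # 10000.0

# The aggregate figure: 100 million taxpayers each donating ~$4,000.
total = 100_000_000 * 4_000
print(total)  # 400000000000, i.e. $400 billion per year
```

The cap is what makes the "matching" framing work: each taxpayer's sacrifice is bounded, but the pooled total is enormous.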

Wednesday 7 January 2015

Kosher Consequentialism

In trying to prove that consequentialism faces difficulties, my thesis supervisor gave the example of self-driving cars.

Suppose you're riding in a self-driving car. Suddenly, you wind up in a desperate situation with a minivan full of passengers. Your vehicle has two options: (1) to collide in such a way that you survive but that several people in the minivan die or (2) to collide in such a way that you die and everybody else survives. If programmed with consequentialist instincts, your car would intentionally kill you in order to save the majority.

From this, I was supposed to gather that obviously, this would be horrible and that obviously, this suggests a problem with consequentialist ethics.

Whether self-sacrificial cars are good for society, I have no idea. It isn't as obvious to me as it is to my supervisor that it's wrong. But even presuming that it is as horrible as he suggests, this is only a problem for the most naive forms of consequentialism.

There is a market for kosher meat because it's believed by some to be more pious (it isn't), more humane (it isn't), more sanitary (it isn't), and healthier (it isn't) than regular meat. For all of these imaginary benefits, you have to pay more to get it. Some people - mainly the ones unconcerned with the piety factor - don't need to pay extra for the real thing but just want something with that familiar kosher taste. Supermarkets now offer "kosher style" meat. It isn't actually kosher but it tries to replicate kosher in the same way that a veggie burger tries to replicate beef.

A lot of thought experiments that try to prove consequentialism is awful really only refute "consequentialist style" theories of ethics. My supervisor's error was in confusing consequentialism ("seek the greatest good for the greatest number") with an action that seems to seek the greatest good for the greatest number but actually leads to more harm than an alternative action.

Kosher consequentialists are interested in the "actually" part of the equation. Just because sacrificing your life for five others seems like a very utilitarian thing to do, it actually isn't consequentialist at all if that action leads to horrible outcomes (which my supervisor believes it does).

If a world where self-driving cars make naive "consequentialist style" decisions to kill their owners is worse than a world where cars make other decisions, then consequentialism prefers the alternative. Kosher consequentialism favors whatever works. If you can imagine a nightmare scenario caused by consequentialist actions, then you are very probably imagining actions that are only superficially similar to consequentialism.

You could make a consequentialist justification for the car saving the minivan and you could make a consequentialist justification for the car protecting its owner. Attributing consequentialism to the option that superficially appears to serve the greater good isn't quite kosher.

Monday 5 January 2015

Changing

For this month's Figuring Good Out, the topic is "EA origin stories." I don't think of myself as "an EA" and I can't remember how things happened very well. Looking back, the years blend together and I can't quite remember the cause-and-effect of it all. But here's a story.

I've always been interested in philosophical questions and arguments. I think my main obsessions as a teenager were art (what makes "good art" good?), morality (what makes "good deeds" good?), and atheism. Coming from a broken home where I spent alternating weekends with my orthodox Jewish father and my basically-secular Jewish mother, I grew up in a uniquely good environment to produce anti-religious views. I'd go from eating McDonald's one day to waiting six hours between meat and dairy the next. One Saturday I'd watch cartoons all day and eat in front of the TV. The next Saturday I'd have to go to synagogue and sit silently through kiddush. Who could grow up in these conditions and see the religious bits as anything other than unnecessary inconveniences?

I never lost my faith, I just never believed to begin with. I remember one of the teachers at my Jewish high school asking the class about their religious beliefs. I was surprised at all the hands going up. I remember walking home as a teenager and the thought occurring to me for the first time that everyone else actually believes this stuff. When hearing the stories from the Old Testament as a kid, I had always thought of them as fables like Jack and the Beanstalk and The Boy Who Cried Wolf. It had weirdly never really occurred to me that everybody around me, including the adults, thought Jonah and the Whale was non-fiction.

When the New Atheism movement emerged, I was an easy sell. Dawkins and Hitchens led me to Harris and Dennett, who led me to Pinker, Krauss, and Singer, who led me to the Churchlands and to science communicators like Michio Kaku, Neil deGrasse Tyson, Brian Greene, and Dan Ariely. And the arguments against religion brought me to David Hume and Bertrand Russell and the circle kept expanding.

I took some philosophy electives in school: Intro to Philosophy of Science, Philosophy of Mind, and then another course that talked exclusively about Kant for the first half (and then apparently talked exclusively about Hegel for the second half but I stopped going by that point). I kept reading. Sam Harris's book The Moral Landscape got me to change my mind about one of my favourite topics. At the time, I had a softer stance on morality where nothing could be objectively good or bad because something-about-culture. Still though, becoming a consequentialist had no real effect on my behaviour. I read his book, agreed with his arguments, changed my beliefs, and did nothing.

Fortunately, I had already read two books by Peter Singer in my first year of university. The first, The Life You Can Save, I read after my Philosophy of Science teacher told our class about an argument that one moral philosopher makes for giving to the poor. He confessed that although the argument had initially made him feel guilty, he'd found possible weak points in the philosopher's premises after further consideration. I had never heard of Peter Singer before but I, for one, had not been able to find these weak points and was still feeling guilty. I read his book, agreed with his arguments, changed my beliefs, and did nothing.

The second book I read by Peter Singer was Animal Liberation. I had always been kind of sympathetic to animal suffering but never really cared enough to inconvenience myself over it. Singer's book was an eye opener for me. He kept quoting this amazing passage by Jeremy Bentham (who I'd never heard of) that made everything clear. Although I became very confident that eating animals was immoral, and tentatively thought that I would like to become a vegetarian in the future, my general stance on the issue was: eating meat is bad but I just don't care. I read his book, agreed with his arguments, changed my beliefs, and still I did nothing.

This sort of pop philosophy and pop science became one of my main pastimes. I read articles from my favourite philosophers on a daily basis. A friend and I began a pact where every day we'd send each other one new article to read. I remember the excitement I felt when, after about a week of doing this, I discovered Daniel Dennett's Tufts University page online. It contained the PDFs for what seemed to be every article he'd ever written, spanning back several decades. I decided to read through all of them in order, a few each day. I never got more than a few articles deep. My friend called me one day to tell me that everything had changed - that he had found enough reading material to keep us both busy for ages - it was endless - it was all compiled into neat sections on this one website - and so my attention turned toward LessWrong.

My friend had been in a phase of figuring out what he wanted to do with his life, so he was emailing all sorts of folks in various professional fields to ask for advice. One of these people worked for an organization called the Singularity Institute. The employee (Malo) sent my friend a long list of reading material, including the Sequences. I began reading and it was right up my alley.

There's a specific feeling I get when I have more exciting reading material than energy to stay up reading it. I can't remember how long it took me to read the Sequences but I know that I was reading for hours a day and I have particularly warm memories of the How to Actually Use Words section.

Through these, I got into "rationality" and became more interested in cognitive biases. I read Thinking, Fast and Slow and it became my new favourite book. I read Nudge by Thaler and Sunstein. I read Luke Muehlhauser's Common Sense Atheism blog basically cover to cover within a month. I read mountains of material, agreed with many arguments, changed my beliefs many times, and still I did nothing.

At some point, I heard the term "effective altruism." It may have been on LessWrong. It may have been on Luke's podcast. It may have been from Peter Singer. Or from my friend. Or from following Eliezer Yudkowsky on Facebook. The origin of my EA story is the fuzziest part of my EA origin story. At any rate, I discovered the movement and felt no resistance to any of its ideas. As mentioned earlier, I had already bought Singer's argument for EA from the moment my teacher told it to my class, years prior.

The ideas had grown on me to the point that I was becoming more interested in all this rationality and philosophy talk than I was in the topic of my career and education. When I entered my MA in Media Production, I had no ideas at all for the sort of creative video project I had planned to produce. My mind was in philosophy mode. A couple of epiphanies about art later, I decided to write a thesis paper about what an effective altruist worldview has to say about artistic value. My paper transformed many times in the telling but it sort of stuck to that theme.

In the early days of researching my paper, I began to realize how big my topic was. If I wanted it to be as good as my favourite books and papers, I'd have to learn a lot more. I emailed Brian Tomasik, whose blog I had been reading. In hindsight, I don't know why I emailed him given that my project had so little to do with his subject matter. He responded that no, I didn't need to learn game theory and to my surprise, he added me on Facebook. For the past year or so, I've asked Brian a totally random question on Facebook about once a month ("hey, so how does eating meat compare to buying from Nike?"). He played a big part (along with the documentary, Earthlings) in me finally becoming a vegetarian.

Now that I think about it, when I first emailed Brian, I asked him if I could join the Foundational Research Institute, naively thinking that I was qualified. He responded that I should create my own blog to showcase my abilities. I started A Nice Place To Live and posted at least one post a day for the first month with Brian as my only reader.

Slowly, in incremental steps, I approached the cluster of properties associated with effective altruism. I started reading GiveWell's blog. I familiarized myself with more theory on existential risk. I completed my thesis paper. I once donated $50 to the Against Malaria Foundation. But really, I've changed very little.

Now, I participate in the EA blogging carnival that I created, post on the EA forum, have a bunch of EA Facebook friends, and I think a lot of EAs know who I am but I'm not sure about that. If at some point in this story, I crossed the threshold of EA-dom, I cannot point to any explicit point of origin.