
On Paternalism

(This post is in response to having just read The Storrs Lectures: Behavioral Economics and Paternalism)

My views about paternalism have changed over the years. I used to be a staunch Millian, and I would have rejected any attempt to coerce or force someone to do something for their own good. But it’s a difficult position to maintain, because it turns out that sometimes people behave in ways that, even according to their own standards about what is best, will be worse for them. Being forced to behave otherwise might leave them better off. And even where force would be too much, “nudges” might help – actions that influence their behavior without forcing or even coercing them.

A Millian, for example, would not allow someone to be forced to go to rehab. And yet, clearly, sometimes this is the right choice. If a person really values the pleasures of heroin over their health, forcing them may not be justifiable; but many people really do judge their health as more important, yet are unable to stop using anyway, and are unable to force themselves to go to rehab. For such people, forced rehab is better for them according to their own standards, and it doesn’t seem objectionable to me to do this. (I’m much more hesitant about coming up with “objective” standards.)

So I’m coming around on paternalistic actions. Sunstein is focused on government actions, and in general is in favor of a soft paternalism mostly involving information and default rules. But the same concepts occur in corporate and personal spheres, both of which I’m more interested in because I can directly do something about them (and have more experience with them). For example, my work has a paternalistic default rule for the 401k plan, and I’m glad it exists: they automatically enroll you in the 401k at 3%, and then increase the contribution each year until it reaches 6%. This is just a default – you can change the plan as you like – but setting up a default makes it quite likely that you’ll end up actually putting some money in your 401k (whereas if I had to actively enroll in the program, I might not).

He also mentions the concept of “autopaternalism”, which is when I set up a situation for myself in which I am more or less forced (or at least coerced) into taking the right action even though my normal inclination would be to do the wrong thing (again, by my own standards). For example, I use habitrpg.com to track various habits and goals, and my character in the game (and possibly those of my friends) will be damaged if I fail to do one of the tasks. This has the effect of getting me to do certain tasks even if I don’t feel like it – for example, getting a certain number of steps each day. Left to my own devices, I would often not do it, but the minor coercion of the game is enough to force me off the couch. There are other applications in which you can put money on the line for similar purposes.

There are some kinds of freedom I don’t even want to have. For example, tortilla chips are demonstrably bad for me (many experiments have shown this to be the case). And yet, in spite of this knowledge, if I am at a Mexican restaurant and they bring out the chips, I will eat them. All of them. I would prefer not to even have the choice in the matter – if I could get something implanted into my brain which would automatically make me not eat tortilla chips, I would get that implant. Similarly, I am usually grateful afterward if a waitress neglects to ask me whether I would like dessert – I have trouble saying no, but I usually regret having had dessert.

And yet…although I would be perfectly happy to make these choices to constrain my future behavior, I would not be ok with the government, having read this post, declaring that no restaurant could serve me tortilla chips any longer. Even though it would make me better off, by my own standards. I’m not sure why that would bother me, but I suppose it’s something like this: I prefer not to be treated as someone who cannot make his own decisions. And insofar as a policy treats me in that way, it’s going to be harmful (to some extent) to me. Possibly it is justifiable nonetheless, but the harm is still there. (Seatbelt laws might be justifiable despite this problem, for example.)

So my views on paternalism remain mixed and probably confused, but I’m becoming convinced by the sorts of things Sunstein discusses in the article.

On Deep Rationality

This is a response to Deep Rationality: The Evolutionary Economics of Decision Making

In this article, Kenrick et al. try to explain various forms of “irrationality” by appealing to what they call “deep rationality”. The basic point is that standard rationality requires that we maximize our expected utility – we do the action which, given the various probable results, is likely to yield the greatest overall value to us. We often fail to do this in various predictable ways, and so the standard claim is that we are irrational in various predictable ways. But, say the authors, these actions do in fact make sense if one looks at them the right way, and so they aren’t actually irrational after all.

The reason that these actions are irrational, on the standard view, is that they bias some decisions over others. For example, a risk-averse person prefers a sure $20 over a coin toss in which heads yields $40 and tails yields nothing. The two options have the same expected value (0.5 × $40 = $20), so rationally they are equivalent, but to the risk-averse person the safe $20 is preferable, and to that extent the person is seen as irrational. According to deep rationality, this preference is in fact rational in some way, because a risk-averse strategy (in some contexts) will maximize “fitness” (roughly, the number of offspring or kin).

One of the successes of the article is the number of different examples in which the way we make decisions changes based on context. If a person is in “mating mode”, her attitude towards risk is different than when she is in “status mode”. That’s really interesting to me – decision-making strategies are not, then, fixed across all contexts. This means that personalities are, to some extent, contextual.

It also seems somewhat successful in using evolutionary psychology to try to explain some of the results that we see, in which people are irrational in various ways. I’m in general skeptical of evo psych, because I don’t think we know enough about what life was actually like during the evolutionary past, and I worry that we just make up stories about it to fit the narrative we want. However, the stories they tell here make some sense, and go some way to explaining why we would behave the ways we do in these contexts, despite the irrationality of the actions and attitudes.

Least successful, to me, is the idea that this evolutionary perspective constitutes a type of rationality. Being able to explain a behavior doesn’t make it rational, even when you use a term like “deep rationality” to distinguish it from regular old rationality. One could explain a criminal’s behavior by discussing how terrible his home life used to be, but that doesn’t make the behavior any more rational, only explicable. Behaving in the ways they are talking about isn’t rational in any standard sense – it doesn’t accomplish the agent’s goals in the most effective way. Instead, these behaviors accomplish genetic goals. If genes had goals, and could pick a strategy for the organism that carries them around, these are the behaviors they would pick. But that’s a far cry from claiming that the organism itself is rational in any way, especially “deeply”.

One might think I’m just quibbling about language. But it matters, because we care about being rational. And the fact is, even if being risk-averse is explicable by genetic factors, you should still try to stop being risk-averse, because it will predictably lead to worse results for you. In other words, insofar as talk about rationality is prescriptive and meant to help people make decisions, we should be encouraging them to drop the “deeply rational” behaviors in favor of actually rational behavior.

The dissertation I wanted to do, and why I didn’t, and whether I could have

I’m currently going through A Beginner’s Guide to Irrational Behavior on Coursera. Last night, as part of the course, I read Behavioral Economics: Reunifying Psychology and Economics, an article discussing the relationship between economics and psychology and how behavioral economics is bringing them back together.

The article itself is interesting, but the discussion of bounded rationality in particular – and the entire concept of behavioral economics in general – makes me realize that the reason I had so much trouble doing my dissertation (and doing it about what I actually wanted to do it about) was that I was writing for the wrong department. Or, perhaps worse, that what I really wanted to be doing fell in between departments.

The project I wanted to be working on was about minimal rationality for decision theory. The idea is that decision theory, as normally done, requires an awful lot of an agent: a highly detailed preference ranking for every possible state of the world, a probabilistic belief about every possible state of the world, as well as a probabilistic belief about every state of the world given every other state of the world (for example, how likely I think it is that there are space aliens on the moon given that we have been there and not seen them, but also how likely it is that there are space aliens on the moon given that I just ate a banana). These are, of course, unreasonable assumptions (surely our brain doesn’t really have all of this stuff in it), but normally decision theorists don’t care – they are interested in ideal agents, not actual agents. And insofar as they are interested in actual agents, there’s a story you can tell in which you project onto them these belief functions and desire functions based on their behavior.

But I do care about actual agents, and I wanted to see how stripped down we could make these assumptions and still get decision theory to work. In fact, the best case scenario would be a decision theory that could actually be used by somebody, and also one which accurately determined whether they had acted rationally or not, given their actual beliefs and preferences. So I was interested in figuring out how realistic we could actually make it – do we really need beliefs about every state of the world? And preferences? Or can we transform decision theory to only require some minimal set of beliefs?

In fact, it might even turn out that decision theory will work even if some of my beliefs are contradictory (as they are for all real agents)! My basic theory would have been that only beliefs and desires directly relevant to the decision would need to be consistent, and they would only need to be minimally ordered. For example, it doesn’t actually matter in which order I rank the options I’m not going to choose, as long as I can select the top one.
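To give the flavor, here’s a toy sketch in Ruby (entirely my own illustration – the function name and numbers are made up): the agent supplies probabilities and utilities only for the outcomes relevant to this particular decision, and all we ask of the ordering is that a top option can be picked out.

# Toy decision procedure: expected utility over only the relevant outcomes.
# No opinions are required about irrelevant states (bananas, moon aliens).
def best_option(options)
  # options maps each option name to its [probability, utility] pairs
  best = options.max_by do |_name, outcomes|
    outcomes.sum { |prob, utility| prob * utility }
  end
  best.first
end

choice = best_option(
  "carry umbrella" => [[0.3, 5], [0.7, 8]],   # rain / no rain
  "no umbrella"    => [[0.3, -10], [0.7, 10]]
)
# choice => "carry umbrella"

Nothing in this sketch demands a complete preference ranking over every state of the world – silence about the irrelevant states is fine.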

There’s more to it, but that’s the basic thing I wanted to do my dissertation about. Unfortunately, my advisor told me that this would not be a very good topic, because basically nobody else was doing that kind of work, and so it would be inadvisable for my career (if nobody else is doing it, then it’s unlikely to be interesting to prospective universities when I’m applying for jobs). And so I ended up trying to write it on something I was a lot less interested in, but which would be “better for my career”. The topic I ended up working on was decisions about information gathering, which was interesting, but not as interesting to me as my original topic. I eventually quit writing it in favor of being a programmer.

I have to wonder, though, if I was just in the wrong department. Even though philosophers would be uninterested in the topic I’d picked, from what I can tell economists (especially behavioral economists) would probably be very interested. I hadn’t even really considered that as an option, probably because I didn’t realize that what I was really working on was an economics paper in philosophical jargon. Possibly. In any case, I look forward to learning more in this field so I can see whether I can explore more of that passion I had had. I may not end up with a dissertation, but I might end up with some good material to write about anyway.

Theory and practice

One of the many types of anti-intellectualism is the idea that theoretical understanding and training are useless. One hears claims like “business school is no substitute for actually running a business”. Similar things are said about most other types of schooling unless it is strictly practical, and the implication is supposed to be that it’s therefore useless to go to school and one should, instead, go get practical experience.

The claim as strictly interpreted is true: schooling and theory really aren’t a substitute for experience. But conversely, practice is no substitute for theory. Let’s take a tremendously practical endeavor: shooting a basketball into a hoop. Now, obviously, if you wanted to be able to do this well, you’d spend some time actually shooting baskets. There’s no substitute – no amount of sports physics can allow you to do this well without practice. However, if you really want to be good, you ALSO need to study the theory (or someone has to explain it to you). An understanding of the physics of getting the ball in the hoop will improve your practice and help determine correct form.

The same goes for business, only more so. Managing a business without theoretical background can work, and the experience is valuable. And you might eventually figure out everything you need to know. But understanding economics and the theory of the firm will shortcut your learning – you don’t HAVE to learn everything by trial and error.

Software engineering, too: a study of algorithms, or the theory of object orientation, or Turing machines won’t replace practical programming experience. But learning these things on top of experience will improve your abilities faster than a little more experience would – it allows you to think about what you are doing in a structured, organized way.

My main point is this: that theoretical understanding is not a substitute for experience is no argument against it. What I propose instead is that the best understanding comes from combining theory with practice and using them together.

The Witcher: not that exciting

I installed The Witcher: Enhanced Edition about 6 months ago, and it’s been the primary game I have played on my PC since then. I’ve gotten roughly 3/4 of the way through the story, and I’ve decided it’s just not worth my time to complete.

The standard claim about the game is that, although it is misogynistic, it’s worth it because the story is so great and there are some really tough choices to make. But in fact, I think it does none of these very well. The misogyny is there, but it’s not very interesting. It mostly consists of some characters (whom we certainly aren’t supposed to look up to) saying various derogatory things about women, which is somewhat shocking in its vulgarity. But this kind of talk dies off quickly (I don’t remember coming across it after a chapter or two), and again, it’s put in the mouths of not-very-admirable people, which is hardly an endorsement of the misogynist views.

The other source of misogyny is the “sex cards”. Basically, if your character manages to have sex with someone in the game, you get a sexual (usually nude) picture of that character. The pictures are not exciting. Actually, the worst part of that process is that usually the way to get women in the game to have sex with the witcher is to give them some item, often an annoyingly specific one. “No, I wanted an amber and silver ring, not a ruby and silver ring!” So it’s just another fetch quest.

The story is fine as far as it goes, but there’s so much running back and forth (through hordes of annoying monsters to fight) that it takes a long time for the story to progress. And when it does, it often doesn’t make that much sense. For example, the game informed me that I “knew” various characters were not suspects because of various pieces of evidence that I don’t remember getting, and which hardly qualified as a convincing case. I think maybe there was a decent story there, but too many boring fights got in the way of me being able to appreciate it.

But the main draw for me had been the choices. There was all this talk of how you’d get to make really interesting choices, some of them morally quite grey. As with most (all?) games that make this promise, the result was pretty underwhelming. There are definitely some interesting choices, but again, there is so much boring fighting to do in between making them that they are unable to save the game from being boring.

As you can tell, my main complaint with the game is how much running back and forth through the same areas, over and over, you have to do. It takes a lot of time, it’s not fun, and the fights are all pretty much the same. Upgrading your character doesn’t much change the way battles play out; it mostly just changes how strong you are. And since they make you run through monster-infested areas over and over, the majority of my playing time was spent on something that is not at all fun to me.

If there were a quick-travel system, or just something to keep me from having to constantly fight, I would like the game well enough to continue through and finish the story. But there’s just too much not-fun work to do to get each next chunk of story, so it’s not worth the bother.

Installing MongoDB on Ubuntu 10.10: A gotcha

There are already a lot of guides on how to install MongoDB on Ubuntu. They all have basically the same instructions (which DO NOT WORK at a certain point, at least for me). Here is one version:

http://yoodey.com/how-install-mongodb-ubuntu-maverick-meerkat-1010-easy-steps

The tricky part is this step:

Add deb http://downloads.mongodb.org/distros/ubuntu 10.10 10gen to /etc/apt/sources.list.

If you’re like me, what you do is open up your Computer window, go to etc, go to apt, and then double-click on sources.list. Then you go to the Other Software tab, and you click the Add button. In the dialog box that appears, you fill in deb http://downloads.mongodb.org/distros/ubuntu 10.10 10gen

If you then try to close the sources list, you’ll find that it is unable to update, and everything goes wrong. You can’t get MongoDB. But the fix is easy!

It turns out that when you go through the add-source process above, you end up adding two sources: a binary version and a source version. So you’ll see, in the list, two identical-looking sources that say “http://downloads-distro.mongodb.org/blahblah”. You only want the binary one, so if you delete the one that says something about “source” and then try closing the list, everything will work.
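In other words – assuming I’m right that the GUI’s Add button writes both a binary (“deb”) entry and a source-code (“deb-src”) entry – the end state you want is a single binary line:

deb http://downloads.mongodb.org/distros/ubuntu 10.10 10gen

with the duplicate source entry deleted.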

So that’s how you solve the gotcha that plagued me for an hour today.

Some worries about Class Oriented Programming

There have been a lot of posts lately from people concerned that what we have been calling “Object Oriented Programming” has, in fact, been implemented as “Class Oriented Programming”. In OOP languages, particular objects (instances) get almost all of their interesting properties from the class they belong to, which itself can derive much of its properties from the class it inherits from, all the way up to some basic Object class. We spend most of our time programming the methods and defaults for those classes.

The problem, which most people eventually run across, is that the world is not so neatly divided into such classes. In the real world, I am both a philosopher and a programmer; if for some reason you had both of these classes in your program (presumably inheriting from a Person class), which one should I belong to? It’s not uncommon to have just this problem in various web applications: you might have an Admin class and an Uploader class, but then sometimes you want someone to belong to both.

One (I think bad) solution to this issue is to have a multiple inheritance system, whereby an instance (or a class) can inherit from multiple classes. Personally, I think this would get messy quickly, although it is a step in the right direction. [One problem is that you would want to try to Classify everything, and so you would end up with an outrageous number of classes.]

I think the tree system of classes is helpful, as far as it goes, but you should not try to make a class for every bundle of traits you can think of. Instead, reserve classes for cases where you have proper subsets that are mutually exclusive. For example, BookCitations are a proper subset of Citations (there are non-book citations out there), and they are mutually exclusive of ArticleCitations (nothing can be both). So these, I think, are a good case for the current class system.

The Admin/Uploader division, however, is not mutually exclusive, so it is a bad use of classes. What we really want is to define various traits, and then give those traits to individual objects (or classes). One way you see this done is to create a boolean Admin column in the database, but I think this is bad practice. In Rails, I’ve seen some projects deal with this through Roles, whatever exactly that means. In any case, people are seeing the problem when it comes to users, and I think we need to open this up to other classes/objects as well. We need to be building a trait-based system, not a class-based one (except, again, where classes are helpful).

I still have to think about exactly how to accomplish this, but it seems to me to be the right approach.
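In Ruby, at least, modules used as mixins look like a natural starting point. Here’s a rough sketch using the hypothetical Admin and Uploader traits from above:

# Traits as modules: bundles of behavior with no exclusivity constraints
module Admin
  def admin?
    true
  end
end

module Uploader
  def upload(file)
    puts "uploading #{file}"
  end
end

class User
  def admin?
    false
  end
end

alice = User.new
alice.extend(Admin)     # this particular object gets the Admin trait
alice.extend(Uploader)  # and the Uploader trait – no subclass explosion
alice.admin?            # => true

Whether something like this scales up to database-backed objects (which is where the Roles approaches come in) is exactly the part I still need to think through.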

What a Liberal Arts class ought to be like

I’ve been reading Why Read? by Mark Edmundson. In the first part of the book, he criticizes humanities classes for catering to students’ demands to be entertained. He then goes on to claim that “the function of a liberal arts education is to use major works of art and intellect to influence one’s Final Narrative, one’s outermost circle of commitments” (31). In other words, a proper class within the humanities ought to use the literature of its topic (philosophy, history, whatever) to challenge and shape a student’s core values.

We (and certainly I) mostly fail at this. Instead, I have taken it to be enough to present various thinkers’ views, to talk about whether they work (and in particular, what their failings are), and move on. In fact, half the time all I really do is try to get the view right, and leave the analysis of whether it works to the student in an essay. And I’ve been wondering for a while: what’s the point of all this? I think Edmundson has given an answer, and I think it’s a valuable one. It’s not enough to simply present these views: we must defend them in a way that actually challenges the student to evaluate his/her own views on the topic. We must force them to engage in a proper examination of their own views. The point of all of it is that these are core values, the most important values the student has. If they are not uncomfortable with our line of questioning, we must press harder, dig deeper, until they are.

It’s not clear to me exactly how to do this with my Philosophy of Art class that I’ll have in the summer, but I will certainly try. In any case, I’m reinvigorated in the purpose of it all.

Old Tests: A project smell

There are lots of books and posts about code smells and how to detect them. There are even automatic code-smell detectors like Reek and Rails Best Practices.

I propose yet another metric to add, this time at the project level: old tests. When we are practicing proper BDD or TDD, we write failing tests first and code to follow. Unfortunately, we do not always live up to these principles, and neither do the colleagues who work on our programs. When someone else edits the program you are working on, it would be good to know whether they also updated the tests (not necessarily in the same commit, but recently).

So I think what we need is a metric that looks at models and their tests and makes sure the edits are consistent. As a heuristic, if a model’s tests were last updated more than, say, a day before the model itself was last edited, chances are the tests need updating.

It seems like that should be very easy to detect, so I am going to try to write a gem to do just that.
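The core check might be as simple as comparing file modification times. A first sketch, assuming a conventional layout where app/models/foo.rb is paired with test/unit/foo_test.rb:

# Flag models that were edited well after their tests were last touched
GRACE_PERIOD = 24 * 60 * 60  # one day, in seconds

Dir.glob("app/models/*.rb").each do |model_path|
  name = File.basename(model_path, ".rb")
  test_path = "test/unit/#{name}_test.rb"
  next unless File.exist?(test_path)  # a missing test is a different smell

  if File.mtime(test_path) < File.mtime(model_path) - GRACE_PERIOD
    puts "#{name}: tests look stale relative to the model"
  end
end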

Find or Factory method for tests

Rails comes with a nice method called find_or_create_by which, as you might imagine, looks for an object matching certain parameters, and if no such object exists, creates it.
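For example (User here is just a stand-in model; newer Rails takes a hash argument like this, while older versions spelled it as a dynamic finder such as find_or_create_by_name):

user = User.find_or_create_by(name: "Bob")
# returns the existing Bob if there is one, otherwise creates him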

But I rarely use that method, because I would much rather create objects in Factories during my tests. And then sometimes my tests complain that the object I’m trying to create already exists. FML

So today I wrote a new helper method to fix the problem, namely, find_or_factory.

(Put this in your test_helper)


def find_or_factory(model, attributes)
  # Turn :study into the Study constant
  model_as_constant = model.to_s.titleize.gsub(' ', '').constantize
  # Use an existing record with these attributes, if there is one...
  object = model_as_constant.where(attributes).first
  # ...otherwise build one through its factory
  object ||= Factory.create(model.to_sym, attributes)
  object
end

Now you can write:

@study = find_or_factory(:study, :name => "Default Study")