Tuesday, August 21, 2012

Utility

At the GNU Public Dictatorship we are nothing if not dedicated to the steady march of progress, which is why we have invested so much of our time in research that may seem crazy to some observers.  Our true supporters will recognize many of our pioneering efforts, some of which have failed and some of which are still delivering promising technologies.  We have spent a good deal of time trying to discover ways to create machines that can do our jobs as well as we can, so that we can expand our influence more quickly than ever.  Many of those machines have relied on artificial intelligence, and in turn, much of that artificial intelligence has relied on the concept of "utility".

While it might seem simple to define "utility" as how useful something is, you soon discover that you would then have to define what it means to be useful, and so on.  Because humans generally use money to represent value, utility is often expressed as a price or a score.  We could say that the free market determines the utility of items, but that would be misleading: to suggest that a diamond, however exquisitely cut, is more useful than an ax when you are cutting firewood would be ludicrous, no matter how much more the diamond costs.  Utility also depends on the context in which an item is used.  A prize of US $100 has very different implications when distributed in a homeless shelter, in a posh restaurant, or to a group of wild orangutans.  To put it simply: utility is hard to define, and money is a poor proxy for it.
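For the programmers among our supporters, here is a minimal sketch of what a context-dependent utility function might look like.  All of the items, contexts, and scores below are invented for illustration; they are not measurements of anything.

    # A toy utility function: the value of an item depends on the task at hand.
    # Every score here is an invented, illustrative number.
    UTILITY = {
        ("diamond", "cutting firewood"): 0.1,    # exquisite, but useless on a log
        ("ax",      "cutting firewood"): 9.0,
        ("diamond", "proposing marriage"): 9.5,
        ("ax",      "proposing marriage"): 0.2,  # not recommended
    }

    def utility(item, context):
        """Return the utility of an item in a given context (0 if unknown)."""
        return UTILITY.get((item, context), 0.0)

    print(utility("diamond", "cutting firewood"))  # 0.1
    print(utility("ax", "cutting firewood"))       # 9.0

Price would rank the diamond first in both contexts; utility does not, which is the whole problem with money as a proxy.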

Many artificially intelligent agents seek to maximize utility.  Most do so locally; that is, they seek to improve their own situation as much as possible.  Unfortunately, as many of you will already be pointing out, machines built with this sort of goal often turn against their creators and bring about horrible robot apocalypses.  Some agents attempt to communicate with other agents and maximize overall utility (so that society as a whole benefits), but such communication is hard to arrange, and it is harder still to measure the utility to society of giving Fred an apple vs. giving that apple to Janet or to a deer.  Besides this, the communication often only serves to facilitate the horrible robot apocalypse.
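To make the local-versus-overall distinction concrete, here is a small sketch in Python.  The agents, the payoffs, and the choice of an apple as the prize are all our own invented assumptions, not a description of any real system.

    # Toy payoffs: utility[agent][recipient] is how much `agent` values
    # `recipient` receiving the apple.  All numbers are invented.
    utility = {
        "Fred":  {"Fred": 5.0, "Janet": 1.0, "deer": 0.5},
        "Janet": {"Fred": 1.0, "Janet": 6.0, "deer": 2.0},
        "deer":  {"Fred": 0.0, "Janet": 0.0, "deer": 4.0},
    }
    recipients = ["Fred", "Janet", "deer"]

    def local_choice(agent):
        """A locally maximizing agent considers only its own utility."""
        return max(recipients, key=lambda r: utility[agent][r])

    def overall_choice():
        """A society-wide maximizer sums the utility of every agent."""
        return max(recipients, key=lambda r: sum(utility[a][r] for a in utility))

    print(local_choice("Fred"))  # Fred keeps the apple for himself
    print(overall_choice())      # Janet: 1.0 + 6.0 + 0.0 = 7.0, the largest sum

The hard part, of course, is that real agents do not get to read each other's utility tables; they have to communicate them, and that is where the apocalypse sneaks in.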

So why am I writing this post, then?  Is the concept of utility hopeless?  Will we never make machines that can replace us (without pushing us out of the way and enslaving us)?  The answers to these questions are: for your benefit; no; and we don't know for sure, but we strongly suspect the answer is no.  One thing we can say about utility is that it is highly dependent on fuzzy emotional ideas and not on the simple facts we can observe.  Also, while it is often easy to see what we should have done, it is much more difficult to reliably predict the outcome of any given situation.

Take, for instance, this situation, in which a man riding an ATV across railroad tracks appears to have improperly calculated the utility of the various choices available to him.  The woman he was with got clear of the train, but he, for some reason, miscalculated the utility of staying with the ATV as long as he did.  Since we don't know what he was thinking, it is difficult to pinpoint the particular error, but our analysis of the situation suggests that he most likely (1) underestimated the risk the train posed to him (the potential negative utility), (2) underestimated the utility of surviving, and (3) overestimated the utility of saving the ATV (the ATV was probably not cheap, and he probably had some emotional attachment to it).  The combination of these factors led to a sub-optimal choice, both for society and for himself, and the sub-optimality of this decision is painfully obvious in retrospect.
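As an illustration only (the probabilities and values below are our own invented guesses, not data from the incident), a back-of-the-envelope expected-utility comparison shows how each of these misestimations could flip the decision:

    # Expected utilities for "stay with the ATV" vs. "abandon it".
    # All numbers are invented for illustration.
    VALUE_OF_LIFE = 1_000_000.0  # utility of surviving, in arbitrary units
    VALUE_OF_ATV = 8_000.0       # replacement cost plus sentimental value

    def eu_stay(p_hit):
        """Expected utility of staying: keep the ATV, but risk the train."""
        return (1 - p_hit) * (VALUE_OF_LIFE + VALUE_OF_ATV) + p_hit * 0.0

    def eu_leave():
        """Expected utility of leaving: survive for certain, lose the ATV."""
        return VALUE_OF_LIFE

    print(eu_stay(p_hit=0.20) > eu_leave())   # False: leaving is optimal
    print(eu_stay(p_hit=0.001) > eu_leave())  # True: underestimating the risk
                                              # makes staying look optimal

Shrinking VALUE_OF_LIFE or inflating VALUE_OF_ATV tips the comparison the same way, which is exactly errors (2) and (3) above.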

Who is to say, however, that any of us, in that situation, with our imperfect measurement of utility colored by emotions, would have made better decisions?  At the GPD we are constantly refining our concept of utility and have made great strides that are redefining artificial intelligence as we know it, and in order to answer the question we posed, we ran 4,032 distinct scenarios (modelling the uncertainty in perception), running 1,000,000 simulations of each (to account for minor variations).  In most of our simulations the individual made the optimal choice of leaving the ATV, but there were a few scenarios in which our simulations consistently had the individual stay with the ATV until it was too late.  These occurred in the extreme cases cited above (e.g. severe underestimation of the risk from the train, severe overestimation of the utility of the ATV, suicidal wishes) and also when we made the individual's perception extremely unreliable, simulating intoxication from evil office products.  Our operatives are checking into whether evil office products were involved, but early signs suggest that they were not.
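We cannot publish our simulator, but a heavily simplified sketch of the scenarios-times-runs structure described above might look like the following.  The noise model, the biases, and every constant are stand-ins we invented for this post, and the run count is scaled down so that it finishes quickly.

    import random

    # The "true" simulated situation: all numbers invented for illustration.
    TRUE_P_HIT = 0.20
    VALUE_OF_LIFE = 1_000_000.0
    VALUE_OF_ATV = 8_000.0

    def simulate_run(risk_bias, noise):
        """Return True if the simulated individual leaves the ATV in time."""
        # The individual perceives a biased, noisy version of the true risk.
        perceived = min(1.0, max(0.0, TRUE_P_HIT * risk_bias + random.gauss(0, noise)))
        eu_stay = (1 - perceived) * (VALUE_OF_LIFE + VALUE_OF_ATV)
        return eu_stay < VALUE_OF_LIFE  # leave whenever staying looks worse

    def simulate_scenario(risk_bias, noise, runs=10_000):
        """Fraction of runs in which the individual makes the optimal choice."""
        return sum(simulate_run(risk_bias, noise) for _ in range(runs)) / runs

    print(simulate_scenario(risk_bias=1.0, noise=0.001))   # near 1.0: leaves
    print(simulate_scenario(risk_bias=0.01, noise=0.001))  # near 0.0: stays too long
    print(simulate_scenario(risk_bias=1.0, noise=0.5))     # in between: unreliable

The third scenario is the "intoxicated" one: the decision is right only erratically, run to run, because the perceived risk swings wildly around the truth.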

We hope that our supporters will take a few moments to consider whether their measure of the utility of their choices is accurate and effective, but we warn you: please do not do so while stuck on an ATV on the railroad tracks as a train quickly approaches!
