In the developed world, we like to think of slavery as a bad memory. But slavery persists to this day, particularly in some parts of Africa, most notably the Sudan. Raiding parties steal children from their home villages and transport them for sale in slave markets many miles away. In the 1990s, when news of this ongoing tragedy came to the developed world, well-intentioned people formed charitable foundations that raised money for slave redemption—that is, buying people out of slavery.

For more on slave redemption, see “Policy Debate: Do slave redemption programs reduce the problem of slavery?” at Southwestern College, and the reference to Richard Miniter’s article in the Atlantic Monthly.

Did these charitable efforts do any good? Certainly, some people are free now who might otherwise have lived their whole lives in slavery. But there is strong evidence to suggest that slave redemption made the overall situation worse. As journalist Richard Miniter reported in a 1999 article in the Atlantic Monthly, the high prices offered by relatively rich Americans increased the demand for slaves, turned the slave trade into an even more lucrative business, and thereby gave raiders an incentive to conduct even more slave raids. If not for the activities of Western charitable organizations, many of the redeemed slaves might never have been enslaved in the first place!

How did the slave redeemers err? They focused on just one incentive (to release people already in bonds) while ignoring another (to capture more slaves). The sad result was an incentive scheme gone awry.

With just an iota of economics training, most people catch on to the importance of incentives. “Aha! To get people to do what we want, all we have to do is reward the good stuff and punish the bad stuff!” Alas, the world is not so simple. People don’t always respond to incentives in the ways you might predict. What distinguishes good economic thinking from bad is recognition of the subtle, creative, and often unforeseen ways that people respond to incentives. Ignoring the complex operation of incentives is a recipe for unintended consequences.

The Bad Assumption of Fixed Behavior

Unless you’re careful, it’s easy to assume that people will continue doing what they’re doing despite changes in the costs and benefits of their choices. The slave redeemers, for instance, implicitly assumed the number of slave raids would remain fixed, despite higher returns from slave trading.

For more on this topic, listen to the podcast Bruce Yandle on Bootleggers and Baptists.

This kind of mistake is not uncommon. Policymakers and policy advocates seem especially vulnerable to the assumption that behavior is fixed. To take just one example, every U.S. state has “mandated benefit” laws that require health insurance policies to cover specified conditions and treatments, from cancer to mental illness to acupuncture. There are over 1,000 such mandated benefit laws nationwide. Support for these laws is at least partially well-intentioned: healthcare advocates want to make sure people get good medical care. Of course, the campaign contributions of medical service providers also play a role in generating legislative support.

For more on mandated benefits, see Mandated Benefit Laws and Employer-Sponsored Health Insurance, by Gail A. Jensen and Michael A. Morrisey. January 1999, HIAA.

However, insurance companies have to raise premiums to cover the costs of the additional services—and some customers then choose to go uninsured because they can’t afford the higher premiums. As a result, those customers end up with less medical care, not more. The lawmakers who passed mandated benefit laws and the advocates who lobbied for them apparently didn’t realize—or didn’t care—that insurance companies and their customers would not keep buying and selling the same number of policies at the same prices.

Switching Effects

New rewards and punishments do not affect only the targeted activity. They can also affect the level of other, related activities. Punishing one “bad” thing can induce people to do more of other bad things; rewarding one “good” thing can induce people to do less of other good things.

For instance, increasing the punishment for the consumption of one illicit drug (such as marijuana) can induce users to increase their consumption of other drugs, both illegal (MDMA, cocaine) and legal (alcohol and tobacco). A survey of doctors who prescribe medical marijuana in California reveals that when patients can use marijuana, they reduce their use of prescription drugs, over-the-counter sleep aids, alcohol, and cigarettes. Re-exposing those patients to punishment for marijuana use would, presumably, push them back to their original drugs of choice.

For more on the efficacy of school drug testing, see “Testing for Drugs of Abuse in Children and Adolescents: Addendum—Testing in Schools and at Home”. Committee on Substance Abuse and Council on School Health. Pediatrics, Vol. 119 No. 3 March 2007, pp. 627-630.

For a more subtle example, consider the effect of mandatory drug testing in schools. Drug testing is intended to reduce all illicit drug use. But some drugs are more easily detected than others. Marijuana can be detected in the human system for longer than other, often harder drugs. Some drugs, including inhalants and ecstasy, are not detected at all by standard drug panels. As a result, drug testing creates an incentive for students seeking a high to consume more dangerous drugs. (However, I do not know of any studies that have searched for this result.)

In short, people often switch from one activity to another in response to changes in their incentives. Policymakers who fail to recognize the possibility of such “switching effects” invite unforeseen, and often unpleasant, results.

Rewarding (or Punishing) the Wrong Thing

The Economist’s View of the World, by Steve Rhoads, is available at Amazon.com.

For a reward or punishment to be effective, it must also aim at the right target. That’s not as easy as it sounds. S. E. Rhoads, in his book The Economist’s View of the World, tells the story of the Italian town of Abruzzi, which had a problem with too many vipers. To motivate citizens to kill them, the town fathers offered a bounty for dead vipers. “Alas, the supply of vipers increased. Townspeople had started breeding them in their basements” (p. 58). The problem, of course, is that the town fathers rewarded the wrong thing. What they wanted was not more dead vipers, but fewer vipers in the first place.

For more on the ineffectiveness of gun buy-backs, see “Buying Back Guns May Not Reduce Crime”, by Mike Dorning. Originally published June 25, 2000, Chicago Tribune.

Abruzzi’s story may be apocryphal, but its mistake is not. Consider the case of gun buy-back programs, which aim to reduce the number of guns on the streets by having authorities buy them up. Cities with such programs tout their success by announcing the number of guns purchased. It’s possible, however, that people bring guns to town just for the purpose of selling them; after all, the city must offer at least the going market price, or gun owners would simply sell their guns privately. The real question is not how many guns are purchased, but how many guns remain on the street. Since criminals presumably have the greatest need for guns—their livelihoods depend on them—they are probably the least likely to sell them. And all of this sets aside the difficult question of whether reducing the number of guns actually reduces violent crime.

For more on the Lincoln Electric story, see “The Lincoln Electric Company,” by Norman Fast and Norman Berg. 1975. Harvard Business School Case #376-028. Available online at Harvard Business School Case Studies.

It’s easy to make fun of government, but private actors are not immune to using incentives in ineffective ways. Take the famous case of Lincoln Electric, a firm that had experienced great success in using piece rates (instead of hourly wages) to motivate the workers who made its arc welders. Incentive pay for production-line employees worked so well, in fact, that the firm extended the policy by compensating secretaries based on their number of keystrokes. Eventually, management discovered that a secretary had spent her lunch hour typing one key continuously. Private businesses do make mistakes, but at least they have a bottom-line incentive to fix them. Lincoln Electric eventually rescinded the keystroke compensation plan. Gun buy-back programs remain popular.

See “On the Folly of Rewarding A, While Hoping for B,” by Steven Kerr. Academy of Management Executive, 1995, 9(1), pp. 7-14.

In a classic article, Steven Kerr reflects “On the Folly of Rewarding A, While Hoping for B.” Kerr discusses a wide range of policies that do just that, in areas ranging from government to business to medicine and sports. Improperly targeted rewards and punishments abound. In some cases they are unavoidable, because the things we really want to affect are difficult to observe and measure. But awareness of the problem is the first step toward fixing it—or avoiding it in the first place.

The Many Margins of Choice

Another source of unintended consequences is the failure to recognize the complexity of economic life. People have a wide array of options for changing their behavior, and that fact can stymie attempts to predict exactly how they will respond to new incentives.

Yoram Barzel offers the instructive story of government price caps on gasoline in the 1970s. A simple economic analysis (familiar to most students of introductory economics) says that price caps lead to shortages, because consumers demand more gasoline at the capped price while suppliers reduce the amount they are willing to sell. And indeed, shortages did emerge in the 1970s, resulting in long lines at the gas pump. But that was not the end of the story. Barzel notes a plethora of other effects of the price caps: octane levels of gasoline fell (it’s less costly to produce lower-octane gas), gas stations reduced their hours of operation, full-service stations switched to self-service, and so on. In some cases, service stations offered “free” tanks of gasoline with lube jobs—remarkably pricey lube jobs, of course. The price of the lube job included the true price of gasoline, and buying a lube job allowed customers to jump the queue. Says Barzel, “At no previous time in history had automobiles been so well lubricated” (p. 30).
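A minimal worked example may make the arithmetic concrete; the numbers below are purely illustrative and are not drawn from Barzel. Suppose the quantities of gasoline demanded and supplied depend on the price P as follows:

\[ Q_d = 100 - 2P, \qquad Q_s = 20 + 2P. \]

Left to itself, the market clears where \(Q_d = Q_s\), at \(P^* = 20\) and \(Q^* = 60\). A legal cap at \(P = 15\) raises quantity demanded to 70 and lowers quantity supplied to 50, leaving 20 units of demand unsatisfied. That gap does not simply vanish; it must be rationed by something other than the posted price, which is exactly where waiting in line, lower octane, shorter station hours, and tie-in lube jobs come in.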

The key insight—which applies to all kinds of goods and services, not merely gasoline—is that people do not only make choices about prices and quantities. There are many, many margins of choice that people can exploit to improve their situations and to evade regulations.

If you stare at a supply-and-demand graph, it’s easy to imagine that the products in question—gallons of gasoline, doctor visits, back massages, or what have you—are easily defined entities with well-known and immutable features. In reality, any good or service consists of a bundle of characteristics. There are gasolines with various octane levels and fuel additives; apartments with various levels of maintenance and amenities; back rubs of various lengths and intensities. All of these margins can be adjusted.

Likewise, the prices paid by consumers might appear to be simply defined amounts of dollars and cents. In fact, consumers pay for their purchases with a bundle of sacrifices: money paid directly to sellers, money paid indirectly in the form of agency fees and bribes, effort spent searching, and time spent waiting. These margins, too, can be adjusted in response to changing conditions.

As a result, policymakers can find it difficult, if not impossible, to escape market forces. Policies that force down the official price of a good or service trigger responses that push down quality, push up other aspects of the price (such as bribes), or both.

The important lesson for policymakers is that regulations will almost always have unintended consequences, because creative people continually find ways to exploit margins of choice that were not considered by the regulators. Take, for instance, the case of rent controls designed to make apartments more affordable. That such controls have led to a shortage of apartment housing in places like New York City is no surprise. More interesting is that the meaning of “apartment housing” has also changed. Landlords have reduced the maintenance level of buildings while cutting back on amenities such as free utilities, parking, and built-in appliances, thereby reducing the cost of providing the units. Meanwhile, customers pay for housing with more than just their rent checks; they also must pay “key fees,” bribes to resident managers, and exorbitant commissions to rental agencies just for the opportunity to view rent-controlled apartments. In short, people have dealt with housing regulations by adjusting the characteristics of both the product provided and the price paid.

Getting It Right, or Not Getting It Wrong

A person with little or no economics training often ignores incentives entirely, treating people like robots who simply follow their programming: they keep on doing what they’re doing, however much we alter their surroundings. A lousy economist regards people as more sophisticated robots, who change their behavior in response to changes in their incentives, but only in specified and highly predictable ways. A good economist realizes that human beings are imaginative and clever: they change their behavior in response to incentives in both predictable and unpredictable ways, constantly seeking to improve their lives in light of new conditions. Failure to recognize this aspect of human nature makes us vulnerable to all manner of errors, in our businesses, personal lives, charitable efforts, and government policies.


 

*Glen Whitman is associate professor of economics at California State University, Northridge. He also blogs at Agoraphilia.