Monthly Archives: September 2014
“But the (Los Angeles) City Council never seemed interested in really examining the potential economic consequences of the ordinance…
…That’s unfortunate, because the results strongly suggest that such a steep increase in the minimum wage could result in a sharp decline in the number of jobs in the hotel industry. And that kind of job loss could mean that as many workers would be hurt by the law as would be helped by higher wages.”, Christopher Thornberg, economist and founding partner of Beacon Economics
How to kill hotel jobs in L.A.
By CHRISTOPHER THORNBERG
Last week, members of the Los Angeles City Council voted to increase the minimum wage for hotel workers within the city to $15.37 per hour by next year. Why? You’d have to ask them.
A study they commissioned me to do on the subject — then seemingly ignored — raised serious questions about the wisdom of the measure.
Months ago, the City Council asked three analysts to look at the proposed wage hike. One was decidedly pro-labor, one decidedly pro-business. My firm, Beacon Economics, was the third, selected with the approval of both business and labor groups. We had no preconceived ideas about the proposal; we merely agreed to analyze the available data and see if conclusions could be drawn.
But the City Council never seemed interested in really examining the potential economic consequences of the ordinance. We got our instructions about what questions to address just two weeks before the vote, and we were surprised to learn that the council intended to vote on the day after we turned in our final analysis, which suggests none of the members spent time looking at our findings.
That’s unfortunate, because the results strongly suggest that such a steep increase in the minimum wage could result in a sharp decline in the number of jobs in the hotel industry. And that kind of job loss could mean that as many workers would be hurt by the law as would be helped by higher wages.
There is a healthy debate going on in economics about the effect of minimum wage statutes on employment levels. The mainstream consensus is that such laws do reduce the number of jobs, but only slightly. Most of the studies that have been done, however, are based on much smaller minimum wage increases within much larger geographic areas. These studies don’t necessarily apply to what Los Angeles is doing: requiring one industry to institute what will amount to nearly a 50% hike in wages over the prevailing base. This is especially tricky for a city that directly borders 35 other cities as well as unincorporated terrain managed by Los Angeles County.
Because these studies cannot be directly applied, it is fortunate that we have a natural experiment by which to gauge the potential impact in Los Angeles. Since 2008, a living wage ordinance has mandated higher wages at hotels near Los Angeles International Airport but not elsewhere in the city.
We set out to look at the data, figuring that if the airport ordinance had only a small effect on employment, we could be partially reassured that this new broader statute would also have a minimal impact.
Some of the indicators seemed good. Occupancy rates in the LAX area remain some of the highest in Los Angeles County, with little change since the living wage rules went into place.
Unfortunately, the same cannot be said for the number of people employed. Indeed, we found job loss that was large enough and surprising enough that we went back through our data sources multiple times to ensure the analysis was accurate.
The data clearly show that hotels around the airport have seen a sharp decline in employment relative to hotels in Los Angeles County overall. Some 12% more people are employed at hotels in the county than in 2007. The increase is apparent not only at hotels in general but within individual hotels, which means the jump cannot be attributed to an increase in the number of hotels elsewhere in the county. But in the airport hotels covered by the law, hotel employment has declined 10%.
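The size of that divergence can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch: only the +12% (countywide) and −10% (LAX-area) figures come from the article; the baseline index of 100 is a hypothetical normalization, used solely to show the arithmetic.

```python
# Illustration of the employment comparison described above. The +12%
# (countywide) and -10% (airport-area) figures are from the article;
# the 2007 baseline index of 100 is hypothetical.
baseline = 100.0               # index: 2007 employment = 100 for both groups

county_now = baseline * 1.12   # county hotels overall: up 12% since 2007
airport_now = baseline * 0.90  # airport hotels covered by the law: down 10%

# The gap between the two trends, relative to the countywide benchmark:
relative_gap = abs(airport_now - county_now) / county_now
print(f"Airport-hotel employment sits about {relative_gap:.1%} "
      f"below the countywide trend")
```

Under these assumptions the covered hotels end up roughly a fifth below where the countywide trend would have put them, which is the scale of divergence the author describes as alarming.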
As for the seeming disconnect between steadily high room occupancy and fewer jobs, modern large hotels are far more than a place to sleep at night. They offer a variety of restaurants, bars, parking garages, banquet and conference halls and tourist information centers. Anecdotally we have heard that many of these secondary lines of business have been sharply curtailed or eliminated because of the increase in labor costs. If higher wages have made banquets, say, more expensive to hold at airport hotels, it would be no surprise if organizations have decided to hold their banquets elsewhere.
These findings are alarming and at the very least suggest that further study should be done before rushing into a citywide expansion of the hotel wage law. A number of important questions still need to be answered. Can we link these job losses directly to the wage hike? If so, what sorts of jobs specifically were lost? Were these replaced in other parts of the economy? And, most important, how might the citywide statute be amended to reduce the chance that these results will be replicated on a wider basis?
If the same kind of job loss we found in airport hotels were to occur in hotels citywide, Los Angeles could lose thousands of jobs over the next few years. Some Angelenos will undoubtedly benefit from higher wages. But how many others will lose jobs or not be able to find them?
Christopher Thornberg is an economist and founding partner of Beacon Economics.
Copyright © 2014, Los Angeles Times
“…the agency has touted its enforcement statistics as a gauge of its effectiveness for the past decade. But police chiefs often rate success in terms of falling crime rates…The SEC’s increasing numbers of cases and “ever-growing penalty amounts” could show a need to re-evaluate…”, Bradley Bondi, former SEC Commission Counsel, partner Cadwalader, Wickersham & Taft LLP
As SEC Enforcement Cases Rise, Big Actions Are Sparse
Tally Is First Year-Over-Year Increase Since 2011, But Some Cite Paucity of Big Cases
By Jean Eaglesham
The SEC, under Mary Jo White, said it wants to pursue wrongdoing on a broad range of fronts, on both Wall Street and Main Street. Bloomberg News
Mary Jo White will end her first full fiscal year running the Securities and Exchange Commission able to claim an increase in its annual tally of cases, the first year-over-year rise since 2011, according to people close to the agency.
It is an important benchmark for the SEC chairman, a former federal prosecutor who promised “aggressive and creative” enforcement soon after taking office last year.
But some SEC watchers said the heightened activity masks a scarcity of the blockbuster actions that should be a feature of an effective Wall Street cop.
“When the chairman testifies before Congress…she will have nice numbers to cite,” said Thomas Gorman, a partner at law firm Dorsey & Whitney LLP. “But she’s not going to have the really good cases that the SEC made its reputation on.”
Ms. White said the SEC’s enforcement division has been highly successful in the past year, and aggressive enforcement will continue. “What is most impressive is that the cases span the spectrum of the securities markets and that we demanded tough remedies,” she said in a statement.
The SEC is still completing its official tally of enforcement actions for the fiscal year ending Tuesday. But a recent flurry of activity has boosted total enforcement actions, one of the agency’s measures of success, to well over the 686 reached in the previous 12-month period, the people close to the agency said.
Ms. White can also point to recent progress on two of her priorities. Her policy of requiring admissions of wrongdoing to settle SEC allegations in some cases has produced 12 such pacts, the most recent last week, according to an analysis by The Wall Street Journal. And her “broken windows” strategy of pursuing even small legal violations led to 54 new cases this month, a handy addition to the annual tally, according to the analysis.
Senior SEC officials said the agency under Ms. White is moving on from its effort to punish misconduct related to the financial crisis, which dominated the five years after 2008. But such actions still accounted for more than half of its biggest-penalties cases in this fiscal year, according to the analysis by the Journal.
Four of seven cases with settlements of more than $100 million brought by the SEC in the past 12 months, including the three biggest fines, involve conduct dating back to the financial crisis, the analysis found.
Three of these seven $100 million-plus cases included admissions of wrongdoing by the firm involved, reflecting the agency’s determination to require admissions in some of its most important cases. But the 12 pacts with admissions reached under the new policy also include alleged misconduct that resulted in much smaller penalties, in which fraud wasn’t alleged, such as a computer-coding error that resulted in a discount brokerage giving incorrect data to the SEC.
“When you look at the [admissions] cases they’ve brought so far, it’s hard to understand why those have been selected and not others,” said Stephen Crimmins, a partner at law firm K&L Gates LLP.
Andrew Ceresney, SEC enforcement chief, said all 12 admissions “are important cases that warranted admissions, which enhanced the defendants’ acceptance of responsibility for their actions.”
The SEC has chalked up some notable firsts under Ms. White. This month alone, the agency made a record $30 million payout to a whistleblower and filed its first enforcement action against a high-frequency trading firm, as well as its first case against a brokerage firm for failing to protect customers’ nonpublic information.
Senior SEC officials said the agency’s postcrisis strategy is to pursue wrongdoing on a broad range of fronts, on both Wall Street and Main Street, rather than target any particular area.
“Now that we have completed nearly all of our financial crisis cases, we will increase our focus in a range of different areas,” said Mr. Ceresney. The types of cases being targeted include insider trading, market structure, microcap-stock fraud, pyramid schemes, municipal securities, complex financial instruments, and investment-adviser fraud, he said. Mr. Ceresney highlighted financial-reporting fraud as another area in which the SEC is ramping up efforts.
The numbers of these bread-and-butter, fiddling-the-books cases fell sharply in the aftermath of the crisis. That trend is now reversing: The SEC will this fiscal year be able to report an increase for this type of case for the first time since 2008, said a person close to the agency. Accounting and financial-reporting fraud numbers are up at least 25% from the 68 total for the year through September 2013, the person said.
But these cases are mostly on a fairly modest scale, in terms of the sanctions imposed. “We’re not seeing many really big actions; there is nothing to match the accounting fraud cases of the late 1990s,” said Thomas Sporkin, a former SEC attorney who is now a partner at law firm BuckleySandler LLP.
Some lawyers question the seeming pressure on the SEC to bring ever-bigger cases. Bradley Bondi, a former counsel to two former SEC Republican commissioners, said the agency has touted its enforcement statistics as a gauge of its effectiveness for the past decade. But police chiefs often rate success in terms of falling crime rates, said Mr. Bondi, now a partner at law firm Cadwalader, Wickersham & Taft LLP. The SEC’s increasing numbers of cases and “ever-growing penalty amounts” could show a need to re-evaluate how well it is deterring future misconduct, he said.
Arthur Levitt, the SEC’s chairman from 1993 to 2001, said Ms. White has “got off to a good start” on enforcement. But it is probably too early to judge the effectiveness of her drive to make the SEC a tougher agency. “I don’t think we’ll really know for a few years,” he said.
“The 8th Amendment bans cruel and unusual punishment. Yet it happens every day in prisons across the country…We are a nation of laws, and the Constitution is the highest law in the land. Do we (any American) really find this kind of treatment of prisoners acceptable?”, Martin Garbus, trial attorney
Cruel and usual punishment in jails and prisons
Alma Murdough, left, and her daughter Cheryl Warner hold a photo of Murdough’s son, Jerome. On Feb. 15, jail staff discovered Murdough lifeless in a 101-degree jail cell. (Jason DeCrow / AP)
By MARTIN GARBUS
Cruel and unusual punishment is unconstitutional. But it’s normal in U.S. prisons.
How can we stand by when a man ‘bakes to death’ in a jail cell in America?
The 8th Amendment bans cruel and unusual punishment. Yet it happens every day in prisons across the country. Putting aside capital punishment, which I would argue is cruel and unusual on its face, Americans are ignoring a host of horrific conditions that inmates are subjected to. This is not only morally and constitutionally dangerous; it is also bad public policy, because the prisoners being badly treated today will be back in our communities tomorrow.
As a litigator and constitutional lawyer, I have heard appalling stories from the nation’s prisons and jails. One prisoner described to me how he was handcuffed to the bottom of his bunk in his underwear day after day for months. Another described how his cell was located directly beneath broken toilet pipes, which meant the cell smelled horribly of urine and excrement. I’ve heard how cells are unbearably hot or cold and how four prisoners are confined to spaces intended for two, with only one set of bunk beds. I’ve heard about showers that produce only scalding or icy water and about how, when cell toilets overflow, staff are in no hurry to fix them or to clean up.
The health risks in prisons are also unacceptable. MRSA, a bacterial infection whose strains are often resistant to antibiotics, now runs through maximum security prisons. I contracted it myself after visiting such a prison in June and was hospitalized for three days. Sexual assaults and sexual activity are well known to occur in prisons, but prisoners rarely have access to protection, such as condoms, that can help prevent sexually transmitted diseases.
And then there is solitary confinement. It is hard to tell exactly how many prisoners are in solitary each year in the United States. Today, 44 states allow it, but many states do not report how many inmates are held in solitary. A 2005 report from the Vera Institute of Justice estimated the number at 81,622.
The New York City medical examiner found that Murdough, who was discovered dead in his 101 degree cell while incarcerated at New York City’s Rikers Island jail, died from an accident. (AP)
Reports from those who have been held in solitary make clear how inhumane the punishment is. Even the most optimistic lose hope. I have heard it described more than once as like being trapped in a coffin. Lights are sometimes kept on 24 hours a day. Prisoners often have no books or reading material. Visits from lawyers and family members, as well as phone calls, are severely restricted, leaving prisoners feeling totally isolated from everything and everyone.
For a recent report on solitary confinement at Rikers Island in New York City, the Bronx Defenders’ Solitary Confinement Project interviewed 59 prisoners who were being held or had been held in solitary at the jail. A majority of them were awaiting trial and had not even been convicted of a crime. The average age of the prisoners was 20.
The report noted that, although the United Nations special rapporteur on torture has concluded that more than 15 days in solitary confinement constitutes torture, more than 90% of the Rikers prisoners interviewed had been held longer than that, often far longer. Two prisoners were facing more than 1,000 days in solitary and had already served months.
The prisoners interviewed complained of being provided meager amounts of food, with one inmate describing his time in solitary as “the hungriest that he had ever been in his life.”
These stories are chilling, but they are also so abundant as to begin to seem almost normal to those of us involved in 8th Amendment issues. Still, every once in a while, a story comes along that is so horrifying I can’t believe any American would think our current system is acceptable.
The death this year of Jerome Murdough at Rikers is such a case. The 56-year-old homeless ex-Marine suffered from bipolar disorder and schizophrenia. When he was arrested on a misdemeanor charge, he could not make the $2,500 bail, and so was sent to Rikers, where he was confined in an isolation cell. Although it was February, the cell was extremely hot. He was found dead in his cell, and an autopsy released this month by New York’s medical examiner found that he had died of hyperthermia, with a body temperature of 103 degrees at the time of his death. As an anonymous official put it to the Associated Press, he “basically baked to death.”
We are a nation of laws, and the Constitution is the highest law in the land. Do we really find this kind of treatment of prisoners acceptable?
Trial attorney Martin Garbus has litigated 8th Amendment cases and is the author of books on constitutional law.
“If you had to boil it down to ‘What’s the number one problem in the world?’ well, it would be poor decision making,” says (Ralph) Adolphs (Professor of Psychology and Neuroscience and Biology, and director of Caltech’s Conte Center). “It’s very hard to make complex decisions, especially when the consequences of those decisions will occur far in the future…
…Understanding how to improve that kind of decision making, that’s the big challenge. And neuroeconomics is the only scientific way to really crack it.”, Kimm Fesenmaier, “Valuable Decisions”, Caltech, Engineering & Science, Fall 2014
“I think the article below is ridiculous and frankly embarrassing for Caltech. I only have a bachelor’s degree in business from Sac State, but even I know that the human mind (no matter how much “neuroeconomic” study is performed on it) has no chance of making rational complex decisions about the consequences of events that will occur far in the future, without the benefit of computers which can model various scenarios and determine probable outcomes. I don’t think humans are as irrational as some economic experts (Mr. Shiller) think. I just think our brains can’t calculate probabilities of uncertain events far in the future, without the benefit of computer models. Stated simply, if a computer model shows a human that they are likely making a dumb/irrational decision, most humans will change their minds/views, so that they have a greater probability of making a smart or winning decision.”, Mike Perry, former Chairman and CEO, IndyMac Bank
“After all, misguided energy policy can have very bad outcomes. For instance, in the 1970s, the U.S. thought it was running out of natural gas, and Congress prohibited building any new power plants that used it. Instead, we built lots of coal plants…
– about half of the modern coal fleet—that burdened us with a legacy of dirty air in some cities. Not to mention that in the past few years, we have tapped an abundance of natural-gas supplies.”, Russell Gold, “Why Peak-Oil Predictions Haven’t Come True”, Wall Street Journal
“Time and again, whether it’s predictions about the world running out of oil, or mainstream U.S. views about our salt or fat intake, or that bankers were the primary cause of the financial crisis…the mainstream view has been flat out wrong. This is partly why I am a man-caused global warming skeptic. (I am not saying it’s wrong. Just not proven, and therefore not prudent to change our entire lifestyle and economy until it is.) Did anyone but me notice that the LA Times just published an article that noted some of our top climate experts have concluded that warming (for the past 100 years) on the West Coast of the U.S. was mostly caused by natural events (wind), not man-made carbon emissions???”, Mike Perry, former Chairman and CEO, IndyMac Bank
Why Peak-Oil Predictions Haven’t Come True
More Experts Now Believe Technology Will Continue to Unlock New Sources
By Russell Gold
Have we beaten “peak oil”?
For decades, it has been a doomsday scenario looming large in the popular imagination: The world’s oil production tops out and then starts an inexorable decline—sending costs soaring and forcing nations to lay down strict rationing programs and battle for shrinking reserves.
U.S. oil production did peak in the 1970s and sank for decades after, exactly as the theory predicted. But then it did something the theory didn’t predict: It started rising again in 2009, and hasn’t stopped, thanks to a leap forward in oil-field technology.
To the peak-oil adherents, this is just a respite, and decline is inevitable. But a growing tide of oil-industry experts argue that peak oil looks at the situation in the wrong way. The real constraints we face are technological and economic, they say. We’re limited not by the amount of oil in the ground, but by how inventive we are about reaching new sources of fuel and how much we’re willing to pay to get at it.
Whether peak oil exists is more than just a point of intellectual debate—although it certainly has proved to be a heated and divisive one for decades. The question—and how we think about it—also has a big potential impact for governments, oil producers and ordinary people across the globe, all of whom depend on the vagaries of oil production and would be threatened by soaring costs and shortages.
The peak-oil boosters argue that instead of plowing money into new ways to find oil, we should be conserving what we have and investing in alternative energy sources so that we’re prepared when supplies run low and costs soar. Most of the naysayers agree that we shouldn’t stick with oil forever. But they think it’s wiser to invest in technology to keep expanding the available supply, until it gets too expensive to do so. At that point, they’re confident, we’ll be able to come up with an economical alternative.
The History of an Idea
Peak oil was most widely popularized by M. King Hubbert, a brilliant—and egotistic, by some accounts—geologist who worked for years at Shell Oil. In a 1956 paper, he predicted that U.S. oil production would peak, probably in the early 1970s, and then decline. It would resemble a bell curve.
This came to be called Hubbert’s peak and later peak oil. The idea gained enormous popularity when U.S. oil output did in fact peak in the early 1970s. It took hold at a time when the nation was prepared to believe the worst: Drivers were waiting in long gas lines, and the nation felt it was groaning under the yoke of OPEC. Forecasters like Paul Ehrlich became celebrities with dire warnings of overpopulation and exhaustion of natural resources.
As the theory took hold, it helped justify increased investments in alternative energy, and informed some expert thinking about the future of energy. More recently, the theory saw a surge of interest a few years ago when oil prices were high and seemed stuck there.
“Welcome to the world beyond Hubbert’s peak,” wrote Kenneth Deffeyes, one of the adherents of peak oil, in 2008.
Then the data took a detour from the bell curve. In 2008, the U.S. produced five million barrels a day. In 2009, U.S. oil production began to rise—at first slowly, then quickly. It is still rising today. Through the first half of 2014, it averaged 8.3 million barrels a day.
What changed? An innovation in oil-field technology, which peak-oil theory didn’t anticipate. Energy companies combined hydraulic fracturing and horizontal drilling to wring oil out of super-tight rock formations in North America. The industry figured out that pumping chemically slickened water and sand into shales could create thousands of fractures, each one a tiny path for energy molecules to travel into a well.
At first, drillers targeted natural gas because they thought oil molecules were too big to be extracted. But fracking worked to make oil wells, also. Innovations allowed the industry to locate its frack jobs better and increase density. Now other countries are starting to apply the same techniques and may see the same kinds of gains.
A Different Take
With the recent boom have come arguments that peak oil underestimates the power of innovation. Indeed, many oil experts say, the industry has a history of turning up new supplies just when prospects look bleak.
A century ago, the energy industry found giant new oil fields in Texas and California just as fears spread that oil output had peaked. As production in the U.S. began to decline, other regions picked up the slack: the North Sea, Nigeria and Saudi Arabia. Technical innovations such as using sound waves to locate oil fields through thousands of feet of water and rock spurred a boom in deep-water drilling.
More broadly, peak-oil naysayers argue, the theory looks at the problem in the wrong way—focusing on the physical supply instead of our ingenuity in being able to reach it. “There has to be a finite limit” of oil and gas in buried reserves, says George King, a global technology consultant for Apache Corp. But the constraint on how much oil can be produced isn’t geological, he believes: “We face technical and economic limits more than anything else.”
And Mr. King is an optimist about our ability to overcome technical limits. “This is an inventive industry,” he says.
One of his responsibilities at Apache, a Houston-based oil and gas company, is to stay abreast of new technologies that could boost output in years ahead. For example, he is paying attention to new ways of squeezing more oil out of tight reservoirs. When rocks are fracked, a large amount of oil remains left behind. Fracking tends to free the lighter, smaller gas and oil molecules but leaves behind heavier and stickier molecules.
One idea calls for using carbon dioxide to flood into the tight rocks and push oil out ahead of it. Another is to use nanochemistry to reduce surface tension and lift oil molecules off rock, much like a detergent lifts stains. “Some companies have really neat ideas” along these lines, he says.
To be sure, the peak-oil naysayers don’t think we should wholly embrace oil for all time, just that we shouldn’t try to speed up any transition to alternatives in anticipation of short supplies. After all, misguided energy policy can have very bad outcomes. For instance, in the 1970s, the U.S. thought it was running out of natural gas, and Congress prohibited building any new power plants that used it. Instead, we built lots of coal plants—about half of the modern coal fleet—that burdened us with a legacy of dirty air in some cities. Not to mention that in the past few years, we have tapped an abundance of natural-gas supplies.
And naysayers agree that while they don’t believe supply limits loom, economic limits remain. When the oil industry overcomes an obstacle and boosts oil production, costs typically increase. That opens the door for a better and cheaper energy source that will eventually displace crude oil.
So at some point, the cost of getting more and more oil likely will get so high that buyers can’t—or won’t—pay.
This is an issue the late petroleum economist Morris Adelman wrestled with. “No mineral, including oil, will ever be exhausted. If and when the cost of finding and extraction goes above the price consumers are willing to pay, the industry will begin to disappear,” he wrote in “The Genie out of the Bottle: World Oil Since 1970,” a book published in 1995. Mr. Adelman, a professor emeritus of economics at the Massachusetts Institute of Technology, died earlier this year at 96.
Already, economics is bringing about some changes. Despite the abundance of oil that fracking has delivered, global oil prices remain high. This has kept the door wide open for alternative sources of energy and spending on energy efficiency. Natural gas has been grabbing market share from oil for years. A few decades ago, heating oil kept American homes snug; now it’s natural gas. And gas is making inroads in transportation—trucks and trains—as are electric cars.
What’s more, climate change has altered the calculus. More advocates are pushing for alternative, low-carbon fuels to slow the rising level of carbon dioxide in the atmosphere. They argue that the possibility of running out of oil isn’t the only reason to reduce its use; in fact, they worry that the expansion of supply is dangerous, hindering efforts to take action on the long-term threat of climate change.
“There will be peak oil, but it will be [because of] peak consumption,” says Michael Shellenberger, president of the Breakthrough Institute, an energy and climate think tank in Oakland, Calif. “What we all want is to move to better, cheaper and cleaner sources of energy.”
Mr. Shellenberger suspects that oil’s long dominance in transportation is weaker than most people suspect. When something better comes along, he says, oil’s days are numbered. “We will be leaving a lot of oil in the ground, in the same way we are leaving coal in the ground,” he says.
If M. King Hubbert were alive today—he died in 1989—would he admit defeat? Probably not, says Mason Inman, who has written a biography of Mr. Hubbert that will be released next year. He argues that the recent shale boom is just a temporary respite in a long march downward. U.S. oil production could be about to hit a second peak, and then return to its terminal decline.
The production boom “makes things better for a while, but it doesn’t change the long-term picture,” Mr. Inman says.
If Mr. Hubbert were around, he might be dumbstruck by what he sees, Mr. Inman says. Mr. Hubbert, he says, advocated turning to solar power and energy efficiency to break the dependency on oil.
As for the power of innovation to reach new oil reserves, Mr. Hubbert believed that technology would help extend the limits of oil production, but thought its impact was exaggerated, Mr. Inman says. He felt people would invoke technology as a kind of panacea—which it isn’t.
There will eventually be diminishing returns, Mr. Inman says, since oil is a finite resource, even though we don’t really know its limits. “He would probably say, ‘You guys are crazy to be drilling this so fast and using it up and pretending it’s a solution,’ ” says Mr. Inman.
Mr. Gold is a Wall Street Journal staff reporter in Austin, Texas. He can be reached at firstname.lastname@example.org.
“There is not much evidence that these policies (macroprudential regulations) prevent financial bubbles. But there is great risk in allowing a small group of unelected technocrats to determine the allocation of credit in the U.S. economy…
…Government regulators are no better than private investors at predicting which individual investments are justified and which are folly. The cost of macroprudential regulation in the name of financial stability is almost certainly even slower economic growth than the anemic recovery has so far yielded.”, Paul H. Kupiec, “When Central Bankers Become Central Planners”, Wall Street Journal
When Central Bankers Become Central Planners
Macroprudential regulation is not likely to prevent asset bubbles. But credit allocation will depress growth.
PAUL H. KUPIEC
Sept. 28, 2014 7:01 p.m. ET
Stanley Fischer, vice chairman of the Federal Reserve, has been tapped to head the Fed’s new financial stability committee. In recent speeches both Mr. Fischer and Fed Chair Janet Yellen have argued that so-called macroprudential regulation can prevent asset bubbles from erupting while the Fed maintains near-zero interest rates. There is not much evidence that these policies prevent financial bubbles. But there is great risk in allowing a small group of unelected technocrats to determine the allocation of credit in the U.S. economy.
Macroprudential regulation, macro-pru for short, is the newest regulatory fad. It refers to policies that raise and lower regulatory requirements for financial institutions in an attempt to control their lending to prevent financial bubbles.
These policies will not succeed. Consider the most common macroprudential tool: raising or lowering bank minimum capital standards. Academic research—including a recent study I co-authored with Yan Lee of the Federal Deposit Insurance Corp. and Claire Rosenfeld of the College of William and Mary—has found that increasing a bank’s minimum capital requirements by 1% will decrease bank lending growth by about six one-hundredths of a percent.
Other studies have examined the effect on loan growth of raising a bank’s minimum capital requirements by 1%, using data from different countries and different measurement techniques. They have found a similarly minor effect—between seven and 13 basis points. The economic magnitude is trivial.
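To put those basis-point estimates in perspective, here is a back-of-the-envelope sketch. The 6-to-13-basis-point range comes from the studies cited above; the $10 billion loan book and 5% baseline growth are hypothetical figures chosen purely for illustration.

```python
# Illustrative arithmetic: how small is a 6-13 basis-point drag on
# lending growth? The drag range is from the studies cited in the
# article; the loan book and growth rate below are hypothetical.

def lending_drag(loan_book, baseline_growth, drag_bps):
    """Return (baseline, dampened) one-year loan growth in dollars
    after subtracting a drag expressed in basis points."""
    baseline = loan_book * baseline_growth
    dampened = loan_book * (baseline_growth - drag_bps / 10_000)
    return baseline, dampened

book = 10_000_000_000                       # hypothetical $10B loan portfolio
base, low = lending_drag(book, 0.05, 6)     # 6 bps drag (low estimate)
_, high = lending_drag(book, 0.05, 13)      # 13 bps drag (high estimate)

print(f"baseline growth:  ${base:,.0f}")
print(f"with 6 bps drag:  ${low:,.0f}  (-${base - low:,.0f})")
print(f"with 13 bps drag: ${high:,.0f}  (-${base - high:,.0f})")
```

On these assumed numbers, the policy shaves only $6 million to $13 million off $500 million in annual loan growth, which is the sense in which the author calls the economic magnitude trivial.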
Banks adjust their lending in response to a host of factors including pressure from bank regulators, changes in their funding cost, losses on their outstanding loans and other factors. Changes in regulatory capital and liquidity requirements have only the weakest detectable effects on lending.
There is also the very real risk that macroprudential regulators will misjudge the market. Banks must cover their costs to stay in business, and in the end bank customers will pay the cost banks incur to comply with regulatory adjustments, regardless of their merit. By the way, when was the last time regulators correctly saw a coming crisis?
Other common macroprudential tools include varying maximum loan-to-value ratios and debt-to-income limits. Yet in a speech on July 10 Mr. Fischer noted that changing minimum capital requirements and maximum loan-to-value ratios on mortgage loans in Israel had little effect on attenuating a mortgage-lending boom that raised central bank stability concerns beginning in 2010.
When short-term interest rates are low and long-term rates are high, borrowers prefer to use short-term or floating-rate debt to minimize interest payments. And so it was in Israel when a surge in mortgage-lending growth was fueled by mortgages keyed to low short-term interest rates. When higher capital requirements and lower loan-to-value limits did not work, the Bank of Israel reduced the growth in mortgage lending by requiring banks to tie mortgage rates to long-term interest rates, effectively prohibiting cheap variable-rate mortgages.
With Mr. Fischer now heading the Fed’s new financial stability committee, might we soon see regulations requiring product-specific minimum interest rates? Or maybe rules that single out new loan products and set maximum loan maturities and debt-to-income limits to stop banks from lending on activities the Fed decides are too “risky”? None of these worries is an unimaginable stretch.
Since the 2008 financial crisis, U.S. bank regulators have put in place new supervisory rules that limit banks’ ability to make specific types of loans in the so-called leverage-lending market—loans to lower-rated corporations—and for home mortgages. Since there is no scientific means to definitively identify bubbles before they break, the list of specific lending activities that could be construed as “potentially systemic” is only limited by the imagination of financial regulators.
Few if any centrally planned economies have provided their citizens with a standard of living equal to the standard achieved in market economies. Unfortunately the financial crisis has shaken belief in the benefits of allowing markets to work. Instead we seem to have adopted a blind faith in the risk-management and credit-allocation skills of a few central bank officials.
Government regulators are no better than private investors at predicting which individual investments are justified and which are folly. The cost of macroprudential regulation in the name of financial stability is almost certainly even slower economic growth than the anemic recovery has so far yielded.
Mr. Kupiec, a resident scholar at the American Enterprise Institute, has held senior positions at the Federal Deposit Insurance Corp., International Monetary Fund and Federal Reserve Board.
What Uber and School Choice Have in Common
As Ted Forstmann used to tell me, ‘Monopolies invariably produce bad products at high prices.’
JAMES C. COURTOVICH
Sept. 28, 2014 6:57 p.m. ET
When President Lyndon Johnson announced the creation of the Transportation Department 48 years ago this month, he said that one of the agency’s goals would be to “bring new technology to every mode of transportation.” Nearly half a century later, Uber is doing just that, allowing customers to order and pay for trips on their smartphone. Satisfied passengers, drivers and investors are singing Uber’s tagline: Choice is a Beautiful Thing.
But choice threatens entrenched interests like cabdrivers, who are deploying intimidation tactics and red tape to protect their turf. An Uber driver I rode with recently in Miami told me that he’d received two $1,000 tickets in as many days for picking up passengers at the airport.
In January French cabdrivers smashed windows and slashed tires of Uber cars at a protest in Paris. Public Utility Commissions and Municipal Transit Agencies have issued “cease and desist” orders against Uber and competitors like Lyft. “You’re changing the way cities work,” Uber CEO and founder Travis Kalanick explained at a tech conference in May, “and that’s fundamentally a third rail.”
Mr. Kalanick shows no signs of ceasing or desisting. Uber announced in June that it had raised an additional $1.2 billion in financing, bringing the company’s valuation to an astounding $17 billion. Mr. Kalanick’s deft maneuvering around political and bureaucratic roadblocks reflects the confidence of a man who knows where he’s going and knows he has the right of way.
He reminds me of legendary financier Ted Forstmann, with whom I worked, along with Wal-Mart’s John Walton, to provide $200 million in private scholarships to inner-city children. When we began the project in 1998, teachers, union leaders and their political benefactors said choice was a threat, much as cabdrivers say now. But as Forstmann used to say, “Monopolies invariably produce bad products at high prices.” The 1.3 million parents who applied for the scholarships illustrated the tremendous demand for alternatives.
Both Mr. Kalanick and Forstmann were demonized by defenders of the status quo. But when entrenched interests win, everyone loses. I learned that lesson in 2008, when I was part of an effort to bring more than $20 billion in private investment to Pennsylvania. A group of investors wanted to partner with the state to lease the Pennsylvania Turnpike. A private operator would have invested billions to improve the roadway and streamline operations, but no such deal happened.
Purported champions of public safety and public works fought against the project, arguing that tolls would rise and employees would be fired. After three extensions and legislative inaction, the investors pulled their offer due to the tightened credit markets of the global crisis.
But who really won? Not commuters, who didn’t benefit from the lower tolls the private lease would have required, and who now face longer drives. Not toll workers, who lost jobs due to gross bureaucratic mismanagement.
Innovation also unlocks the value in idle cars, rooms, tools and hands—and opens a channel for billions of dollars of capital to spur economic growth and create new jobs. “Money is like blood; it must flow. Hoarding and holding on to it causes sludging . . . and, like clotted blood, it can only cause damage.” Adam Smith? Try Deepak Chopra, doctor and two-time Barack Obama backer.
What’s true for blood and capital is true for transportation. Washington is always focused merely on passing bills for more government spending on infrastructure. But this Beltway bickering takes place far away from where the rubber really hits the road, on the highways and back streets.
What is needed is not simply new legislation and regulation; what is needed is new thinking, new leadership and a new collaborative relationship among union representatives, policy makers and business. At stake are not just our failing infrastructure and failing schools, but the country’s ability to compete successfully on a global scale. We should all hope Mr. Kalanick and others like him succeed.
Mr. Courtovich is founder of Sphere Consulting, and founding president and chief operating officer of the Children’s Scholarship Fund.
“It’s about an abusive system (SEC enforcement) that threatens the nation’s economic vitality by jeopardizing small business and its entrepreneurial spirit.”, Nelson Obus, co-founder Wynnefield Capital, Chief Executive Magazine
“In July, Federal Reserve Chair Janet Yellen suggested in congressional testimony that social media and biotech valuations “do appear substantially stretched.” Has she looked at Treasurys lately? Treasury yields actually turned negative this week…
…Meantime, LinkedIn shares are up 30%. A lot of questionable things are being funded—Kickstarter has raised about $1.3 billion through crowdfunding, and surely not all of it is going to worthwhile projects—but bubbles are best predicted in retrospect.”, Andy Kessler, “Mothers, Tell Your Children to Be Software Coders”, Wall Street Journal
Mothers, Tell Your Children to Be Software Coders
Dozens of companies are poised to follow Alibaba in going public, but there’s a shortage of developers.
By Andy Kessler
Now that Alibaba has gone public in a $25 billion deal, an avalanche of IPOs may follow. Dozens of disruptive mobile, cloud, network systems and biotech companies are rumored to be ready to tap public markets. Even before Alibaba, about 190 companies had raised $40 billion in 2014, a 40% increase from last year, according to a recent report from Renaissance Capital. Investors who remember the dot-com days of 1999 may cringe at the thought of a deluge of IPOs. But it may be a sign of vigor.
These companies all have one problem: They can’t find enough coders. And every modern company, regardless of industry, needs software developers. The public has taken notice of the shortage. Britain, for example, recently announced that coding would soon be core curriculum in the country’s primary and secondary schools. In Estonia, coding class starts in first grade.
The coders living in Silicon Valley enjoy a seller’s market. One I met lives in San Francisco and commutes to a fast-growing publicly traded Web company in Silicon Valley. She takes an unmarked, Wi-Fi-enabled bus each way, allowing her to code as she commutes. But it can take two hours to get home on a Friday afternoon. Plus the Wi-Fi is kind of slow. So she’s interviewing at a privately held company in San Francisco, and in this market, why not?
I also chatted about this shortage with a recruiter who works for a prominent venture-backed startup. I asked what hiring is like these days. He lowered his head, slumped his shoulders and said, “It’s tough. Everyone that’s any good knows they are good and don’t hesitate to tell you how good they are.” He networks until he finds someone decent. “They’re maybe 27, 28 and working at Google. Can’t stand it. They’re just one of 52,000. They just get lost. They can’t make an impact. But they’re pulling in $240,000.”
So he makes them an offer. The coders go back to Google, Facebook, Apple or Twitter; the companies bump their pay to $280,000 and give them $8 million in restricted stock units, typically vesting over four years. The recruiter tries to match this. He offers $180,000, which is about as high as his company can go—with $10 million in stock. The problem is that the stock is funny money. The company is still private and so the valuation is basically a made-up number that won’t be realized if the company never goes public.
“Then,” the recruiter tells me, “the wife gets involved. ‘Wait, you’re going to walk away from a guaranteed $8 million? Oh no you’re not.’ So it’s back to the drawing board looking for decent talent.”
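The recruiter’s dilemma boils down to an expected-value calculation. A minimal sketch: the salaries, equity grants and four-year vesting are the article’s figures, but the 30% liquidity probability for the private stock is an assumption invented here for illustration.

```python
# Expected annual compensation for the two offers described above.
# Salaries and equity come from the article; the 30% chance that the
# startup's stock ever becomes liquid is a hypothetical assumption.

def annual_comp(salary, equity, vest_years, p_liquid):
    """Salary plus one year's vesting slice of equity, discounted by
    the probability the equity ever becomes liquid."""
    return salary + (equity / vest_years) * p_liquid

big_co = annual_comp(280_000, 8_000_000, 4, 1.0)    # liquid RSUs
startup = annual_comp(180_000, 10_000_000, 4, 0.3)  # assumed 30% odds

print(f"Big company: ${big_co:,.0f}/yr")
print(f"Startup:     ${startup:,.0f}/yr")
```

On those assumptions the established company’s package is worth well over twice the startup’s, which is roughly the arithmetic the coder’s spouse is doing.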
This may explain why a two-bedroom, 1,300-square-foot apartment in San Francisco runs $6,000 a month. And why a four-bedroom, 2,200-square-foot home on one-seventh of an acre in Palo Alto costs $3.65 million.
Meanwhile, the flurry of IPOs has given bankers plenty to do. Predictably, we’re now hearing talk of bubbles. Twitter is worth $31 billion—24 times its sales. Private company Pinterest is worth $5 billion and Snapchat $10 billion, neither with much in revenue. Facebook bought the messaging company WhatsApp for $19 billion in February.
In July, Federal Reserve Chair Janet Yellen suggested in congressional testimony that social media and biotech valuations “do appear substantially stretched.” Has she looked at Treasurys lately? Treasury yields actually turned negative this week. Meantime, LinkedIn shares are up 30%.
A lot of questionable things are being funded—Kickstarter has raised about $1.3 billion through crowdfunding, and surely not all of it is going to worthwhile projects—but bubbles are best predicted in retrospect. More than a billion smartphones will ship this year. Lots of companies are getting growth capital to shake up transportation and logistics and energy and medicine, not to mention retail.
Watch as products and services, from taxi dispatchers to wallets and door keys and doctor visits, disappear into your phone and the cloud. These are the IPOs of the next few years. The peer-to-peer lender Lending Club should be public soon, along with the cloud-storage service Box, and expect Uber, Airbnb and Dropbox to take the plunge in 2015.
And it isn’t a bubble until financial shenanigans appear. In the 1990s, AOL would ship millions of enrollment CDs and write them off over 30 years, even though they had only been shipping them for maybe two years. That stopped in 1997, but then investment bankers told Web companies they would gladly take them public if they got their page views up. AOL jumped in, exchanging millions of pop-up ads for half or more of companies’ IPO proceeds. These financial gimmicks haven’t appeared yet, and perhaps the shortage of programmers will keep them at bay for a while.
Venture capitalist Marc Andreessen tweeted the other day, pointing out that there are many things for which there are no public stock plays. “Sharing economy, cryptocurrency, nonmilitary drones, Internet of things,” he listed. He’s right, but probably not for long. Meanwhile, send us your coders.
Mr. Kessler, a former hedge-fund manager, is the author, most recently, of “Eat People” (Portfolio, 2011).
“Changes in ocean circulation as a result of weaker winds were the main cause of about 1 degree Fahrenheit of warming in the northeast Pacific Ocean and nearby coastal land between 1900 and 2012, according to the analysis of ocean and air temperatures over that time…
…The study, conducted by researchers from the National Oceanic and Atmospheric Administration and the University of Washington, was published Monday by the Proceedings of the National Academy of Sciences.”, Tony Barboza, “West Coast warming linked to naturally occurring changes”, Los Angeles Times
West Coast warming linked to naturally occurring changes
By TONY BARBOZA
Naturally occurring changes in winds, not human-caused climate change, are responsible for most of the warming on land and in the sea along the West Coast of North America over the last century, a study has found.
The analysis challenges assumptions that the buildup of greenhouse gases in the atmosphere has been a significant driver of the increase in temperatures observed over many decades in the ocean and along the coastline from Alaska to California.
Changes in ocean circulation as a result of weaker winds were the main cause of about 1 degree Fahrenheit of warming in the northeast Pacific Ocean and nearby coastal land between 1900 and 2012, according to the analysis of ocean and air temperatures over that time. The study, conducted by researchers from the National Oceanic and Atmospheric Administration and the University of Washington, was published Monday by the Proceedings of the National Academy of Sciences.
Natural, wind-driven climate patterns in the Pacific Ocean, such as El Niño and the Pacific Decadal Oscillation, are already known to exert a powerful influence on sea and land temperatures over years and even decades.
This latest research shows that similar changes in atmospheric and ocean circulation can drive trends that last a century or longer, overshadowing the effects of the human-generated increase in greenhouse gases, the study’s authors said.
“Changing winds appear to explain a very large fraction of the warming from year to year, decade to decade and the long-term,” said study leader James Johnstone, an independent climatologist who did most of the work when he was at the University of Washington’s Joint Institute for the Study of the Atmosphere and Ocean.
When coastal winds weaken, the result is less evaporation from the sea surface and unusually low pressure, which alters ocean currents and causes temperatures to rise over time.
The study found that weakening winds accounted for more than 80% of the warming trend along the Pacific Northwest coast between Washington and Northern California. In Southern California, weaker winds were responsible for about 60% of the increased warming.
If global warming had been the most powerful influence on land and sea temperatures, those temperatures would have been different, the study’s authors said. Most of the warming in the region occurred before 1940, when greenhouse gas concentrations were lower and winds were weaker, the study found. In contrast, winds have strengthened since 1980 and the coastal ocean has cooled, even as the rise in greenhouse gases has accelerated.
The study focused only on trends at the regional level and did not offer conclusions about the influence of naturally occurring winds on warming throughout the world. If anything, the results reinforce what scientists have known for years: that global climate projections fall short in predicting how temperatures are actually changing at the regional scale.
Kevin Trenberth, a climate scientist at the National Center for Atmospheric Research in Boulder, Colo., who was not involved in the study, said its conclusions about long-term trends were probably overstated because the quality of data from the early 20th century was poor and unreliable. The results may also reflect the fact that the northeast Pacific is an area of the globe where past studies have shown the “signal” of climate change is low relative to the “noise” of natural variability.
“There is no doubt that regionally, the changes in temperature are dominated by changes in the atmospheric circulation that likely have little or nothing to do with climate change,” Trenberth said. But, he added, “this does not call into question the concept of global warming.”
Copyright © 2014, Los Angeles Times