The article discusses the role that openness played in the CDO/CDS ratings disaster:
The rating agencies made public computer models that were used to devise ratings to make the process less secretive. That way, banks and others issuing bonds — companies and states, for instance — wouldn’t be surprised by a weak rating that could make it harder to sell the bonds or that would require them to offer a higher interest rate.
But by routinely sharing their models, the agencies in effect gave bankers the tools to tinker with their complicated mortgage deals until the models produced the desired ratings.
That paragraph beginning "But..." is a common knee-jerk reaction to the concept of open source, particularly for sensitive topics such as financial ratings, and it's wrong, so wrong, 100% wrong.
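To be concrete about the mechanism the Times describes: once a deterministic rating model is published, "tinkering until it rates" is just a mechanical search over the model's inputs. Here is a toy sketch in Python; the model, the thresholds, and the single `subprime_fraction` input are entirely made up for illustration and bear no relation to any agency's actual methodology.

```python
def toy_rating_model(subprime_fraction: float) -> str:
    """A made-up model: the rating depends only on the subprime share of the pool."""
    if subprime_fraction <= 0.10:
        return "AAA"
    elif subprime_fraction <= 0.30:
        return "BBB"
    return "junk"

def tinker_until_aaa(start: float, step: float = 0.01) -> float:
    """Nudge the deal's composition until the published model says AAA."""
    fraction = start
    while toy_rating_model(fraction) != "AAA":
        fraction -= step
    return fraction

# An issuer starting from a 25% subprime pool finds the threshold mechanically,
# with no need to understand why the model draws the line where it does.
needed = tinker_until_aaa(0.25)
print(f"Model returns AAA at subprime fraction {needed:.2f}")
```

The point of the sketch is that this search requires no insight at all; any published deterministic model invites it. The question the rest of this post takes up is whether the right response is to hide the model or to fix it.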
At first, it's hard to understand how sharing and opening up a complex and sensitive process can make it more secure. So it's time for a bit of history.
A similar process has occurred with computer security software over the years. Initially, the prevailing notion was that the less known about the security software and its implementation, the better. At one company I worked for in the early 1980s, the subroutine package that handled security went by the internal acronym CTTC; this string of letters was used to name the library, the APIs, the variables, and so on. When I asked what it meant, somebody finally told me: "Confusing To The Curious".
But, over time, the computer software security community has come to hold a different view, best summarized in Eric Raymond's memorable phrase:
Given enough eyeballs, all bugs are shallow
As Raymond observes:
Complex multi-symptom errors also tend to have multiple trace paths from surface symptoms back to the actual bug. Which of the trace paths a given developer or tester can chase may depend on subtleties of that person's environment, and may well change in a not obviously deterministic way over time. In effect, each developer and tester samples a semi-random set of the program's state space when looking for the etiology of a symptom. The more subtle and complex the bug, the less likely that skill will be able to guarantee the relevance of that sample.
It's an argument that takes time to comprehend and grow comfortable with. If you've never done so, I encourage you to read Raymond's entire paper.
So, back in the world of finance, how does this argument apply? Well, I think that, if handled properly, the release of rating model details should actually make the ratings models better, not worse. When computer software security teams first started releasing the details of security bugs, people were horrified, as they thought that this was tantamount to giving the bad guys the keys to the storehouse.
But in fact, it was exactly the opposite: the bad guys already have the keys to the storehouse (the NYT article that started this discussion shows exactly this); what openness does is give the good guys a chance to improve and refine their models.
Back to the New York Times article:
Sometimes agency employees caught and corrected such entries. Checking them all was difficult, however.
“If you dug into it, if you had the time, you would see errors that magically favored the banker,” said one former ratings executive, who like other former employees, asked not to be identified, given the controversy surrounding the industry. “If they had the time, they would fix it, but we were so overwhelmed.”
Do you see the comparison now? The agencies were trying to make the models better, but they needed more eyeballs.
Some years ago, the SHA-1 Secure Hash Algorithm was broken. If you have no idea what I'm talking about, here's a good place to start. That discovery, as it percolated through the computer security community, could have had a variety of results: it could have been covered up, it could have been "classified", it could have been played down.
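For readers who haven't met hash functions before, a minimal sketch using Python's standard `hashlib` module (the two messages are arbitrary examples of my own):

```python
import hashlib

# A hash function maps any message to a short fixed-size digest.
# Two slightly different messages produce completely different SHA-1 digests;
# the function is "broken" when researchers can construct two DIFFERENT
# messages with the SAME digest (a collision) faster than brute force allows.
d1 = hashlib.sha1(b"rated AAA").hexdigest()
d2 = hashlib.sha1(b"rated AAB").hexdigest()

print(d1)
print(d2)
assert d1 != d2  # distinct inputs, distinct digests (here, at least)
```

Signatures, certificates, and integrity checks all lean on the assumption that collisions are infeasible to find, which is why a practical collision attack was such big news.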
Instead, the computer security world made it big news, as big as they possibly could. They told everybody. They blogged about it, wrote articles for the trade press, called up journalists and tried to explain to them what it meant.
The end result was this: The National Institute of Standards and Technology is holding a worldwide competition to design and implement a new hash function, the best one that the entire planet can possibly construct.
This is the process that we need for the ratings agencies and their models: instead of having a closed, secret process for developing ratings, we need an open one. We need eager graduate students pursuing their research projects in ratings models. We need conferences where people from different organizations and communities meet and discuss how to comprehend the financial models and their complexity. We need sunshine. We need a mindset in which anyone who reads this paragraph recoils in horror at how wrong the second sentence is, especially after the first sentence seems to show that they were starting to 'get it':
David Weinfurter, a spokesman for Fitch, said via e-mail that rating agencies had once been criticized as opaque, and that Fitch responded by making its models public. He stressed that ratings were ultimately assigned by a committee, not the models.
Make no mistake about it: modern structured financial instruments are extraordinarily complex. As this paper shows, they are equivalent in complexity to the hardest problems that computer scientists tackle. So hopefully the task forces and politicians investigating this disaster will realize that problems of this complexity must be handled with the techniques we have evolved for tackling them: openness, scrutiny, and as many eyeballs as we can get.