Saturday, April 30, 2011

Paul Harding's Tinkers is a special book.

I mostly read nonfiction: newspapers and magazines, primarily, but also an endless succession of dreary, dull books with titles like Expert C Programming and Understanding Linux Network Internals. And when I read fiction, it's usually murder mysteries and the like (for the last six months it's been a feast of C. J. Sansom...).

But every so often, mostly by happenstance or serendipity, I come across a treasure. Once it was Zora Neale Hurston's Their Eyes Were Watching God; another time it was Marilynne Robinson's Housekeeping, then Cormac McCarthy's All the Pretty Horses: the sort of book that, while you are reading it, makes you aware that you are being visited by something special, and you try to enjoy every bit, sipping delicately, all the while becoming so absorbed that 45 minutes pass and you come up for air and wonder where the time has gone.

Paul Harding's Tinkers is such a book.

The tinkers of the title are George Crosby and his father Howard, as well as the various family members that surround them. Howard Crosby was a peddler who drove a wagon around the rural New England backcountry, selling this and that to "the backwoods people":

He tinkered. Tin pots, wrought iron. Solder melted and cupped in a clay dam. Quicksilver patchwork. Occasionally, a pot hammered back flat, the tinkle of tin sibilant, tiny beneath the lid of the boreal forest. Tinkerbird, coppersmith, but mostly a brush and mop drummer.

George has all his father's primitive, earthy skills, as well as more modern talents, and becomes a clockmaker of some renown:

The wallpaper in George's basement workroom had a pattern of larch branches on a dun-colored background. Clocks in various states of repair and disrepair hung on the wall, some ticking, some not, some in their cases, some no more than naked brass works fitted with their pairs of hands. Cuckoos and Vienna regulators and schoolhouse and old railroad station clocks hung at different heights. There were often twenty-five or thirty clocks on the wall.


At its surface level, the book is about George's passing; it starts and ends with George on his deathbed, and the stories are mostly told through flashbacks and reminiscences. But that is only the surface story; as with all great literature, this book is concerned with much deeper subjects: "What persists beyond this cataclysm of making and unmaking?"

Two deep themes run through the book and are investigated and examined and returned to in various ways. The first is the notion of time: what it is, what it means, what it says about us and about our lives. George, the clockmaker, thinks about this constantly:

That was it, he realized: the clock had run down. All the clocks in the room had run down -- the tambours and carriage clocks on the mantel, the banjo and mirror and Viennese regulator on the walls, the Chelsea ship's bells on the rolltop desk, the ogee on the end table, and the seven-foot walnut-cased Stevenson grandfather's clock, made in Nottingham in 1801, with its moon-phase window on the dial and pair of robins threading flowery buntings around the Roman numerals. When he imagined inside the case of that clock, dark and dry and hollow, and the still pendulum hanging down its length, he felt the inside of his own chest and had a sudden panic that it, too, had wound down.

When his grandchildren had been little, they had asked if they could hide inside the clock. Now he wanted to gather them and open himself up and hide them among his ribs and faintly ticking heart.


Harding also spends much of the book meditating on nature, and on man's relationship to it. Howard, who is a man of the woods, is compelled by the wilderness:

How can I not wonder what it would be like to sit in that cold silver water, that cold stone water up to my chin, the tangled marsh grass at the level of my eyes, sit in the still water, in the still air, bright day behind me lighting the face of everything under the dark millstone cloud lid in front of me, watching the storm coming from the north?

Howard knows everything about nature, but what he knows most of all is that to know nature, you must open your eyes and observe. His meticulous instructions about how to build a bird's nest run for pages, but in the end they are simple and direct:

Spend as many a spring afternoon as manageable watching the birds themselves weave their homes; such observation will help immensely in learning the particular stitch required.


Mostly, though, Harding is just a sheer joy to read. His skill with words is astonishing; can this really be his first novel? Here he is, in perhaps my favorite passage from the book, describing a boy at play in a creek in the woods:

What of miniature boats constructed of birch bark and fallen leaves, launched onto cold water clear as air? How many fleets were pushed out toward the middles of ponds or sent down autumn brooks, holding treasures of acorns, or black feathers, or a puzzled mantis? Let those grassy crafts be listed alongside the iron hulls that cleave the sea, for they are all improvisations built from the daydreams of men, and all will perish, whether from ocean siege or October breeze.

This is Ozymandias in prose, indeed.

I hope Mr. Harding writes many more books; I know I will be eager to see what he writes next.

Friday, April 29, 2011

I have a low tolerance for these sorts of things...

... so after the 3rd system crash of the new 11.04 Ubuntu UI, I've changed the system settings back to "Ubuntu Classic" to see if it will be more stable.

Update: I think it's very possible that I'm seeing this bug. Several comments in the bug report indicate that the problem may be linked to using Chrome, and suggest using only Firefox as the browser. I'll try that for a while.

Natty Narwhal may be a bit immature

Well, I took the plunge and upgraded to Ubuntu 11.04. The upgrade was successful, and this post is brought to you from an 11.04 desktop.

However, I've already had one strange crash, something that never happened to me with previous Ubuntu versions. I was just using Thunderbird, and I dismissed a dialog, closed a message window, and my Window Manager restarted. I don't think my entire machine rebooted, just the UI session. Hmmm...

So, you may not wish to rush to upgrade your own system...

Thursday, April 28, 2011

Here comes Natty Narwhal!


It appears that Ubuntu release 11.04 indeed arrived during the 4th month of 2011. Today my Ubuntu system greeted me with this announcement.

The upgrade is covered at El Reg, among other places.

Complete sessions list for Google I/O 2011 is now online

Google have now posted the complete information for all the sessions for Google I/O 2011. This is an incredible list. It's just astonishing that this is a 2-day conference; this much material seems like it should take a week or more to cover. My head spins just reading the brief abstracts!

Wednesday, April 27, 2011

They're made out of meat!

Great short story, fantastic short film.

Apropos, of course, of this.

(kudos to the non-biz list at work for these great pointers)

Jeff Bezos drops a few names

This may be the geekiest "letter from the CEO" that I've ever read. I love it!

Saturday, April 23, 2011

Algorithmic pricing gone mad

Professor Michael Eisen of UC Berkeley posts an interesting short article about an apparent algorithmic pricing battle between two automated selling programs on Amazon, with some very intriguing speculation about what the two algorithms were trying to achieve, why one was behaving differently than the other, and how still other algorithms might attempt to interact with these algorithms to their own benefit.

This example is simple and easy to understand, and in this case it appears that the situation resolved itself without harm. On Wall Street, however, this sort of algorithmic interplay is undertaken at high volume by well-funded participants, and pretty soon you get predatory trading and the May 2010 "Flash Crash".

Ah yes, it's not just a book about flies anymore...

Friday, April 22, 2011

Algorithm design as dialogue

I think Bill Bryant's presentation of the Kerberos authentication system as a dialogue is delightful. It would be easy for this technique to come across as silly and absurd, but Bryant handles it very well, and the result is a clear and methodical presentation of the reasoning behind one of the most important and long-lived algorithms in modern computing.
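
In that spirit, here is a toy sketch, in Python, of the central idea Bryant's dialogue builds up to: the KDC never talks to the service directly; it hands the client a fresh session key wrapped twice, once under the client's own key and once inside a "ticket" that only the service can decrypt. The code uses the cryptography package's Fernet recipe for the symmetric encryption; all the names (kdc_issue_ticket and so on) are my own illustration, not part of any Kerberos library, and the real protocol adds timestamps, realms, and a ticket-granting service on top of this skeleton.

    import json
    from cryptography.fernet import Fernet

    # The KDC shares a long-term secret key with every principal.
    keys = {"alice": Fernet.generate_key(), "fileserver": Fernet.generate_key()}

    def kdc_issue_ticket(client, service):
        # Mint a session key; wrap it once for the client, and once
        # inside a ticket that only the service can decrypt.
        session_key = Fernet.generate_key()
        for_client = Fernet(keys[client]).encrypt(session_key)
        ticket = Fernet(keys[service]).encrypt(
            json.dumps({"client": client,
                        "session_key": session_key.decode()}).encode())
        return for_client, ticket

    def service_accept(service, ticket, authenticator):
        # The service opens the ticket with its own key, recovers the
        # session key, and checks the client's authenticator against it.
        contents = json.loads(Fernet(keys[service]).decrypt(ticket))
        session_key = contents["session_key"].encode()
        claimed = Fernet(session_key).decrypt(authenticator).decode()
        return claimed == contents["client"]

    # Client side: unwrap the session key and prove knowledge of it.
    for_client, ticket = kdc_issue_ticket("alice", "fileserver")
    session_key = Fernet(keys["alice"]).decrypt(for_client)
    authenticator = Fernet(session_key).encrypt(b"alice")
    print(service_accept("fileserver", ticket, authenticator))  # True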

Thursday, April 21, 2011

Google Summer of Code "DOs and DON'Ts"

The Google Summer of Code team have published three nice short articles entitled The DOs and DON'Ts of Google Summer of Code.


The nice thing is that these aren't just narrowly focused policies for the Google Summer of Code; they are well-thought-out, comprehensive advice for anybody participating in a software community.

Right on the heels of the Google articles, Rands in Repose published his own take on internships in the software industry; although he's oriented more towards the traditional in-office internship, he also offers a number of great insights.

Even if you aren't an open source coder, if you work in the software industry or are still a student of computing, you are almost certainly part of a software community of some sort, and it wouldn't be a waste of time to read through these commonsense points.

Wednesday, April 20, 2011

Cryptography Engineering perfectly achieves its goal

Cryptography Engineering, by Ferguson, Schneier, and Kohno, is a revision and update of an earlier book, Practical Cryptography, by Ferguson and Schneier. (Interestingly, the older book is still in print and still being sold as new, even though Cryptography Engineering completely replaces it and I can see no reason why anyone would wish to buy or read Practical Cryptography at this point.)

Most computer science books are, at their core, books designed to teach you how to write a certain type of software. Jim Gray's book is intended to teach you how to write a database system; Richard Stevens's book is intended to teach you how to write a TCP/IP stack; the "dragon" book is intended to teach you how to write a compiler; and so forth.

The authors of Cryptography Engineering are all seasoned cryptographers, and have written books and papers on cryptography, and in addition they actively write cryptography software (e.g., the Skein hash function, one of the candidates for NIST's revised secure hash algorithm standard). So they clearly could have written such a book (and, in fact, Schneier's earlier Applied Cryptography was such a book). But this is not such a book.

The authors tell us why in their preface:

Cryptography and security engineers need to know more than how current cryptographic protocols work; they need to know how to use cryptography.

To know how to use cryptography, one must learn to think like a cryptographer.

By learning how to think like a cryptographer, you will also learn how to be a more intelligent user of cryptography. You will be able to look at existing cryptography toolkits, understand their core functionality, and know how to use them. You will also better understand the challenges involved with cryptography, and how to think about and overcome those challenges.


In my opinion, the authors succeed with this book; it does exactly what they intend it to.

If you are trying to understand the different types of block ciphers, and how to choose the cipher modes and keys to use with them, this is the book for you.

If you are confused about the difference between a seed, a salt, and a nonce (or even about what those terms mean), this is the book for you.

If you can't tell a message digest from a message authentication code from a secure hash from a digital signature, this is the book for you.

If you'd like to understand why key management is the hardest part of the public key cryptography infrastructure, and why certificate revocation is not the simple solution you thought it might be, this is the book for you.
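
To make just one of those distinctions concrete (the difference between a digest anyone can compute and a MAC that requires a shared secret key), here is a minimal sketch using only Python's standard library; the message and key are made up for illustration, and this is my example, not code from the book.

    import hashlib, hmac, os

    message = b"transfer $100 to account 42"

    # A message digest: anyone can compute it, so it protects against
    # accidental corruption but not against a deliberate forger.
    digest = hashlib.sha256(message).hexdigest()

    # A message authentication code: computing or verifying it requires
    # a shared secret key, so it also authenticates the sender.
    key = os.urandom(32)
    mac = hmac.new(key, message, hashlib.sha256).hexdigest()

    # Verification should use a constant-time comparison.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(mac, expected))  # True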

Perhaps the best thing about this book is that it leaves you wanting more, yet at the same time feeling confident about your ability to learn more. That's what a computer science textbook should strive to do, and so I can recommend this book with no reservations whatsoever.

Monday, April 18, 2011

Kevin Kelly says a book is a self-contained story.

I very much liked Kevin Kelly's recent essay: What Books Will Become. In an entertaining and well-written fashion, he looks back at what books once were, looks with clear eyes at what books now are, and speculates on what books will become:

As we gain these tools (and skills) we'll make a class of highly visual books, ideal for training and education, which we can study, rewind, and study again. They will be books we can watch or TV we can read.

My son is trying to teach himself how to maintain and service his high-end mountain bike. Although he's found some friendly local mechanics, and some decent online sites, it's clear that he's starving for just this sort of high-density absorbable information.


We will find out that books never wanted to be telephone directories, or hardware catalogs, or gargantuan lists. These are jobs that websites are much superior at -- all that updating and searching -- tasks that paper is not suited for. What books have always wanted was to be annotated, marked up, underlined, dog-eared, summarized, cross-referenced, hyperlinked, shared, and talked-to.


The future is one of change, with structure:

Wikipedia is a stream of edits, as anyone who has tried to make a citation to it realizes. Books too are becoming flows, as precursors of the work are written online, earlier versions published, corrections made, updates added, revised versions approved. A book is networked in time as well as space.


Kelly has written a nice essay, short enough to be approachable but filled with lots to consider; it's worth a read.

Saturday, April 16, 2011

The climax approaches in La Liga

Nobody in the United States is paying much attention, but the two greatest soccer teams on the planet (Madrid and Barcelona) are about to face each other 4 times in the next 3 weeks, a rather startling little bit of scheduling.

You can make a decent argument that Barcelona, in particular, are having one of the greatest seasons in modern sports history, not just defeating other teams but positively annihilating the top teams at the highest level of competition. They have not only the best player in the world (Leo Messi) but probably also the second-best (Xavi Hernandez), and support them with a deep roster of superb talent.

Madrid, meanwhile, have their own contender for best player (Cristiano Ronaldo), as well as the most successful head coach in soccer (Jose Mourinho), and are playing at a very high level themselves, destroying London's Tottenham Hotspur recently.

Zonal Marking has a nice preview of the tactical and strategic questions each team faces, so you can read up and be prepared!

My friend Cal has just spent a week's holiday in Barcelona (well, in Sitges to be precise), and I'm looking forward to getting a report about the local perspective when he returns.

Thursday, April 14, 2011

Phil Gyford's Pepys Diary project is in its 9th year

Over at BoingBoing, Cory Doctorow points out that Phil Gyford's project to republish the diaries of Samuel Pepys is well into its ninth year now.

Gyford has recently been giving a series of talks about his experiences with the project, and has placed the presentation slides and audio from a recent talk online at his site.

As diary projects go, I find Tom Hilton's new Fremont Survey project more immediately accessible, both since I am an American and since I'm more outdoors-y by nature. I enjoyed this week's description of the hills and valleys near Arroyo Grande as it brought back memories of calling out "roller coaster!" as our car zoomed up one hill and down the next with the kids. Of course, it was much harder in a horse-and-wagon.

The Pepys Diaries, however, have tremendous historical importance and significance, and it is wonderful that Gyford has been able to sustain this project for such a long time; Hilton has years of work ahead of him before he can approach that. It's particularly interesting to see how Gyford's project has encouraged the development of a community of enthusiasts who gather regularly (both on the site and IRL) to discuss the diaries and their experiences.

A related, but different, project is DeLong's "Liveblogging World War II", in which Brad DeLong routinely posts an "on this day in history" snippet of information from the same day, 70 years ago. It's not quite the "diaries" format, but it has some of the same flavor. However, it's interspersed with the other entries on his blog, so it's not easy to follow just the WW2 project as a separate entity.

Are there other diary republishing efforts like these? It seems a wonderful format; I'm pleased to see it growing in use.

Monday, April 11, 2011

Great JVM presentation

I've long maintained that Java Virtual Machine implementations are some of the most sophisticated and impressive pieces of system software that exist. They've been iteratively developed over several decades, and are deserving of tremendous study by people who are interested in great system software.

So let me point you at this great JVM presentation by Dr. Cliff Click of Azul Systems; he's one of the world's experts in JVM implementation, and this whirlwind tour through the internals of a modern JVM will leave you with a deeper appreciation for just how amazing your JVM really is.

Sunday, April 10, 2011

Reflections on AmberPoint, Inc.

A year has now passed since I left AmberPoint, and I've decided to put together a short re-telling of the company's history, at a very high level. As startups go, it was nothing special, just another company that tried something, didn't quite succeed, and has now passed on. But it was the only startup that I ever participated in from start to finish, and I spent nearly a decade there, and so it was a major part of my life. Hence, these brief notes, not so much because I think you'll find them interesting, but because I wanted to write them.


  1. The company was formed in the summer of 2001, as Edgility Software, by Paul Butterworth and John Hubinger, who had worked together at Forte Software, and continued working together at Sun Microsystems. A team of 10 or so founding engineers was recruited. An initial round of funding ($9.1M) was raised; the early investors were Promod Haque of Norwest Venture Partners and Bill Younger of Sutter Hill Ventures.

  2. I joined the company soon after it started, in October, 2001, just as the company was setting up offices at 155 Grand in Oakland, CA. (As a bit of nerd humor, we were at Suite 404, which led to a fair amount of joking about how that caused us to be "not found".) I had just turned 40 years old, and I thought this was my last chance to try to participate in a "real" startup: helping to build a company from the beginning, watching all the phases of how a company is born.

  3. The first 6 months of the company involved a lot of recruiting and staffing; by December 2001 we were more than two dozen employees, and by winter's end 2002 we were more than 40. The company grew quite rapidly during its first year; there was a sense that this was a big potential market, and we needed to own the market early before any competitors could develop.

  4. During this time we also completed the principal engineering work on the company's core product, the AmberPoint Agent. The agent was a programmable XML message router, which could observe and/or alter XML message traffic, and could call out to arbitrary Java code to take action based on the message flow. The agent was designed to be inserted into the XML message streams at the main processing points, such as application servers, web servers, firewalls, gateways, message queues, etc.

  5. In May, 2002, the company renamed itself from Edgility Software to AmberPoint, Inc. The new company name was the result of a contest, and was chosen from various candidates submitted by employees; Greg Batti, our VP of Engineering, was the one who coined AmberPoint.

  6. In November 2002, we had our second round of funding, of $13.6M, from the same core investors, joined by Crosslink Capital.

  7. During this time we developed a very clever internal engineering technique that allowed us to write our software in Java, but post-process it with a tool of our own design that emitted a corresponding set of code in C#, thus giving us both a Java product and a .Net product from a single source base. This technology was extremely clever, but extraordinarily expensive to build and operate; basic product build times soon ballooned to over an hour to compile and link the product. However, it did make us, immediately and, as it turned out, for all time, the only company in our space which had both a Java and a .Net version of Web Services Management software.

  8. In the winter of 2003, AmberPoint merged with / acquired CrossWeave. CrossWeave was sort of a sister company of AmberPoint: both companies were Forte spinoffs, Bill Younger was also the primary investor in CrossWeave, etc. The merger with CrossWeave added about 10 employees, mostly in Engineering, bringing the combined company to more than 50 headcount.

  9. During the summer of 2003, we embarked on a re-engineering effort. The original AmberPoint Agent was an isolated piece of software; the new architecture made it possible to collect a set of agents together into a larger unit called a Sphere, allowing enterprises to tie multiple XML message flows together and see a larger view of their overall network flow. We also made the decision to revise our UI technologies and base our UIs on dynamic HTML, rather than on Java Swing, and we retained Cooper Design as consultants to help us with the redesign. The revised architecture was much more powerful, but also much more complex.

  10. By the end of 2003 we had some of our first European revenues and we were competing for a number of enterprise sales. However, we were also burning cash rapidly, and we were losing a number of sales to competitors.

  11. In early 2004 we struck a deal with Microsoft to build a web services product to be included as part of Visual Studio. Internally, this was code-named Kestrel (one of our engineering managers had decided to use birds of prey as code names); externally it would come to be known as AmberPoint Express. The idea was to give a low-end version of our product away for free to developers, with the hope that they would upgrade to our commercial product once they had their applications in production. As part of the deal with Microsoft, we also did a fair amount of custom programming for a model-based development tool that they were building; we had some model-based programming expertise (notably from CrossWeave's CTO, Sean Fitts) which Microsoft was interested in learning from.

  12. Annual revenues during this time were about $2.5M to $3M.

  13. During 2003-2004 we were rapidly expanding our direct sales force; we felt that we now had a product to sell and so we built a team to sell it. By spring of 2004 our headcount was at 62; our plan was to add 15 more people during the year and try to get to $8-10M in revenue that year.

  14. In early 2004 the company decided to open an off-shore engineering office in India, and building this office occupied Engineering management for most of 2004. By summer 2004 we had selected the general manager for the India office, and through the summer and fall we were interviewing for early hires for that team.

  15. To fund the growth of the sales team and the India office we had our third round of funding in June, 2004, an $8.2M round from the existing investors as well as new investor Motorola.

  16. By the fall of 2004 we had grown sufficiently that we held our company meeting offsite, and we were negotiating with our landlord to expand our space to the entire 4th floor of 155 Grand. After 18 months of development, our re-designed Sphere architecture was finally going to be released. We believed we had 40 actual customers at this point; several of them were large enough that we were realizing some follow-on services revenue.

  17. By the winter of 2005 our quarterly revenues were $1.5M or more, and our sales force was pursuing a number of deals; during our fiscal 4th quarter we had $5M+ in bookings and surpassed $9M for the year. It would turn out that this was our best quarter. It also turned out that a large part of this revenue was a special licensing deal for the technology we built for Microsoft, not a sale of our regular product line. We opened sales offices in Dallas, Atlanta, Washington DC, Boston, and Chicago, expanded our European operations, and started negotiating with possible Asian and Indian partners to open sales offices there.

  18. The summer of 2005 was the make-or-break time for AmberPoint: we had a fully-staffed sales force, we had released our intended product line ("Release 5"), and the economy had fully recovered from the "dot com" bust of a few years earlier. By summer we were at 99 employees, including 10 in our Pune India office.

  19. By the fall of 2005 we were achieving quarterly revenues of $3M; we had 10 separate sales teams; we had achieved our first production customer in Europe.

  20. By winter 2006 we were tracking sales metrics and trying to make forecasts. However, there was severe internal dissent: sales felt that the new R5 Sphere architecture was unreliable, performed poorly, and had been very late to market; engineering felt that sales had made excessive promises, and that engineering resources had been diverted to custom programming projects for several of our major customers.

  21. Many people felt that the major problem at this time was that enterprise sales required a large sales effort, as we were involved in deals with companies typically used to working with vendors ten times our size. So the company's basic strategy was to find a friendly partner that could include us in these deals, but this was complicated because our core strategy, dating back years to when we had built our Java-to-C# translator, was to emphasize our cross-vendor, cross-platform breadth. In a time when each vendor was pushing their own Web Services engine, we worked equally well on all of them, which meant that none of them was particularly interested in pushing our software.

  22. In the spring of 2006 we raised our fourth round of funding, a $10.3M Series D led by new investor Meritech Capital Partners. Our headcount in the spring was at 121; sales were growing, but so were expenses.

  23. In the summer of 2006 we started working with a major new customer: Lehman Brothers, who would quickly become our largest customer.

  24. In the fall of 2006 we had a $5M quarter. We changed our financial accounting procedures to handle deferred revenue recognition differently. We were hoping for an $18M fiscal year.

  25. By the winter of 2007 we were still hiring, but we had greatly slowed the rate, and we had reduced our forecast to a $15M year.

  26. In the spring of 2007 we raised $9M in a fifth round of funding; the Series E round added SAP Ventures as an investor, together with the existing investors. This brought us to $50.1 million raised in total. We held our annual meeting at the Claremont hotel in Berkeley. The sales team felt they could sell our product, but they were desperately in need of "Release 6", a substantial re-engineering of our Sphere architecture to try to address the performance and reliability problems, as well as to introduce a fair amount of new functionality (such as the SAP support).

  27. By the end of summer 2007 we had finally accomplished a partnership arrangement with BEA, and we were starting to see substantial sales growth on the WebLogic platform. We were nearly 140 employees in headcount, with almost 30 in our Pune India office.

  28. From here on out the story becomes very sad very fast, and can be summarized pretty quickly:

    1. In Fall 2007, Oracle bought BEA, and immediately cancelled our partnership deal with BEA. Sales of our product to WebLogic sites essentially stopped.

    2. In the winter of 2008, some of the first rumblings of the credit crunch occurred, and enterprises worldwide began to scale back spending. We all took company-wide across-the-board pay cuts.

    3. By the spring of 2008, we were still burning cash and could not raise any additional investments.

    4. In the summer of 2008 we had a substantial layoff.

    5. In the fall of 2008, the credit crunch was real. Most of our sales force had quit; engineering and support were still trying to satisfy our existing customers.

    6. In the winter of 2009 we had another company-wide across-the-board pay cut.

    7. In the summer of 2009 we shut down our India office.

    8. In the winter of 2010 the company was sold to Oracle for $49M. Many employees were offered jobs with Oracle, and a number of them still work there now.



So there you have it. What do I think I learned?

  • Building a startup is a fascinating process; everyone should try it at least once.

  • Our Sphere architecture was over-complex and under-implemented. Releasing a shoddy piece of software is an unrecoverable mistake. You can't re-implement fast enough to make up for a big stumble like that.

  • Our cross-vendor strategy was an elegant bit of technology, but it made our engineering teams much less productive; worse, by trying to work with all Web Services vendors in the marketplace we ended up being the product that tried to do everything and did it all in a mediocre fashion, rather than doing a smaller set of things and doing them well.

  • Building an India office distracted the executives at a crucial time.

  • The credit crisis, having your largest customer suddenly go bankrupt, and the Oracle/BEA buyouts were pieces of particularly bad luck that hastened an unavoidable outcome.



I loved the people that I met and worked with at AmberPoint. I worked there for 8.5 years, longer than I've worked at any other company. For years I felt empowered and engaged, and I really wanted the company to succeed. But after the R5 release it all changed for me; I was puzzled and confused by many of the decisions that the company made, though I tried my best to contribute where I could. Overall, it was a memorable time, and I wish all ex-AmberPoint people the best in their future endeavors.

Thursday, April 7, 2011

Facebook opens up their data center designs

The Facebook engineering team have created a website called the Open Compute Project, where they are posting designs and other information about the way they are building their servers and data centers. There is a wealth of information already online at the new website, and they've also posted a nice article on the Facebook site about their efforts.

So far, they seem quite pleased by the results of the data center design:

The result is that our Prineville data center uses 38 percent less energy to do the same work as Facebook’s existing facilities, while costing 24 percent less.


Since I know very little about hardware design, I'll ask the first really stupid question that came up as I was reading the materials:

What is a "vanity free" server?


Meanwhile, the various media seem to be trying to build this up into a battle between Facebook and Google, but I find that claim hard to swallow. As Facebook themselves say, their engineering team has historically been a very open team, releasing infrastructure such as Cassandra back to the world as they built it.

Thank you, Facebook, for releasing this information; the world is now a little bit smarter.

Try Perforce in the Cloud!

The Perforce cloud computing team have announced Perforce Cloud Trials, which are a great way to get acquainted with Perforce. You can read more about Perforce Cloud Trials here.

Wednesday, April 6, 2011

Moving day nears for Facebook

The New York Times has an interesting article about Facebook's preparations to move into the old Sun Microsystems campus located at the west end of the Dumbarton Bridge in Menlo Park.

I've been to those offices a number of times, mostly in the late 1990's when I worked for Sun Microsystems. The offices themselves are nice enough, on the water and with views of the bay, but the overall atmosphere is sterile, isolated, and unwelcoming, as the Times observes:

the Facebook site is surrounded on three sides by water, and separated from the rest of Menlo Park by railroad tracks and a divided highway.

The site is so insular that in the two decades it was occupied by Sun Microsystems it was nicknamed Sun Quentin (a reference to San Quentin prison, about 40 miles north). And because Facebook provides its employees with three meals a day in its own cafeterias, there may be little reason for them to venture off the property.


Facebook, which was desperate for space, is trying to figure out how to transform this location to make it more successful, which for them seems to mostly mean re-arranging the internal space to fit more employees in:

because Sun’s engineers had private offices, while most Facebook employees work in unpartitioned spaces, Mr. Tenanes said the one million-square-foot campus could handle a much larger population than it was originally designed for.

Contractors have already replaced rows of small offices in one of the Sun buildings with a loftlike space where desks will be pushed together in groups of four.


And some of their ideas seem downright odd; am I just getting old?

Unlike the Sun campus, with color-coordinated buildings reminiscent of an upscale resort, Facebook is looking for “an urban streetscape where no one architect or designer” dominates, Mr. Tenanes said. “Random is good,” he added.

Right now the courtyard that connects them suggests a botanical garden, but that is going to change as Mr. Tenanes reduces the amount of vegetation on the site and adds more paths.


Around the Bay Area, most corporate office locations are awful; located in "business parks" in places like Pleasanton, San Ramon, Fremont, Santa Clara, Redwood Shores, etc., they are dull buildings where workers arrive by car, spend all day in their offices, then return by car to their homes in the evening. You don't go out for a pleasant walk with friends during the day; you don't wander to a nearby eatery down the block for lunch; you take your lunch in the corporate cafe and do your workouts in the company gym. And then you return to your cubicle and code.

Those few areas of the Bay Area where interesting street life abounds are notable for their vibrancy: San Francisco, Oakland, Palo Alto, Berkeley, San Jose. High tech companies know that their employees love these locations, where transit is available, great restaurants, stores, and nightlife are nearby, and lots of other interesting people are close at hand, so the best companies work hard to provide such surroundings; witness this discussion of the efforts to keep the Twitter offices in San Francisco.

Unfortunately, I feel the pain of the Facebook employees who are soon going to lose their wonderful Palo Alto office space when the big move occurs this summer. Will they transform Sun Quentin into the next great location? Or will it deaden them, as it has deadened others before? I guess, as they say, time will tell.

Tuesday, April 5, 2011

Let it rain!

Have a look at this chart, from the California Department of Water Resources: every single major reservoir in the state, including the monumentally ginormous Lake Shasta and Lake Oroville, is above its historical average and within a few percentage points of capacity.

Around these parts, we get our water from New Melones and Don Pedro, and yes, they're nearly full too.

Amazing.

2011 National Magazine Awards Finalists announced

In case you were looking for something to read, here's the announcement, and here's a nice short page with many of the links to the finalists (Instapaper ready!).

My wife will be pleased to see that Cooking Light received a nomination; she's rather partial to that magazine.

Monday, April 4, 2011

Sometimes there are sad comics, too

Hang in there, Randall Munroe!

Detailed portraits of the imperceptible

Some recent articles highlight the ways that computer software, specifically simulations of complex situations, is helping us deal with the world around us.

In this weekend's New York Times, a front-page story entitled From Far Labs, a Vivid Picture of Japan Crisis describes how international nuclear scientists are attempting to use simulation software to comprehend the current situation and behavior of the Fukushima Daiichi nuclear power plant:

The bits of information that drive these analyses range from the simple to the complex. They can include everything from the length of time a reactor core lacked cooling water to the subtleties of the gases and radioactive particles being emitted from the plant. Engineers feed the data points into computer simulations that churn out detailed portraits of the imperceptible, including many specifics on the melting of the hot fuel cores.


This is not a simple task, and it will take a long time to pursue it:

the forensic modeling could go on for some time. It took more than three years before engineers lowered a camera to visually inspect the damaged core of the [Three Mile Island] Pennsylvania reactor, and another year to map the extent of the destruction.


Experts in reactor simulation have been gathering at quickly-convened conferences, such as this one at Stanford two weeks ago, to discuss the work they're doing and what it might mean.

The article discusses the delicacy of trying to use these simulations as practical tools. The simulations are simply the output of computer software, and may or may not match the actual events that are occurring within the reactors. They may suggest certain progressions, and people may use them to make decisions, but, as with all software, they are just tools, and it is up to the people to think:

A European atomic official monitoring the Fukushima crisis expressed sympathy for Japan's need to rely on forensics to grasp the full dimensions of the unfolding disaster.

"Clearly, there's no access to the core," the official said. "The Japanese are honestly blind."


Also, in The New Yorker, Raffi Khatchadourian writes a detailed post-mortem on the cleanup efforts in the Gulf of Mexico following last spring's explosion of the Deepwater Horizon drilling rig. As with the Japanese nuclear power plant disaster, the Deepwater Horizon spill involved plenty of effort by many people working directly with the environmental impacts of the explosion; however, they were again supported by a substantial software effort helping them study, understand, adapt to, and deal with the issues they were facing.

The article starts by explaining SCAT, the Shoreline Clean-Up Assessment Technique:

BP hired the designer of SCAT, Ed Owens, a British geologist, to implement the surveys.

He had come up with SCAT in 1989, after an oil barge collided with a tug off Washington State and released fifty-five hundred barrels of fuel, contaminating ninety-five miles of shoreline. Owens devised standard terminology for the various levels of pollution, and created surveys that allowed government responders and oil companies to trust the same data. Before that, Cramer told me, "people would look and say, 'There's a bunch of oil,' but there wasn't a real systematic process."

In Louisiana, members of the SCAT teams regarded themselves as intelligence officers for the cleanup.


This intelligence, it turns out, was crucially necessary, for reasons that are broadly quite similar to those impacting the Fukushima responders right now: it was very unclear what was going on at the site of the disaster. In Japan, this was because all the action was taking place inside an inner containment core held within several other layers of structure, while in Louisiana it was because all the action was taking place one mile below the ocean's surface:


When the rig sank to the ocean floor, it created clouds of debris, making it difficult to tell how much oil was being released. "It took probably thirty-six hours to get good imagery, because so much sediment and silt was raised when the thing crashed," Admiral Allen told me. After the sediment had cleared, days of bad weather further complicated underwater surveys of the wellhead area.


It turns out that understanding the precise details of the makeup of the spill was crucial, yet tremendously complex:

In the press, oil spills are typically judged by the amount of oil released, but volume can be a misleading standard. Wind patterns, ocean hydrodynamics, the chemistry of the oil, the temperature of the water -- all these factors are significant.


The responders had many tools available to them, and deciding what technique to use where was vital:

Even as Laferriere tried to motivate his responders for an all-out assault upon the coastline, he recognized that the principal fight against the oil was offshore, to be conducted with a weapon -- dispersants -- that many people thought was more harmful than the spill itself. "How do you view the various technologies and their ability to fight oil?" he said. "There are really two components to that. One is: How much oil do they take out of the environment? How much oil can be skimmed or burned or dispersed? Then, there is another factor that is equally important: What is the 'encounter rate' of the technology? Remember, the oil on the water is about a millimeter thick. Its area is huge. So if you can only go about a knot, which is the average skimming capacity, and less than a knot when you are burning, it is not possible, physically, even with all the vessels in the world, to keep up with the spreading of the oil."


You can just visualize these teams of engineers doing what all good engineers do: sitting together, bouncing ideas off each other, rapidly doing "back of the envelope" calculations of feasibility, effectiveness, and risk, and trying to produce plans on the spot, given the information available.

However, information about techniques for responding to oil spills is not easy to come by, because it takes a long time to acquire:

Levine also phoned Alan Mearns, a NOAA biologist in Seattle, who had worked on the Exxon Valdez response. (He was famous for monitoring for twenty years a boulder that the cleanup did not touch. The boulder -- eventually called Mearns Rock -- recovered as quickly as the most aggressively cleaned areas.)


This paucity of information constrained the engineers, particularly in terms of risk management:

Corexit was the most studied dispersant available; any other chemical would be inherently less well understood.


Even after the spill was over, the engineers were still starving for information:

By September, the BP well had been contained, and the most pressing questions for the response were: Where had the oil gone and how much harm had it done? A team of federal scientists had estimated that the total amount of oil that spewed from the well was 4.9 million barrels. Based on this number, the response estimated that seventeen per cent of the oil had been captured directly from the wellhead. The burns had eliminated five per cent of the oil; skimming had removed three per cent; and the Corexit had dispersed sixteen per cent into the sea. Altogether, the Unified Command appears to have removed and chemically dispersed two million barrels of oil -- an amount equivalent to some of the largest spills in history. A comparable volume of oil seems to have naturally dissolved in the water column, or dispersed on its own, or simply evaporated.


In other words, we just don't know, and maybe we will never know:

Clearly, it will be years before the oil's full ecological impact -- especially the sublethal effects on plants and animals -- is fully understood. Recent studies in Prince William Sound suggest that, in small ways, the ecological legacy of Exxon Valdez persists to this day.


The world is a complicated place, and we need to continue to improve our tools, and our techniques, so that we can produce the best possible "detailed portraits of the imperceptible".

Friday, April 1, 2011

Eric Melski hooks Gource to Perforce

Here's a nice article by Eric Melski of Electric Cloud talking about using Gource to visualize the activity of their Perforce server during the development of ElectricAccelerator.

Gource is an intriguing tool. As Melski says, it's not immediately obvious that the Gource displays are useful, but they are certainly beautiful. I'd love to learn more about how people are using Gource, and what sort of information they find that it reveals. Or is it just art?
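
If you'd like to experiment with it yourself, here is a rough sketch of the glue involved: it turns a single Perforce changelist into Gource's pipe-delimited custom log format (timestamp|user|A, M, or D|path). This is my own illustration rather than Melski's script; it assumes the usual layout of p4 describe -s output, the changelist number is just an example, and you'd concatenate the output for a range of changes before handing the file to Gource (which, if I recall correctly, reads it with something like gource --log-format custom perforce.log).

    import re
    import subprocess
    import time

    def describe_to_gource(change):
        # Run "p4 describe -s CHANGE" and scrape the pieces Gource needs.
        out = subprocess.run(["p4", "describe", "-s", str(change)],
                             capture_output=True, text=True, check=True).stdout
        # Header looks like: "Change 123 by user@client on 2011/04/01 12:34:56"
        header = re.search(r"Change \d+ by (\S+)@\S+ on ([\d/]+ [\d:]+)", out)
        if not header:
            raise ValueError("unexpected p4 describe output")
        user = header.group(1)
        stamp = int(time.mktime(time.strptime(header.group(2),
                                              "%Y/%m/%d %H:%M:%S")))
        actions = {"add": "A", "branch": "A", "delete": "D"}  # everything else: M
        lines = []
        # Affected files look like: "... //depot/main/foo.c#7 edit"
        for path, action in re.findall(r"\.\.\. (//\S+)#\d+ (\w+)", out):
            lines.append("%d|%s|%s|%s" % (stamp, user,
                                          actions.get(action, "M"), path))
        return "\n".join(lines)

    print(describe_to_gource(12345))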

At work, we think that the concept of The Flow of Change is very powerful and very useful; Laura Wingerd, VP of Product Technology at Perforce, has spoken and written about the concept for years. I particularly enjoy looking at this presentation, with its rich variety of different ways to understand what your code is doing and how it is changing.

Modern systems software projects are complex and intricate, have rich histories, and evolve in sophisticated ways. Tools which help you understand and grasp what is occurring are extremely valuable, so I'll be interested to follow the growth of Gource as more people learn about it and learn how to use it.

Alan Taylor, the BigPicture blog, and InFocus

About 2 months ago, Alan Taylor, the creator of The Big Picture blog at the Boston Globe website, moved over to The Atlantic, where he now publishes his photo-journalism blog under the title InFocus.

Meanwhile, the Boston Globe transitioned The Big Picture to a new set of writers, who seem to rotate responsibility.

Well, this is all well and good, but am I the only one who finds it really odd that these two blogs both seem to cover the same topics, with the same pictures, at about the same times? I guess that's just natural, given that they are both photo blogs and they are both very topically oriented, but it still seems strange...



Probably it's just me, but it seems like something unusual is going on here...