Thursday, December 30, 2010

Some situations where power efficiency isn't the desired answer

Here's an interesting collection of inter-related notes about people who were surprised when their brand new spiffy computers were running substantially slower than their old computers:



The bottom line is that modern CPUs are incredibly sophisticated, and are capable of dynamically speeding up and slowing down in response to their workload, running faster (and using more power, generating more heat, etc.) when they need to get lots of work done, but automatically slowing themselves down when they aren't busy.

However, as Jeff Atwood and his team at StackOverflow found, sometimes this automatic speedup/slowdown functionality doesn't work right, and the only thing that you might notice is that your brand new server is running slower than your old one. Gotta love this perspective on the behavior:

My hat is off to them for saving the planet by using less power, but my pants are down to them for killing performance without informing the users. In the last few weeks, I’ve seen several cases where server upgrades have resulted in worse performance, and one of the key factors has been throttled-down CPUs. In theory, the servers should crank up the juice according to demand, but in reality, that’s rarely the case. Server manufacturers are hiding power-saving settings in the BIOS, and Windows Server ships with a default power-saving option that throttles the CPU down way too often.


Jeff Atwood is an incredibly alert and aware programmer; I have to wonder how many other users out there are being bitten by this behavior and are completely unaware that it is happening to them.

It looks like there is a CPUID tool available from Intel for Mac OS X: MacCPUID. It seems to work on my system, though it's hard to compare it to the CPU-Z screen shots from the articles above. Is there a better tool to run on a Mac OS X system?

The American healthcare system is annoying

Groan. It's things like this that make me almost willing to move to England, or New Zealand, or Denmark, or someplace that has a decent view of the importance of healthcare to a society.

Apparently, the 2010 U.S. Healthcare reform act was very delicately worded when it came to the much-ballyhooed change enabling parents to include their children in their coverage up through age 26. The insurance companies apparently got the bill worded in such a way that only medical insurance plans are now extended to such overage dependents.

But the legislation does not cover dental insurance.

Nor vision insurance.

As though your teeth weren't part of your health.

Or your eyesight.

Stupid politicians.

So my grown son, who has been trying for 14 months now to find a job, and who is subsisting on temporary employment through a staffing agency, is at least finally covered by my health plan.

But not by my dental plan. Nor by my vision plan.

Meanwhile, what did the dental insurance company report last month?


While economic conditions continue to influence parts of our business, steady consumer demand for our insurance and retirement products has contributed to consistent sales and positive aggregate net flow results.


Well, good for them. I hope they sleep better with those fine aggregate net flow results.

Wednesday, December 29, 2010

Where set theory meets topical humor

It's topical! It's set theory! It's humorous! I won't be the first person to link to this, but even if you can barely spell "Venn diagram" you'll enjoy reading this nice short essay.

Learning from the Skype outage

I'm not much of a Skype user recently, but in the past I used it quite a bit; it's a great service!

So I wasn't much impacted by last week's Skype system outages, but I was still interested, because Skype is a big complex system and I love big complex systems :)

If, like me, you're fascinated by how these systems are built and maintained, and what we can learn from the problems of others, you'll want to dig into some of what's been written about the Skype outage:


Building immense complicated distributed systems is incredibly hard; I've been working in the field for 15 years and I'm painfully aware of how little I really know about this.

It's wonderful that Skype is being so forthcoming about the problem, what caused it, what was done to fix it, and how it could be avoided in the future. I am always grateful when others take the time to write up information like this -- post-mortems are great, so thanks Skype!

Tuesday, December 28, 2010

The 2010 One Page Dungeon Contest

A nice posting over at Greg Costikyan's Play This Thing alerted me to the 2010 One Page Dungeon contest.

There are a lot of entrants, and I'm not really much of a tabletop RPG player, so most of the entries went over my head, but I did rather enjoy reading through the nicely formatted PDF collection of the 2010 winners.

Personally, I liked "Trolls will be Trolls", "Velth, City of Traitors", and "Mine! Not yours!" the best, though all of them were quite nice.

Monday, December 27, 2010

Fish Fillets

Oh my, I am completely 100% addicted to Fish Fillets! I've always loved puzzle games, and this one is superb.

Thank you ALTAR Interactive for allowing the world to continue to enjoy your delightful game!

Saturday, December 25, 2010

Traditional DBMS techniques in the NoSQL age

Nowadays the so-called "NoSQL" techniques are all the rage. Everywhere you look it's Dynamo, Cassandra, BigTable, MongoDB, etc. Everybody seems to want to talk about relaxed consistency, eventual consistency, massive scalability, approximate query processing, and so on.

There's clearly a lot of value in these new paradigms, and it's indeed hard to see how Internet-scale systems could be built without them.

But, for an old-time DBMS grunt like me, raised on the work of people like Gray, Stonebraker, Mohan, Epstein, Putzolu, and so forth, it's a breath of extremely fresh air to come across a recent Google paper: Large-scale Incremental Processing Using Distributed Transactions and Notifications.

Google, of course, are pioneers and leaders in Internet-scale data management, and their systems, such as BigTable, Map/Reduce, and GFS, are well known. But this paper is all about how traditional database techniques still have a role to play in Internet-scale data management.

The authors describe Percolator and Caffeine, systems for performing incremental consistent updates to the Google web indexes:

An ideal data processing system for the task of maintaining the web search index would be optimized for incremental processing; that is, it would allow us to maintain a very large repository of documents and update it efficiently as each new document was crawled. Given that the system will be processing many small updates concurrently, an ideal system would also provide mechanisms for maintaining invariants despite concurrent updates and for keeping track of which updates have been processed.


They describe how they use ideas from traditional DBMS implementations, such as transaction isolation, and two-phase commit, to provide certain guarantees that make new approaches to maintaining Google's multi-petabyte indexes feasible:

By converting the indexing system to an incremental system, we are able to process individual documents as they are crawled. This reduced the average document processing latency by a factor of 100, and the average age of a document appearing in a search result dropped by nearly 50 percent.


Since Google have for many years been the poster child for Internet-scale data management, it's an event of significant importance in this age of NoSQL architectures and CAP-theorem analysis to read a paragraph such as the following from Google's team:

The transaction management of Percolator builds on a long line of work on distributed transactions for database systems. Percolator implements snapshot isolation by extending multi-version timestamp ordering across a distributed system using two-phase commit.
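
To make that concrete, here's a toy sketch of the snapshot-read side of the idea, written by me in C purely for illustration (it is not Percolator's actual data layout or code): every committed write leaves behind a version stamped with its commit timestamp, and a reader running at snapshot timestamp T simply takes the newest version committed at or before T.

    /* A toy sketch of the snapshot-isolation read rule (illustrative only;
     * this is not Percolator's actual data layout): every committed write
     * leaves a new version stamped with its commit timestamp, and a reader
     * running at snapshot timestamp T sees the newest version whose commit
     * timestamp is <= T. */
    #include <stdio.h>

    typedef struct {
        long commit_ts;   /* commit timestamp of this version */
        int  value;
    } version_t;

    /* Versions are kept newest-first; return the value visible at snapshot_ts,
     * or 'missing' if nothing had been committed yet at that time. */
    static int snapshot_read(const version_t *versions, int n,
                             long snapshot_ts, int missing)
    {
        for (int i = 0; i < n; i++)
            if (versions[i].commit_ts <= snapshot_ts)
                return versions[i].value;
        return missing;
    }

    int main(void)
    {
        /* Three committed versions of a single cell, newest first. */
        version_t cell[] = { {30, 300}, {20, 200}, {10, 100} };

        printf("read @ ts=25 -> %d\n", snapshot_read(cell, 3, 25, -1)); /* 200 */
        printf("read @ ts=35 -> %d\n", snapshot_read(cell, 3, 35, -1)); /* 300 */
        printf("read @ ts=5  -> %d\n", snapshot_read(cell, 3, 5,  -1)); /* -1  */
        return 0;
    }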


What goes around comes around. Reading the paper, I was reminded of the days when I first got interested in DBMS technology. In the late 1970's, data processing tended to be done using what was then called "batch" techniques: During the day, the system provided read-only access to the data, and accumulated change requests into a separate spooling area (typically, written to 9-track tapes); overnight, the day's changes would be run through a gigantic sort-merge-apply algorithm, which would apply the changes to the master data, and make the system ready for the next day's use. Along came some new data processing techniques, and systems could provide "online updates": operators could change the data, and the system could incrementally perform the update while still making the database available for queries by other concurrent users.
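
For fun, here's a toy version of that nightly merge step (my own sketch, not any real system's code): the day's changes, already sorted by key, are merged against the sorted master records to produce the next day's master file.

    /* A toy sketch of the old batch "sort-merge-apply" style (illustrative
     * only): sorted change records (upserts) are merged against the sorted
     * master records to produce the next generation of the master file. */
    #include <stdio.h>

    typedef struct { int key; int value; } rec_t;

    /* Merge sorted master[] with sorted changes[] into out[];
     * returns the number of records written. */
    static int merge_apply(const rec_t *master, int nm,
                           const rec_t *changes, int nc, rec_t *out)
    {
        int i = 0, j = 0, n = 0;
        while (i < nm || j < nc) {
            if (j >= nc || (i < nm && master[i].key < changes[j].key)) {
                out[n++] = master[i++];              /* unchanged record */
            } else if (i >= nm || changes[j].key < master[i].key) {
                out[n++] = changes[j++];             /* newly inserted   */
            } else {
                out[n++] = changes[j++]; i++;        /* updated in place */
            }
        }
        return n;
    }

    int main(void)
    {
        rec_t master[]  = { {1, 10}, {3, 30}, {5, 50} };
        rec_t changes[] = { {3, 31}, {4, 40} };
        rec_t out[8];

        int n = merge_apply(master, 3, changes, 2, out);
        for (int i = 0; i < n; i++)
            printf("%d -> %d\n", out[i].key, out[i].value);   /* keys 1,3,4,5 */
        return 0;
    }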

Now it's 40 years later, and the same sort of changes are still worth doing. The authors report that the introduction of Percolator and Caffeine provided a revolutionary improvement to the Google index:

In our previous system, each day we crawled several billion documents and fed them along with a repository of existing documents through a series of 100 MapReduces. Though not all 100 MapReduces were on the critical path for every document, the organization of the system as a series of MapReduces meant that each document spent 2-3 days being indexed before it could be returned as a search result.

The Percolator-based indexing system (known as Caffeine) crawls the same number of documents, but we feed each document through Percolator as it is crawled. The immediate advantage, and main design goal, of Caffeine is a reduction in latency: the median document moves through Caffeine over 100x faster than the previous system.


The paper is very well written, thorough, and complete. If you are even tangentially involved with the world of "Big Data", you'll want to carve out an afternoon and spend it digging through the paper, chasing down the references, studying the pseudocode, and thinking about the implications. Thanks Google for publishing these results, I found them very instructive!

Wednesday, December 22, 2010

Sterling on Assange

I've been mostly baffled by the WikiLeaks saga; I didn't know what it meant, and I've been waiting for someone capable of doing a "deep reading", as they say in literature classes.

Today, along comes the world's best writer on technology and culture, Bruce Sterling, and his essay on Julian Assange and the Cablegate scandal is the best work I've yet seen to explain and interpret what's occurring:


That’s the real issue, that’s the big modern problem; national governments and global computer networks don’t mix any more. It’s like trying to eat a very private birthday cake while also distributing it. That scheme is just not working. And that failure has a face now, and that’s Julian Assange.


Sterling has both the experience and the brilliance to interpret these events in the light of all of modern culture, tying together banking scandals, MP3 file sharing, the Iraq war, the Clinton/Lewinsky scandal, the Velvet Revolution, and more, taking you back to 1947, and on to tomorrow. Sterling's essay does more than just take you through what's happened, and why it matters: it peers into the future, as the best writers can do, and opens your eyes to what may lie ahead:


For diplomats, a massive computer leak is not the kind of sunlight that chases away corrupt misbehavior; it’s more like some dreadful shift in the planetary atmosphere that causes ultraviolet light to peel their skin away. They’re not gonna die from being sunburned in public without their pants on; Bill Clinton survived that ordeal, Silvio Berlusconi just survived it (again). No scandal lasts forever; people do get bored. Generally, you can just brazen it out and wait for the public to find a fresher outrage. Except.

It’s the damage to the institutions that is spooky and disheartening; after the Lewinsky eruption, every American politician lives in permanent terror of a sex-outing. That’s “transparency,” too; it’s the kind of ghastly sex-transparency that Julian himself is stuck crotch-deep in. The politics of personal destruction hasn’t made the Americans into a frank and erotically cheerful people. On the contrary, the US today is like some creepy house of incest divided against itself in a civil cold war. “Transparency” can have nasty aspects; obvious, yet denied; spoken, but spoken in whispers. Very Edgar Allen Poe.


It's a brilliant essay, every word of which is worth reading. If you've got the time, you won't regret spending it reading The Blast Shack.

Thursday, December 16, 2010

Insert Coin

Perhaps the best part of this delightful video homage to old video games is the end, where the two artists describe the behind-the-scenes techniques that they used to make the video.

Lights theory

OK, so where do I go for a basic introduction to the theory and practice of Christmas tree lights?

In particular, where can I find a self-help guide that covers topics such as:

  • When part, but not all, of a string isn't staying lit, what is causing that? How can I find and replace the one piece which is causing the problem?

  • When one or more lights in the string flash when they are supposed to stay lit steadily, or stay lit steadily when they are supposed to flash, what is causing that, and how can I find and replace the one piece which is causing the problem?

  • What configurations prolong or, conversely, reduce the life of the string of lights? Does connecting strings in certain orders end-to-end change their behavior? Why is that?



Surely there must be some resource that saves me from futilely manipulating the unlit bulbs for 15 minutes, then giving up in despair and switching to another string...

Wednesday, December 15, 2010

Apache Derby 10.7 has been released

The 10.7 release of the Apache Derby project is now live on the Apache website!

Congratulations to the Derby team, they continue to do wonderful work! I didn't have many direct contributions to this release, as I've been extremely busy with other projects and spending less time on Derby recently. However, several of my Google Summer of Code students made substantial contributions to this release:

  • Nirmal Fernando contributed the query plan exporting tool, which can export a captured Derby query plan as XML for further analysis, or format it for easier comprehension.

  • Eranda Sooriyabandara contributed to the TRUNCATE TABLE effort.



I believe the Unicode database names feature was also a GSoC-contributed feature.

I hope to continue being part of the Derby community in the future. Even if I'm not directly contributing features and bug fixes, I still enjoy spending time on the mailing lists, learning from the work that others on the project do, answering questions and participating in the discussions, etc. It's been a great group of people to be involved with, and I'm pleased to be a member of the Derby community.

If you're looking for a low-footprint, reliable, high-quality database, and even more so if you're looking for one implemented in Java, check out Derby.

Big progress on the San Francisco Bay Bridge

If you're a construction junkie (and what software engineer isn't?), it's been an exciting fall for the San Francisco Bay Bridge replacement project. This week, the crews

began hoisting the third of four sets of giant steel pieces that will make up the legs of the 525-foot-tall tower

Read more about the current events here.

But of course, if you're a construction junkie, just reading about the bridge isn't cool enough, so go:


Metaphors are of course crucial:

"The tower is like a stool with four legs," Caltrans spokesman Bart Ney said. "We hope to have the four legs in place by Christmas. Then we can put the seat on top."

It's like a giant present, for the entire San Francisco Bay Area!

Saturday, December 11, 2010

El Nino, La Nina, and the Atmospheric River

In California, around this time of year, we often hear the local weather forecasters discussing El Nino, La Nina, and how the long term forecast for the winter suggests this or that.

I've always been rather baffled by the discussion, because in the short time available to them, the forecasters rarely have enough time to really explain what they're observing, why it matters, and how they're deriving their conclusions.

But I've been reading the blog of a Seattle-area forecaster, Cliff Mass, and he's written several great posts this fall explaining how La Nina affects the weather of the West Coast of the USA.

Short summary: the changed ocean temperature affects the air patterns, and the "Atmospheric River", often called the "Pineapple Express" by Bay Area forecasters when it causes warm, wet air from Hawaii to come streaming right at Northern California, shifts just a couple of degrees in direction, so that instead of being pointed at California it is pointed at Oregon and Washington. The result: very, very wet weather in Oregon and Washington, and rather drier-than-usual weather in California.

Here are some of Mass's recent essays on the subject:

And a bonus link to a short Science article on the subject: Rivers in the Sky are Flooding the World with Tropical Waters.

If you're looking for something a bit different to read, you could do much worse than tuning into Cliff Mass's blog from time to time. I particularly enjoy how he illustrates his articles with the charts and graphs of various forecasting tools, showing how these tools are used, and how forecasters continue to improve their technology in order to further their understanding of the world's weather.

Great essay on waiting in line

Via Cory Doctorow at BoingBoing comes a pointer to this marvelous article about the theory and practice of designing waiting lines for theme park attractions, specifically the waiting lines at Walt Disney World and its new Winnie-the-Pooh attraction.

You probably have no idea that an essay about lining up for a ride could be anywhere near this fascinating and absorbing, but it is:

It's simply a beautiful, expertly executed experience, and the real world seems to fade away slowly as we descend into the perfect dream state. The surrender is so complete that nobody ever seems to notice several significant logic gaps which the queue sees no reason to explain, but rather leaves mysterious. How, for example, do we end up in outer space? It's just there, at the end of a hallway, as if outer space could be on the other side of any ordinary door.


As the author points out, the special magic of doing this well is that the simple activity of waiting in line is part of what builds and reinforces the entire experience of the ride:

The Haunted Mansion, similarly, conjures up an ethereal "house" out of painted walls and suggestive darkness and so we think there's more there than there really is, but we believe the house is really there because we've seen its' exterior. It's hard to not be fooled into believing that there is a real interior inside a solid looking exterior house or facade, or a real room behind a solid-looking door.


About 15 years ago, I had my first experience with Disney's ride reservation system. This is the process by which you can reserve a time slot for one of the more popular rides (Indiana Jones, etc.), and then you simply show up at the appointed time and enter a special pathway which enables you to skip the majority of the line and go directly to the ride.

Ride reservations definitely resolved one of the bigger problems that Disney was having, and made it possible for visitors, with a bit of planning, to avoid spending all day waiting in line to ride only a handful of rides.

However, I recall distinctly remarking to my mystified family that one of the downsides of the new approach was that, for a lot of the rides, "waiting in the line was actually a lot of the fun". When you just walk right up, get on the ride, and walk back away again, somehow the ride isn't anywhere near as fun.

Find 10 minutes. Pour yourself a cup of coffee (tea, soda, milk, etc.). Get comfortable and sit down and read the article. It won't be wasted time, I promise.

Thursday, December 9, 2010

Jim Gettys on TCP/IP and network buffering

If you're at all interested in TCP/IP, networking programming, and web performance issues, you'll want to run, not walk, to this fantastic series of posts by the venerable Jim Gettys:


Here's a little taste to wet your whistle, and get you hankering for more:

You see various behavior going on as TCP tries to find out how much bandwidth is available, and (maybe) different kinds of packet drop (e.g. head drop, or tail drop; you can choose which end of the queue to drop from when it fills). Note that any packet drop, whether due to congestion or random packet loss (e.g. to wireless interference) is interpreted as possible congestion, and TCP will then back off how fast it will transmit data.

... and ...

The buffers are confusing TCP’s RTT estimator; the delay caused by the buffers is many times the actual RTT on the path. Remember, TCP is a servo system, which is constantly trying to “fill” the pipe. So by not signalling congestion in a timely fashion, there is *no possible way* that TCP’s algorithms can possibly determine the correct bandwidth it can send data at (it needs to compute the delay/bandwidth product, and the delay becomes hideously large). TCP increasingly sends data a bit faster (the usual slow start rules apply), reestimates the RTT from that, and sends data faster. Of course, this means that even in slow start, TCP ends up trying to run too fast. Therefore the buffers fill (and the latency rises).
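
To put some illustrative numbers on that delay/bandwidth point (these are my numbers, not Gettys's): the amount of data TCP wants to keep in flight is roughly bandwidth times delay, so when over-buffering inflates the measured delay, TCP's target balloons along with it.

    /* Back-of-the-envelope bandwidth-delay products (illustrative numbers of
     * my own choosing): the data TCP wants "in flight" is bandwidth * delay,
     * so a bloated buffer delay inflates the window TCP is trying to fill. */
    #include <stdio.h>

    static double bdp_kbytes(double mbit_per_sec, double delay_sec)
    {
        return mbit_per_sec * 1e6 / 8.0 * delay_sec / 1024.0;
    }

    int main(void)
    {
        double link = 10.0;   /* a 10 Mbit/s uplink */

        printf("true path RTT 20 ms : %.0f KB in flight\n",
               bdp_kbytes(link, 0.020));   /* ~24 KB   */
        printf("bloated delay 1 s   : %.0f KB in flight\n",
               bdp_kbytes(link, 1.0));     /* ~1220 KB */
        return 0;
    }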


Be sure to read not only the posts, but also the detailed discussions and commentary in the comment threads, as there has been lots of back-and-forth on the topics that Gettys raises, and the follow-up discussions are just as fascinating as the posts.

Be prepared, it's going to take you a while to read all this material, and I don't think that Gettys is done yet! There is a lot of information here, and it takes time to digest it.

At my day job, we spend an enormous amount of energy worrying about network performance, so Gettys's articles have been getting a lot of attention. They've provoked a number of hallway discussions, a lot of analysis, and some new experimentation and ideas. We think that we've done an extremely good job building an ultra-high-performance network architecture, but there's always more to learn and so I'll be continuing to follow these posts to see where the discussion goes.

The ASF Resigns From the JCP Executive Committee

You can read the ASF's statement here.

Wednesday, December 8, 2010

The slaloming king

Even if you don't know much about chess, you should check out this game. At move 39, Black gives up his rook, realizing that, of his other pieces, only his other rook has legal moves. Therefore, Black believes he can draw the game via "perpetual check", for if White were to capture the remaining rook, it would be a stalemate.

However, White looks far enough ahead to see that he can manage to walk his king all the way across the board, to a safe square entirely on the opposite side of the board, at which point White will be able to capture the rook while simultaneously releasing the stalemate.

If you're not a great chess fan, just click on move 39 ("39. Rg4xg7") on the game listing on the right side, then step through the final 20 moves of the game to watch in delight as White's king wanders back-and-forth, "slaloming" up the chessboard to finally reach the safe location.

Delightful!

White Nose Syndrome report in Wired

The latest issue of Wired has a long and detailed report from the front lines of the battle against White Nose Syndrome. It's a well-written and informative article, but unfortunately not filled with much hope for those wishing to see an end to the bat die-off.

Tuesday, December 7, 2010

Mathematical doodling

This combines two of the best things in the entire world: mathematics, and doodling!

ALL mathematics classes should be like this!

Progress in logging systems

Twenty years ago, I made my living writing the transaction logging components of storage subsystems for database systems. This is a specialty within a specialty within a specialty:

  • Database systems, like operating systems, file systems, compilers, networking protocols, and the like, are a type of "systems software". Systems software consists of very low-level APIs and libraries, on top of which higher-level middleware and applications are built.

  • Inside a database system, there are various different components, such as SQL execution engines, query optimizers, and so forth. My area of focus was the storage subsystem, which is responsible for managing the underlying physical storage used to store the data on disk: file and index structures, concurrency control, recovery, buffer cache management, etc.

  • Inside the storage subsystem, I spent several years working just on the logging component, which is used to implement the "write ahead log". The basic foundation of a recoverable database is that, every time a change is made to the data, a log record which describes that change is first written to the transaction log; these records can later be used to recover the data in the event of a crash. (There's a toy sketch of this rule just below.)
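
Here's that toy sketch of the write-ahead rule (my own simplification, assuming POSIX file APIs; no real product's log format looks like this). The only thing it tries to show is the ordering: the log record is forced to disk first, and only then is the data page touched.

    /* A toy illustration of the write-ahead rule (my own simplification, not
     * any real product's log format). The point is purely the ordering: the
     * log record describing a change is appended and forced to stable storage
     * before the data page itself is modified, so that crash recovery can
     * always redo or undo the change. */
    #define _XOPEN_SOURCE 700
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    struct log_record {
        long page_no;      /* which page is being changed */
        long offset;       /* where on the page           */
        char old_val[16];  /* before-image, for undo      */
        char new_val[16];  /* after-image, for redo       */
    };

    static int wal_update(int log_fd, int data_fd, const struct log_record *rec)
    {
        /* 1. Append the log record and force it to disk ("write ahead"). */
        if (write(log_fd, rec, sizeof *rec) != (ssize_t)sizeof *rec) return -1;
        if (fsync(log_fd) != 0) return -1;

        /* 2. Only now is it safe to change the data page itself. */
        off_t where = rec->page_no * 4096 + rec->offset;
        if (pwrite(data_fd, rec->new_val, sizeof rec->new_val, where) < 0) return -1;
        return 0;
    }

    int main(void)
    {
        int log_fd  = open("wal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        int data_fd = open("data.db", O_RDWR   | O_CREAT,            0644);
        if (log_fd < 0 || data_fd < 0) { perror("open"); return 1; }

        struct log_record rec = { .page_no = 7, .offset = 100 };
        strcpy(rec.old_val, "old");
        strcpy(rec.new_val, "new");

        if (wal_update(log_fd, data_fd, &rec) != 0) perror("wal_update");

        close(log_fd);
        close(data_fd);
        return 0;
    }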



When I was doing this work, in the 80's and 90's, I was further sub-specializing in a particular type of logging system: namely, a position-independent shared-memory multi-processor implementation. In this implementation, multiple processes attach to the same pool of shared memory at different virtual addresses, and organize the complex data structures within that shared memory using offsets and indexes, rather than the more commonly used pointer-based data structures. (As a side note, I learned these techniques from a former co-worker, who is now once again a current co-worker, though we're working on completely different software now. The world turns.)
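
And here's a tiny illustration of what I mean by offset-based structures (again, my own toy sketch, not the code I actually worked on): the list nodes refer to each other by byte offsets from the base of the region, so any process can walk the list no matter where the region happens to be mapped in its address space.

    /* A toy sketch of the offset-based style: nodes in a shared-memory region
     * refer to each other by byte offsets from the base of the region, so any
     * process can follow the links no matter what virtual address the region
     * happens to be mapped at in that process. */
    #include <stddef.h>
    #include <stdio.h>

    #define NIL ((size_t)0)            /* offset 0 is reserved as "null" */

    typedef struct {
        size_t next_off;               /* offset of the next node, not a pointer */
        int    payload;
    } node_t;

    /* Convert an offset to a pointer relative to this process's mapping. */
    static node_t *at(void *base, size_t off)
    {
        return (node_t *)((char *)base + off);
    }

    int main(void)
    {
        /* Stand-in for a shared-memory region; in real code this would come
         * from shmat()/mmap() and could map at a different address in each
         * process. */
        static char region[4096];

        /* Carve two nodes out of the region, linked by offsets. */
        size_t first_off  = 64;
        size_t second_off = 128;
        at(region, first_off)->payload   = 1;
        at(region, first_off)->next_off  = second_off;
        at(region, second_off)->payload  = 2;
        at(region, second_off)->next_off = NIL;

        /* Walk the list using offsets only. */
        for (size_t off = first_off; off != NIL; off = at(region, off)->next_off)
            printf("payload = %d\n", at(region, off)->payload);

        return 0;
    }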

Anyway, I became somewhat disconnected from logging systems over the years, so I was fascinated to stumble across this paper in this summer's VLDB 2010 proceedings. The Aether team are investigating the issues involved with logging systems on multi-core and many-core systems, where ultra-high concurrency is a main goal.

As the paper points out, on modern hardware, logging systems have become a significant bottleneck for the scalability of database systems, and several databases have coped in rather painful ways, providing hideous "solutions" such as COMMIT_WRITE = NOWAIT which essentially discards the "write ahead" aspect of write ahead logging in search of higher performance.

The authors are pursuing a different strategy, leveraging the several decades worth of work in lock-free shared data structures to investigate how to build massively concurrent logging systems which don't become a bottleneck on a many-core platform. These techniques include things such as Nir Shavit's Diffracting Trees and Elimination Trees.

I think this is fascinating work; it's great to see people approaching logging systems with a fresh eye, and rather than trying to avoid them, as so many of the modern "no sql" systems seem to want to do, instead they are trying to improve and strengthen this tried-and-true technology, addressing its flaws rather than tossing it out like last century's bathwater.

Ultra-high-concurrency data structures are a very interesting research area, and have been showing great progress over the last decade. I think this particular sub-field is far from being played out, and given that many-core systems appear to be the most likely future, it's worth investing time to understand how these techniques work, and where they can be most effectively applied. I'll have more to say about these topics in future posts, but for now, have a read of the Aether paper and let me know what you think!

Monday, December 6, 2010

Ubuntu 10.10 kernel hardening, ptrace protection, and GDB attaching

Today I happened to use the gdb debugger to try to attach to an already-running process, and failed:

ptrace: Operation not permitted.


After a certain amount of bashing-of-head-against-wall and cursing-of-frustration-didn't-this-work-before activities, I did a bit of web searching, and found:



I'm not completely sure what to make of this, but the suggested workaround:

# echo 0 > /proc/sys/kernel/yama/ptrace_scope

(executed as root) seems to have done the trick, for now.

If this happened to be your particular nightmare as well, hopefully this saved you a few seconds of anguish...
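
For the curious, the operation being denied is the ptrace(PTRACE_ATTACH) call that gdb makes under the covers when it attaches. Here's a minimal standalone reproduction (Linux only, my own sketch); with ptrace_scope set to 1, running it against a process that isn't your direct child fails with the same "Operation not permitted":

    /* A minimal reproduction of what gdb does when it attaches (Linux only;
     * pass the target pid on the command line). With ptrace_scope=1 this
     * fails with EPERM for processes that aren't your direct children. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
            fprintf(stderr, "ptrace(PTRACE_ATTACH, %d): %s\n",
                    pid, strerror(errno));   /* "Operation not permitted" */
            return 1;
        }

        waitpid(pid, NULL, 0);               /* wait for the target to stop */
        printf("attached to %d; detaching\n", pid);
        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }

Flip ptrace_scope back to 0 as above and the attach succeeds again.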

Friday, December 3, 2010

Perforce 2010.2 enters Beta testing

I'm pleased to see that the 2010.2 version of the Perforce Server is now online and available for testing. If you aren't familiar with the notion of Beta test, here's the wikipedia definition.

I'm excited about this release; I think it contains a number of features which Perforce sites will find useful. Perforce has devoted a lot of attention to the needs of high-end SCM installations recently, and this release contains a number of enhancements specifically targeted at administrators of large, complex Perforce installations.

I was pleased to be able to be part of the team that delivered this release, and I'm looking forward to getting some feedback from users of the release.

The software life cycle never ends, of course, and we're already busy gathering ideas and making plans for the next iteration!

If you get a chance to try out the beta of the new server, let me know what you think!

Monday, November 29, 2010

Two tidbits of computer security news today

The New York Times has been digging into the WikiLeaks cable traffic, and reports that the leaked documents appear to confirm that China's Politburo ordered the Google hacking intrusions:

China’s Politburo directed the intrusion into Google’s computer systems in that country, a Chinese contact told the American Embassy in Beijing in January, one cable reported. The Google hacking was part of a coordinated campaign of computer sabotage carried out by government operatives, private security experts and Internet outlaws recruited by the Chinese government.


Meanwhile, Wired Magazine's Threat Level blog is reporting today that Iranian President Mahmoud Ahmadinejad appears to be confirming that the Stuxnet virus did in fact affect the operation of the nuclear-enrichment centrifuges at Iran's Natanz facility:

Frequency-converter drives are used to control the speed of a device. Although it’s not known what device Stuxnet aimed to control, it was designed to vary the speed of the device wildly but intermittently over a span of weeks, suggesting the aim was subtle sabotage meant to ruin a process over time but not in a way that would attract suspicion.

“Using nuclear enrichment as an example, the centrifuges need to spin at a precise speed for long periods of time in order to extract the pure uranium,” Symantec’s Liam O Murchu told Threat Level earlier this month. “If those centrifuges stop to spin at that high speed, then it can disrupt the process of isolating the heavier isotopes in those centrifuges … and the final grade of uranium you would get out would be a lower quality.”


The entire Threat Level report is fascinating; it reads like a movie script, but apparently it's real life.

TritonSort benchmark for Indy GraySort

25 years ago, Jim Gray started benchmarking sort performance, and the efforts continue, as sort performance is a wonderful tool for incrementally advancing the state of the art of systems performance. You can read more about the overall sort benchmarking world at sortbenchmark.org.

One of the 2010 winners, the TritonSort team at UCSD, have posted an overview article about their work on this year's benchmark. Although it doesn't include the complete technical details, the article is still quite informative and worth reading.

One of the particular aspects they studied was "per-server efficiency", arguing that in the modern world of staggeringly scalable systems, it's interesting to ensure that you aren't wasting resources, but rather are carefully using the resources in an efficient manner:

Recently, setting the sort record has largely been a test of how much computing resources an organization could throw at the problem, often sacrificing on per-server efficiency. For example, Yahoo’s record for Gray sort used an impressive 3452 servers to sort 100 TB of data in less than 3 hours. However, per server throughput worked out to less than 3 MB/s, a factor of 30 less bandwidth than available even from a single disk. Large-scale data sorting involves carefully balancing all per-server resources (CPU, memory capacity, disk capacity, disk I/O, and network I/O), all while maintaining overall system scale. We wanted to determine the limits of a scalable and efficient data processing system. Given current commodity server capacity, is it feasible to run at 30 MB/s or 300 MB/s per server? That is, could we reduce the required number of machines for sorting 100 TB of data by a factor of 10 or even 100?
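
Here's a rough check of those per-server numbers (approximate, since the Yahoo run took somewhat less than 3 hours and I'm assuming a perfectly uniform load):

    /* A rough check of the per-server numbers quoted above (approximate,
     * since the Yahoo run took somewhat less than 3 hours and the load is
     * assumed to be perfectly uniform): 100 TB spread over 3452 servers. */
    #include <stdio.h>

    int main(void)
    {
        double total_mb = 100e12 / 1e6;      /* 100 TB expressed in MB */
        double servers  = 3452.0;
        double seconds  = 3.0 * 3600.0;      /* "less than 3 hours"    */

        printf("Yahoo run: ~%.1f MB/s per server\n",
               total_mb / servers / seconds);                 /* ~2.7  */

        /* How many servers would the same job need at higher efficiency? */
        printf("at  30 MB/s per server: ~%.0f servers\n",
               total_mb / seconds / 30.0);                    /* ~309  */
        printf("at 300 MB/s per server: ~%.0f servers\n",
               total_mb / seconds / 300.0);                   /* ~31   */
        return 0;
    }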


Their article goes on to describe the complexities of balancing the configuration of the four basic system resources: CPU, memory, disk, and network, and how there continues to be no simple technique that makes this complex problem tractable:

We had to revise, redesign, and fine tune both our architecture and implementation multiple times. There is no one right architecture because the right technique varies with evolving hardware capabilities and balance.


I hope that the TritonSort team will take the time to write up more of their findings and their lessons learned, as I think many people, myself included, can learn a lot from their experiences.

Saturday, November 27, 2010

Paul Randal's amazing Bald Eagle pictures from Alaska

Paul Randal, who is best known for being one of the best writers and teachers about SQL Server, is also a tremendous photographer, and he recently posted some pictures from his trip to Alaska.

You have to see these pictures, they are simply superb:



Not only are the pictures gorgeous, Paul includes some great notes about the process of learning to take pictures like these:

You don't need camouflage clothing and lens wraps to get good wildlife shots (I think that stuff looks daft), you just need patience and an understanding of the wildlife behavior. We sat in the same small area for 6 hours a day and waited for the eagles to come to us.


Many thanks for sharing these pictures Paul, I enjoyed them very much!

Mobile broadband for a small team

Suppose you have 2 (or 3 or 4) people who want to travel together, working on a fairly large project (sufficiently large that you want local data and computing power, not just "the cloud"), and you want that team to be able to quickly and reliably set up and operate a small local area network for team computing, anywhere in the U.S. or Canada. You'll often be in suburbs or rural locations rather than right downtown at the fanciest modern hotels, so counting on hotel broadband seems rather iffy. And it may be weeks between stops at the mothership, so you need a reliable cloud-based backup provider that can handle data volumes in the tens of gigabytes (maybe even up to 250 GB). You'll be setting up and tearing down this lab routinely, perhaps 10 times a week, so you want it to be as simple and reliable as possible.

What setup would you advise?

Here's what I've been exploring; I haven't constructed such a system, but am wondering where the holes would be. Can you poke some holes in this proposal and let me know?


  1. Two laptops, each running Windows 7, each with 500GB hard disks. Perhaps something in this range.


  2. Something along the lines of the Verizon MiFi for reliable broadband connectivity (at least throughout North America)


  3. Something along the lines of the Cisco Valet or the Airport Express for setting up a small internal network for file-sharing purposes.

  4. Something like Mozy for reliable cloud-based backup, though I'm a little worried that the cloud-based backup schemes can't scale to dozens or hundreds of gigabytes.

  5. To augment the cloud-based backup strategy, a couple of these stocking stuffers can be used for local standby backup purposes.



With this gear, I think you can quickly set up a small internal LAN for file-sharing support between the two laptops, and each laptop can get online at will via the MiFi.

The primary laptop, which holds the master copy of all the files, can share that folder with the secondary laptop, and I think Windows 7 file sharing is robust and reliable enough that the secondary can continue accessing those files even while the "primary" laptop is busy running large computations (or playing the occasional game of Dark Lords of Elven Magic V).

The secondary laptop is over-provisioned, but this is intentional, so that if the primary laptop fails, the secondary can take over the primary's duties (after restoring the data from a combination of the local spare backup and the Mozy data from the cloud).

Am I crazy?

Wednesday, November 24, 2010

kernel.org upgrades their master machines

I found this article about the new kernel.org "heavy lifting" machines interesting. These are the machines with which

kernel.org runs the infrastructure that the Linux Kernel community uses to develop and maintain a core piece of the operating system.


It's a good snapshot of the state-of-the-art in provisioning a pretty substantial server machine nowadays.

The unwritten parts of the recipe

As any cook knows, recipes are just a starting point, a guideline.

You have to fill in the unwritten parts yourself.

So, may I suggest the unwritten parts that go with this recipe?


  • First, between each sentence of the instructions, add:

    Drink a beer.

  • Second, at the end of the instructions, add:

    Drink two more beers while you cook the turkey. Then look at the oven and realize you forgot to turn it on. It's ok, you didn't own a meat thermometer anyway. Throw the whole mess away and call out for pizza.

Tuesday, November 23, 2010

Ken Johnson's Exposition of Thread-Local Storage on Win32

The always-worth-reading Raymond Chen happens to be talking about Thread-Local Storage on Windows this week. In his essay, he references Ken Johnson's eight-part description of how Thread-Local Storage works, under the covers, using support from compiler, linker/loader, and the operating system, a description which is so good that it's worth linking to all of Johnson's articles right now:


There is no such thing as "too much information" when it comes to topics like "how does the magic behind __declspec(thread) actually work?" Johnson's in-depth explanations do the world a tremendous favor. Read; learn; enjoy!

Monday, November 22, 2010

memcpy, memmove, and overlapping regions

The C runtime library provides two similar-but-different functions:


The primary distinction between these two functions is that memmove handles situations where the two memory regions may overlap, while memcpy does not.

And the memcpy manual pages describe this quite clearly:

If copying takes place between objects that overlap, the behaviour is undefined.


You might not think that is a very strong statement, but in C programming, when somebody says "the behavior is undefined", that is an extremely strong thing to say. Here are a few articles that try to explain what we mean when we say "the behavior is undefined", but the bottom line is: if you program in C, you need to sensitize yourself to the phrase "undefined behavior" and not write programs which perform an action which has undefined behavior.
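
Here's a tiny example (my own, not from the articles) of the kind of overlapping copy that bites people: shifting a buffer down by one byte, where the source and destination regions overlap. memmove is specified to handle this; with memcpy the behavior is undefined.

    /* A minimal sketch of the overlap problem: shifting a buffer "left" by
     * one byte. The source and destination regions overlap, so memcpy's
     * behavior is undefined; memmove is the function specified to handle
     * this case. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[] = "abcdef";

        /* Shift "bcdef" (plus the terminating NUL) down on top of "a".
         * Source bytes [1..6] and destination bytes [0..5] overlap. */
        memmove(buf, buf + 1, strlen(buf));   /* well-defined: prints "bcdef" */

        /* memcpy(buf, buf + 1, strlen(buf)); -- undefined behavior: it may
         * appear to work with one C library and silently corrupt data (or
         * worse) with another. */

        printf("%s\n", buf);
        return 0;
    }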

However, although the memcpy documentation has carried this statement forever (well, in programming terms at least!), one of the most common memcpy implementations has, until recently, been implemented in a way which made it safe, in practice, to call memcpy with overlapping memory regions.

But things have changed: see https://bugzilla.redhat.com/show_bug.cgi?id=638477 for the details. There's a lot of interesting discussion in that bug report, but let me particularly draw your attention to Comment 31, Comment 38, and Comment 46. Suffice it to say that the author of those comments knows a little bit about writing system software, and probably has some useful suggestions to offer :)

Happily, as several of the other comments note, the wonderful (though oddly-named) valgrind tool does a quite reasonable job of detecting invalid uses of memcpy, so if you're concerned that you might be encountering this problem, let valgrind have a spin over your code and see what it finds.

Sunday, November 21, 2010

Unidirectional Football

In American Tackle Football, each team defends its end zone, and attacks toward the other team's end zone, scoring a touchdown when it carries or passes the football across the goal line. At the end of each quarter of play, the teams swap end zones and face the other direction, to more-or-less equalize the advantages conveyed by one direction or the other.

Traditionally, that is how it is done.

Yesterday, though, the University of Illinois played Northwestern University in a Big 10 showdown, and the teams chose to play in Chicago's Wrigley Field.

Wrigley Field, of course, is a baseball field, the famous home of the Chicago Cubs; it is named for William Wrigley, the chewing gum magnate, who was part of the syndicate that brought the Cubs to Chicago and who owned the team in the 1920's.

It's not that unheard-of to hold a football game on a baseball field; for example, Notre Dame played Army yesterday in (the new) Yankee Stadium.

But it had been 40 years since a football game was played in Wrigley Field, and now we know why: the field was too small. Once the standard-sized football field was laid out on the grounds, there was no extra space left around the east end zone, and the playing field terminated with only 6 inches to spare before the brick wall that marks right field. After reviewing the layout:

the Big Ten said that the layout at Wrigley was too tight to ensure safe play. The conference instructed players to run offensive plays only toward the west end zone, except in the case of interceptions.


So, each time the ball changed possession, the players switched sides.

And, the teams shared a sideline, rather than being on opposite sides of the field.

There even was an interception, run back for a touchdown, by Northwestern safety Brian Peters.

Meanwhile, yesterday was also the Big Game in these parts, as Berkeley hosted Stanford in the annual classic. Stanford won easily this year: they have a phenomenal team and should finish the season in the top 5 nationally. After next week's Berkeley vs. Washington game, Berkeley's Memorial Stadium will be closed, and the long-delayed earthquake reconstruction project will begin in earnest. Memorial Stadium, which is situated in one of the most beautiful locations in the country, also happens to be right on top of the Hayward Fault, one of the most dangerous earthquake faults in California. So the stadium will be closed, and extensively overhauled to try to make it safer.

Meanwhile, the Golden Bears will play their 2011 season across the bay, in San Francisco's AT&T Park, home of the San Francisco Giants.

That's right, they'll be playing football all year in a baseball field.

I wonder if they'll play Unidirectional Football?!

Saturday, November 20, 2010

Koobface report

I spent some time reading Nart Villeneuve's fascinating report on the Koobface botnet. The report is well-written and clear, and although it's long, it doesn't take very long to read, so if you have the time, check it out. It's a detailed and broad-ranging investigation of one of the large crimeware systems infesting the Internet.

Many malware investigations look just at technical issues: vulnerabilities, exploits, defense mechanisms, etc. I love learning about that technology, but there is a lot more to malware than just the technology: social, political, and financial aspects are all part of modern organized crime on the Internet. The Koobface study is particularly worth reading because it does a good job of exploring many of these non-computer-science aspects of the malware problem. From the report's executive summary:

The contents of these archives revealed the malware, code, and database used to maintain Koobface. It also revealed information about Koobface's affiliate programs and monetization strategies. While the technical aspects of the Koobface malware have been well-documented, this report focuses on the inner workings of the Koobface botnet with an emphasis on propagation strategies, security measures, and Koobface's business model.


Wait, botnets have a business model?

Well, of course they do.

For far too long, media and popular culture have categorized malware as originating from either:

  • A lone, socially-maladjusted, brilliant-but-deranged psychopathic individual, who for reasons of mental illness constructs damaging software and looses it upon the world, or

  • A governmentally-backed military organization, which thinks of computers, networks, and information in attack-and-defense terms, and operates computer security software for military purposes.


While both these categories do exist, a major point of the Koobface report is to show that the category of modern organized crime is at least as important in the spread and operation of malware on the net, and to help us understand how those crime organizations operate malware systems for profit.

The report is divided into two major sections:

  1. The Botnet

  2. The Money



The first section deals with operational issues: "propagation strategies, command and control infrastructure, and the ways in which the Koobface operators monitor their system and employ counter-measures against the security community".

The second section explains "the ways in which the Koobface operators monetize their activities and provides an analysis of Koobface's financial records".

The report ends with some social and political analysis and offers some recommendations to law enforcement and security organizations about how they can evolve to address these evolving threats.

Let me particularly draw your attention to the second section, "The Money".

It is absolutely fascinating to understand how botnets such as these profit, by providing a business model that is almost, yet not quite, legitimate, and how close it is to the core business models that are driving the Internet:

The Koobface operators maintain a server ... [ which ] ... receives intercepted search queries from victims' computers and relays this information to Koobface's PPC [pay-per-click] affiliates. The affiliates then provide advertisement links that are sent to the user. When the user attempts to click on the search results, they are sent to one of the provided advertisement links...


That's right: Koobface operates, and makes money, by doing essentially the same things that core Internet companies such as Microsoft, Google, and Yahoo do:

  • Provide search services

  • Provide advertising services

  • Match individuals searching for items with others who are offering products



The report links to a great Trend Micro blog explaining this "stolen click" technique, also known as "browser hijacking", in more detail:

Browser hijacker Trojans refer to a family of malware that redirects their victims away from the sites they want to visit. In particular, search engine results are often hijacked by this type of malware. A search on popular search engines like Google, Yahoo!, or Bing still works as usual. However, once victims click a search result or a sponsored link, they are instead directed to a foreign site so the hijacker can monetize their clicks.


The history of organized crime is long and well-researched; I have nothing particular to contribute to this, and it's not my field. However, I find it very interesting to learn about it, and I hope that you'll find it worthwhile to follow some of these references and learn more about it, too.

Now it's time to "pop the stack" and get back to studying the changed I/O dispatching prioritization in Windows 2008 Server as compared to Windows 2003 Server. Ah, yes, computer science, yummm, something I understand... :)

Wednesday, November 17, 2010

I feel the need ... for speed!

Here's a very nice writeup of a recent speed cubing event. I love the Feliks Zemdegs video, can't take my eyes off it!

I am not a very fast cuber. I can solve the standard 3x3x3 cube, but it usually takes me 2+ minutes, and more if I get distracted by my granddaughter :)

When I was (much) younger, John and I got into a spirited competition of speed Minesweeper, expert level. We set up a computer in the break area and we would take turns ducking over to it during compile times, trying to break each other's best time record. Of course, I see that the world has progressed since then...:)

Nowadays my compile-and-test turnaround cycle at my day job is so lightning-fast, I barely have time to bounce over to my favorite Chess or Go sites before it's time for the next bit of work. That's progress!

The Java/JCP/Apache/TCK swirl continues

There's lots of activity as people try to figure out what is going on with Java, where Oracle is taking it, what is the future of the Java Community Process, etc. Here's a quick roundup of some recent chatter:

  • At eWeek, Darryl Taft reports on recent JCP election news, and includes some commentary from Forrester analysts John Rymer and Jeffrey Hammond.

    Hammond also told eWEEK:

    “Right now Oracle holds all the cards with respect to Java, and if they choose to close the platform then I don’t think there’s much anyone can do about it. Some customer might actually be more comfortable with that in the short term if it leads to renewed innovation. In the long term I think it would be counterproductive, and hasten the development of Java alternatives in the OSS community – and I think Apache would be happy to have a role in that if things continue along their current path.”

    ...

    Forrester’s Rymer also points to the business side of things when he says:

    “One thing that puzzles me is IBM’s role in this dispute. IBM has been a big backer of ASF and its Java projects, including Harmony. We think IBM turned away from Harmony in ‘renewing’ its partnership on Java with Oracle. ASF’s ultimatum to Oracle must be related to IBM’s move, we just don’t know exactly how. I expected that IBM would continue its strong support of ASF as a counterweight to Oracle."

  • On the GigaOm website, Canonical's Matt Asay posts a long in-depth analysis, including a call for Oracle to communicate its intentions widely:

    Oracle needs to head off this criticism with open, candid involvement in the Java community. It needs to communicate its plans for Java, and then listen for feedback. Oracle needs to rally the troops around an open Java flag, rather than sitting passively as Apple and others dismiss Java, which is far too easy to do when Java comes to mean “Oracle’s property” rather than community property.

  • On the Eclipse.org site, Mike Milinkovich of the Eclipse Foundation posts a hopeful view from the Eclipse perspective during the run-up to the JCP election:

    The Eclipse Foundation is committed to the success of both Java and the JCP, and we are optimistic that the JCP will remain a highly effective specification organization for the Java community and ecosystem.

    The Eclipse Foundation was one of the organizations that was re-elected to the JCP executive committee.

  • Eduardo Pelegri-Llopart, a long-time Java EE voice from Sun, posts a nice article talking about the complexities of communicating Oracle's JVM strategy, as an insider at Oracle trying to help that process occur. It's nice to see him continuing to try to be a voice communicating Oracle's decision-making processes as they are occurring.

  • And Stephen Colebourne has an excellent 3-part series of articles analyzing:



So, the swirl continues. There's lots to read. The Java community continues to evolve, and software continues to get written. If you have pointers to more information about what's going on and what it all means, send them my way!

My cursor disappears when using GMail in Safari

I'm trying the workaround described by Adam Engst at the TidBITS web site, hopefully that will do it for me.

Tuesday, November 16, 2010

Google 1, Harvard 0

Don't miss this fascinating article by Matt Welsh relating his decision to retire from Harvard in order to join Google.

It's well worth your time to read through the comments as well, as there are lots of interesting follow-ups and related discussions.

Update: Dean Michael Mitzenmacher wrote a follow-up essay of his own, which is also posted, and also worth reading.

Monday, November 15, 2010

CUDA in the cloud

Amazon have announced that EC2 now supports GPU clusters using CUDA programming. That might just be a bunch of gobbledygook so let's expand a little bit:


  • EC2 is Amazon's Elastic Compute Cloud, one of the leaders of cloud computing services.

  • GPUs are Graphics Processing Units, the specialized computers that sit on the 3D video card in your computer. Your computer arranges to have video processing done by the GPU, while regular computing is performed by your machine's CPU, the Central Processing Unit. Here's an example of a GPU, the NVidia Tesla M2050.

  • CUDA is a parallel programming model and a set of C language extensions designed for offloading certain compute tasks from your CPU to your GPU. It originated with NVidia, and a number of GPU computing libraries have been built on top of it. Here's the starting point for learning more about CUDA.



So Amazon are announcing that their cloud infrastructure has now provisioned a substantial number of machines with high-end GPU hardware, and have enhanced their cloud software to make that hardware available to virtual machine instances on demand, using the CUDA APIs for programming access, and are ready for customers to start renting such equipment for appropriate programming tasks.

And now you know enough to understand the first sentence of this post, and to appreciate Werner Vogels's observation that "An 8 TeraFLOPS HPC cluster of GPU-enabled nodes will now only cost you about $17 per hour." Wow! Let's see, an hour has 3600 seconds, so that's roughly 29 PetaFLOP of computation per hour, which works out to somewhere around 1.7 PetaFLOP per dollar. Is that right?
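
Here's the back-of-the-envelope check, in case my mental arithmetic is off (it assumes the quoted 8 TeraFLOPS is actually sustained for the whole billed hour):

    /* Checking Vogels's numbers (assumes the quoted 8 TeraFLOPS is sustained
     * for a full billed hour at $17): how much computation does a dollar buy? */
    #include <stdio.h>

    int main(void)
    {
        double teraflops = 8.0;      /* cluster throughput */
        double seconds   = 3600.0;   /* one hour           */
        double dollars   = 17.0;     /* quoted hourly price */

        double petaflop_per_hour = teraflops * seconds / 1000.0;   /* ~28.8 */
        printf("PetaFLOP per hour:   %.1f\n", petaflop_per_hour);
        printf("PetaFLOP per dollar: %.1f\n",
               petaflop_per_hour / dollars);                       /* ~1.7  */
        return 0;
    }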

Sunday, November 14, 2010

Nice Netflix paper on High-Availability Storage

Sid Anand, a Netflix engineer who writes an interesting blog, recently published a short, very readable paper entitled Netflix's Transition to High-Availability Storage Systems. If you've been wondering about cloud computing, and about who uses it, and why, and how they build effective systems, you'll find this paper quite helpful.

The paper packs a lot of real-world wisdom into a very short format. I particularly liked this summary of what Anand learned about building highly-available systems while at eBay:


  • Tables were denormalized to a large extent

  • Data sets were stored redundantly, but in different index structures

  • DB transactions were forbidden with few exceptions

  • Data was sharded among physical instances (there's a toy sketch of this idea just below the list)

  • Joins, Group Bys, and Sorts were done in the application layer

  • Triggers and PL/SQL were essentially forbidden

  • Log-shipping-based data replication was done
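
Here's that little sketch for the sharding item (my own toy example, certainly not eBay's or Netflix's actual scheme): hash each entity's key and use the hash to decide which physical database instance owns it.

    /* A toy sketch of key-based sharding (illustrative only; eBay's and
     * Netflix's real schemes are surely more sophisticated): hash the entity
     * key and use the hash to pick which physical database instance owns it. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SHARDS 8

    /* 64-bit FNV-1a hash: cheap, stable, and good enough for routing keys. */
    static uint64_t fnv1a(const char *key)
    {
        uint64_t h = 14695981039346656037ULL;
        for (; *key; key++) {
            h ^= (unsigned char)*key;
            h *= 1099511628211ULL;
        }
        return h;
    }

    static int shard_for(const char *key)
    {
        return (int)(fnv1a(key) % NUM_SHARDS);
    }

    int main(void)
    {
        const char *users[] = { "alice", "bob", "carol" };
        for (int i = 0; i < 3; i++)
            printf("%s -> shard %d\n", users[i], shard_for(users[i]));
        return 0;
    }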




It's an excellent list. At my day job, we spend a lot of time thinking about how to build highly-available, highly-reliable systems which service many thousands of users concurrently, and you'd recognize most of the above principles in the internals of our server, even though the details differ.

To avoid your data being your bottleneck, you have to build infrastructure which breaks that bottleneck:

  • replicate the data

  • carefully consider how you structure and access that data

  • shift work away from the most-constrained resources


It sounds very simple, but it's oh-so-hard to do it properly. That's what still makes systems like eBay, Netflix, Amazon, Google, Facebook, Twitter, etc. the exception rather than the rule.

It should be no surprise that so many different engineers arrive at broadly similar solutions; using proven basic principles and tried-and-true techniques is the essence of engineering. Although my company is much smaller than the Internet giants, we are concerned with the same problems and we, too, are sweating the details to ensure that we've built an architecture that scales. It's exciting work, and it's fun to see it bearing fruit as our customers roll out massive internal systems successfully.

I enjoyed Anand's short paper, and I've enjoyed reading his blog; I hope he continues to publish more work of this caliber.

Now, back to solving the 7-by-7 KenKen while the bacon cooks... :)

Saturday, November 13, 2010

Wang Hao apologizes for winning and moving into a tie for the lead

At the fascinating and action-packed Tal Memorial chess tournament (the official website is in Russian, natch), rising Chinese chess star Wang Hao won his game against Boris Gelfand when Gelfand unexpectedly resigned in a position that was not clearly lost.


In fact Wang Hao felt a bit strange about it. “I was very lucky,” he repeated a few times in the press room, and even started to apologize for playing on in the ending so long. Unnecessary apologies of course, if only because a draw offer isn’t allowed at this tournament anyway.


Read all about it in a great Round 7 report at ChessVibes.com, or check the game itself out at Alexandra Kosteniuk's ChessBlog.com.

Apple contributes their Java technology to OpenJDK

According to this press release, Apple will be contributing their Java technology to OpenJDK.

The Register's take is that

In effect, Oracle has recognised it needs Apple - and by extension, its Mac Apps Store - to be a friend... hence today's happy-clappy OpenJDK love-in.


MacWorld's Dan Moren observes that

Apple seems to be hoping that it's progressing to a point where Flash and Java aren't critical technologies for most Mac users--and that users who do need those technologies will be more than capable of downloading and installing them themselves


Henrik Stahl has a short blog post, encouraging interested parties to join the OpenJDK community, or to apply for a job at Oracle, and noting:

This announcement is the result of a discussion between Oracle and Apple that has been going on for some time. I understand that the uncertainty since Apple's widely circulated "deprecation" of Java has been frustrating, but due to the nature of these things we have neither wanted to or been able to communicate before. That is as it is, I'm afraid.


It's not clear that this sort of behavior from the Oracle-IBM-Apple triumvirate is going to have people chanting "who put the 'open' in 'OpenJDK'?"...

Still, it's better news for the future of Java-on-the-Mac than might have been, so fans of Java are pleased, I'm sure.

Thursday, November 11, 2010

As the Pac-10 becomes the Pac-12, what happens to the B-Ball schedules?

Currently, the Pac-10 basketball schedule contains 18 conference games: each team plays each other team twice, home-and-away.

The Pac-10 is adding two new teams: Colorado and Utah, to become the Pac-12. So how does this affect the schedule? If each team were to play each other team home-and-away each year, that would be a 22 game conference schedule!

Well, because I knew you were all fascinated about this little detail, I went and found out the answer:

The Pac-10 Conference announced today the specific details surrounding the 2011-12 Pac-12 men’s basketball schedule, as well as the 10-year rotation model for scheduling.

The conference schedule will continue to be comprised of 18 games for each institution and will maintain the travel partner in a non-divisional format. Each year, the schedule will include games against an institution’s traditional rival both home and away, which means that Cal and Stanford will continue their annual home-and-home series. In addition, each school will play six other opponents both home and away (for two consecutive years), and four opponents on a single-game basis -- two at home and two away. Those single-play opponents will rotate every two years.


(The arithmetic works out: 2 rivalry games + 6 x 2 home-and-away games + 4 single games = 18.) For the full details, you can already find next year's conference schedule here!

Meanwhile, to balance all that press-release content with some content of my own, here's my brief trip report from last night's exhibition game between California and Sonoma State:


106 points! Woo-hoo!

The good news:
- Harper Kamp is back! He seems to have lost about 25 pounds, but he still seems big enough to play inside, and he looked energetic. And he still always seems to be in the right place at the right time. I'm quite happy that he was able to recover from that injury, and he looked just overjoyed to be back out playing again.

- Jorge will be a great point guard. He is confident and doesn't get rattled, and he passes well. And he can get to the hoop when he needs to.

- Allen Crabbe has potential to be very exciting. And Gary Franklin and Richard Solomon look good, too. Crabbe and Solomon were high-school teammates.

The scary news is that the team is incredibly young and raw. They looked like deer trapped in the headlights far too often. They are going to get absolutely SHELLACKED a few times before they get some experience. With only 3 experienced players, and with Sanders-Frison still suffering from that foul trouble issue, it's going to be up to Gutierrez and Kamp to hold the team together for the first few months.

If Gutierrez can stay healthy enough to play 35 minutes a game, and if Montgomery can keep the team's morale up through a few early season whompings (Kansas 114, Cal 74 ??), it will be fun to watch coach put the new team together.

Tuesday, November 9, 2010

Lots of Oracle action at the Federal Building in downtown Oakland

All the legal action in the software industry this week is concentrated at the Federal Building in downtown Oakland, where the Oracle vs SAP lawsuit over TomorrowNow is taking place. Yesterday, Larry Ellison was "the star witness at the start of the second week of trial in Oracle's copyright infringement suit against the German software giant."

It's kind of hard to get a handle on this case. SAP attorneys say the award should be somewhere around $40 million, while Oracle attorneys contend the amount should be $2 billion, or perhaps even $4 billion. The judge, meanwhile, has already started reducing the amount that Oracle can claim.

I'm also a bit confused about which software, precisely, SAP was illegally accessing. I don't think it was the core Oracle DBMS software; maybe it was the Oracle applications suite, which contains many different packages, including software that Oracle originally wrote themselves as well as products they bought as part of buying Siebel, PeopleSoft, Hyperion, etc. This seems to be the case, according to this description of Safra Catz's testimony about how SAP was using the TomorrowNow program to try to lure customers away from Oracle:

Catz testified that she believed that the efforts to assure customers of Oracle’s level of support would keep them from fleeing Oracle when it came time to renew licenses. As for the arrival of SAP and TomorrowNow, she said, “I’d hoped and believed it would not be material.” Of the estimated 14,000 customers Oracle obtained from its acquisitions of PeopleSoft and Siebel, only 358 went to TomorrowNow, SAP’s lawyers argued.


And then there's the side theater over the subpoena of Leo Apotheker, the former SAP boss who is now the top man at HP. Is he in California? Is he in Germany? Is he in Japan? Were private eyes really hired? Has he been found?

Meanwhile, Chris O'Brien, at the San Jose Mercury News, says that this whole trial isn't even really about SAP, which is a fading figure in Oracle's rear-view mirror, but rather about the companies Oracle now has in its headlights -- HP, Google, Apple, IBM, etc.:

HP should have seen this coming from the day Redwood City-based Oracle bought Sun Microsystems and put itself in direct competition with the Palo Alto tech giant. And it should have expected nothing less from Ellison, Silicon Valley's most cunning corporate fighter, one who draws his energy and focus by creating a clearly defined enemy.


It's all riveting for those of us who watch the industry. But it isn't just entertainment: these are real adversaries, prosecuting real lawsuits for real money, and the implications are likely to fundamentally re-shape the enterprise software industry.

Monday, November 8, 2010

NYT article on Microsoft's anti-piracy team

This weekend's New York Times brought a long, detailed, and fascinating article entitled: Chasing Pirates: Inside Microsoft's War Room.

The article begins by describing a raid on a software piracy operation in Mexico, and what was discovered:

The police ... found rooms crammed with about 50 machines used to copy CDs and make counterfeit versions of software ...

The raid added to a body of evidence confirming La Familia's expansion into counterfeit software as a low-risk, high-profit complement to drugs, bribery and kidnapping.


The article describes Microsoft's extensive world-wide anti-piracy efforts:

Microsoft has demonstrated a rare ability to elicit the cooperation of law enforcement officials to go after software counterfeiters and to secure convictions -- not only in India and Mexico, but also in China, Brazil, Colombia, Belize and Russia. Countries like Malaysia, Chile and Peru have set up intellectual-property protection squads that rely on Microsoft's training and expertise to deal with software cases.


At times the article reads like a spy thriller, talking about "undercover operatives" training in "hand-to-hand combat", but mostly it stays in the back office, describing the underlying intelligence-gathering operations and the anti-piracy coding and manufacturing techniques:

Through an artificial intelligence system, Microsoft scans the Web for suspicious, popular links and then sends takedown requests to Web service providers, providing evidence of questionable activity.


"We're removing 800,000 links a month," says the Microsoft anti-piracy team. That's a lot of links! Unfortunately, the article doesn't really describe how this process works -- surely it's not feasible to examine 800,000 links a month by hand, but if not, then how do you know that the links are indeed illegal and deserve such immediate action?

Later in the article, the author is perhaps being fanciful and florid, or else is describing a lot of technology that I wasn't aware yet existed:

Mr Finn talks at length about Microsoft's need to refine the industry's equivalent of fingerprinting, DNA testing and ballistics through CD and download forensics that can prove a software fake came from a particular factory or person.

Is this just metaphor? Or do "CD and download forensics" exist, providing such a capability? I could imagine that various network logging occurs along the major network paths, such as at ISP access points, at sub-net border crossings, etc. And I could imagine that various digital signature techniques, often referred to by names such as "Digital Watermarking", could identify each binary bundle uniquely. Still, it's a long way from technology like this to proof that "a software fake came from a particular factory or person."

Later in the article, a few more details are provided:

A prized object in the factory is the stamper, the master copy of a software product that takes great precision to produce. From a single stamper, Arvato can make tens of thousands of copies on large, rapid-fire presses.

Crucially for Mr. Keating, each press leaves distinct identifying markers on the disks. He spends much of his time running CDs through a glowing, briefcase-size machine -- and needs about six minutes to scan a disk and find patterns. Then he compares those markings against a database he has built of CD pressing machines worldwide.

This sounds much less like a software technique, such as Digital Watermarking, and much more like a hardware technique involving the analysis of physical properties of the CD or DVD. Indeed, the article's earlier description of "ballistics" and "forensics" seems like a valid metaphor, similar to how we hear that firearms experts can match a bullet fragment to the gun from which it was fired.

It sounds like an arms race between the software publishers and the pirates:

To make life harder for the counterfeiters, Microsoft plants messages in the security thread that goes into authenticity stickers, plays tricks with lettering on its boxes and embosses a holographic film into a layer of lacquer on the CDs.


As I said, the article is long, detailed, and contains many interesting ideas to follow up on. Besides the discussions of technology and its uses, the article talks about public policy issues, varying intellectual property attitudes, training and outreach, public relations impacts, and more.

I found the article worth the time; if you know of more resources in this area to learn from, drop me a note and let me know!

Sunday, November 7, 2010

Visitors from Lompoc!

 


My brother and his family came up from Lompoc for a rainy San Francisco weekend. Luckily, the rain held off until Sunday and we enjoyed a beautiful Saturday playing with the kids.

That's me on the bottom right holding my nephew Everett, my dad behind me. My brother is bottom left, with my niece Amelia on his shoulders. My son is bottom center, with his niece on his shoulders.

It was a great weekend, can't wait for another!

Thursday, November 4, 2010

FTC appoints Felten

I have never met Professor Felten, but I am a regular and devoted reader of his writings, and I read many of his students' writings as well.

I think the FTC has made an excellent choice to help them understand technology issues.

I only hope that the professor continues to be able to be as open and free about his findings as he has been in the past.

Wednesday, November 3, 2010

Wired looks at the technology behind Kinect

On the eve of tomorrow's release of the Xbox Kinect, Wired has a nice illustrated writeup of the basic ideas behind the technology, with plenty of links to further reading.

...

(Update:) Then, the next morning, Wired follows up with a review titled "Flawed Kinect Offers Tantalizing Glimpse of Gaming's Future":

For hard-core gamers, Kinect is a box full of potential, offering tantalizing glimpses at how full-body control could be used for game designs that simply wouldn’t work any other way. But at launch, the available games get tripped up by Kinect’s limitations more than they are liberated by the control system’s abilities.

Microsoft, HTML5, and Silverlight

Last week was the 2010 edition of Microsoft's Professional Developers Conference, an always-important computer industry event which is held somewhat irregularly -- I think Microsoft's official position is that they hold it when they have something important to tell their developer partners, and when there isn't anything to say, they don't hold the event. And sometimes the conference is a full week, other times it is 3-4 days, and other times it is just 2 days long. Another interesting aspect of this year's PDC was that Microsoft held it at home, in the company's own internal conference center, rather than renting out a large commercial center in Los Angeles like they had been doing for a number of years, and encouraged developers to tune in "virtually" (by viewing real-time video coverage) if they couldn't or didn't wish to attend in person.

Anyway, they held the 2010 event last week, and while I wasn't there, I've been reading about it on the net.

The biggest murmur of excitement appeared to be related to the ongoing elevation of HTML5 and IE9 as the company's long-term web application platform of choice. I think Mary Jo Foley's column about the Microsoft strategy was frequently mis-read; people seemed to think it said things that it didn't actually say. Peter-Paul Koch has a nice writeup of his take on the matter, with pointers to several Microsoft follow-up articles from Bob Muglia and Steve Ballmer. As Koch says:

What happened is not an abandonment of Silverlight; far from it. Microsoft has big plans with it — and who knows, they might even work. What happened is that Microsoft placed HTML5 on an equal footing with Silverlight.

Oh, and this is not about desktop. Desktop is almost an afterthought. It’s about mobile.


There is a lot of activity in the mobile web application space nowadays, with Apple, Google, and Microsoft all making major pushes this year, and Oracle's Java team at least trying to stay involved as well. It's a lot for the poor developer to keep track of, especially for a guy like me, who is basically a server guy at heart but who tries to stay up-to-date on other important technologies as much as he can.

If you haven't yet had a chance to learn about HTML5, well, shame on you! It's long past time you learned about it; it's the most important thing going on in the computer world right now. Here's a great place to get started: http://slides.html5rocks.com/ Or move straight on to Mark Pilgrim's thorough and clear documentation at http://diveintohtml5.org/

OK, that's enough of that; back to working on that server resource management bug that's been eating at me for a week...

Has anyone seen the black Bishop or the white Knight?

In Puke Ariki, New Zealand, a status report on the last two missing pieces of the town's outdoor chess set.

Tuesday, November 2, 2010

Google expands its Bug Bounty program

The always interesting Brian Krebs is reporting today that Google is expanding their Bug Bounty program.

As Krebs observes, Google isn't the only organization with a Bug Bounty program; here's Mozilla's Bug Bounty page. According to this article from MaximumPC, Microsoft still isn't on board with the idea, though.

I think Bug Bounty programs are interesting: they're a good way to show people you care about quality, and Google obviously feels they are an effective way both to get useful feedback and to reward people for helping improve its software.

Of course, this tradition has a long history: I'm reminded of the famous Knuth reward check, which I've written about before.

What other innovative ways are there by which companies are working with their customers to improve software quality? Drop me a line and let me know!

Thursday, October 28, 2010

Steve Perry is loving this World Series!

On-and-off this summer, Steve Perry, who is/was the lead singer of Journey, would get interviewed by the guys on KNBR (the local sports radio).

They talked about how he enjoyed baseball, and how happy he was that the Giants were doing well, and how much fun he had just going to the games and relaxing.

The radio hosts kept trying to convince him to sing the national anthem at one of the games, but he didn't want any part of that.

So, well, to cut a long story short, read this, this, or this, or, better, just watch this.

A short trip to Victoria

My day job has offices around the world, including an office in Victoria, British Columbia.

Recently I was offered a chance to take a short trip to Victoria, to meet our Canadian employees and do some technical exchange work. Although most of the time was spent in airplanes, hotels, and conference rooms, I still had a chance to look around Victoria and get a quick taste of what seems to be a fascinating part of the world.

Our office in Canada is small, but quite successful. It is about evenly split between Technical Support and Product Development. I tremendously enjoyed meeting both teams, and we found we had plenty to discuss in the short time available.

When not in the office, I had a few chances to walk around downtown Victoria. Our hotel was the Magnolia; I recommend it very highly. The location is superb, the facilities were excellent, and the service was top-notch.

If you get to travel to Victoria, look for a direct flight. Although there are many connections via Seattle, Vancouver, etc., it is vastly superior to fly directly to Victoria.

Everyone in Victoria was extraordinarily kind and welcoming. My favorite story: on the stretch of road in front of the Fairmont Empress, there is a lane available adjacent to the curb. This lane is marked for parking, with signs:

Tourist Parking Only, 6:00 AM to 9:00 PM


The signs are observed!

I took a few pictures on my walks. Unfortunately, the picture of the statue of Emily Carr with her monkey Woo and her poodle Billy came out too blurry to post, but here's a nice website with the details.

Here's a picture I took of the memorial documenting the 1825 treaty between Russia and Great Britain that established the boundary between Russian America and the British territory that is now British Columbia and the Yukon:
From BryanVictoria2010


And here's a picture of a nice statue of Captain Cook, who travelled to the island in 1778 I believe.
From BryanVictoria2010

Near the end of the plaque, it reads:

Also on the voyage was Midshipman George Vancouver.


Here are a couple of nice views of our office:
From BryanVictoria2010

and
From BryanVictoria2010


And here's the view from the hotel (I told you it was a nice location):
From BryanVictoria2010

That's the Fairmont Empress in the foreground, and the Parliament building in the background. In between them is the Royal BC Museum, though you can't really see it in this picture.

Yes, it was rainy, and cold, and grey, but I enjoyed it very much.

Hopefully I'll get a chance to return, perhaps in the spring, when they say the flowers are in bloom and it's one of the prettiest spots on the planet.

A micro-review of Stieg Larsson's Salander trilogy

Here's a feeble attempt to condense all three books into a single short review (you want longer reviews, there are plenty of places to find them!):

  • The Girl With The Dragon Tattoo is an almost-perfect mix: one part action thriller, one part character study, one part modern Swedish history. Delicious!

  • The Girl Who Played With Fire is almost pure action thriller. It's a roller-coaster adventure ride, and will leave you completely breathless. If you like Thomas Harris, Jeffrey Deaver, authors like that, you'll be enthralled.

  • The Girl Who Kicked The Hornet's Nest swings the needle back the other way: it's about three-quarters modern Swedish history (politics, journalism, public policy, etc.) and about one-quarter action thriller. Moreover, the action thriller part of the book is a lot less action, and a lot more things like high-tech computer espionage, psychological maneuvering, etc. Thankfully, the dry parts occupy the first half of Hornet's Nest, and the second half moves along quite nicely; I suspect most readers, having experienced books one and two, will give Larsson the courtesy of being patient while he addresses things that must have felt important to him (I know I did).



The nice thing about the books is that you'll probably know within the first 50 pages of Dragon Tattoo whether the books are your thing or not.