Monday, July 21, 2014

Transactional replicated databases using consensus algorithms

A trendy thing nowadays is to build a transactional replicated database without using two-phase commit, instead using a different type of consensus algorithm.

The gold standard in this area is Google's F1 database, which is built on top of their Spanner infrastructure, which in turn is built on their production-quality implementation of the Paxos consensus algorithm.

Now other, similar systems are starting to emerge, demonstrating that one way to get attention in the world is to hire some Google engineers who worked on or near Spanner, F1, or Paxos, and build something yourself.

  • Man Busts Out of Google, Rebuilds Top-Secret Query Machine
    Under development for the past two years, Impala is a means of instantly analyzing the massive amounts of data stored in Hadoop, and it’s based on a sweeping Google database known as F1. Google only revealed F1 this past May, with a presentation delivered at a conference in Arizona, and it has yet to release a full paper describing the technology. Two years ago, Cloudera hired away one of the main Google engineers behind the project, a database guru named Marcel Kornacker.
  • In Search of an Understandable Consensus Algorithm (Extended Version)
    Raft is a consensus algorithm for managing a replicated log. It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos, but its structure is different from Paxos; this makes Raft more understandable than Paxos and also provides a better foundation for building practical systems.
  • Introducing Ark: A Consensus Algorithm For TokuMX and MongoDB
    Ark is an implementation of a consensus algorithm (also known as elections) similar to Paxos and Raft that we are working on to handle replica set elections and failovers in TokuMX. It has many similarities to Raft, but also has some big differences.
  • Ark: A Real-World Consensus Implementation
    Ark was designed from first principles, improving on the election algorithm used by TokuMX, to fix deficiencies in MongoDB’s consensus algorithms that can cause data loss. It ultimately has many similarities with Raft, but diverges in a few ways, mainly to support other features like chained replication and unacknowledged writes.
  • Out in the Open: Ex-Googlers Building Cloud Software That’s Almost Impossible to Take Down
    But if anyone is up for the challenge of rebuilding Spanner—one of the most impressive systems in the history of computing—it’s the CockroachDB team. Many of them were engineers at Google, though none of them worked on Spanner.
  • Cockroach: A Scalable, Geo-Replicated, Transactional Datastore
    Cockroach is a distributed key/value datastore which supports ACID transactional semantics and versioned values as first-class features. The primary design goal is global consistency and survivability, hence the name. Cockroach aims to tolerate disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention. Cockroach nodes are symmetric; a design goal is one binary with minimal configuration and no required auxiliary services.
  • Cockroach
    Single mutations to ranges are mediated via an instance of a distributed consensus algorithm to ensure consistency. We've chosen to use the Raft consensus algorithm. All consensus state is stored in RocksDB.

Just so there's no confusion, let me be clear about one thing:

I am not building a consensus algorithm.

But I enjoy reading about consensus algorithms!

Friday, July 18, 2014

Walter Munk's wave experiment

The NPR website is carrying a nifty article about Walter Munk: The Most Astonishing Wave-Tracking Experiment Ever.

Yes, I'm asking a wave to tell me where it was born. Can you do that? Crazily enough, you can. Waves do have birthplaces. Once upon a time, one of the world's greatest oceanographers asked this very question.

Munk's experiment was not easy to carry out:

From a beach you can't see an old set of swells go by. They aren't that noticeable. Walter and his team had highly sensitive measuring devices that could spot swells that were very subtle, rising for a mile or two, then subsiding, with the peak being only a tenth of a millimeter high.

But what a fascinating result:

The swells they were tracking, when they reached Yakutat, Alaska, had indeed traveled halfway around the world. Working the data backward, Walter figured that the storm that had generated those swells had taken place two weeks earlier, in a remote patch of ocean near a bunch of snowy volcanic islands — Heard Island and the McDonald Islands, about 2,500 miles southwest of Perth, Australia.

Neat article, and neat to learn about Professor Munk, whom I hadn't known of previously.

I wonder if he'd enjoy a visit to see the University of Edinburgh's FloWave simulator.

Wednesday, July 16, 2014

What an innocuous headline...

7 safe securities that yield more than 4%

Eveans and his team analyze more than 7,000 securities worldwide but only buy names that offer payouts no less than double the yield of the overall stock market — as well as reasonable valuation and competitive advantages that will keep earnings growing over time.

Sounds like a pleasant article to read, no?

Well, it turns out that the companies that they are recommending you invest in are:

  • Cigarette companies (Philip Morris)
  • Oil companies (Vanguard Resources, Williams Partners)
  • Leveraged buyout specialists (KKR)

I guess the good news is that they didn't include any arms dealers or pesticide manufacturers.

Monday, July 14, 2014

git clone vs fork

Two words that you'll often hear people say when discussing git are "fork" and "clone".

They are similar; they are related; they are not interchangeable.

The clone operation is built into git: git-clone - Clone a repository into a new directory.

Forking, on the other hand, is an operation used by a particular git workflow, popularized by GitHub, called the Fork and Pull Workflow:

The fork & pull model lets anyone fork an existing repository and push changes to their personal fork without requiring access be granted to the source repository. The changes must then be pulled into the source repository by the project maintainer. This model reduces the amount of friction for new contributors and is popular with open source projects because it allows people to work independently without upfront coordination.

The difference between forking and cloning is really a difference in intent and purpose:

  • The forked repository is mostly static. It exists in order to allow you to publish work for code review purposes. You don't do active development in your forked repository (in fact, you can't, because it doesn't exist on your computer; it exists on GitHub's servers in the cloud).
  • The cloned repository is your active repo. It is where you do all your work. But other people generally don't have access to your personal cloned repo, because it's on your laptop. That's why you have the forked repo: so you can push changes to it for others to see and review.

This picture from Stack Overflow helps a lot: What is the difference between origin and upstream in GitHub.

In this workflow, you both fork and clone: first you fork the repo that you are interested in, so that you have a separate repo that is clearly associated with your GitHub account.

Then, you clone that repo, and do your work. When and if you wish, you may push to your forked repo.

One thing that's sort of interesting is that you never directly update your forked repo from the original ("upstream") repo after the initial "fork" operation. Subsequent to that, updates to your forked repo are indirect: you pull from upstream into your cloned repo to bring it up to date, and then, if you wish, you push those changes into your forked repo.
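
To make that concrete, here is a minimal sketch of the commands involved in this workflow. The repository URLs are placeholders I've made up for illustration: assume the original project lives at github.com/example/project and your fork lives at github.com/you/project.

    # One-time setup: clone your fork, then register the original repository
    # as a second remote, conventionally named "upstream".
    git clone https://github.com/you/project.git
    cd project
    git remote add upstream https://github.com/example/project.git

    # Do your work in the clone; push to your fork ("origin") whenever you
    # want others to be able to see and review it.
    git push origin master

    # To bring your clone up to date, fetch the latest changes from upstream...
    git fetch upstream
    git merge upstream/master

    # ...and then, if you wish, push the updated state back out to your fork.
    git push origin master

Note how the fork is only ever updated by pushes from your clone; it never talks to the original repository directly once the initial "fork" operation is done.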

Some additional references:

  • Fork A Repo
    When a repository is cloned, it has a default remote called origin that points to your fork on GitHub, not the original repository it was forked from. To keep track of the original repository, you need to add another remote named upstream.
  • Stash 2.4: Forking in the Enterprise
    In Stash, clicking the ‘Fork’ button on a repository creates a copy that is tracked by Stash and modified independently of the original repository, insulating the original repository from unwanted changes or errors.
  • Git Branching and Forking in the Enterprise: Why Fork?
    In recent DVCS terminology a fork is a remote, server-side copy of a repository, distinct from the original. A clone is not a fork; a clone is a local copy of some remote repository.
  • Clone vs. Fork
    if you want to make changes to any of its cookbooks, you will need to fork the repository, which creates an editable copy of the entire repository (including all of its branches and commits) in your own source control management (e.g. GitHub) account. Later, if you want to contribute back to the original project, you can make a pull request to the owner of the original cookbook repository so that your submitted changes can be merged into the main branch.

Arrivederci, Concordia

The ongoing saga of the Costa Concordia has a major new chapter.

Costa Concordia wreck raised from under-sea platform.

The damaged ocean liner is now afloat, and soon will be towed to its final salvage location.

Here's more, from the BBC:

"The ship is upright and is not listing. This is extremely positive," the engineer in charge of the salvage, Franco Porcellacchia, told a news conference.

He said the sixth deck of the ship had begun to emerge on Monday, and once that was fully above water the other decks would become visible in quick succession.

"When deck three re-emerges we are in the final stage and ready for departure," he added.

Tugboats attached to the ship by cables have moved it a short distance away from the shore.

The time-lapse footage on the BBC web site is fun to watch.

The wreck was a terrible tragedy, but it is inspiring to see the salvage efforts proceeding so well.

Friday, July 11, 2014

Stuff I'm reading, World Cup finals weekend edition

I know it's summertime, because we get to attend the first performance of the summertime outdoor community theater at Oakland's Woodminster Theater this weekend, yay!

And when I'm not at the theater, at least I won't be bored:

  • A Proper Server Naming Scheme
    Since we’re starting fresh with this data center, we wanted to come up with our own naming scheme to address the common problems we’ve seen elsewhere. We gleaned ideas from numerous sources such as data published by large-scale companies, various RFCs on the topic, and blog/forum posts a’plenty. Taking all of that into account, we’ve developed some best practices that should work for most small-to-medium businesses naming their own hardware.
  • Finding All the Red M&Ms: A Story of Indexes and Full‑Table Scans
    A common question that comes up when people start tuning queries is “why doesn’t this query use the index I expect?”. There are a few myths surrounding when database optimizers will use an index. A common one I’ve heard is that an index will be used when accessing 5% or less of the rows in a table. This isn’t the case however - the basic decision on whether or not to use an index comes down to its cost.
  • Fallacies of the Cost Based Optimizer
    This paper identifies three basic assumptions made by the cost based optimizer in the estimation of cardinalities of the results of relational operations on the base and intermediate row sources and ultimately the query result set.
  • Cache coherency primer
    This is a whirlwind primer on CPU caches. I’m assuming you know the basic concept, but you might not be familiar with some of the details.
  • "I actually was hunting Ewoks." The Original Lucasfilm Games Team Talk About Life at Skywalker Ranch.
    Booger Hunt. George Lucas avoiding tax penalties. Monkey Island and dependency charts. The Lost Patrol. A file drawer full of crazy ideas. This is the story about life at Lucasfilm Games - as told by the people who lived it.
  • Procedural Content Generation in Games: A Textbook and an overview of current research
    While the field of PCG is mostly based on AI methods, we want to set it apart from the more “mainstream” use of game-based tasks to test AI algorithms, where AI is most often used to learn to play a game.

Thursday, July 10, 2014

OK, just a little more and then I'll let it go

I have to admit to at least a bit of sadness that Arjen Robben won't be in the final. I have this weird love/hate relationship with Robben: he is just phenomenally skillful, and he runs like a maniac for the full 90 minutes (I said to a friend: "he's like Michael Bradley, but with the perfect first touch").

But then he channels his inner Leonardo DiCaprio, and I just throw up my arms.

So give me Leo.

Oh, YES, give me Leo!

It would be a much easier call if Angel DiMaria, who gets nowhere near enough credit, could be part of the outcome, but even without him I think the Argentines have a real chance.

Just let me hear no more silky-voiced British commentators telling me about that "well-oiled German machine."

Here's my prediction, and my hope: Argentina 1, Germany 0, in a well-played, well-officiated, thrilling replay of the 1990 matchup, but with a different outcome this time.

Meanwhile, I still don't understand what happened last Tuesday (and Nate Silver doesn't, either!).

I'm not alone:

  • Why Brazil Lost: Rather than make a real plan, they abandoned themselves to romantic notions of passion and desire.
    Barring the few thousand overjoyed Germans there was an atmosphere of stunned, disbelieving horror in that stadium that has possibly never before been experienced in sport. It was as though Germany had gathered 60,000 4-year-olds together and briskly announced that there is no such thing as Santa Claus.
  • The Most Shocking Result in World Cup History
    As I mentioned, however, the Elo system discounts lopsided victories. Since it was the lopsidedness of the scoreline that made Tuesday’s match such an outlier, that somewhat defeats our purpose of placing the result in historical context.
  • Germany 7-1 Brazil: Germany record a historic thrashing, winning the game in 30 minutes
    This should be regarded as one of the most historic defeats football has seen: the hosts, pre-tournament favourites and the most successful side in the history of the World Cup, humbled 1-7 in their own country, in the semi-final. Everyone is wise after the event, and many will suggest Germany were always likely to win, but in reality the bookmakers had Germany and Brazil at exactly the same odds to triumph. This was considered 50:50, and expected to be a tight, tense game...
  • Brazil v Germany: Biggest humiliation in history of Brazilian football as 7-1 thrashing in World Cup signals night the music died
    Further down this week’s road we will turn our thoughts to the brilliance of this Germany side, and how they have shown the rest of the world the right path to youth development. But first there is much more angst to seep out of Brazil. Social equilibrium always appeared dependent on the team’s ability to go on winning games. Scolari’s promise to bestow a sixth world title on his people was meant to calm the nation’s nerves. It reads now like a rhetorical leap off a cliff.
  • Brazil's Worst Nightmare Comes True as Germany Eviscerate World Cup Dreams
    Over the next 90 minutes, in perhaps the most surprising, jaw-dropping result in World Cup history, Brazil were demolished 7-1 by a rampant Germany side, as a combination of woeful organisation, shoddy defending, individual mistakes and incisive attacking (the Europeans deserve some credit, after all) sent the tournament hosts out of the competition with their tails firmly between their legs.

    This was scarcely believable stuff, even as it happened in front of the world’s eyes. To put it in some type of context, this was Brazil’s first competitive defeat on home soil since 1975—a 3-1 loss to Peru that also happened in Belo Horizonte’s Estadio Mineirao. It was the first time they had conceded four goals since a 4-2 loss to Hungary in the 1954 World Cup.

  • World Cup 2014: Records broken in Germany's 7-1 win over Brazil
    The first time Brazil had ever conceded seven goals in a World Cup match. It has only conceded more once in any fixture, an 8-4 loss to Yugoslavia in a friendly in 1934.

Various publications have attempted to frame this in historical terms by comparing events of similar magnitude.

I have one to offer.

It happened in 1940, which was a long time ago (nearly 74 years ago!). There are probably very few people alive who remember this game, and certainly it was 25 years before my time (all I knew about Sammy Baugh came from a dog-eared, flimsy paperback book that I used to read at night before I went to bed): 1940 NFL Championship Game

The game was played at Griffith Stadium in Washington, D.C. on December 8, 1940. The Chicago Bears defeated the Washington Redskins, 73–0, the most one-sided victory in NFL history. The game was broadcast on radio by Mutual Broadcasting System, the first NFL title game broadcast nationwide.

...

the Chicago Bears played perfect football for a greater percentage of the official hour than any team before or since. In the championship game, as an underdog to the team which had just beaten them, the Bears made an eleven-touchdown pile and used it as a pedestal to raise the NFL to view in all corners of the country.

It's not a great comparison, because it was just the United States.

The 2014 Brazil-Germany semi-final, my friends, was the most shocking sporting event that has been played in

the entire world

I've really enjoyed this World Cup, and I hope you did, too.

Next week, I promise, I'll get back to All Those Other Things That Matter To Me.