My goodness I've been busy this week! What have I been doing? Well, at least part of it has involved reading about the latest Big Data happenings...
- Interesting article from the eBay team about how they are using ZooKeeper:
Grid Computing with Fault-Tolerant Actors and ZooKeeper
Before diving in further, this is a good place to give a shout-out to the folks who designed, developed and maintain ZooKeeper. I think it’s one of the most useful open-source contributions for distributed computing of the past decade. Most of the fault tolerance in Nebula can be reduced to simple patterns that make use of the ZooKeeper model (e.g., ephemeral, session-bound znodes), its serialized atomic updates, and the associated watch mechanism.
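The ephemeral-znode pattern mentioned there is worth seeing concretely. Below is a minimal sketch in Python using the kazoo client library; the /workers path, the payload, and the ZooKeeper address are illustrative assumptions on my part, not details from the article. Each worker registers a session-bound node, and a watch notifies peers the moment one disappears:

```python
from kazoo.client import KazooClient

# Illustrative address; assumes a ZooKeeper server running locally.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Ephemeral + sequence: the znode is tied to this client's session,
# so ZooKeeper deletes it automatically if the process dies or the
# session expires -- no explicit "deregister" step is needed.
zk.create("/workers/worker-", b"host:port",
          ephemeral=True, sequence=True, makepath=True)

# A watch on the parent fires on every membership change, which is
# how peers learn about failures without polling.
@zk.ChildrenWatch("/workers")
def on_membership_change(children):
    print("live workers:", children)
```

That's essentially the whole failure-detection story: liveness falls out of session expiry, and notification falls out of the watch mechanism.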
- Epic essay on consistency algorithms in NoSQL databases:
Distributed Algorithms in NoSQL Databases
In the rest of this article we study a number of distributed activities, like replication or failure detection, that happen in a database. These activities, listed below, are grouped into three major sections:
- Data Consistency. Historically, NoSQL paid a lot of attention to tradeoffs between consistency, fault-tolerance and performance to serve geographically distributed systems, low-latency or highly available applications. Fundamentally, these tradeoffs spin around data consistency, so this section is devoted to data replication and data repair.
- Data Placement. A database should accommodate itself to different data distributions, cluster topologies and hardware configurations. In this section we discuss how to distribute or rebalance data in such a way that failures are handled rapidly, persistence guarantees are maintained, queries are efficient, and system resources like RAM or disk space are used evenly throughout the cluster. (See the consistent-hashing sketch after this list.)
- System Coordination. Coordination techniques like leader election are used in many databases to implement fault-tolerance and strong data consistency. However, even decentralized databases typically track their global state and detect failures and topology changes. This section describes several important techniques that are used to keep the system in a coherent state.
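Much of the data-placement discussion builds on consistent hashing, which keeps rebalancing cheap when nodes come and go. Here is a minimal, self-contained sketch; the class and node names are my own illustration, not code from the essay:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys to nodes so that adding or removing a node only
    relocates roughly 1/N of the keys -- the rebalancing property
    the Data Placement section is concerned with."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes   # virtual nodes smooth out the load
        self._ring = []        # sorted list of (hash, node) pairs
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def node_for(self, key):
        # First ring position clockwise from the key's hash.
        i = bisect.bisect(self._ring, (self._hash(key), ""))
        return self._ring[i % len(self._ring)][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))
```

Adding a fourth node moves only the keys that hash between its ring positions and their predecessors; everything else stays put.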
- Discussion about the GitHub outage, and specifically about the issues surrounding declaring a node dead and deciding when to fail over:
- Big Data? Cloud? Distributed Systems? Next month's OSDI conference covers it all! OSDI '12 Program
- A tiny bit dated, but this reading list from a graduate course in Cloud Computing taught by Professor Stoica of Berkeley last fall has a wonderful set of core references and reading material: Cloud computing: Systems, Networking, and Frameworks
In this course, we describe the critical technology trends that are enabling cloud computing, the architecture and the design of existing deployments, the services and the applications they offer, and the challenges that need to be addressed to help cloud computing reach its full potential.
- Peter Bailis published a nice essay about the detailed tradeoffs between latency and efficiency in replicated data stores (essentially, when is it worth wasting work for faster response time): Doing redundant work to speed up distributed queries
In distributed systems, there’s a subtle and somewhat underappreciated strategy for reducing tail latencies: doing redundant work. If you send the same request to multiple servers, (all else equal) you’re going to get an answer back faster than waiting for a single server. Waiting for, say, one of three servers to reply is often faster than waiting for one of one to reply.
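The idea is easy to express in code. Here's a hedged sketch in Python (the query function is a stand-in for a real RPC, and the names are illustrative): fire the same request at every replica, return the first answer, and abandon the stragglers.

```python
import concurrent.futures as cf
import random
import time

def query(server, request):
    # Stand-in for a real RPC: simulate variable per-server latency.
    time.sleep(random.uniform(0.01, 0.2))
    return f"{server} answered {request!r}"

def redundant_query(servers, request):
    """Send the same request to all replicas; return the first reply."""
    pool = cf.ThreadPoolExecutor(max_workers=len(servers))
    futures = [pool.submit(query, s, request) for s in servers]
    done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    pool.shutdown(wait=False)  # don't block on the stragglers
    return next(iter(done)).result()

print(redundant_query(["replica-1", "replica-2", "replica-3"], "GET k"))
```

The wasted work the stragglers keep doing after the winner returns is exactly the efficiency cost Bailis weighs against the latency win.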
- Here's a nice Kirk McKusick article about the current state of the art in filesystems as they evolve to adapt to changing disk sector sizes (the underlying hardware is moving from 512-byte sectors to 4096-byte sectors):
Disks from the Perspective of a File System
File systems need to be aware of the change to the underlying media and ensure that they adapt by always writing in multiples of the larger sector size. Historically, file systems were organized to store files smaller than 512 bytes in a single sector. With the change in disk technology, most file systems have avoided the slowdown of 512-byte writes by making 4,096 bytes the smallest allocation size.
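The "write in multiples of the sector size" rule comes down to simple rounding arithmetic. A tiny illustration (the helper name is mine, not from the article):

```python
SECTOR = 4096  # the newer "Advanced Format" sector size

def align_up(nbytes, sector=SECTOR):
    """Round a write size up to the next sector boundary, so the drive
    never has to do a read-modify-write of a partial sector."""
    return (nbytes + sector - 1) // sector * sector

assert align_up(1) == 4096      # a tiny file still occupies a full sector
assert align_up(4096) == 4096   # exact multiples are unchanged
assert align_up(4097) == 8192   # one byte over spills into the next sector
```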
- And this week's coffee-house debate over whether MongoDB is the greatest or the worst technology ever to be visited upon the earth: