Q4 really snuck up on me; the years seem to pass faster than they used to.
- Bitcoin and Cryptocurrency Technologies
After this course, you’ll know everything you need to be able to separate fact from fiction when reading claims about Bitcoin and other cryptocurrencies. You’ll have the conceptual foundations you need to engineer secure software that interacts with the Bitcoin network. And you’ll be able to integrate ideas from Bitcoin in your own projects.
- Classic Bug Reports
A bug report is sometimes entertaining, either because of the personalities involved or because of the bug itself. Here is a collection of links into public bug trackers.
- Code Words Issue Four
Issue Four of Code Words, our quarterly publication about programming, is now online!
- In-Memory Performance for Big Data
We enable buffer pool designs to match in-memory performance while supporting the "big data" workloads that continue to require secondary storage, thus providing the best of both worlds. We introduce here a novel buffer pool design that adapts pointer swizzling for references between system objects (as opposed to application objects), and uses it to practically eliminate buffer pool overheads for memory-resident data. Our implementation and experimental evaluation demonstrate that we achieve graceful performance degradation when the working set grows to exceed the buffer pool size, and graceful improvement when the working set shrinks towards and below the memory and buffer pool sizes.
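The swizzling idea is easy to picture in code. Here is a minimal, hypothetical sketch (my own, not the paper's implementation): a page reference is either an on-disk page ID or, once the page is resident, a pointer straight to the buffer frame, so the hot path skips the page-table lookup entirely.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// A buffer page in memory.
struct Frame {
    uint64_t page_id;
    std::vector<uint8_t> data;  // page contents once loaded
};

// A reference to a page: either an on-disk page ID (unswizzled, low bit set)
// or a direct pointer to the resident Frame (swizzled).
struct PageRef {
    uintptr_t raw;

    static PageRef from_id(uint64_t id) { return {(uintptr_t(id) << 1) | 1u}; }
    static PageRef from_ptr(Frame* f)   { return {reinterpret_cast<uintptr_t>(f)}; }
    bool     swizzled() const { return (raw & 1) == 0; }  // heap pointers are at least 2-byte aligned
    uint64_t page_id()  const { return raw >> 1; }
    Frame*   frame()    const { return reinterpret_cast<Frame*>(raw); }
};

struct BufferPool {
    std::unordered_map<uint64_t, Frame*> table;  // classic page table, consulted only on a miss

    Frame* load_from_disk(uint64_t id) {         // stand-in for real I/O
        Frame* f = new Frame{id, std::vector<uint8_t>(4096)};
        table.emplace(id, f);
        return f;
    }

    // Resolve a reference, swizzling it in place so later accesses are a plain pointer chase.
    Frame* pin(PageRef& ref) {
        if (ref.swizzled()) return ref.frame();  // hot path: no hashing, no page-table probe
        auto it = table.find(ref.page_id());
        Frame* f = (it != table.end()) ? it->second : load_from_disk(ref.page_id());
        ref = PageRef::from_ptr(f);              // swizzle: replace the ID with a direct pointer
        return f;
    }
};
```

Eviction has to undo the swizzle safely, which is where most of the paper's care goes; the sketch ignores that entirely.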
- Understanding Distributed Analytics Databases, Part 1: Query Strategies
New analytics databases are designed to run across a cluster of machines. Instead of one supercomputer, your analytics database can run on dozens of commodity machines at the same time. This lets you achieve greater performance at a lower cost.
However, distribution comes with a new performance bottleneck. When all the data is on the same machine, the rate at which you can read and process data is limited by the speed of your hard drive.
In a cluster, the network is the limiting factor. The nodes in your analytics cluster need to share information because no single node has all the data. And a hard drive is over 3x faster than gigabit Ethernet.
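For a rough sense of the numbers (mine, not the article's): gigabit Ethernet tops out around 125 MB/s, while a local disk subsystem streaming a few hundred MB/s easily outruns it, which is why a scan that was disk-bound on one machine becomes network-bound in a cluster.

```cpp
#include <cstdio>

int main() {
    // Back-of-the-envelope figures; the disk number is an illustrative assumption.
    const double gigabit_eth_MBps = 1000.0 / 8.0;  // 1 Gbit/s = 125 MB/s
    const double local_disk_MBps  = 400.0;         // e.g. a small RAID stripe or an SSD
    std::printf("network: %.0f MB/s, disk: %.0f MB/s, disk/network: %.1fx\n",
                gigabit_eth_MBps, local_disk_MBps, local_disk_MBps / gigabit_eth_MBps);
}
```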
- When Limping Hardware Is Worse Than Dead Hardware
So why should we care about designing systems that are robust against limping hardware? One part of the answer is defense in depth. Of course we should have monitoring, but we should also have systems that are robust when our monitoring fails, as it inevitably will. Another part of the answer is that by making systems more tolerant to limping hardware, we’ll also make them more tolerant to interference from other workloads in a multi-tenant environment.
- Limplock: Understanding the Impact of Limpware on Scale-Out Cloud Systems
In this paper, we highlight one often-overlooked cause of performance failures: limpware – “limping” hardware whose performance degrades significantly compared to its specification. The growing complexity of technology scaling, manufacturing, design logic, usage, and operating environment increases the occurrence of limpware. We believe this trend will continue, and the concept of performance-perfect hardware no longer holds.
- 25th ACM Symposium on Operating Systems Principles
The biennial ACM Symposium on Operating Systems Principles is the world's premier forum for researchers, developers, programmers, and teachers of computer systems technology. Academic and industrial participants present research and experience papers that cover the full range of theory and practice of computer systems software.
- Holistic Configuration Management at Facebook
Configuration changes help manage the rollouts of new product features, perform A/B testing experiments on mobile devices to identify the best echo-canceling parameters for VoIP, rebalance the load across global regions, and deploy the latest machine learning models to improve News Feed ranking. This paper gives a comprehensive description of the use cases, design, implementation, and usage statistics of a suite of tools that manage Facebook’s configuration end-to-end, including the frontend products, backend systems, and mobile apps.
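As a toy illustration of the rollout use case (my own hypothetical sketch, not Facebook's tooling): a percentage rollout can be just a config value plus a stable hash of the user ID, so turning a feature up from 1% to 100% is a config change rather than a code push.

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Hypothetical config snapshot; in a real system this would be fetched and kept
// fresh by the configuration-management service.
struct FeatureConfig {
    bool enabled         = false;  // master kill switch
    int  rollout_percent = 0;      // 0..100: fraction of users who get the feature
};

// Deterministically bucket a user into 0..99 so the same user keeps getting the
// same answer as the percentage is dialed up.
inline int bucket(const std::string& user_id, const std::string& feature_name) {
    const std::size_t h = std::hash<std::string>{}(feature_name + ":" + user_id);
    return static_cast<int>(h % 100);
}

inline bool feature_on(const FeatureConfig& cfg,
                       const std::string& user_id,
                       const std::string& feature_name) {
    return cfg.enabled && bucket(user_id, feature_name) < cfg.rollout_percent;
}
```

A/B experiments work the same way, with the bucket choosing a variant instead of a yes/no answer.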
- Building Consistent Transactions with Inconsistent Replication
In this paper, we use a new approach to reduce the cost of replicated, read-write transactions and make transactional storage more affordable for programmers. Our key insight is that existing transactional storage systems waste work and performance by incorporating a distributed transaction protocol and a replication protocol that both enforce strong consistency. Instead, we show that it is possible to provide distributed transactions with better performance and the same transaction and consistency model using replication with no consistency.
- Existential Consistency: Measuring and Understanding Consistency at Facebook
We use measurement and analysis of requests to Facebook’s TAO system to quantify how often anomalies happen in practice, i.e., when results returned by eventually consistent TAO differ from what is allowed by stronger consistency models.
- How to Get More Value From Your File System Directory Cache
This paper identifies several design principles that can substantially improve hit rate and reduce hit cost transparently to applications and file systems. Specifically, our directory cache design can look up a directory in a constant number of hash table operations, separates finding paths from permission checking, memoizes the results of access control checks, uses signatures to accelerate lookup, and reduces miss rates through caching directory completeness.
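A crude way to picture the first two ideas (a hypothetical sketch, not the paper's data structure): key the cache on the full path so a hit costs one hash probe instead of a component-by-component walk, and memoize the per-user permission answer next to the entry.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

struct DentryInfo {
    uint64_t inode;  // what the full path resolves to
    // Memoized access-control answers, keyed by user ID, so repeated lookups
    // by the same user skip the per-component permission walk.
    std::unordered_map<uint32_t, bool> access_ok;
};

class PathCache {
public:
    // One hash-table probe per hit, no matter how many components the path has.
    std::optional<uint64_t> lookup(const std::string& path, uint32_t uid) {
        auto it = cache_.find(path);
        if (it == cache_.end()) return std::nullopt;  // miss: the caller walks the path
        auto perm = it->second.access_ok.find(uid);
        if (perm == it->second.access_ok.end()) {
            bool ok = slow_permission_check(path, uid);  // done once, then memoized
            perm = it->second.access_ok.emplace(uid, ok).first;
        }
        if (!perm->second) return std::nullopt;  // denied (a real cache distinguishes this from a miss)
        return it->second.inode;
    }

    void insert(const std::string& path, uint64_t inode) {
        cache_[path] = DentryInfo{inode, {}};
    }

private:
    // Placeholder for a real walk over each ancestor directory's permission bits.
    bool slow_permission_check(const std::string&, uint32_t) { return true; }

    std::unordered_map<std::string, DentryInfo> cache_;
};
```

The real design also has to invalidate these memoized answers when permissions or paths change, and uses directory-completeness tracking to answer negative lookups; the sketch ignores both.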
- Cross-checking Semantic Correctness: The Case of Finding File System Bugs
We applied JUXTA to 54 file systems in the stock Linux kernel (680K LoC), found 118 previously unknown semantic bugs (one bug per 5.8K LoC), and provided corresponding patches to 39 different file systems, including mature, popular ones like ext4, btrfs, XFS, and NFS. These semantic bugs are not easy to locate, as all the ones found by JUXTA have existed for over 6.2 years on average.
- Read-Log-Update: A Lightweight Synchronization Mechanism for Concurrent Programming
This paper introduces read-log-update (RLU), a novel extension of the popular read-copy-update (RCU) synchronization mechanism that supports scalability of concurrent code by allowing unsynchronized sequences of reads to execute concurrently with updates. RLU overcomes the major limitations of RCU by allowing, for the first time, concurrency of reads with multiple writers, and providing automation that eliminates most of the programming difficulty associated with RCU programming.
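To make the clock-and-log idea concrete, here is a drastically simplified sketch of the mechanism as I understand it: one shared object, writers serialized by a mutex, and default sequentially consistent atomics. It illustrates how readers "steal" a log copy based on clocks; it is not the paper's algorithm, which handles many objects per write log, multiple concurrent writers, and deferred log reclamation.

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

constexpr int kReaders = 4;
constexpr uint64_t kInf = ~0ull;

// One shared object. `copy` points at a pending log copy, if any; `write_clock`
// on that copy decides which readers are allowed to see it.
struct Obj {
    std::atomic<int>      value{0};
    std::atomic<Obj*>     copy{nullptr};
    std::atomic<uint64_t> write_clock{kInf};
};

std::atomic<uint64_t> g_clock{0};

struct ReaderCtx {
    std::atomic<uint64_t> run_cnt{0};      // odd = inside a read-side critical section
    std::atomic<uint64_t> local_clock{0};  // clock snapshot taken on entry
};
ReaderCtx g_ctx[kReaders];

void reader_lock(int tid) {
    g_ctx[tid].run_cnt.fetch_add(1);               // announce "active" first...
    g_ctx[tid].local_clock.store(g_clock.load());  // ...then snapshot the global clock
}
void reader_unlock(int tid) { g_ctx[tid].run_cnt.fetch_add(1); }

// Readers steal the log copy only if their snapshot is new enough to see it.
int read_value(int tid, Obj* o) {
    Obj* c = o->copy.load();
    if (c != nullptr && g_ctx[tid].local_clock.load() >= c->write_clock.load())
        return c->value.load();
    return o->value.load();
}

// Simplified rlu_synchronize: wait until no reader that started before `clk`
// is still inside its critical section.
void wait_for_readers(uint64_t clk) {
    for (auto& ctx : g_ctx) {
        uint64_t snap = ctx.run_cnt.load();
        if ((snap & 1) == 0) continue;  // not in a critical section right now
        while (ctx.run_cnt.load() == snap && ctx.local_clock.load() < clk)
            std::this_thread::yield();
    }
}

std::mutex g_writer_mutex;  // this sketch serializes writers; real RLU does not

void write_value(Obj* o, int new_value) {
    std::lock_guard<std::mutex> lk(g_writer_mutex);
    Obj* log_copy = new Obj;
    log_copy->value.store(o->value.load());
    log_copy->write_clock.store(kInf);       // invisible to all readers until commit
    o->copy.store(log_copy);
    log_copy->value.store(new_value);        // all modifications go to the copy

    uint64_t wclk = g_clock.load() + 1;      // commit: readers from now on see the copy
    log_copy->write_clock.store(wclk);
    g_clock.store(wclk);
    wait_for_readers(wclk);                  // drain readers of the old version
    o->value.store(log_copy->value.load());  // write the copy back into place
    o->copy.store(nullptr);
    wait_for_readers(kInf);                  // drain anyone still holding the copy
    delete log_copy;
}

int main() {
    Obj obj;
    std::atomic<bool> stop{false};
    std::vector<std::thread> readers;
    for (int tid = 0; tid < kReaders; ++tid)
        readers.emplace_back([&obj, &stop, tid] {
            while (!stop.load()) {
                reader_lock(tid);
                int v = read_value(tid, &obj);
                (void)v;
                reader_unlock(tid);
            }
        });
    for (int i = 1; i <= 1000; ++i) write_value(&obj, i);
    stop.store(true);
    for (auto& t : readers) t.join();
    std::printf("final value: %d\n", obj.value.load());
}
```

Readers that began before a commit keep seeing the original object; readers that begin after it are steered to the log copy until the writer writes it back.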
- The Beginner's Guide is a game that doesn't want to be written about
It's difficult to tell at first exactly what The Beginner's Guide is supposed to be: a tribute, a eulogy, a motivational speech. Wreden says several times that Coda stopped making games in 2011 and that he hopes one day his old friend will create again. It's an impulse we see a lot on the internet these days, particularly in fan culture: the desire to write a paean so beautiful that it can bring the things we've lost back from the dead. And make no mistake, Wreden is Coda's number one fan. There are parts of this game that feel uncomfortably grasping, that want very badly to be a resurrection spell of sorts, though it takes a while to figure out exactly what has died—or why.