If the groundhog were to look today, she would DEFINITELY see her shadow, at least here in my neighborhood.
- All Change Please
The combined changes in networking, memory, storage, and processors that are heading towards our data centers will bring about profound changes to the way we design and build distributed systems, and our understanding of what is possible. Today I’d like to take a moment to summarise what we’ve been learning over the past few weeks and months about these advances and their implications.
- High-Availability at Massive Scale: Building Google’s Data Infrastructure for Ads
While most distributed systems handle machine-level failures well, handling datacenter-level failures is less common. In our experience, handling datacenter-level failures is critical for running true high availability systems. Most of our systems (e.g. Photon, F1, Mesa) now support multi-homing as a fundamental design property. Multi-homed systems run live in multiple datacenters all the time, adaptively moving load between datacenters, with the ability to handle outages of any scale completely transparently. This paper focuses primarily on stream processing systems, and describes our general approaches for building high availability multi-homed systems, discusses common challenges and solutions, and shares what we have learned in building and running these large-scale systems for over ten years.
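To make the multi-homing idea concrete, here is a deliberately tiny sketch of the core routing behaviour: work can go to any datacenter, and an outage simply shifts load to the survivors. This is an illustration of the concept only, not Google's actual design; the `Datacenter` and `MultiHomedRouter` names and the least-loaded routing policy are all invented for the example.

```python
# Toy illustration of multi-homing: every datacenter can serve any event,
# and a datacenter-level outage is absorbed by rerouting. Not Google's design.
class Datacenter:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.load = 0  # number of events handled so far

    def handle(self, event):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        self.load += 1
        return f"{self.name} processed {event}"


class MultiHomedRouter:
    """Routes each event to the least-loaded healthy datacenter."""

    def __init__(self, datacenters):
        self.datacenters = datacenters

    def route(self, event):
        candidates = [dc for dc in self.datacenters if dc.healthy]
        if not candidates:
            raise RuntimeError("no healthy datacenter available")
        target = min(candidates, key=lambda dc: dc.load)
        return target.handle(event)


dcs = [Datacenter("us-east"), Datacenter("eu-west")]
router = MultiHomedRouter(dcs)
print(router.route("click-1"))   # served by whichever DC is least loaded
dcs[0].healthy = False           # simulate a datacenter-level outage
print(router.route("click-2"))   # transparently served by the survivor
```

The paper's real systems are of course far richer (they must also reason about checkpointing and exactly-once processing across sites), but the router above captures the basic "live in multiple datacenters, move load adaptively" property.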
- Immutability Changes Everything
It wasn't that long ago that computation was expensive, disk storage was expensive, DRAM (dynamic random access memory) was expensive, but coordination with latches was cheap. Now all these have changed using cheap computation (with many-core), cheap commodity disks, and cheap DRAM and SSDs (solid-state drives), while coordination with latches has become harder because latch latency loses lots of instruction opportunities. Keeping immutable copies of lots of data is now affordable, and one payoff is reduced coordination challenges.
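One way to see the payoff is copy-on-write snapshots: readers share an immutable copy and never take a latch, while only writers coordinate. The sketch below is my own minimal illustration of that pattern, with assumed names (`SnapshotStore`), not code from the paper.

```python
# Minimal sketch of "immutability instead of coordination": readers use an
# immutable snapshot; writers publish a whole new snapshot rather than
# mutating shared state in place. Illustrative only.
import threading


class SnapshotStore:
    def __init__(self):
        self._snapshot = {}                   # treated as immutable once published
        self._write_lock = threading.Lock()   # only writers coordinate

    def read(self):
        # Readers grab the current snapshot with no locking: a published
        # snapshot is never mutated, so it is always self-consistent.
        return self._snapshot

    def write(self, key, value):
        with self._write_lock:
            # Copy-on-write: build a new snapshot, then swap it in with a
            # single reference assignment.
            new_snapshot = dict(self._snapshot)
            new_snapshot[key] = value
            self._snapshot = new_snapshot


store = SnapshotStore()
store.write("balance", 100)
view = store.read()        # latch-free read of a consistent snapshot
store.write("balance", 200)
print(view["balance"])     # still 100: the old copy was never mutated
```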
- To Trie or not to Trie – a comparison of efficient data structures
I have been reading up, bit by bit, on efficient data structures, primarily from the perspective of memory utilization. Data structures that provide constant lookup time with minimal memory utilization can give a significant performance boost, since access to the CPU cache is considerably faster than access to RAM. This post is a compendium of a few data structures I came across and their salient aspects.
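As a point of reference for what the post surveys, here is a minimal trie sketch: lookup cost is proportional to key length rather than to the number of stored keys, and shared prefixes are stored once. This is my own illustrative version, not code from the post.

```python
# A minimal trie: each node maps the next character to a child node, and
# shared prefixes ("ca", "cac") are stored only once across all keys.
class Trie:
    def __init__(self):
        self.children = {}   # next character -> child Trie node
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def contains(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word


t = Trie()
for w in ["cache", "cachet", "cat"]:
    t.insert(w)              # prefixes are shared between entries
print(t.contains("cache"))   # True
print(t.contains("cach"))    # False: a prefix, not an inserted word
```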
- POPL 2016
Last month saw the 43rd edition of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL). Gabriel Scherer did a wonderful job of gathering links to all of the accepted papers in a GitHub repo. For this week, I’ve chosen five papers from the conference that caught my eye.
- NSA’s top hacking boss explains how to protect your network from his attack squads
NSA tiger teams follow a six-stage process when attempting to crack a target, he explained: reconnaissance, initial exploitation, establishing persistence, installing tools, moving laterally, and finally collecting, exfiltrating, and exploiting the data.
- Amazon’s Experiment with Profitability
Amazon Chief Executive Officer Jeff Bezos has spent more than two decades reinvesting earnings back into the company. That steadfast refusal to strive for profitability never seemed to hurt the company or its stock price, and Amazon’s market value (now about $275 billion) passed Wal-Mart’s last year. All the cash it generated went into infrastructure development, logistics and technology; it experimented with new products and services, entered new markets, tried out new retail segments, all while capturing a sizable share of the market for e-commerce.
- I Hate the Lord of the Rings
A software developer explains why The Lord of the Rings is too much like work, and why Middle-earth exists in every office.
- Startup Interviewing is (redacted)
Silicon Valley is full of startups who fetishize the candidate that comes into the interview, answers a few clever fantasy coding challenges, and ultimately ends up the award-winning hire that will surely implement the elusive algorithm that will herald a new era of profitability for the fledgling VC-backed company.
- Inverting Binary Trees Considered Harmful
He was like - wait a minute, I read this really cute puzzle last week and I must ask you this - there are n sailors and m beer bottles and something to do with bottles being passed around and one of the bottles containing a poison and one of the sailors being dishonest and something about identifying that dishonest sailor before you consume the poison and die. I truly wished I had consumed the poison instead of laboring through that mess.
- "Can you solve this problem for me on the whiteboard?"
Programmers use computers. It's what we do and where we spend our time. If you can't at least give me a text editor and the toolchain for the language(s) you're interested in me using, you're wasting both our time. While I'm not afraid of using a whiteboard to help illustrate general problems, if you're asking me to write code on a whiteboard and judging me based on that, you're taking me out of my element and I'm not giving you a representative picture of who I am or how I code.
Don't get me wrong, whiteboards can be a great interview tool. Some of the best questions I've been asked were presented to me on a whiteboard, but the whiteboard was used to explain the concept; the interviewer wrote down what I said and used it to help frame a solution to the problem. It was a very different situation from "write code for me on the whiteboard."