There's nothing better than finding articles that look deep into the complex details that underlie modern software. I love it when somebody takes the time to really study and then describe something in sufficient detail.
Your reward for writing such an article? I praise you here! To wit:
- Martin Nilsson and the team at Opera Software contributed this great review of the SPDY protocol, critiquing and analyzing it in depth, including the observation that, on very small devices, the 32K buffers required by aggressive compression algorithms can be problematic:
For very constrained devices the major issue with using deflate is the 32K LZ window. The actual decompression is very lightweight and doesn't require much code. When sending data, using the no-compression mode of deflate effectively disables compression with almost no code. Using fixed Huffman codes is almost as easy.
Just putting the HTTP request in a SPDY stream (after removing disallowed headers) only differs by 20 bytes from the SPDY binary header format with dictionary-based zlib compression. The benefit of using a dictionary basically goes away entirely on subsequent requests.
Their most significant proposal is to re-work the flow control:
Mike Belshe gives an example on his blog of a stalled recipient of data as a situation where SPDY flow control is helpful, to avoid buffering a potentially unbounded amount of data. While the concern is valid, flow control looks like overkill for something where a per-channel pause control frame could do the same job with less implementation and protocol overhead.
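The dictionary observation is easy to reproduce with Python's zlib. A minimal sketch follows; the tiny `HEADER_DICT` here is made up for illustration (the real SPDY dictionary is a specific, much longer string). The dictionary shrinks the first request, but the second request compresses well either way, because by then the first request already sits in the 32K LZ window:

```python
import zlib

# Hypothetical header dictionary, NOT the real SPDY one.
HEADER_DICT = b"GET POST HTTP/1.1 host: accept-encoding: gzip, deflate user-agent:"

req1 = b"GET /index.html HTTP/1.1\r\nhost: example.com\r\naccept-encoding: gzip, deflate\r\n\r\n"
req2 = b"GET /style.css HTTP/1.1\r\nhost: example.com\r\naccept-encoding: gzip, deflate\r\n\r\n"

def stream_sizes(zdict=None):
    # One compressor per connection, as in SPDY: the LZ window
    # persists across frames on the same stream.
    c = zlib.compressobj(zdict=zdict) if zdict else zlib.compressobj()
    first = c.compress(req1) + c.flush(zlib.Z_SYNC_FLUSH)
    second = c.compress(req2) + c.flush(zlib.Z_SYNC_FLUSH)
    return len(first), len(second)

plain1, plain2 = stream_sizes()
dict1, dict2 = stream_sizes(HEADER_DICT)
# dict1 < plain1: the dictionary helps the very first request.
# plain2 < plain1: later requests compress well with no dictionary at all,
# since the previous headers are already in the window.
```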
- Hubert Lubaczewski pens this detailed explanation of the "upsert" problem and its potential solutions: Why is Upsert so Complicated?
Of course the chances of such a case are very low. And the timing would have to be perfect. But it is technically possible, and if it is technically possible, it should be at least mentioned, and at best – solved.
This is, of course, another case of race condition. And this is exactly the reason why docs version of the upsert function has a loop.
If you’ll excuse me – I will skip showing the error happening, as it requires either changing the code by adding artificial slowdowns, or a lot of luck, or a lot of time. But I hope you understand why the DELETEs can cause problems. And why the loop is needed to solve the problem.
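The retry loop from the PostgreSQL docs that the article discusses can be sketched in miniature. This version uses Python's sqlite3 purely so it is self-contained (the `kv` table and `upsert` helper are invented for illustration), but the shape is the same: try UPDATE, fall back to INSERT, and loop when a concurrent writer wins the race:

```python
import sqlite3

def upsert(conn, key, value):
    # Mirrors the loop in the docs' merge_db example: a concurrent
    # DELETE can land between our failed UPDATE and our INSERT, and a
    # concurrent INSERT can land before ours, so neither statement
    # alone is guaranteed to succeed on the first try.
    while True:
        cur = conn.execute("UPDATE kv SET value = ? WHERE key = ?", (value, key))
        if cur.rowcount == 1:
            return  # row existed, UPDATE won
        try:
            conn.execute("INSERT INTO kv (key, value) VALUES (?, ?)", (key, value))
            return  # row was missing, INSERT won
        except sqlite3.IntegrityError:
            # Someone inserted the key first; loop back and UPDATE it.
            continue

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
upsert(conn, "a", "1")  # takes the INSERT path
upsert(conn, "a", "2")  # takes the UPDATE path
```

In a single-threaded demo the loop never actually retries; the point, as the article stresses, is that under concurrency it must be there.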
- On the Chromium blog, don't miss A Tale of Two Pwnies (Part 1) and A Tale of Two Pwnies (Part 2), in which the Chrome security team walks through the intricate details of how a modern browser exploit may require the complex interaction of multiple independent browser weaknesses.
- Lastly, again on the topic of Chrome, Ilya Grigorik offers this wonderful description of Chrome's Predictor architecture for overlapping networking operations such as DNS lookup and TCP/IP connection establishment with other work, to dramatically enhance perceived browser speed: Chrome Networking: DNS Prefetch & TCP Preconnect
If it does its job right, then it can speculatively pre-resolve the hostnames (DNS prefetching), as well as open the connections (TCP preconnect) ahead of time.
To do so, the Predictor needs to optimize against a large number of constraints: speculative prefetching and preconnect should not impact the current loading performance, being too aggressive may fetch unnecessary resources, and we must also guard against overloading the actual network. To manage this process, the predictor relies on historical browsing data, heuristics, and many other hints from the browser to anticipate the requests.
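To make the idea concrete, here is a toy analogue (not Chrome's actual code) of DNS prefetching: kick off hostname resolution in the background when a visit is anticipated, then have the later "navigation" reuse the in-flight or completed lookup. The `Prefetcher` class and its methods are invented for this sketch:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

class Prefetcher:
    """Toy DNS prefetcher: resolve likely-next hostnames ahead of
    time so a later connect() finds the answer already waiting."""

    def __init__(self, workers=4):
        # Bounded pool: speculative work must not overload the network.
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._futures = {}

    def prefetch(self, host, port=80):
        # Fired when the predictor anticipates a visit; never blocks.
        if host not in self._futures:
            self._futures[host] = self._pool.submit(socket.getaddrinfo, host, port)

    def resolve(self, host, port=80):
        # On actual navigation: reuse the prefetched lookup if we have
        # one, otherwise fall back to resolving on demand.
        fut = self._futures.get(host)
        if fut is None:
            return socket.getaddrinfo(host, port)
        return fut.result()

p = Prefetcher()
p.prefetch("localhost")         # speculative, before it is needed
addrs = p.resolve("localhost")  # later: lookup is (likely) already done
```

A real predictor would add the constraints quoted above: expiry for stale answers, a cap on speculative lookups, and historical data to decide which hostnames are worth prefetching at all.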
Wonderful, wonderful, wonderful all around. The world of software is so complex and beautiful these days! Enjoy!