It's not that the easy problems have all been solved; it's that I'm pleased to see efforts on hard problems sustained over many years.
Herewith, some examples:
- Slate reprints an interview with JPL astrophysicist Slava Turyshev regarding the Pioneer Anomaly.
we saw a very unusual tiny force that fell right in between Newton's gravity and Einstein's general relativity. That prompted people to think that maybe the spacecraft was sensing the presence of a new type of physics. It was either a major discovery or a puzzle that, in the solving, would help us build better craft to study gravity.
Part of the problem is that the data from the satellites was almost all lost:
In the 1970s and 1980s, mission data was recorded on magnetic tapes, and to study the Pioneer anomaly we needed the probe's navigational data. But mission tapes were normally saved for only a few months and then thrown away, so you're lucky if you can find what you need. The only data available from Pioneer 10 was from planetary flybys, which were kept to study gravity around planets. Then we had to figure out how to read it. You need a proper machine with the right software, and you need to "upconvert" the data to modern formats so it can be used in today's computer modeling systems. That took years.
- In the world of database concurrency control, for many years you could choose between serializability, which gave you correct results but poor concurrency, and snapshot isolation, which gave you high concurrency but potentially incorrect results. Of course, nowadays, the NoSQL gang are jettisoning correctness and consistency entirely, but I'm pleased to observe that some hard workers are still trying to build systems that do the right thing, including these three:
- A PhD thesis by Michael Cahill: Serializable Isolation for Snapshot Databases
- A PhD thesis by Stephen Revilak: Precisely Serializable Snapshot Isolation
- Documentation of Serializable Snapshot Isolation (SSI) in PostgreSQL compared to plain Snapshot Isolation (SI).
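The incorrectness these projects attack is the "write skew" anomaly: under plain snapshot isolation, two transactions can each read the same consistent snapshot, write to different rows, and both commit, even though the combined result violates an invariant neither one would have violated alone. Here is a toy sketch of the classic on-call-doctors example in plain Python (not any real database's API, just an illustration of the interleaving):

```python
# Write skew under snapshot isolation, simulated. Invariant we want
# to preserve: at least one doctor is on call at all times.

def run_under_snapshot_isolation(db):
    # Both transactions take their snapshot before either commits.
    snap1 = dict(db)  # T1's consistent view of the data
    snap2 = dict(db)  # T2's consistent view, identical to T1's

    # T1: Alice goes off call. Legal in T1's snapshot, since Bob is on.
    if snap1["bob"] == "on-call":
        db["alice"] = "off"

    # T2: Bob goes off call. Legal in T2's snapshot, since Alice is on.
    if snap2["alice"] == "on-call":
        db["bob"] = "off"

    # Neither transaction wrote a row the other wrote, so snapshot
    # isolation's first-committer-wins check lets both commit.
    return db

db = {"alice": "on-call", "bob": "on-call"}
result = run_under_snapshot_isolation(db)
print(result)  # both doctors are now off call: invariant violated
```

A serializable implementation, such as the SSI algorithm in Cahill's thesis and in PostgreSQL, detects the dangerous read-write dependency cycle between the two transactions and aborts one of them, forcing a retry that sees the other's write.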
- The interaction between TCP Congestion Control and Application Protocols has been a problem for more than three decades, and continues to be actively studied. Patrick McManus gives us a great update on the ongoing work on the HTTP application protocol to improve its performance and its interaction with TCP's congestion control:
HTTP has been a longtime abuser of TCP and the Internet. It uses mountains of different TCP sessions that are often only a few packets long. This triggers lots of overhead and results in common stalling due to bad interaction with the way TCP was envisioned to be deployed. The classic TCP model pits very large FTP flows against keystrokes of a telnet session - HTTP doesn't map to either of those very well. The situation is so bad that over 2/3rds of all TCP packet losses are repaired via the slowest possible mechanism (timer expiration), and more than 1 in 20 transactions experience a loss event. That's a sign that TCP interactions are the bottleneck in web scalability.
Mark Nottingham explains that while SPDY is the start, it's not necessarily the complete answer:
It’s important to understand that SPDY isn’t being adopted as HTTP/2.0; rather, that it’s the starting point of our discussion, to avoid a laborious start from scratch. Likewise, we’re not replacing all of HTTP — the methods, status codes, and most of the headers you use today will be the same. Instead, we’re re-defining how it gets used “on the wire” so it’s more efficient, and so that it is more gentle to the Internet itself (see Jim Gettys — one of the original authors of HTTP/1.1 — discussing buffer bloat for more on this).
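Back-of-the-envelope arithmetic shows why many short connections interact so badly with TCP: each fresh connection starts in slow start, with a small congestion window that only doubles per round trip, so a flow of a few packets never reaches full speed. The idealized model below (my own simplification: no handshake cost, no loss, window doubling every RTT) compares fetching twenty small objects over twenty fresh connections against multiplexing the same bytes over one warm connection, which is essentially SPDY's bet:

```python
def rtts_to_send(segments, initial_cwnd=3):
    """Round trips needed to deliver `segments` under idealized TCP
    slow start: the window starts at initial_cwnd segments and
    doubles each RTT (no loss, no handshake, no delayed ACKs)."""
    rtts, cwnd, sent = 0, initial_cwnd, 0
    while sent < segments:
        sent += cwnd   # one window's worth of segments per RTT
        cwnd *= 2      # exponential growth during slow start
        rtts += 1
    return rtts

# 20 small objects over 20 fresh connections: each pays slow start
# from a cold window (sequentially, for simplicity).
separate = 20 * rtts_to_send(10)

# The same 200 segments multiplexed over one connection, which keeps
# its congestion window warm across objects.
multiplexed = rtts_to_send(200)

print(separate, multiplexed)  # 60 vs 7 round trips in this model
```

Real numbers depend on the initial window, parallel connections, and loss behavior, but the shape of the result, an order-of-magnitude gap, is why HTTP/2.0's wire-level redesign focuses on fewer, longer-lived connections.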
- And lastly, but perhaps most importantly, Wired Magazine gives us a status report on the wonderful work being sponsored by the Gates Foundation to try to build a better toilet.
The challenge: Create a toilet that doesn’t rely on piped water, sewer, or electrical connections. And while you’re at it, fashion something useful from the waste that goes in. Energy and water might be nice. Do it all for $0.05 per user per day, and you might win a $100,000 prize. Attacking the problem with the world’s most advanced science is one thing, but doing it within strict X-Prize-like parameters is entirely another. This week they announced the winners.
An interesting side-light is the focus on the user interface (yes!) of the toilet:
Apart from making the new toilets as inexpensive as possible, said Fisher, the key is making them a social norm and object of aspiration, and making pit latrines and in-the-open defecation an object of community opprobrium.
“With a lot of these social enterprises and designs, the problem isn’t that we should have designed it better. It’s that we haven’t designed around the problems of convincing people to change their behavior,” Fisher said. “That’s where we need the innovation.”