Oh, my, these pictures from the ever-wonderful Big Picture blog are simply mesmerizing.
What must the aliens think of us as they gaze down from outer space?
Thursday, September 30, 2010
Tuesday, September 28, 2010
OpenIndiana? LibreOffice?
Lots of little bits and pieces of news continue to dribble out about the future of the various Open Source software efforts that Sun Microsystems had initiated:
- ComputerWorld reports that some OpenOffice.org developers have established a new foundation to distribute the newly-named LibreOffice (formerly OpenOffice.org).
- PCWorld reports that some OpenSolaris developers have established a new foundation to distribute the newly-named OpenIndiana (formerly OpenSolaris).
It's all rather confusing, and I don't think I'm the only one who's trying to suss it all out. Here's a short report from Rob Weir, an IBM'er who's much involved with Open Office, who says:
This will be an interesting test of openness in action. This is as close as we have seen to “twins separated at birth”, a rare but key subject for studying the relative contribution of hereditary and environmental factors on the development of personal traits. With LibreOffice and OpenOffice.org we have a similar “experiment”, a separation of identical code bases, with the same license, only varying the openness of the community.
Meanwhile, The Inquirer gives its take on Open Indiana:
If the Open Indiana project wants to provide a viable alternative to Linux and BSD in servers, then the project needs to adopt a low profile, quietly work on producing a stable and complete release, formulate release schedules and support structures, and then come back to beat the drum. Otherwise it will be seen as a rabble of hobbyists playing around with the long since discarded scraps of an industry behemoth, which won't do justice to the talents of the coders involved
Update: Here's an interesting short essay from James Governor with his take on some related issues, as well as some very interesting comments at the end of the essay from several readers.
Wild theories about Stuxnet
This weekend, the media was practically boiling over with strange and curious theories about the Stuxnet worm.
Both Kaspersky and Symantec have promised to release detailed technical analyses of the worm at the annual Virus Bulletin conference, which starts tomorrow. I'm quite looking forward to whatever information they share.
In the meantime, I found this overview on Steve Bellovin's weblog to be among the most balanced and informative summaries of what we know so far.
Monday, September 27, 2010
Great Sandy Bridge article at Real World Technologies
The folks over at Real World Technologies have put together an extremely detailed and thorough analysis of the new Sandy Bridge microprocessor architecture that Intel is rolling out:
In the coming year, three new microarchitectures will grace the x86 world. This abundance of new designs is exciting; especially since each one embodies a different philosophy. At the high-end, Sandy Bridge focuses on efficient per-core performance, while Bulldozer explicitly trades away some per-core performance for higher aggregate throughput. AMD’s Bobcat takes an entirely different road, emphasizing low-power, but retaining performance.
The complexity of these new systems is breath-taking. Consider this description of the Sandy Bridge memory subsystem:
The load buffer grew by 33% and can track 64 uops in-flight. Sandy Bridge’s store buffer increased slightly to 36 stores, for an overall 100 simultaneous memory operations, roughly two thirds of the number of the total uops in-flight. To put this in perspective, the number of memory uops in-flight for Sandy Bridge is greater than the entire instruction window for the Core 2 Duo. Again, like Nehalem, the load and store buffers are partitioned between threads.
For the most part, the details of modern processor architectures are hidden from people like me. Even though most programmers would consider the low-level C-language server programming that I do to be very "close to the metal", there's still layers and layers below me:
- C runtime libraries
- Compiler-generated code
- Operating system APIs
- Device drivers
- Microcode
And then we get down to "the hardware" itself, which, as is clear from reading the RWT analysis, is extremely sophisticated and multi-layered as well.
It's a very well-written and fascinating whirlwind tour through the latest CPU architecture, and certainly worth your time to read.
Saturday, September 25, 2010
Maverick Meerkat is two weeks away
The final countdown is well underway for the next Ubuntu software release, Maverick Meerkat. The expected release date is October 10th, and Canonical have a very good track record of hitting their dates, so I think the release is quite likely to arrive on schedule.
There is a wealth of information available about the upcoming release, so much information that I made no attempt to study it all. One of the most intriguing projects is the "paper cuts" project, which you can read about here. The paper cuts project has been around for a while: Ars Technica did a great article on it during the summer of 2009. The core idea is to try to fix a lot of small problems, with the intention that, by paying attention to these details, the overall experience will be dramatically improved. As Siegel describes it:
This is a very small detail, and it was extremely simple to remedy, but it slipped through the cracks for two successive releases before I sat down to fix it. Part of the reason I put off fixing it is because it seemed inconsequential, as the amount of programming required to fix it was so small compared to other bugs in the application. Also, as with many other paper cuts, users (myself included) became habituated to this annoyance, learning to ignore and work around it.
In many ways, this is quite similar to the "Broken Windows" philosophy that I described last summer. I think that the paper cuts team has been doing great work over this last year, and I'm looking forward to seeing what arrives in Meerkat!
DOJ Cold-Calling decision
This week brought the DOJ's much-anticipated decision in the high-tech cold-calling investigation. I've been crawling around in the tiny bit of information available, trying to figure out what was actually decided, and what it means (if anything).
There isn't a lot of information. In addition to its summary of the case, the DOJ has published the actual complaint, and some supporting documents. Google published a very short comment about the decision. Chris O'Brien has an opinion piece in the San Jose Mercury News, and a few bloggers have weighed in with some thoughts.
But mostly, this action seemed to pass quite quietly through the media; nobody really had much to say about it. Far more attention has been paid to the "Angelgate" scandal.
So, what actually happened? The DOJ says:
The proposed settlement, which if accepted by the court will be in effect for five years, prohibits the companies from engaging in anticompetitive no solicitation agreements. Although the complaint alleges only that the companies agreed to ban cold calling, the proposed settlement more broadly prohibits the companies from entering, maintaining or enforcing any agreement that in any way prevents any person from soliciting, cold calling, recruiting, or otherwise competing for employees. The companies will also implement compliance measures tailored to these practices.
The Wall Street Journal says that this action was just the first step in a broader attempt to change the behavior throughout American industry:
A settlement with tech companies—or a court fight—could therefore help determine what kinds of agreements are acceptable in other industries as well.
At stake are dueling visions of how far companies should be able to go in agreeing to limit the kind of headhunting that can help valuable employees increase their compensation.
Chris O'Brien wonders whether the DOJ decision may open the door for disgruntled employees to sue for improper treatment:
Those who do think they got the shaft may sue. And because this is an antitrust finding, the settlement will allow anyone who wins in federal court to "recover three times the damages the person has suffered."
I find it hard to imagine the situation in which this might happen. Would this be an example of such a disgruntled employee? He doesn't seem likely to sue; rather just to move on to something else. Meanwhile, the papers are full of high-profile stories of high-tech companies competing vigorously over the top engineers.
I guess I'm left with the nagging sense that this decision was important (why else would the government have invested so much time and energy in it?), but with little deep understanding of what the government was trying to achieve, or whether they believe they accomplished it. I'll keep my eyes open for some sort of explanation, and let me know if you think I'm misunderstanding this!
Friday, September 24, 2010
Contrastive Reduplication
Everyone needs a linguistics lesson from time to time.
At my old job, we used to have certain discussions:
"How's that project going?"
"Pretty good, I guess."
"Well, I need to know: are you done?"
"Uhm, do you mean 'done'? Or do you mean 'done done'? Because I'm pretty close to being done, but it will be a while before I'm done done."
Frankly, it used to drive me crazy. I would try to decipher what these people were saying, and wonder things like: will they at some point talk about being done done done? It was part of the reason I felt somewhat excluded: I literally couldn't speak the language they were speaking.
So here's a great essay, reviewing a recently-published Linguistics paper, that analyzes the phenomenon known as Contrastive Reduplication:
This paper presents a phenomenon of colloquial English that we call Contrastive Reduplication (CR), involving the copying of words and sometimes phrases as in 'It's tuna salad, not SALAD-salad', or 'Do you LIKE-HIM-like him?'
There's also a quite detailed article on Wikipedia, natch.
Oh, and regarding "done" versus "done done", I eventually came to, somewhat, understand what they meant in those discussions:
- Done: I've completed the design and implementation; it's been reviewed; the code is submitted to SCM and builds on our build system; the internal documentation is submitted to our wiki; the other relevant teams are aware of my work.
- Done Done: I've finished writing a suite of regression tests. They pass, and are run regularly by our build automation system. Our testing team is satisfied that their testing is complete. Our technical writers have finished the external documentation, and it's been reviewed. The support team has been through internal training on the work and is ready for customers to use it. All known bugs have been logged, and we've fixed the ones we intend to fix for this release, and annotated the others with workarounds and other discussion.
So there actually is a distinction between "done" and "done done", and it can be a useful communications technique.
Once you know what it means.
Thursday, September 23, 2010
Android Hacking
Here's a great story about hacking on an Android phone:
I totally got to hotwire a phone battery with a sliced-open USB cable while reflashing it with leaked firmware.
The article makes several related points:
- The Android community, while much younger than the Windows Phone or iPhone communities, is already much more adventurous and developer-friendly than those hyper-controlled communities:
a double high five for the Android community, which is about as enthusiastic and creative a group of people as I've ever encountered online.
- However, Google and its phone-company partners are still struggling with the idea of being part of an open developer-oriented community:
Google goes on and on about how Android is "open," and the amazing Android community is a proud credit to how tinker-friendly the platform is at its best -- there's a cooked ROM for everything.
According to the author, Google is trying to have it both ways, and can't:
Once I left the reservation and installed that leaked 2.2 build, I was gone for good -- no official path back to the fold exists. That's not true on other platforms: if I was running a jailbroken iPhone, I'd just restore it with iTunes, and it would be factory-fresh with known software. That's simply not the case with Android, and it's a problem -- Google can't keep implicitly condoning Android hacking and trading on the enthusiasm of its community unless it requires manufacturers to provide restore tools for every device. Sometimes you just want to go home again.
I haven't yet taken the plunge into the smart-phone world. I know that, when I do finally get one, I'm going to want to program it. I have talked with a variety of friends and colleagues who have smartphones, of all different breeds, and some of them have tried programming theirs. But, so far, every development experience that they've described has sounded far too limiting and frustrating.
Soon, soon.
Was there a Java conference this week?
Apologies in advance: this is a pretty snarky essay. But I'm just stunned.
As we all know, this was the week of Oracle OpenWorld.
And, since Oracle now owns Java, they folded the old JavaOne conference into Oracle OpenWorld.
And then Oracle sued Google over Android, and Google responded by pulling out of JavaOne.
So, I've been waiting all week, wondering if there would be any news. Wondering if anything interesting would occur. Wondering if there was still a "Java community", and if they cared about Java, and if they would be getting together to try to figure out what Java is, and where it is going.
Apparently, not.
Oh, I found the occasional blog posting at the obvious locations. And the few remaining Sun^H^H^HOracle employees who still work on Java posted a few notes about what they're working on (Look! A JRuby benchmark!).
But that was about it for the good news. Meanwhile, there were lots of "obituary" postings like this one:
Last year, I wrote the conference's obituary, and after spending the week at the show, I can safely say that obituary was not written in haste. This year's JavaOne was a strange affair, laid out across three hotels and a blocked-off city street. The event's myriad talks and demonstrations were stretched out across a labyrinthine series of hallways and ballrooms, and many of the attendees were completely lost.
And, worse, bizarre postings about how the conference is just a shadow of its once tremendous self:
The exhibit hall, where I spent most of my time, was 1/8 the size of last year's JavaOne.
...
the booth next to us had two very pretty models and they did a lot better than I did.
...
Best news of the show? My youngest daughter goes to Harvey Mudd College and when I called her up to ask if she wanted the show backpack & t-shirt, she told me that would be really cool.
Remember the old days, a decade ago, when James Gosling's annual JavaOne keynote was discussed for weeks and weeks afterwards? Not this year, as he's not even around anymore:
Gosling went a bit deeper, telling a tale of low-balling key employees and cutting off at the knees projects and strategies Sun had put into play.
Oh, dear.
I'm sure everybody expected things to change with the takeover. And I'm sure that things had to change; after all, Sun was unsuccessful in their approach, and so a new approach had to be tried.
But it sure seems like Java went from 100 MPH to 0 awfully fast.
Update: I guess I'm not the only one who wondered what was going on: Check out this column in The Reg: Everyone but Oracle demands Java independence. Yikes!
Wednesday, September 22, 2010
A perfect confluence
Wow! This story has it all!
- Northern California. Check
- Beer. Check
- Trappist Monasteries. Check
- Leland Stanford. Check
- William Randolph Hearst. Check
- Rebuilding ancient buildings from their dis-assembled and preserved stones. Check
"rich with dark fruit flavors and the unique winelike characters of these strong Abbey ales."
... Yum!
Monday, September 20, 2010
Progress, of sorts, in sport, is closely tied to money
Today's Slate brings a very interesting article about the ongoing invasion of American billionaires into ownership of English Premier League football clubs.
By a weird quirk of destiny, England's two greatest soccer clubs have both fallen under the control of American tycoons of a peculiar carpetbagging sort. These minor billionaires have gone to England like backward colonists, looking to reap the bounty of the Premier League's soaring global popularity by taking advantage of its lax financial regulations. The standoff between the clubs' owners and supporters hasn't merely led to innovations in signcraft. It has also thrown an unwitting light on some big differences in the way English fans and American fans view sports.
Meanwhile, back here in the states, where similar gushers of money have resulted in new 10-figure stadia in Dallas, Washington, and New York City, Wired has a wonderful short article about last weekend's confluence of games at the new Meadowlands Stadium. Check out the video, it's fun and short.
Presto, change-o: it's Giants Stadium! No, it's Jets Stadium!
Saturday, September 18, 2010
Code Freeze!
Yesterday was a major code freeze date at my day job.
In general, code freeze doesn't mean an awful lot to me; I'm a big fan of agile methods, in particular Continuous Integration (I consider Martin Fowler's essay on the subject to be the single most important thing you can possibly learn about how to successfully run a software development effort).
However, I've been at places where code freeze never occurred. Nobody seemed to care about schedules, or about whether software projects were ever completed or not; work just carried along, and sometimes got released, leading to all sorts of programmer gallows humor (example: "we have a constant here: code freeze is always 2 weeks away").
Now, schedules aren't everything, and code freeze isn't all that important, but time does matter, and deadlines do concentrate the mind, and software should be released, and so I'm pleased that we made our code freeze date, and I'm pleased with the work that was completed and submitted in this cycle, and I'm pleased that, as the deadline approached, the entire organization has become more serious and more vigilant about firming up and solidifying the software in preparation for the upcoming release. As Jeff Atwood points out: real developers ship product.
We've put a huge amount of hard work into the release, and I'm very excited about the idea that it will soon be out for customers to start working with. Get ready for lots of fun new version management functionality to arrive soon!
Odersky's paper on compilation techniques for Scala
Scala is the programming language developed by Martin Odersky et al. If you're not familiar with Scala, here's a good place to start.
I recently made my way through Dubochet and Odersky's Compiling Structural Types on the JVM, which is a fascinating and quite approachable paper for those interested in compilers generally, and more particularly in compilation techniques for object-oriented languages.
The general topic of the paper involves the problem of dealing with structural types:
A type is a nominal subtype of another one if there exists, somewhere in the program, an explicit mention of this fact. In Java, such an explicit mention takes the form of an extends clause. Structural subtyping, also known as duck typing, declares a type to be [a] subtype of another if their structures -- their members -- allow it. At the simplest, a type is allowed to be a subtype of another if it exposes at least the same members.
If you aren't familiar with the phrase 'duck typing', it comes from the old folk saying: "if it walks like a duck, and quacks like a duck, it's a duck".
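To make duck typing concrete, here is a small Java sketch of my own (the class names and helper are hypothetical, not code from the paper): two classes that share no supertype beyond Object both expose a quack() method, and a call site that resolves the method by name at runtime treats them interchangeably, based on structure alone.

```java
import java.lang.reflect.Method;

// Two unrelated classes that are structurally compatible: each exposes
// a quack() method, but they share no common supertype beyond Object.
class Duck  { public String quack() { return "quack"; } }
class Robot { public String quack() { return "beep"; } }

public class StructuralCall {
    // Dispatch on structure alone: resolve the method by name at runtime,
    // regardless of the receiver's declared type.
    public static String makeNoise(Object receiver) {
        try {
            Method m = receiver.getClass().getMethod("quack");
            return (String) m.invoke(receiver);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(makeNoise(new Duck()));  // prints "quack"
        System.out.println(makeNoise(new Robot())); // prints "beep"
    }
}
```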
The core problem addressed by this work involves the choice between generative compilation and reflective compilation:
Generative techniques create Java interfaces to stand in for structural types on the JVM. The complexity of such techniques lies in that all classes that are to be used as structural types anywhere in the program must implement the right interfaces.
...
Reflective techniques replace JVM method call instructions with Java reflective calls. Reflective calls do not require a priori knowledge of the type of the receiver: they can be used to bypass the restrictions of JVM method calls. The complexity of such techniques lies in that reflective calls are much slower than regular interface calls.
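For contrast, here's a rough sketch (again mine, not the paper's) of the generative flavor: the compiler emits an ordinary Java interface to stand in for the structural type, and every class used structurally must implement it, which is exactly the coordination burden the quote describes.

```java
// The generative flavor, sketched: an ordinary interface stands in for
// the structural type, and conforming classes must implement it up front.
interface HasQuack {
    String quack();
}

class GenDuck implements HasQuack {
    public String quack() { return "quack"; }
}

public class Generative {
    // An ordinary, fast interface call: no reflection involved, but only
    // classes that declared "implements HasQuack" are accepted here.
    static String makeNoise(HasQuack q) {
        return q.quack();
    }

    public static void main(String[] args) {
        System.out.println(makeNoise(new GenDuck())); // prints "quack"
    }
}
```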
The point of the paper is to describe the work done by the Scala team to make a reflective implementation of structural subtyping perform well.
The core of the technique is to carefully and precisely use caching techniques to ensure that the reflection overhead occurs as few times as possible.
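As a rough illustration of the flavor of that caching (my own sketch; the paper's actual scheme compiles a small inline cache into each call site), one can memoize the resolved Method per receiver class, so that the expensive reflective lookup happens once per class rather than once per call:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of reflective-call caching: remember the resolved Method per
// receiver class, so Class.getMethod runs once per class, not per call.
// Only m.invoke() remains on the hot path for previously-seen classes.
public class CachedCallSite {
    private final String methodName;
    private final Map<Class<?>, Method> cache = new ConcurrentHashMap<>();

    public CachedCallSite(String methodName) {
        this.methodName = methodName;
    }

    // Invoke the no-argument method named methodName on the receiver,
    // reusing a previously resolved Method when this class was seen before.
    public Object invoke(Object receiver) {
        Method m = cache.computeIfAbsent(receiver.getClass(), cls -> {
            try {
                return cls.getMethod(methodName);
            } catch (NoSuchMethodException e) {
                throw new RuntimeException(e);
            }
        });
        try {
            return m.invoke(receiver);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

For example, new CachedCallSite("toUpperCase").invoke("abc") resolves String.toUpperCase once; a second call on another String reuses the cached Method.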
There are other issues, including: boxing and type-casting, exception handling, and handling parameterized types. But the core of the work involves the caching implementation, and the Scala team have provided lots of clear and useful descriptions of how their caching algorithms work, together with a number of benchmarks to analyze the effectiveness of the compilation techniques.
The caching techniques that the Scala team have settled on are derived from work that Urs Hölzle did on the Self language while in graduate school at Stanford; in particular, Hölzle, Chambers, and Ungar's paper on Optimizing Dynamically-Typed Object-Oriented Languages With Polymorphic Inline Caches, which is nearly 20 years old now, but still well worth reading.
Oh, well, enough about all that programming language stuff. The rain has stopped (briefly); it's time to take the dog down to the grass for some exercise.
Rain!?
... I thought we would get our typical Bay Area Indian Summer before fall set in, but there's rain this weekend.
It's been so cold and gray that my parents took a trip to England for the sunshine! That's when you know you've had a cooler-than-usual summer.
Everyone is talking about whether this winter will bring the La Niña weather pattern.
Me, I'm just a dumb old computer programmer. As long as the electricity doesn't go out, I'll be sitting here warm and dry in front of my screen...
Friday, September 17, 2010
Dance your PhD finalists announced
Science magazine have announced the 4 finalists for this year's "Dance your Ph.D." contest.
The dreaded question. "So, what's your Ph.D. research about?" You could bore them with an explanation. Or you could dance.
It's definitely art, but I'm not completely sure it's science. The videos are short, and fun to watch. A number of the runner-up videos are also available from the main page, or you can follow the links to all the submitted videos.
He plays the bison (with horns), and the kids are anthrax spores.
Learn while you are entertained!
Thursday, September 16, 2010
A comment on Duffy's optimization post
Lots of people are talking about Joe Duffy's essay on optimization.
To me, Duffy's primary point is to avoid leaving performance analysis until the end, but rather always care about your code, and always consider the performance implications of what you do, otherwise you fall into sloppy habits:
These kinds of "peanut butter" problems add up in a hard to identify way. Your performance profiler may not obviously point out the effect of such a bad choice so that it’s staring you in your face. Rather than making one routine 1000% slower, you may have made your entire program 3% slower. Make enough of these sorts of decisions, and you will have dug yourself a hole deep enough to take a considerable percentage of the original development time just digging out. I don’t know about you, but I prefer to clean my house incrementally and regularly rather than letting garbage pile up to the ceilings, with flies buzzing around, before taking out the trash. Simply put, all great software programmers I know are proactive in writing clear, clean, and smart code. They pour love into the code they write.
I think he's absolutely right. I've worked with large bodies of code where people cared about each line of code, as it was written. And I've worked with large bodies of code where people (very good programmers, mind you) didn't pay attention to the code as it was written, didn't review each line, didn't gather together for design discussions, didn't engage in those back-and-forth question-and-answer sessions that may seem so incredibly frustrating, but are vital to building a piece of software that matters. And when they didn't do those things, the software turned out not to matter.
Maybe they thought they were too good to need these activities; maybe they thought they were a waste of time; maybe they thought they could defer them until later; maybe it was just an unpleasant activity and nobody cared enough about it to make it happen.
So if you want your software to matter, care about it: "pour love into the code you write." It's a great sentiment!
But what really struck me about Duffy's post was his description of this code review experience:
It’s an all-too-common occurrence. I’ll give code review feedback, asking "Why didn’t you take approach B? It seems to be just as clear, and yet obviously has superior performance." Again, this is in a circumstance where I believe the difference matters, given the order of magnitude that matters for the code in question. And I’ll get a response, "Premature optimization is the root of all evil." At which point I want to smack this certain someone upside the head, because it’s such a lame answer.
There's a lot wrong here that is non-technical. Both sides of this event are at fault, and this situation isn't going to get resolved in a meeting. There is a management problem here, and not an easy one to solve, either.
I've been in those situations, too, and I don't have a simple prescription. I'm sure you've been in them as well; if not, count yourself lucky.
Tuesday, September 14, 2010
FINRA vs Trillium
I've been trying to understand this week's FINRA ruling regarding Trillium.
I've seen a number of varying reports trying to explain the ruling to lay-people like me. The first one I saw claimed that this was a clear finding against "quote-stuffing", and was the beginnings of a crackdown on high frequency trading. Another, from today's New York Times, said that the decision was related to HFT, and used some pretty strong language:
The Financial Industry Regulatory Authority said it had fined Trillium, a brokerage firm based in New York, $1 million for carrying out an illegal high-frequency trading strategy, while also fining and suspending the firm’s heads of compliance and trading, and nine traders under their supervision.
But the FINRA press release does not use the word "illegal"; rather, they use the words "illicit", "improper", and "illegitimate", which are not quite the same. The press release does carry this quote from Thomas R. Gira, Executive Vice President, FINRA Market Regulation:
FINRA will continue to aggressively pursue disciplinary action for illegal conduct, including abusive momentum ignition strategies and high frequency trading activity that inappropriately undermines legitimate trading activity, in addition to related supervisory failures.
However, it's not clear whether this quote specifically refers to the Trillium finding, or rather just describes FINRA's intent to more actively police the modern electronic markets.
I tried reading through the detailed FINRA finding paper for more details. It says that the Trillium staff
engaged in a repeated pattern of layering conduct to take advantage of trading, including algorithmic trading by other firms
...
and obtained a full or partial execution for that order through the entry of numerous layered, non-bona fide, market moving orders on the side of the market opposite the limit order
Note that the finding doesn't say that Trillium themselves were doing the HFT, but rather that they were taking actions that were designed to interact with the HFT algorithms of other market participants.
The finding explains the activity in greater detail later:
Within seconds after the Trillium Trader received the execution or partial execution of the buy (sell) limit order, he intentionally and knowingly canceled the non-bona fide orders that he had placed into Single Book.
Felix Salmon has a post up at Seeking Alpha where he explains the notion of "layering" in more detail, and clarifies the difference between layering and quote-stuffing:
The distinction is an important one. Quote-stuffing, if it exists, is a destructive attack on an entire stock market. Layering, by contrast, is relatively benign, and the only people who get damaged by it are high-frequency traders who are looking to sniff out where the market is going and place trades attempting to front-run that move.
Salmon also makes the point that this isn't really a finding that has to do with HFT and whether HFT needs to be altered:
it’s a bit of a stretch to paint this as the first battle in the war against high-frequency traders — not least because there isn’t actually anything particularly high-frequency about what Trillium was doing.
I think that a number of people jumped the gun because there's been a lot of press recently hinting that the SEC was getting ready to do something about the dangers of High Frequency Trading and quote stuffing, for example see this article in USA Today.
But it's important not to get confused. Salmon's essay seems to be clear and detailed, and I'm pretty confident that he's laying out these actions accurately. If you're interested in these issues, and trying to figure out where all this technology and legal policy is going, give Salmon's post a good read.
Autonomous Unmanned Surface Vessels
We have unmanned airplanes in the air (commonly called drones), so why should we be surprised about the idea of unmanned ships in the sea? Autonomous Unmanned Surface Vessels, or AUSVs, appear to be making significant progress, as you can see by browsing around on the Harbor Wing website. Of course, you can tell just by the opening trumpet music of the demo movie that this is primarily a Department of Defense project, but the company does make some reasonable suggestions about scientific and ocean-safety missions. The basic concept of 'sail-by-wire' seems pretty sensible, so I won't be surprised if this effort sees deployment.
Meanwhile, the reason I got to this site was from Kimball Livingston's wonderful Blue Planet Times, which today brings us coverage of the latest negotiations for the 2013 America's Cup defense. The latest proposal appears to be to have two types of boats: 45-footers, and 72-footers:
The plan is for declared teams (“at least eight challengers” according to BOR CEO Russell Coutts) to race against each other in the 45-footers through March of 2012, while they are building their 72-foot America’s Cup contenders. Subsequent racing moves to the 72-footers, with seven regattas in 2012.
These are amazing boats! As Russell Coutts says, "it won't look like the senior tour any more". Livingston reports that the 45-footers can be shipped around the world in standard yacht-transport containers, while the 72-footers, which are immense, can apparently be (somewhat) dis-assembled and flown from place to place in giant cargo planes.
The sailing tactics of the races may change, as well, Livingston reports:
Strong consideration will be given to a short first leg, to bring the boats to the first mark close together. “That would mean that the boats would round nose-to-tail,” Coutts said, “so the race could be won downwind, which would be interesting from a competitive point of view, also from a spectator point of view. And if we end up in a high-wind area, a reaching leg would be worth considering because you’d see the boats at peak speed.
"A high-wind area," eh? Perhaps, you mean, San Francisco Bay?!!
Monday, September 13, 2010
NetApp and Oracle have settled the ZFS patent lawsuit
I read today that NetApp and Oracle have settled the ZFS patent lawsuit.
If you'd forgotten about this lawsuit, it involved a dispute between Sun and NetApp regarding patent infringement.
It's always a bit strange when companies which spent years suing each other over patent infringement quietly settle the case; it's hard to understand what was really decided. But this particular lawsuit was interesting because it featured one of the more famous blog posts of the time, three years ago, when Jonathan Schwartz notably said "we didn't file an injunction to stop competition - instead, we joined the free software community and innovated". Schwartz's post was much-remarked upon, in particular, for how open he was about the real use of patent law in the software industry nowadays: "we've always protected our markets from trolls ... we file patents defensively ... we're going to use our defensive portfolio to respond".
At the time, Schwartz promised:
In addition to seeking the removal of their products from the marketplace, we will be going after sizable monetary damages. And I am committing that Sun will donate half of those proceeds to the leading institutions promoting free software and patent reform (in specific, The Software Freedom Law Center and the Peer to Patent initiative), and to the legal defense of free software innovators. We will continue to fund the aggressive reexamination of spurious patents used against the community (which we've been doing behind the scenes on behalf of several open source innovators). Whatever's left over will fuel a venture fund fostering innovation in the free software community.
No mention of that outcome in Friday's press release(s), however.
Meanwhile, it's interesting to see a lot of activity regarding patents and software licenses recently. For example, the Mozilla legal team have recently announced the latest draft of the new Mozilla Public License, noting that:
The highlight of this release is new patent language, modeled on Apache’s. We believe that this language should give better protection to MPL-using communities, make it possible for MPL-licensed projects to use Apache code, and be simpler to understand.
Also, and (I think) unrelated, Google announced last week that they are changing their policy about how they handle licensing on their Open Source project hosting site, code.google.com, although they qualify the policy change with the observation that:
we felt then and still feel now that the excessive number of open source licenses presents a problem for open source developers and those that adopt that software
Of course, the confusion over the actual meaning of software licenses is not limited to Open Source software; people are equally confused about the meaning and implication of commercial software licenses. A major court case in this area was just resolved last week, when the 9th U.S. Circuit Court of Appeal issued its ruling in the case of Vernor vs Autodesk. Over at TechDirt, Mike Masnick has the full ruling, as well as a lot of discussion and analysis.
Not that I understand a bit of this (I'm much better with code and algorithms than I am with intellectual property law and public policy issues), but it would be premature to close this blog posting without noting the wild story in the New York Times this weekend regarding the Russian government's apparent use of software license infringement law as a technique for cracking down on politically-unpopular groups:
Across Russia, the security services have carried out dozens of similar raids against outspoken advocacy groups or opposition newspapers in recent years. Security officials say the inquiries reflect their concern about software piracy, which is rampant in Russia. Yet they rarely if ever carry out raids against advocacy groups or news organizations that back the government.
As the ploy grows common, the authorities are receiving key assistance from an unexpected partner: Microsoft itself. In politically tinged inquiries across Russia, lawyers retained by Microsoft have staunchly backed the police.
These are all hard, complex issues, and, to their credit, many important people appear to be thinking about them. Hopefully we are making progress in figuring out how to manage these problems. I guess I'll just try to keep reading and learning and trying to understand...
Sunday, September 12, 2010
Not an auspicious start
If you're a Bay Area professional football fan, then week 1 of the 2010 NFL season did not start well.
Hopefully the teams will get it figured out, and fast!
At least the UC Berkeley Golden Bears did well.
Friday, September 10, 2010
Urk! I need a dictionary!
I'm not even sure what a "mega fakie to fakie 900" is. Maybe my son will help me translate. In the meantime, soar above the ramp with this short video!
Magnus beats the world!
I quite enjoyed Magnus vs. the World, although the internet connectivity was problematic and I had to keep refreshing the screen. I thought that the format of having the 3 grandmasters suggest moves and taking the popular vote was entertaining, but it had the problem that it was very tactical on the part of the GMs advising the world, because they couldn't easily communicate their longer-range plans, and so I think that the World's game suffered from see-sawing among a range of alternatives.
It will be interesting to read some follow-on analysis, as there were definitely parts of the game which I didn't grasp at the time, watching the moves in real time.
Wednesday, September 8, 2010
JDK 7 slips again
Mark Reinhold has finally said what everyone's known for a while: JDK 7 won't be shipping this year; the currently-posted schedule just isn't happening.
Frankly, it's not clear that this matters. The future of Java is completely unclear, from mergers and lawsuits, to the scattering of the core JDK team, to the questions of whether Java is or isn't open source, to the overall shifting of the development landscape to mobile devices, the web, and HTML 5.
I'm glad they're trying to be open and engage the community, and I wish them well, but it's awfully hard to understand where Java is going and what it's trying to achieve at this point.
Update: Jeremy Manson says that Plan B, if adopted, is a good thing, for at least some of the work done prior to the Oracle acquisition might actually be finished and released.
Eating your own dogfood
"Eating your own dogfood" is a phrase that has a long history in the software industry. It refers to the technique of using your own product internally, yourselves, before, during, and after you deliver that product to your customers.
It's not always possible to practice this; sometimes your company writes a product for which you have no use, internally. I've been at those companies. I've also been at companies where we do use our own products internally, and the difference is stunning.
So I was particularly pleased, earlier this week, when, at my day job, we upgraded our master internal server to the latest nightly build of the code. Not only are we continuing to eat our own dogfood, but this latest release contains a number of enhancements that I contributed to, so I'm excited to have that software in the hands of real users.
It will be a few more months before we decide that this software is sufficiently well-constructed that we are ready to deliver it to our customers, but this week was a major step in that process.
Tuesday, September 7, 2010
All of a sudden it seems like nothing but lawsuits
- Oracle sues Google
- HP sues Oracle
- Crossroads sues 3PAR
- Interval Research sues almost everyone.
- Everyone (via a class-action) sues almost everyone
What is going on in this world?
Just looking at the HP/Oracle thing for a bit, how will this story play out? I've seen it suggested that this all goes back to the pretexting mess of 2006; the New York Times article goes on to say:
According to “The Big Lie: Spying, Scandal and Ethical Collapse at Hewlett-Packard,” an authoritative account by the former BusinessWeek writer Anthony Bianco, Mr. Hurd was very involved in H.P.’s efforts to hunt down the leakers. After the scandal broke, he hijacked H.P.’s internal investigation, hiring an outside law firm and ordering it to report directly to him, instead of the board, which is the normal practice.
I've also read that Oracle had been recruiting Hurd for some time, and that they simply made their move when he became available. I know one thing for sure: I'm not the only one confused by what's going on!
When I was young, and just starting out in the computer field, Hewlett Packard was always associated with Management by Walking Around and In Search of Excellence and so forth:
we dropped by a small calculator and electronics store the other day to buy a programmable calculator. The salesman's product knowledge, enthusiasm, and interest in us were striking and naturally we were inquisitive. As it happened, he was not a store employee at all, but a twenty-eight-year-old Hewlett-Packard (HP) development engineer getting some first-hand experience in the users' response to the HP product line. We had heard that a typical assignment for a new MBA or electrical engineer was to get involved in a job that included the practical aspects of product introduction. Damn! Here was an HP engineer behaving as enthusiastically as any salesman you'd ever want to see.
HP: Wha' happen?
Monday, September 6, 2010
Drivers versus passengers
Interesting essay/rant by Dave Kellogg on his blog about drivers versus passengers:
Strategically and operationally, I think there is a huge difference between drivers and passengers that comes out when they are placed in a new situation. When placed in their next company:
- Drivers assess the situation and develop strategies and tactics appropriate for the new reality.
- Passengers do what worked last time.
I think this is fair criticism:
I am repeatedly stunned by the number of otherwise very intelligent people who show up and do what worked last time. Often with the very same cohort / entourage with whom they did it.
It is, however, a very easy trap to fall into.
Meanwhile, completely unrelated except that I was reading his essay at about the same time, Paul Graham makes an interesting point about the future of Venture Capital funding:
Different terms for different investors is clearly the way of the future. Markets always evolve toward higher resolution.
Having had opportunities to work for companies held captive by their investors, as well as companies which had other ways to manage their finances, I have seen both the good and the bad of the VC process. Graham has proposed a variety of interesting ideas over the last decade to solve some of the problems, and his essays are always intriguing.
Friday, September 3, 2010
It's not magic, it's physics!
Step 1: Shave head
Step 2: Run really fast.
Step 3: Kick it really hard, in just the right way.
Result: Bend it like Roberto Carlos!
It's all a matter of the tradeoff between aerodynamics and gravity:
one can identify sports dominated by aerodynamics (table tennis, golf and tennis) and sports dominated by gravity (basketball and handball). In between, we find sports where both gravity and aerodynamics play a comparable role (soccer, volleyball and baseball). Indeed, in the first category of sports, the spin is systematically used, while it is not relevant in the second category, and it only appears occasionally in the third one, in order to produce surprising trajectories.
This is what happens when you play chess!
Thursday, September 2, 2010
Anti Semi Joins
I happened across an unfamiliar term today, "anti semi join".
I encountered it in this nice little writeup from the MS SQL support team.
So, of course, I went hunting for more information about semi joins and anti semi joins. It turns out that I had a fairly good understanding of them already, but it was nice to find several well-written articles describing them, including:
- Craig Freedman's Introduction to Joins
- Roger Schrag's Speeding Up Queries with Semi-Joins and Anti-Joins
- Conor Cunningham's Conor vs. Anti-Semi-Join Reordering
Cunningham's point about not needing to do "TOP 1" in existence sub-queries is a very good one.
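In plain terms: a semi join returns each outer row at most once when a matching inner row exists (this is the plan shape behind an EXISTS subquery), while an anti semi join returns the outer rows with no match at all (NOT EXISTS). A small Python analogy, with made-up data:

```python
customers = [{"id": 1, "name": "Ann"},
             {"id": 2, "name": "Bob"},
             {"id": 3, "name": "Cho"}]
orders = [{"cust_id": 1}, {"cust_id": 1}, {"cust_id": 3}]

# The set of customer ids that appear in any order.
ordering_ids = {o["cust_id"] for o in orders}

# Semi join: customers WHERE EXISTS (a matching order). Note that Ann
# appears once even though she has two orders -- a regular inner join
# would duplicate her row.
semi = [c["name"] for c in customers if c["id"] in ordering_ids]

# Anti semi join: customers WHERE NOT EXISTS (a matching order).
anti = [c["name"] for c in customers if c["id"] not in ordering_ids]

print(semi)  # ['Ann', 'Cho']
print(anti)  # ['Bob']
```

This also shows why "TOP 1" is unnecessary in an existence subquery: the semi join stops caring after the first match by construction.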
Query optimizers are incredibly sophisticated algorithms. I've only been studying them for 25 years. Eventually I'll understand them :)
Wednesday, September 1, 2010
It's crazy days in the software business again
Check out this wild story on Mike Arrington's TechCrunch today. Here's a snip:
Sources close to Google tell us that about 80% of people stay when they’re offered a counter to a Facebook offer. But some still leave.