Nerd alert! It's reading list time!
- How to Write your own Minesweeper AI. A nifty worked-out example of how to write a game-solving algorithm, with lots of illustrations.
How much of a speedup do we get? In this case, the green region has 10 tiles, the pink has 7. Taken together, we would need to search through 2^17 combinations. With segregation, we only search 2^10 + 2^7: about a 100x speedup.
Practically, the optimization brought the algorithm from stopping for several seconds (sometimes minutes) to think, to giving the solution instantly.
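The segregation trick quoted above can be sketched in a few lines. This is a hypothetical illustration, not the article's actual code: `mine_probs` brute-forces one region against a single "exactly k mines" constraint, and because the green and pink regions share no tiles, each can be enumerated on its own (2^10 + 2^7 = 1,152 assignments) instead of jointly (2^17 = 131,072). The mine counts (3 and 2) are made-up numbers for the example.

```python
from itertools import product

def mine_probs(n_tiles, n_mines):
    """Per-tile mine probability under an 'exactly n_mines in this region'
    constraint, found by enumerating all 2**n_tiles assignments."""
    counts = [0] * n_tiles
    total = 0
    for assignment in product((0, 1), repeat=n_tiles):
        if sum(assignment) != n_mines:
            continue  # assignment violates the constraint
        total += 1
        for i, mine in enumerate(assignment):
            counts[i] += mine
    return [c / total for c in counts]

# Regions with no shared tiles are independent, so solve each separately:
green = mine_probs(10, 3)  # hypothetical: 3 mines among 10 green tiles
pink = mine_probs(7, 2)    # hypothetical: 2 mines among 7 pink tiles

# 2^10 + 2^7 assignments examined, versus 2^17 for the joint search.
print(2**10 + 2**7, "vs", 2**17)
```

By symmetry each green tile comes out at 3/10 and each pink tile at 2/7; the point is that the per-region answers are exactly what the joint search would have produced, at a tiny fraction of the cost.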
- Occupy ACM: We are the 99%. Some interesting notes about the current state of affairs w.r.t. the ACM and Open Access publishing of academic results. For the record, I gave up on the ACM some 20 years ago, so I'm just an observer in this debate. But I don't believe I've suffered from that decision, other than being unable to have discussions with certain researchers who insist on making their research available only to a select few. In that respect, I agree with the author when he says:
Since these days almost everyone puts their papers on electronic archives, perhaps these backward stances of ACM don’t have too much negative effect on actual access to papers. But they may have very negative effects on the ACM itself.
- What does randomness look like?. A nicely-illustrated article about human psychology and why we're innately wired to have trouble with the concept of randomness.
Poisson’s idea was both ingenious and remarkably modern. In today’s language, he argued that Quetelet was missing a model of his data. He didn’t account for how jurors actually came to their decisions. According to Poisson, jurors were fallible. The data that we observe is the rate of convictions, but what we want to know is the probability that a defendant is guilty. These two quantities aren’t the same, but they can be related. The upshot is that when you take this process into account, there is a certain amount of variation inherent in conviction rates, and this is what one sees in the French crime data.
- Nulls Make Things Easier? (Part 1/11). Bruce Momjian has embarked upon what looks like it will become a nice series of articles about the complexities of NULL handling in relational databases.
even if you never type "null", you can get nulls into your database. The use of not null when creating columns is recommended, especially for numeric columns that should contain only non-null values. Ideally you could have not null be the default for all columns and you would specify null for columns that can contain nulls, but that is not supported.
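Momjian's point is easy to demonstrate. Here's a minimal sketch using SQLite (via Python's standard sqlite3 module; the table and column names are made up for the example): simply omitting a column from an INSERT silently stores a NULL, and a NOT NULL constraint is what turns that silence into an error.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# No constraint: a column you never mention silently becomes NULL.
conn.execute("CREATE TABLE loose (id INTEGER, qty INTEGER)")
conn.execute("INSERT INTO loose (id) VALUES (1)")  # qty never typed anywhere
row = conn.execute("SELECT qty FROM loose").fetchone()
print(row[0])  # None -- a NULL got in without anyone typing "null"

# With NOT NULL, the same omission is rejected outright.
conn.execute("CREATE TABLE strict (id INTEGER, qty INTEGER NOT NULL)")
try:
    conn.execute("INSERT INTO strict (id) VALUES (1)")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed
```

Which is exactly why he recommends NOT NULL as the habitual choice for columns that should always hold a value.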
- This looks like a nice online book: FXT: a library of algorithms. Many thanks to the author for making it available. I probably won't spend much time working my way through it, since I already own (and periodically consult) the first edition of Henry Warren's astonishingly great Hacker's Delight.
- My former colleague (and wonderful writer) Robert Hodges takes a step back and considers the state of the MySQL community: The MySQL Community: Beleaguered or Better than Ever?
If there is a problem, it is how to keep a strong multi-polar community going for as long as possible. Competition creates uncertainty for users, because change is a given. Pointy-haired bosses have to make decisions with incomplete information or even reverse them later. Competition is hard for vendors, because it is more difficult to make money in efficient markets. Competition even strikes against the vanity of community contributors, who have to try harder to get recognition.
- An epic year-end ramble from the well-known David Heinemeier Hansson: The Parley Letter. As I'm constantly fighting with those (well-intentioned, but wrong) folks who keep insisting that we have to start by first writing a framework, I found myself cheering as Heinemeier Hansson writes:
I think the responsible thing is to make the best damn piece of software facing CURRENT DAY constraints and let tomorrow worry about tomorrow. When the actual changed requirements or new people with "new ideas" roll around, they'll have less baggage to move around. The predictions for what the app is going to look like a decade from now are pretty likely to be wrong anyway.
- Four interesting articles about network performance, page load speed, and web caching:
- Finally, two very interesting GitHub-related articles:
- GitHub Says ‘No Thanks’ to Bots — Even if They’re Nice
On GitHub, these offers — called pull requests — are supposed to come from people. That’s part of the power of GitHub — coders can see each other’s software and share fixes much in the same way that the rest of us swap photos on Facebook. It’s a very social, and very human kind of interaction. The whole world can debate the merits of a software change before it is accepted or rejected.
But here was a pull request from a GitBot. Bots don’t debate.
- Downtime last Saturday
When the agent on one of the switches is terminated, the peer has a 5 second timeout period where it waits to hear from it again. If it does not hear from the peer, but still sees active links between them, it assumes that the other switch is still running but in an inconsistent state. In this situation it is not able to safely takeover the shared resources so it defaults back to behaving as a standalone switch for purposes of link aggregation, spanning-tree, and other layer two protocols.
By the way, regarding the DRBD issues discussed in the GitHub post-mortem, here are some interesting background resources you might want to study: