Friday, January 29, 2010

EASTL and runtime library design

I spent a bunch of time reading through the documentation for EASTL -- the Electronic Arts Standard Template Library. This library is now close to 3 years old, but this is the first I'd heard of it. Of course, I'm somewhat out of touch with the C++ world nowadays; I did most of my C++ programming in the mid-90's, when we were still using Windows 3.1 and the C++ STL was just being created.

Although I no longer work in C++, I still found the overall paper very interesting. I think it's great when somebody takes a really close, detailed, critical look at a large and substantial piece of software, as there is always something to learn. In fact, I think the best point the EASTL paper makes is this: no matter how good your standard library and your programming infrastructure are, they can always be improved, and they can always benefit from a thorough and detailed critique. There are similar efforts in the Java world with which I am familiar; for example, consider the Google Collections Library, an attempt to improve on the base JDK collections library.

My favorite part of the EASTL paper was the section where the author itemizes the issues that game programmers encounter, including:

  • No matter how powerful any game computer ever gets, it will never have any free memory or CPU cycles.

  • Game developers are very concerned about software performance and software development practices.

  • Game software often doesn't use conventional synchronous disk IO such as <stdio.h> or <fstream> but uses asynchronous IO.

  • Game applications cannot leak memory. If an application leaks even a small amount of memory, it eventually dies.

  • Every byte of allocated memory must be accounted for and trackable. This is partly to assist in leak detection but is also to enforce budgeting.

  • Game software rarely uses system-provided heaps but uses custom heaps instead. See Appendix item 26.

  • A lot of effort is expended in reducing memory fragmentation.

  • A lot of effort is expended in creating memory analysis tools and debugging heaps.

  • A lot of effort is expended in improving source and data build times.

  • Application code and libraries cannot be very slow in debug builds. See Appendix item 9.

  • Memory allocation of any type is avoided to the extent possible.

  • Operator new overrides (class and global) are the rule and not the exception.

  • Use of built-in global operator new is verboten, at least with shareable libraries.

  • Any memory a library allocates must be controllable by the user.

  • Game software must be savvy to non-default memory alignment requirements.



As I read this list, I was struck by the realization that almost nothing here is specific to game programming. Really, this is just a list of considerations that apply to any software you care about, and it is equally important to people writing operating systems, database systems, networking software, and so on.

There was a time when people worked really hard on software: slaved over it, deeply considered every line of code, measured and re-measured and tuned every bit of it. That kind of care still matters; using great software is just a completely different experience from using sloppy software.

It's nice to see a paper written by some people who obviously care a lot about writing really great software, discussing, in detail, why that's hard to do, and why you really have to constantly focus on every little detail in order to do it.

Even if you aren't a C++ programmer, give the paper a try: I think you'll enjoy it.

Wednesday, January 27, 2010

Logicomix, by Doxiadis and Papadimitriou

I read a rather unusual book: Logicomix, by Apostolos Doxiadis and Christos Papadimitriou.

The book is a graphic novel about mathematicians.

Specifically, the book is about Bertrand Russell, about the writing of Principia Mathematica, about Ludwig Wittgenstein, and about Kurt Gödel and the Incompleteness Theorem.

As it turned out, I mostly knew the history-of-philosophy and history-of-mathematics story lines of the book, as I happened to have taken undergraduate classes in this area in college in the 80's. And I was particularly fond of the Incompleteness Theorem so I actually paid attention in that class!

I knew rather less about the social history of Russell's life, about his personal misfortunes, and about his biography. Unfortunately, these are areas in which the book exercises a certain amount of literary license; as the authors note in their afterword:

Also, though our major characters are based as closely as possible on their real-life counterparts, we have on more than one occasion departed from factual detail, in order to give our narrative greater coherence and depth.


It's a tricky business, making history entertaining. The book is pleasantly illustrated, and easy to read; it flows along and I finished it in a few days. But for a book about characters who cared so very, very deeply about truth and accuracy, departing from the facts is a treacherous technique.

If you know little about mathematical logic and the underpinnings of theoretical computer science, you will probably find this an interesting book. But you won't learn much actual mathematics; you'll mostly learn about the history of some fairly interesting and noteworthy individuals of 100 years ago.

Non-empty mount points

Why does Unix allow non-empty directories to be used as mount points?

A few weeks back, I managed to get myself horridly confused, and at the core of the confusion was a directory that I was using as a mount point, and that directory was not empty.

Of course, I didn't know that at the time. Instead, I got a call:

You know that disk we were having trouble with, and you worked on it? Well, something is really weird, because that disk is now showing a 2-year-old backup? Did you somehow restore some ghost files?


Here's the overall sequence of actions that caused my confusion:

  • I have some custom backup tools, which back up my system to an external disk, which is mounted as /mnt/backup

  • I had some problems with that external disk.

  • After a certain amount of investigation, I decided that the external disk had become scrambled irretrievably, and I re-formatted the disk (that is, I ran mkfs on it)

  • I rebooted the system, and got ready to run a new backup to verify that my /mnt/backup file system was now happy, and was astonished to see that /mnt/backup was non-empty; moreover, it contained valid backup information from 2007!



It took me a while, but I figured out what was going on: it was due to the interaction of two behaviors, one fairly new (to me), and one extremely old:

  • My Ubuntu system (and, I think, most modern Linux systems) now uses UUIDs to identify file systems. I'm not exactly sure when this changed; I don't think that Dapper Drake behaved this way. This particular system was upgraded from Dapper, and that's probably when UUID-based filesystem identification was switched on, without my noticing.

  • My Ubuntu system (and, I think, most Linux systems) allowed me to use a non-empty directory as a mount point.



Due to the first behavior, the newly-formatted file system was not getting mounted at /mnt/backup, because its UUID had changed, and my /etc/fstab was still specifying the old UUID for my external drive. So Ubuntu had mounted my new disk at something like /media/my-new-uuid.
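
To make the mismatch concrete, here is roughly what it looked like; the UUIDs and the device name below are invented for illustration:


# /etc/fstab still named the pre-mkfs UUID, so this entry no longer matched anything:
UUID=1b2d3f40-aaaa-bbbb-cccc-111122223333  /mnt/backup  ext3  defaults  0  2

# Meanwhile, the re-formatted partition reported a brand-new UUID:
$ sudo blkid /dev/sdb1
/dev/sdb1: UUID="8e9f0a1b-dddd-eeee-ffff-444455556666" TYPE="ext3"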

Due to the second behavior, since I didn't have an external file system mounted at /mnt/backup, my system was happily using the old contents of the directory.

I think that, during the early development of my backup scripts, I had at some point run them without the external filesystem being mounted, so the scripts had happily written a backup into the real /mnt/backup directory. Then, later, I had arranged for the external filesystem to always be mounted, and so all subsequent backups were going to the external filesystem.

I didn't realize that I was mounting the disk on a non-empty directory, so I had no idea that my system had been operating for several years with all these hidden files, present in the underlying directory on the root filesystem but hidden by the mounted file system and thus effectively invisible (yes, the overall disk space usage on the root filesystem was higher than it should have been, but I wasn't paying close enough attention to notice that).
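
One trick for peeking beneath an active mount point is a bind mount, which gives you a second view of the root filesystem without the other filesystems mounted on top of it (directory name invented):


$ sudo mkdir /mnt/rootview
$ sudo mount --bind / /mnt/rootview
$ ls /mnt/rootview/mnt/backup    # shows the files hidden under the mount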

It seems like it would be a nice behavior if the mount command didn't allow mounting an external filesystem onto a non-empty directory mount point, but that's not the sort of thing that Unix systems tend to worry about. The user is always right, they say.

Friday, January 22, 2010

Good Java trivia quiz

This Java trivia quiz is actually quite good.

The first pdf link is to the trivia questions.

The second pdf link is to the answers.

I got about 60% of the answers correct. I completely nailed the JDBC section and the history section, and did OK on the JDK 7 question and the concurrency, IO, and collections questions, but completely bombed the Swing questions.

Thursday, January 21, 2010

The Secretary of State on Internet Freedom

Ms. Clinton's speech on Internet Freedom is now online here.

New releases in the web world

Seems like this has been a busy few weeks in the world of web development:



Meanwhile, although it's not really web-related, the proceedings are now online for last fall's LLVM/Clang developer's meeting. LLVM is now used for lots of things, with ActionScript/JavaScript being just one of the many worlds involved. Think you are doing some serious parallelism? Check out this release from a team that has used the LLVM architecture to build a program that runs on 120,000 processor cores! (Actually, according to these notes, that figure is now 50% higher, passing 180,000 processor cores!!!)

Wednesday, January 20, 2010

Checking items off the list

I've been spending a lot of time fixing bugs lately.

There's something very satisfying about fixing bugs. It really feels like you're getting something done, accomplishing something.

When I get into these moods, and feel like fixing a bunch of bugs, I get very methodical about it. I start compiling lists of bugs and reading through them. For each bug, I have a fairly rigid discipline:

  • Do I understand what the bug writer was describing?

  • Can I write a regression test which demonstrates this bug? (See the sketch after this list.)

  • With the test in place, can I see where it's going wrong? Or step through it in the debugger?

  • Does my fix make the regression test pass?

  • Do the other regression tests reveal any downsides to my fix?

  • Is there anybody I should show this fix to?

  • Check in the test and fix, mark the bug resolved, and move on.
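
As an illustration of the regression-test step, here is a minimal JUnit sketch. The bug and the code under test are made up for the example; what matters is the shape of the discipline: the test fails before the fix and passes after it.


import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A made-up bug report: splitting an empty string returned one token
// instead of zero.
public class Bug1234RegressionTest {

    // Stand-in for the code under test; the real fix would live in the
    // production source tree, not in the test.
    static int countTokens(String s) {
        if (s.isEmpty()) {
            return 0;               // the fix: handle the empty string
        }
        return s.split(",").length; // "".split(",") has length 1, hence the bug
    }

    @Test
    public void emptyInputYieldsZeroTokens() {
        assertEquals(0, countTokens(""));
    }
}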



On the various bodies of software that I work with, long lists of potential bugs have accumulated, so I can afford to be rather choosy when I get into these moods. If a particular bug report starts to act up (for example, if it's hard to write a regression test, if the obvious fix doesn't seem to work, or if other tests start to pitch a fit at my change), I just take a pass on that bug, put some notes in the bug report about what I tried and where I got stuck, and move on to the next.

In periods like this, my goal is to make a serious dent in the open bug list. I understand that I'm not necessarily fixing the most important bugs, or even the bugs which are most deserving of my time. I'm just trying to clear away the clutter and keep it from overwhelming me and the people around me.

So many bugs, so little time. It's just the way it is with software.

Tuesday, January 19, 2010

The Google/China dispute has stimulated many interesting discussions

I'm still quite confused about what is really going on with the Google/China dispute. If you haven't been paying much attention to the event, this article in the New York Times is a good place to start.

There's lots of interesting technology information to learn here, and though I'm no security expert, I do try to keep up with the basics, so I'm keeping busy reading various explanations of what went on.

But even more interesting, in my opinion, is the flood of fascinating articles that is starting to appear as people think more deeply about what Google is really trying to say, and about what its actions mean for the future of the Internet.

For example:


I'm sure there will be more articles and essays written in the near future, as people continue to think about these issues. What are your favorite essays on the topic? Send them my way!

Ant version 1.8 is nearing release

The Apache Ant developers have made a release candidate build of Ant 1.8 available for download.

As I looked at the recent work on Ant, I realized that I've now been using Ant as my primary build tool for almost a decade. I experimented with Ant 1.1 and wrote some of my first Ant build scripts using Ant 1.2, some time in the fall of 2000 or possibly the winter of 2001.

Periodically I've looked at some other build tools, but I keep sticking with Ant. Partly this is because I have a large body of existing Ant scripts, but partly it's because the other build tools I've seen haven't offered any huge breakthroughs. The most promising contender I saw was Rake. Is it still being actively developed and enhanced? I haven't looked at it in several years.

The most interesting Ant release was version 1.6, which completely changed Ant, taking it from a frustrating tool to a powerful and usable one, all because of two features:

  • macrodef

  • import


Before these features were added, it was very hard to modularize your Ant scripts and build them as independent re-usable build files, but with careful use of macrodef and import, it is certainly possible to build enormous systems while still retaining a reasonable amount of legibility and maintainability.
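
For example, here is a sketch (with made-up names) of the pattern: a shared file defines a macro, and each module's build file imports it and calls the macro as if it were a built-in task.


<!-- common.xml: shared definitions -->
<project name="common">
  <macrodef name="compile-module">
    <attribute name="srcdir"/>
    <attribute name="destdir"/>
    <sequential>
      <mkdir dir="@{destdir}"/>
      <javac srcdir="@{srcdir}" destdir="@{destdir}"/>
    </sequential>
  </macrodef>
</project>

<!-- build.xml: one module's build file -->
<project name="mymodule" default="compile">
  <import file="common.xml"/>
  <target name="compile">
    <compile-module srcdir="src" destdir="build/classes"/>
  </target>
</project>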

In my experience, there are three types of Ant scripts that you encounter "in the wild":

  • Small Ant scripts, generally Java-only, which can use most of Ant's default behaviors and are clear and simple. A lot of open source build scripts are this way.

  • Serious commercial Ant scripts written before macrodef and import became available. These are generally impossible to understand and evolve, and the reality is that a small cadre of Build Wizards keep them running. Such systems often involve a substantial number of custom Ant tasks.

  • Serious commercial Ant scripts written to use macrodef and import. In my experience, the need for custom Ant tasks drops way off with Ant releases post-1.6.


With a certain amount of effort, build scripts in the second category can be transformed into build scripts in the third category. Here's a nice article that explains a reasonable approach. I went through a similar effort at my day job in 2005-2006, and the result was well worth it.

I was recently looking at the Android developer kit's use of Ant scripts, and it was a pleasant surprise: your project's script is quite small and clear. An extremely clever custom Ant task provided by the developer kit runs at startup, and automatically generates your build on-the-fly by importing the android_rules definitions into your project.

I'd like to learn more about how the Android developer kit's custom Ant tasks work; is the source code for the Android developer kit itself available as open source? I should go look and see if I can figure out how that magic actually works.

Sunday, January 17, 2010

Border Songs by Jim Lynch

I read Jim Lynch's Border Songs at a fever clip (for me): cover to cover in 4 days.

It was a disturbing book for me; perhaps I identified a bit too closely with Norm, the crotchety old dairy farmer whose dreams all seem to be vanishing out of reach:


The boat had already swallowed eleven years. Eleven. Why hadn't anyone warned him about wasting what time remained on a project he'd never finish? He was thirty-five short, probably forty. And that left the cabin rough. Homey, isn't she, Jeanette? He glanced around his darkening farm. Could he blame Washington Mutual for balking at a loan? Who wanted to gamble on a snake-bit dairy? His eyes settled on the boat barn. A monument to his ego? No. To his incompetence? Probably. To his insanity? Definitely.


The book is beautiful, but tragic. Things go wrong, then worse, then just horribly wrong. At times, for long stretches, I was quite distressed, and wasn't sure if I wanted to finish the book.

But I did, and I'm glad I did, for it was a very enjoyable book with a wonderful ending.

And Lynch's writing is wonderful, fluid and spiritual:


A dozen barn swallows had gathered on the telephone and power lines looping from Sophie's house to Northwood. Another dozen were approaching from far north of the ditch, then an incoming cloud -- multiple clouds, actually -- that broke up as they neared the three lines, the birds spinning like ice skaters or stunt pilots before lining up side by side and carrying on in high, grating voices that sounded like glass marbles rubbing against one another. He tacked toward their temporary roost at a forty-five degree angle, the din of Sophie's party fading beneath the excited banter of the assembling acrobats. As the sagging lines filled up, they created the illusion that the weight of all these little birds was pulling the telephone poles toward each other and that the swallows were about to be launched from this flexed slingshot.


If you're looking for a book to read, and want something relevant, but a bit different, with a great set of characters, give Border Songs a try.

Friday, January 15, 2010

Browsing the list of available Android applications

I went to the Android Market to see if I could browse through the list of available Android applications.

I'd like to learn more about the applications that are currently available.

But the Android Market page says:

Check out our site for some of the more popular applications and games available in Android Market. For a comprehensive, up-to-date list of the thousands of titles that are available, you will need to view Android Market on a handset.


Why are only a few applications shown to ordinary web browsers? Is there a technical reason for this, or is this some sort of marketing or sales strategy thing?

Thursday, January 14, 2010

Nice success story regarding performance improvements in JIRA 4.0.1

JIRA is the Atlassian bug tracking system. I have a fair amount of experience with JIRA because it is widely used for Apache projects.

Here's a nice success story from the JIRA development team regarding how they investigated and overcame several performance problems in their most recent release.

Boiled down to its basics, the JIRA development team did several things right:

  • They had invested in a suite of performance tests

  • The performance tests detected the problem

  • The performance tests maintained a history, so they could get clues about when the problem arose (in their case, they narrowed it down to a single day!)

  • The performance tests compiled extensive statistics about the behavior of the system, so they could elaborate their theory with real data



Congratulations to the JIRA team! This is not an easy thing to do, and they must be quite pleased that their investment in these tests paid off.

At work, I am part of a team which builds and maintains a fairly similar set of automated performance regression tests. These tests are hard to write, and hard to maintain, so I am quite familiar with all the effort that went into a setup such as the one the JIRA team describes. Over the years we have successfully used our performance tests to catch and resolve a number of performance problems during development, before they reach our customers. Each time we get a success like that, it becomes slightly easier to persuade the company of the benefits and value of the performance test suite.
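
For what it's worth, the heart of such a suite can be quite simple; the hard parts are keeping the workloads representative and the results stable. Here is a minimal sketch of the record-a-history-and-compare-to-a-baseline idea, with invented file names and a stand-in workload:


import java.io.FileWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class PerfCheck {
    // Stand-in workload; a real suite would drive the system through a
    // representative scenario.
    static void runWorkload() throws InterruptedException {
        Thread.sleep(100);
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        runWorkload();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // Keep a dated history, so a regression can be pinned to a day.
        try (FileWriter history = new FileWriter("perf-history.csv", true)) {
            history.write(new java.util.Date() + "," + elapsedMillis + "\n");
        }

        // Compare against a stored baseline; flag anything 20% slower.
        List<String> baseline = Files.readAllLines(Paths.get("perf-baseline.txt"));
        long expected = Long.parseLong(baseline.get(0).trim());
        if (elapsedMillis > expected * 1.2) {
            System.err.println("PERF REGRESSION: " + elapsedMillis
                    + "ms vs. baseline " + expected + "ms");
        }
    }
}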

SEC has proposed new rules for regulating Flash Trading

I've been following the Flash Trading controversy for some time now. If you haven't heard about Flash Trading, it's a term for using very sophisticated ultra-high-speed computer stock trading algorithms in a fashion which, some people feel, is inappropriate. Flash Trading is part of what is sometimes referred to as "High Frequency Trading", and which also includes other activities such as Naked Trading and Dark Pools (what great names!).

Here are a few links with some more thoughts on Flash Trading.

According to Reuters, recent measurements indicate that naked high frequency trading now accounts for 38 percent of US equities trading.

Now it appears that the SEC is proposing to introduce new regulations governing the operation of such computer software. Here's their press release.

The press release is a bit confusing, and I'm not really sure that it is describing the same thing. It says, in part:


Through sponsored access, especially "unfiltered" or "naked" sponsored access arrangements, there is the potential that financial, regulatory and other risks associated with the placement of orders are not being appropriately managed. In particular, there is an increased likelihood that customers will enter erroneous orders as a result of computer malfunction or human error, fail to comply with various regulatory requirements, or breach a credit or capital limit.

The SEC's proposed rule would require broker-dealers to establish, document and maintain a system of risk management controls and supervisory procedures reasonably designed to manage the financial, regulatory and other risks related to its market access, including access on behalf of sponsored customers.


I'm mostly interested in this topic from the computer software point of view; this is clearly some extremely sophisticated software that is being used, and I'd love to learn more about how it works. It's possible to get hints about this by looking at things like job postings, but that's just a taste; it would be neat if somebody who actually knew what the software looked like would post an "architecture of a high frequency trading software system" article some day.

Wednesday, January 13, 2010

Google IO 2010 sessions list is now online

Google has announced many of the details about its upcoming 2010 developer conference, Google I/O. In particular, it has posted quite a bit of information about the sessions that are already on the agenda.

Last year's Google I/O was one of the hottest developer conferences on the planet, if not the hottest. Will Google be able to top that this year?

Tuesday, January 12, 2010

Why does sort have a -o flag?

Most Unix/Linux commands use the ancient convention of standard input, standard output, and standard error. In general, when a command needs input, it reads from standard input, and when a command produces output, it writes to standard output. The user can specify the source of the input and the target of the output using the redirection operators < and >, as in:


grep 'xxx' < /tmp/myFile.txt > /tmp/output.txt


Many commands follow this pattern: cat, grep, sed, awk, uniq, etc.

There are a few special commands which use the -o flag to specify where to put their output: cc, as, and ld in particular. Note that these commands:

  • produce binary output, which makes no sense to display in a terminal window (and, if you try, usually corrupts your terminal session)

  • write their output to a specially-named default file (a.out), not to stdout, when the -o flag is not specified



Given all this, the sort command is really weird, as it has an optional -o flag which specifies where to put the output, and if the -o flag is not specified sort sends its output to standard output.

I can't see any reason why sort has this flag, when the > redirection operator works just fine, and is used for this purpose in almost every other Unix command I know.

Does anybody know why sort has a -o flag, what purpose that flag serves that the output redirection operator doesn't serve, or whether there are any other Unix commands which follow this pattern?

Update: My co-worker Joe says that the sort command's implementation of -o is very special, because it is legal to specify the same file as both the output file and the input file. Apparently, when given the -o argument, sort writes all the output to a file with a temporary name (thus not overwriting the input during the sort), and then, at the end of the sort, renames the temporary file to the name given in the -o argument, allowing you to successfully sort a file back into itself. It's still not clear to me why sort is the only command which decided to have this behavior.
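
This also explains what -o buys you that redirection cannot: the shell opens and truncates the redirection target before the command ever runs, so sorting a file onto itself with > destroys the input before sort can read it. For example (file name invented):


$ sort data.txt > data.txt    # the shell truncates data.txt first; the data is lost
$ sort -o data.txt data.txt   # safe: sort reads all its input before writing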

Monday, January 11, 2010

Pain, suffering, grief, and lamentation

Since Android doesn't provide an implementation of javax.naming.*, but Derby expects it to be present, I thought maybe I could code up my own stub implementation and include it in my app.

So I tried an experiment in my Android developer kit, and provoked this; is this the greatest error message ever written?


Attempt to include a core class (java.* or javax.*) in something other than a core library. It is likely that you have attempted to include in an application the core library (or a part thereof) from a desktop virtual machine. This will most assuredly not work. At a minimum, it jeopardizes the compatibility of your app with future versions of the platform. It is also often of questionable legality.

If you really intend to build a core library -- which is only appropriate as part of creating a full virtual machine distribution, as opposed to compiling an application -- then use the "--core-library" option to suppress this error message.

If you go ahead and use "--core-library" but are in fact building an application, then be forewarned that your application will still fail to build or run, at some point. Please be prepared for angry customers who find, for example, that your application ceases to function once they upgrade their operating system. You will be to blame for this problem.

If you are legitimately using some code that happens to be in a core package, then the easiest safe alternative you have is to repackage that code. That is, move the classes in question into your own package namespace. This means that they will never be in conflict with core system classes. If you find that you cannot do this, then that is an indication that the path you are on will ultimately lead to pain, suffering, grief, and lamentation.


I've definitely encountered pain and suffering; is that grief on the horizon?

Saturday, January 9, 2010

Colony collapse disorder is still a puzzle

As the annual almond pollination season draws near, Wired magazine has a short article summarizing the latest information about colony collapse disorder.


So, come almond-tree flowering season, which begins in February, apiarists load up their hives on flatbeds and truck them to San Joaquin Valley. While this pilgrimage may be necessary to keep churning out cheap almonds, it also creates a melting pot of pathogens. And the moving and trucking itself could negatively impact the bees, too.


This puzzle is a hard one; scientists and ranchers have been studying it for 5 years now:


The most optimistic analysis is that, now that people are aware of the problem, they are at least working harder at taking proper care of the bees and keeping them alive.

Update: It's not just the bees who are suffering: here's an interesting story about butterfly population change.

Friday, January 8, 2010

Android development: trying to use derbyclient.jar, but failing because javax.naming is not present

I tried to take a fairly large next step in my Android development: I tried to use derbyclient.jar in my application.

The idea was that I wanted to write an Android application which created a JDBC connection to a Derby database served by the Derby network server on some other machine, and then I would use JDBC calls to access data from that Derby server over the network.

So, I:

  • Added derbyclient.jar to the libs directory of my sample application

  • Imported java.sql.DriverManager and java.sql.Connection into my HelloAndroid.java class

  • Tried to call DriverManager.getConnection
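
The call itself was nothing exotic; stripped of the Android scaffolding, it amounted to roughly this (the host and database names here are invented):


import java.sql.Connection;
import java.sql.DriverManager;

public class DerbyClientSketch {
    public static void main(String[] args) throws Exception {
        // Load the Derby client driver; the network server listens on
        // port 1527 by default.
        Class.forName("org.apache.derby.jdbc.ClientDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:derby://myserver:1527/sampledb");
        System.out.println("Connected: " + conn);
        conn.close();
    }
}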



My application crashed.


Sorry!

The application HelloAndroid (process com.example.helloandroid) has stopped unexpectedly. Please try again.


So I searched around for a while, found adb logcat, ran it, and redirected the output to a file so I could look through it.

In the logcat output, I found this:


I/dalvikvm( 348): Failed resolving Lorg/apache/derby/jdbc/ClientBaseDataSource;
interface 212 'Ljavax/naming/Referenceable;'


I see in the Android docs that, indeed, javax.naming is not listed as a supported package.

And, unfortunately, references to javax.naming exist in a number of places in the Derby source, so this is kind of a significant problem.

I'm not quite sure what to do next, but for now I posted this finding to DERBY-4458 since that seemed like a reasonable place to track the info.

We'll see if anybody posts any suggestions there.

Wednesday, January 6, 2010

Android development: creating an AVD and a project

The next step in learning about Android is to follow the HelloWorld tutorial. The first parts of this involve creating an AVD and creating a project.

An AVD is an Android Virtual Device. The Android SDK comes with an emulator, which can emulate a variety of actual Android devices on your development computer. So, I created a simple AVD for use in my HelloWorld example:


C:\bryan\src\android\learning>android create avd --target 1 --name simple_avd
Android 2.0.1 is a basic Android platform.
Do you wish to create a custom hardware profile [no]
Created AVD 'simple_avd' based on Android 2.0.1, with the following hardware config:
hw.lcd.density=160


I appear to have created something with a Liquid Crystal Display screen with a density of 160.

Then, I created a project in which to develop my HelloWorld code:


C:\bryan\src\android\learning>android create project --target 1 --name HelloAndroid --path ./HelloAndroid --activity HelloAndroid --package com.example.helloandroid
Created project directory: C:\bryan\src\android\learning\HelloAndroid
Created directory C:\bryan\src\android\learning\HelloAndroid\src\com\example\helloandroid
Added file C:\bryan\src\android\learning\HelloAndroid\src\com\example\helloandroid\HelloAndroid.java
Created directory C:\bryan\src\android\learning\HelloAndroid\res
Created directory C:\bryan\src\android\learning\HelloAndroid\bin
Created directory C:\bryan\src\android\learning\HelloAndroid\libs
Created directory C:\bryan\src\android\learning\HelloAndroid\res\values
Added file C:\bryan\src\android\learning\HelloAndroid\res\values\strings.xml
Created directory C:\bryan\src\android\learning\HelloAndroid\res\layout
Added file C:\bryan\src\android\learning\HelloAndroid\res\layout\main.xml
Created directory C:\bryan\src\android\learning\HelloAndroid\res\drawable-hdpi
Created directory C:\bryan\src\android\learning\HelloAndroid\res\drawable-mdpi
Created directory C:\bryan\src\android\learning\HelloAndroid\res\drawable-ldpi
Added file C:\bryan\src\android\learning\HelloAndroid\AndroidManifest.xml
Added file C:\bryan\src\android\learning\HelloAndroid\build.xml


There are places in the project for resources, binaries, and libraries. I'm not sure what the acronyms HDPI, MDPI, and LDPI stand for yet, but I'll guess that H=High, M=Medium, and L=Low (and that "dpi" is dots per inch, a measure of screen density).

A sample Java source file and an Ant build script are automatically generated. Following the instructions, I edited HelloAndroid.java as described, then ran ant debug to build my project, and ant install to install it into my emulator (oh yeah: I had to start the emulator first).
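
The edit itself is small; it amounts to roughly this (a sketch of the tutorial's version, which constructs a TextView by hand rather than using the generated layout):


package com.example.helloandroid;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloAndroid extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        tv.setText("Hello, Android");
        setContentView(tv);
    }
}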

Then, in the emulator, I saw my HelloAndroid app, and I clicked on it to run it, and it printed Hello, Android to the screen!

I have successfully completed the HelloWorld tutorial. Yay!

Tuesday, January 5, 2010

Android development: first steps

I've been interested in learning more about Android development, so I thought I'd try to act on that interest.

The first step is to go to the developer web site, and to set up the Android SDK. This involves:

  • Downloading the SDK

  • Configuring my machine by

    • Placing the Android tools directory in my PATH

    • Ensuring that my default JDK version is JDK 1.5 or higher


  • Bringing up the Android SDK Manager and adding two packages:

    • The documentation

    • A particular platform




Since I don't know much about the platform at this point, I chose the latest one, which is currently Android 2.0.1.

That's probably about all I'll manage to get done on this today.

The transition to 4K sectors on hard disk drives

I hadn't been paying very much attention to this recently, but it appears that hard disk drives are finally beginning to switch from 512-byte sectors to a larger sector size. This has been a long time coming; I can't remember when hard drives had anything other than 512-byte sectors, so it must have been a very long time ago :)

Here are two good articles to get you up to speed:

Monday, January 4, 2010

Ken Watts praises IDEA

I happened to be over on the IntelliJ site looking at some information about IDEA, and I happened to notice that the featured IDEA testimonial is currently from Ken Watts.

I think that their web site chooses the featured testimonial somewhat randomly, so to see Ken's testimonial you'll probably need to search for his name on the testimonials page.

I worked for Ken years ago and have nothing but praise for him, so I suppose this is sort of a testimonial for a testimonial-giver :)

AI in gaming

I'm back at work after an enjoyably-hectic multi-week holiday.

One of the things I did over my holiday was to play a computer game, which I haven't done in a while. This particular game was King's Bounty: The Legend. I've been enjoying this game immensely, as did my father. In fact, I've been enjoying it enough that I might consider giving Armored Princess a try, once it is released (is it already released?).

Anyway, one of the things that fascinates me about computer games is the programming challenge of implementing a computer opponent. I've followed the work in this area for many years, and have even taken a few tries at programming computer opponents myself, once in the context of a railroad strategy game called 1830, another time in the context of random dungeon creation algorithms in a NetHack-style dungeon game. Of course, this is a huge and deep field, filled with lots of fascinating sub-problems.

So it was a delight to stumble across this fascinating presentation by one of the game designers at Valve, discussing some of the ideas and concepts behind the automated computer opponents and automated computer team-mates in the Left 4 Dead game.

The game itself may not be your cup of tea (it's a shoot-em-up game with a horror/zombie theme), but the presentation is mostly about issues that translate well across a wide variety of games, such as how to model unpredictability, how to construct co-operating actors, and how to provide re-playability.

If you're at all interested in what goes into the underlying logic of providing computer player behaviors in a modern game, I'm sure you'll enjoy reading through the presentation.