Friday, July 31, 2009

Lake Sabrina backpacking

I haven't posted anything for a week; sorry about that. But I have a good excuse, as I was unplugged and unwinding at 11,000 feet in the Eastern Sierra Nevada mountains.

We took a 4-day, 3-night backpack trip out of the Lake Sabrina trailhead above Bishop. The trailhead is at 9,180 feet, our trail crested at 11,200 feet, and we spent most of our time at 10,975 foot Baboon Lake. Our trip route was similar to the trip marked on this map.

Although we had a few problems with mosquitos, the weather was nearly perfect, we had the lakes almost completely to ourselves, and the setting was as beautiful as any I've seen.

As a reward for enduring the mosquitos, we were treated to vistas of snowpacks, foaming and churning waterfalls, and meadows of gorgeous wildflowers.

See more pictures of the trip here.

And here's a slideshow, if you wish:

Tuesday, July 21, 2009

Perforce pure Java API

Perforce have released a 100% pure Java client-side API (in beta status).

I've used several of the other Perforce client APIs before, including the Ruby and Python APIs. Generally, these APIs work by invoking the Perforce command-line tool (p4) in "-ztag" mode, and then parsing the returned output.

Based on those ideas, I built my own small client-side Java API by spawning p4 from my own Java code, and parsing the output. It is simple and straightforward and works great. However, it is code that I have to maintain myself.

Unlike these other APIs, it appears that the new Java API is a complete implementation of the client-side Perforce networking protocol, so it speaks directly to the server without requiring the command-line tool to be installed and run by the API libraries.

I don't currently write many wrappers and tools which automate Perforce commands, and my current code is pretty stable, so there's no rush; but the next time I need to write any code like this, I will certainly investigate the Perforce P4Java API in more detail, as it looks quite nice.

Here's the javadoc.

For example, a common task that I do with my current code is to get a short description of a submitted changelist, and format it nicely for display in my UI. It seems like this would be quite straightforward in the new API, requiring little more than:

P4Server p4Svr = P4ServerFactory.getServer(...);
P4Changelist myCL = p4Svr.getChangelist(12345);
// ... access myCL.getDescription(), myCL.getFiles(), etc. ...
This is about as clean as I could possibly want; off the top of my head it looks ideal. Yay Perforce!

Java performance slides from Cliff Click

Cliff Click has posted the slides from his talks at the 2009 JavaOne conference.

If you aren't already familiar with Cliff's work, he's with Azul, the company which makes the custom servers for ultra-high-end Java applications, and he is deeply involved with Java performance issues, particularly those which involve multi-threaded systems.

This year's presentations from Cliff include a talk on modern hardware and a talk on Java benchmarking.

The hardware talk basically makes the point that single-processor performance has pretty well maxed out, and all the action is in making multiprocessor machines, and so the important questions are:
  • How well does application software use many CPUs?
  • Can the hardware guys provide an adequate memory subsystem ("memory is the new disk", says Cliff)?
The benchmarking slides are a great review of the problems of trying to design and run a decent Java benchmark.

This description of the typical performance cycle is all-too-true:
Typical Performance Tuning Cycle
> Benchmark X becomes popular
> Management tells Engineer: “Improve X's score!”
> Engineer does an in-depth study of X
> Decides optimization “Y” will help
● And Y is not broken for anybody
● Possibly helps some other program
> Implements & ships a JVM with “Y”
> Management announces score of “X” is now 2*X
> Users yawn in disbelief: “Y” does not help them
Also, given our discussion a few weeks ago about the odd sizing of application memory, it was interesting to read that Azul are running benchmarks with 350 Gb heaps.

Anyway, the slides are fascinating, even though (as is often the case) it is hard to read presentation slides without having the listener explain them to you. But they're well worth reading, so: Enjoy!

Thursday, July 16, 2009

Ubuntu boot speed rocks!

The boot-up speed of Ubuntu 9 is remarkable!

I have a variety of machines that I use regularly.

Several are RedHat Linux machines that reside in a machine closet. I leave those machines always-running; it's not uncommon for 9 months to elapse between reboots.

Many are Windows desktop and laptop machines. I shut these machines down routinely, but never happily, because these machines take 2, 5, sometimes 7 minutes or more to boot up. If I can avoid it, I try not to shut these machines down, because boot is so painful.

I have an old laptop (a Dell Latitude 610) which runs Ubuntu 9, after years of painfully running Windows XP.

This machine boots up at lightning speed! It gets to the login prompt in 6-7 seconds, and gets to the full Ubuntu desktop in about 5 more seconds.

I'm not sure how they accomplished this (though I recall reading a detailed article several years ago by a team which was focusing on improving startup speed, so I suspect the answer is simple: hard, sustained work), but I sure am happy that they did it!

Tuesday, July 14, 2009

To String.intern, or not to intern?

I don't have a lot of hands-on experience with String.intern.

This function has been around for a long time, but I recently started thinking about it as a possible tool for controlling memory usage. As you'll see, the Sun documentation describes the function as a tool for altering the behavior of String comparisons:

Returns a canonical representation for the string object.

A pool of strings, initially empty, is maintained privately by the class String.

When the intern method is invoked, if the pool already contains a string equal to this String object as determined by the equals(Object) method, then the string from the pool is returned. Otherwise, this String object is added to the pool and a reference to this String object is returned.

It follows that for any two strings s and t, s.intern() == t.intern() is true if and only if s.equals(t) is true.

All literal strings and string-valued constant expressions are interned. String literals are defined in §3.10.5 of the Java Language Specification.
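The pool contract quoted above is what makes == comparisons work after interning; here's a tiny self-contained demonstration (nothing application-specific, just java.lang.String):

```java
public class InternDemo {
    public static void main(String[] args) {
        // Two distinct String objects with equal contents.
        String a = new String("hello");
        String b = new String("hello");

        System.out.println(a.equals(b));              // true:  same contents
        System.out.println(a == b);                   // false: different objects
        System.out.println(a.intern() == b.intern()); // true:  both map to the pooled copy
        System.out.println(a.intern() == "hello");    // true:  literals are pre-interned
    }
}
```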

In my particular case, we maintain a large cache of object graphs, where the object data is retrieved from a database. Furthermore, it so happens that these object graphs contain a large number of strings which are used and re-used quite frequently.

I was recently pawing through an enormous memory dump, skimming the list of all the active String objects, and I was struck by how much duplication there was; that made me wonder whether or not we were using String.intern appropriately.

So I did some research, and found several quite interesting essays on the topic.

My reaction so far is that:
  • Yes, it looks like String intern'ing could really help.
  • Unfortunately, the need to potentially configure PermGen space is a bummer.
  • And, it seems important to have a really good handle on what strings are worth interning. Too few, and I've just changed a bunch of code to no real effect. Too many, and I've exchanged a memory waste problem for a PermGen configuration problem, plus possibly burdened the VM by making it do more work on allocations for little gain.
In general, given my vague understanding of the state of the art in JVMs nowadays, it seems like the JVM teams are working on making memory allocation fast and cheap.

And, as we've discussed previously in this blog, memory is becoming cheap and widely available.

So, it doesn't seem to be immediately obvious that intern'ing will be worth it, because in general it seems like a bad strategy to be asking the CPU to be doing more work in order to conserve memory, unless we have a strong reason to believe that we have a lot of memory duplication and the memory savings are either
  • so substantial that they will outweigh the extra expense and hassle of managing the intern pool, or
  • so substantial that the conservation of that much memory will open up a broad new range of applications for the code (e.g., we can now handle some problem sizes that were just way too large for us to handle without interning).
So I think that for now I will read some more, and think about this some more, but I'm not going to race to start planting a lot of intern calls in the code.

Are there profiling features that look at a benchmark underway, and analyze whether or not interning would have been useful?

Threads for JavaScript!

I recently upgraded to Firefox 3.5, and was browsing the release notes.

In the release notes, there was a quiet reference to the new WebWorkers feature:
Support for native JSON, and web worker threads.
Somehow, I just skimmed over this, but then a separate posting woke me up and made me pay more attention:

Web Workers, which were recommended by the WHATWG, were introduced in Firefox 3.5 to add concurrency to JavaScript applications without also introducing the problems associated with multithreaded programs. Starting a worker is easy - just use the new Worker interface.
Reading a bit of background material, it seems as though the Web Workers feature was partially added to previous releases of Firefox, but wasn't quite ready for prime time. Now it seems that the Bespin guys have been actively using this, and have proven it out, and so it's become a Real Feature of Firefox 3.5.

The master documentation at the WHATWG is quite thorough, and contains a lot of examples, including:

  • The simplest use of workers is for performing a computationally expensive task without interrupting the user interface. In this example, the main document spawns a worker to (naïvely) compute prime numbers, and progressively displays the most recently found prime number.
  • In this example, the main document spawns a worker whose only task is to listen for notifications from the server, and, when appropriate, either add or remove data from the client-side database.
  • In this example, the main document uses two workers, one for fetching stock updates at regular intervals, and one for performing search queries that the user requests.
  • In this example, multiple windows (viewers) can be opened that are all viewing the same map. All the windows share the same map information, with a single worker coordinating all the viewers. Each viewer can move around independently, but if they set any data on the map, all the viewers are updated.
  • With multicore CPUs becoming prevalent, one way to obtain better performance is to split computationally expensive tasks amongst multiple workers. In this example, a computationally expensive task that is to be performed for every number from 1 to 10,000,000 is farmed out to ten subworkers.
  • An example that offloads all the crypto work onto subworkers.

Apparently, this support is part of Thunderbird as well. Active background JavaScript threading in my email reader! Zounds!

Sunday, July 12, 2009

Fakes, Mocks, and Stubs

The other day, I was listening to Roy Osherove on Scott Hanselman's podcast, Hanselminutes, and I really liked the way that Roy described the difference between Fakes, Mocks, and Stubs.

As I heard it (which of course, may not be the way Roy intended it to be heard), it is something like the following:

  • Fakes are the various bits of test-infrastructure that you end up piecing together in order to write decent unit tests. Any code that is written solely to support testing falls into this category. There are lots of kinds of fakes, among which are Stubs, and Mocks.
  • Stubs are fakes which have no behavior, or which at best have trivial behavior. Stubs are simple-minded, and have no logic and make no decisions, simply providing an interface to compile/load/run with.
  • Mocks are fakes which verify the correct behavior of the test. That is, you can assert against mocks; mocks are part of the deciding about whether the test passed or failed.
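To make that distinction concrete, here's a small Java illustration (all the names are invented for the example): the stub exists only so the code under test can run, while the mock records what happened so the test's pass/fail decision can rest on it.

```java
import java.util.ArrayList;
import java.util.List;

// The production dependency we want to replace during testing.
interface MailService {
    void send(String to, String body);
}

// A stub: no logic, no decisions; just an interface to compile and run with.
class StubMailService implements MailService {
    public void send(String to, String body) { /* deliberately does nothing */ }
}

// A mock: records calls, so the test can assert against it afterwards.
class MockMailService implements MailService {
    final List<String> recipients = new ArrayList<String>();
    public void send(String to, String body) { recipients.add(to); }
    boolean verifySentTo(String expected) { return recipients.contains(expected); }
}

// The code under test, which needs some MailService to do its job.
class OrderProcessor {
    private final MailService mail;
    OrderProcessor(MailService mail) { this.mail = mail; }
    void confirm(String customer) { mail.send(customer, "Your order shipped."); }
}

public class MockVsStub {
    public static void main(String[] args) {
        // With the stub, we merely get through confirm() without a real mailer.
        new OrderProcessor(new StubMailService()).confirm("anyone@example.com");

        // With the mock, we assert on what the code under test actually did.
        MockMailService mock = new MockMailService();
        new OrderProcessor(mock).confirm("alice@example.com");
        System.out.println(mock.verifySentTo("alice@example.com")); // true
    }
}
```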
Here's a link to Osherove's blog posting discussing this in more detail, with a bit of a diagram. In his post, he refers to Martin Fowler's longer and more detailed essay about Mocks and Stubs. Fowler, in turn, describes the various types of testing objects as follows, and cites Gerard Meszaros for the origin of this terminology:

The vocabulary for talking about this soon gets messy - all sorts of words are used: stub, mock, fake, dummy. For this article I'm going to follow the vocabulary of Gerard Meszaros's book. It's not what everyone uses, but I think it's a good vocabulary and since it's my essay I get to pick which words to use.

Meszaros uses the term Test Double as the generic term for any kind of pretend object used in place of a real object for testing purposes. The name comes from the notion of a Stunt Double in movies. (One of his aims was to avoid using any name that was already widely used.) Meszaros then defined four particular kinds of double:

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
  • Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
Of these kinds of doubles, only mocks insist upon behavior verification. The other doubles can, and usually do, use state verification. Mocks actually do behave like other doubles during the exercise phase, as they need to make the SUT believe it's talking with its real collaborators - but mocks differ in the setup and the verification phases.
There is an enormous amount of additional information available, including several quite readable papers about the Mock Objects philosophy of testing.

Friday, July 10, 2009

Is it time to start learning Scala?

Generally, by the time I get interested in something, I'm about the last person on the block. It's sort of a variant on that old saw about not trusting people over 30; I'm that guy who jumps on the bandwagon long after it's been rolling down the street...

Anyway, I've been a fan of Bill Venners's Artima website for a long, long time. I've subscribed to his newsletter for over a decade and I read the articles and interviews regularly.

And it's no secret that Bill Venners has been a big fan of Scala, the alternate JVM programming language developed by Martin Odersky. Up til now, I'd been sort of skimming the Scala articles, not quite paying enough attention, but definitely aware that something was going on.

Then, somewhat serendipitously, 3 events occurred almost simultaneously to alert my sensors and make me wonder if I'm not quite paying enough attention to Scala:

  • First, there's the James Strachan blog post that everybody's been talking about. Strachan is the inventor of Groovy, another alternate JVM programming language, and one which has found a certain amount of favor in some of the circles I run in. So Strachan definitely has some dynamic programming language props, and when he declares

    Though my tip though for the long term replacement of javac is Scala. I'm very impressed with it! I can honestly say if someone had shown me the Programming in Scala book by Martin Odersky, Lex Spoon & Bill Venners back in 2003 I'd probably have never created Groovy.

    you just have to sit up and pay attention. And the world did pay attention, as Strachan has already followed up with another detailed post, and there's an avalanche of comments to wade through.

  • Secondly, the most recent issue of the Artima newsletter had a truly great article on the intricacies of overriding Object.equals(). This is, of course, one of the great Java interview questions of all time, and if you've read your Joshua Bloch you can probably muddle your way through it, but this article was superb, covering the subject deeply, well, and with thorough examples. So what? Well, the article was written by Odersky, Spoon, and Venners.
  • Lastly, just while all this stuff was sort of leaking through my psyche, I happened across this note on Weiqi Gao's web log, reflecting on a presentation by Kevin Nilson:

    Is Scala for real? Kevin told us it's the hottest thing in Silicon Valley. Mark and I still have some lingering doubts. Mark is focusing on something called persistent data structures. I'm more of a Luddite, fearing the years of learning that I have to go through to be proficient.

There's some sort of superstition about things that arrive in threes. Since I'm a sailor, I'm vulnerable to these superstitious behaviors, so there it is.

I guess I'd better go start at the beginning.

Thursday, July 9, 2009

Data Deduplication

Apparently "Data Deduplication" is a New Hot Thing.

I find "deduplication" a very awkward word, but from what I can tell, it refers to software systems which can automatically detect redundant information and consolidate it.

That is, it's a form of compression.
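As a toy sketch of the idea (deliberately simplistic, and nothing like what a real product does): carve the data into fixed-size blocks, hash each block, and store each distinct block only once, so repeated blocks collapse onto a single copy.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class DedupSketch {
    static final int BLOCK = 4; // absurdly small, purely for illustration

    // Stores data block-by-block, keyed by content hash; duplicate blocks
    // collapse onto a single entry. Returns the number of unique blocks kept.
    static int store(byte[] data, Map<String, byte[]> blockStore) {
        MessageDigest md;
        try {
            md = MessageDigest.getInstance("SHA-1");
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-1 is present on every JVM
        }
        for (int off = 0; off < data.length; off += BLOCK) {
            byte[] block = Arrays.copyOfRange(data, off, Math.min(off + BLOCK, data.length));
            String key = new java.math.BigInteger(1, md.digest(block)).toString(16);
            blockStore.put(key, block); // re-storing an identical block is free
        }
        return blockStore.size();
    }

    public static void main(String[] args) {
        // 16 bytes = 4 blocks, but "ABCD" appears 3 times: only 2 blocks are kept.
        int unique = store("ABCDABCDABCDXYZ!".getBytes(), new HashMap<String, byte[]>());
        System.out.println(unique); // 2
    }
}
```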

I started paying attention to it when EMC and NetApp engaged in a takeover battle to purchase Data Domain, which is apparently a big player in this field. EMC won the takeover battle this week.

Here's a Wikipedia page about Data Deduplication.

I guess I'm kind of surprised that this technique is more successful than simple compression; NTFS has been able to compress a file system automatically for well over a decade, I believe. And Microsoft recently added a very sophisticated automatic compression feature to SQL Server 2008; it supports both row compression and page compression.

But apparently this new Data Deduplication technology is attracting a lot of interest; here's a recent Usenix article about extending the technology to clustered file systems, from a team at VMWare. The article points to work done about 10 years ago, in the context of backup/archiving processing, under the project named Venti.

Robin Harris at StorageMojo speculates that NetApp might decide to go after Quantum, now that they lost out on Data Domain. Apparently Quantum was an early leader in Data Deduplication technology. I wonder if my old friend Nick Burke still works at Quantum?

There are many new things under the sun, and now I'll start paying more attention to Data Deduplication (once I train my fingers to start spelling it properly -- ugh what a mouthful!)

Wednesday, July 8, 2009

Google Chrome OS looks intriguing

Everybody's talking about the new Google Chrome OS.

There isn't a lot of information about it. As Google's post says, they are just describing the vision now, and will roll out the details later. I think that's fair, and we should hold our criticism until more information is known.

But the initial blog post is definitely exciting. First, their description of the overall architecture:

The software architecture is simple — Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies.

I'm a little worried about this "new windowing system" part, but the rest of the description seems great.

And secondly, their description of the desired user experience:

People want to get to their email instantly, without wasting time waiting for their computers to boot and browsers to start up. They want their computers to always run as fast as when they first bought them. They want their data to be accessible to them wherever they are and not have to worry about losing their computer or forgetting to back up files.

I think this, too, is very true. I don't personally experience the "always run as fast as when they first bought them" problem, but I agree with the general observation that each additional piece of software that gets installed locally reduces the overall reliability and stability of the machine, so keeping that to a minimum is a big step toward improving the user experience.

Wired's web site has some interesting articles with analysis and questions.

The place where I have the most skepticism, though, is this notion that browser-based applications don't require much computing power. Wired claims that

Chrome OS is designed to run on low-powered Atom and ARM processors, and web-based applications don’t require that much horsepower on the client end so it should be faster still

I am quite unconvinced of this. Yes, many low-end web applications place only trivial usage requirements on the browser host, but the sort of serious desktop-replacement web applications that are being discussed here:

  • word processors
  • databases
  • spreadsheets
  • animations
and so forth, place serious processing demands on the browser.

But it's still a fundamentally great concept, and I'm looking forward to learning more about ChromeOS in the near future and getting access to more technical information soon.

UPDATE: Of course, Google are not the only company understanding that modern browsers are essentially entire operating systems of themselves. Microsoft is doing lots of work in this area as well.

Monday, July 6, 2009

Assignability and parameterized collections

I was involved in a wide-ranging discussion of interfaces-vs-implementations, classes-and-sub-classes, Liskov substitutability, covariance, and what-not, with a colleague, when I was surprised by a Java behavior.

To explain what was surprising to me, consider the following program. To understand this program, you probably need to know that class String extends class Object, and implements the interface Comparable.

import java.util.List;
import java.util.ArrayList;

public class parm {
    public static void main(String[] args) {
        Object object = new Object();
        String string = new String();
        Comparable comparable = string;
        String[] stringArray = new String[1];
        Object[] objectArray = new Object[1];
        List<String> stringList = new ArrayList<String>();
        List<Object> objectList = new ArrayList<Object>();
        ArrayList<String> stringArrayList = new ArrayList<String>();
        ArrayList<Object> objectArrayList = new ArrayList<Object>();

        if (object.getClass().isAssignableFrom(string.getClass()))
            System.out.println("Object = String is OK");
        if (string.getClass().isAssignableFrom(object.getClass()))
            System.out.println("String = Object is OK");
        if (comparable.getClass().isAssignableFrom(string.getClass()))
            System.out.println("Comparable = String is OK");
        if (objectArray.getClass().isAssignableFrom(stringArray.getClass()))
            System.out.println("Object[] = String[] is OK");
        if (stringArray.getClass().isAssignableFrom(objectArray.getClass()))
            System.out.println("String[] = Object[] is OK");
        if (objectList.getClass().isAssignableFrom(stringList.getClass()))
            System.out.println("List<Object> = List<String> is OK");
        if (stringList.getClass().isAssignableFrom(objectList.getClass()))
            System.out.println("List<String> = List<Object> is OK");
        if (objectArrayList.getClass().isAssignableFrom(stringArrayList.getClass()))
            System.out.println("ArrayList<Object> = ArrayList<String> is OK");
        if (stringArrayList.getClass().isAssignableFrom(objectArrayList.getClass()))
            System.out.println("ArrayList<String> = ArrayList<Object> is OK");

        objectArray = stringArray;
    }
}

Before you compile and run the above program, try to guess what it's going to print out.

Well, I'm going to spoil it for you. Here's what it prints out:

Object = String is OK
Comparable = String is OK
Object[] = String[] is OK
List<Object> = List<String> is OK
List<String> = List<Object> is OK
ArrayList<Object> = ArrayList<String> is OK
ArrayList<String> = ArrayList<Object> is OK

The first few lines of output make sense; they correspond quite well to my simple-minded understanding of the Liskov substitution principle.

And the substitution principle (also often called the IS-A principle), seems to hold for simple Java arrays, too. A String array can be assigned to an Object array, but not vice versa.

However, when we get to parameterized collection types, things start to go a little funky. All of a sudden the isAssignableFrom method seems to disregard the concrete type argument (<String> or <Object>) and just returns true, indicating that the various parameterized list types should all be assignment compatible.

But they are not!

If you try to compile the following code:

stringList = new ArrayList<Object>();
objectList = new ArrayList<String>();
stringArrayList = new ArrayList<Object>();
objectArrayList = new ArrayList<String>();

You get:

incompatible types
found   : java.util.ArrayList<java.lang.Object>
required: java.util.ArrayList<java.lang.String>
    stringArrayList = new ArrayList<Object>();
                      ^
incompatible types
found   : java.util.ArrayList<java.lang.String>
required: java.util.ArrayList<java.lang.Object>
    objectArrayList = new ArrayList<String>();

So what is going on here? Are the types assignable, or not?

After thinking about this for a while, I find that I can't really explain the behavior of the Class.isAssignableFrom method on parameterized types, but I do think that the behavior of the parameterized types is reasonable.

A List<String> is not a sub-type of a List<Object>, because a List<Object> is a type which allows an instance of any type of object to be added to it, while a List<String> is a type which only allows instances of type String to be added to it. So the String list does not implement the same contract as the Object list, and so cannot be considered a sub-type of it, and cannot be casually assigned without a cast.

So I think that the Java compiler is doing something reasonable here, but it sure is complicated.
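For what it's worth, one detail at least makes the isAssignableFrom results less mysterious: getClass() runs after type erasure, so both lists hand back the very same Class object, and the method never even sees the <String> or <Object> part. A few lines demonstrate this:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> stringList = new ArrayList<String>();
        List<Object> objectList = new ArrayList<Object>();

        // At runtime both are plain ArrayList; the type argument has been erased.
        System.out.println(stringList.getClass() == objectList.getClass()); // true
        System.out.println(stringList.getClass().getName()); // java.util.ArrayList

        // So isAssignableFrom is comparing ArrayList against ArrayList, which
        // is trivially true in both directions; it never sees <String> or <Object>.
        System.out.println(objectList.getClass().isAssignableFrom(stringList.getClass())); // true
    }
}
```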

Here's a nice short article that tries to explain the behavior in clear terms.

And, the next time you are at a cocktail party, and want to wow-em, you can casually say:

In Java, parameterized types are not covariant.

And just watch everybody's jaw drop with respect :)

Sunday, July 5, 2009

Which change caused this test failure?

A test failed this weekend.

We hadn't been running our tests as often as we should, so it wasn't immediately clear what had caused the test failure. The best that the Build Farm could tell us was that it had happened in the past few days, and that it was one of about 15 changes to the code that we had made during that time.

So, I used a feature of the Build Farm, and submitted 2 builds:
  • One build against Perforce changelist 122438, which was about 5 changelists back from the head of the code,
  • One build against Perforce changelist 122401, which was about 10 changelists back from the head of the code.
After a couple hours, the Build Farm came back with the results: the test passed in both builds.

This told me that the change was in the last 5 changelists. One more build and we were able to identify the precise changelist which caused the test to fail.

Some methodologies perform a complete build after every single change, but for us that's very hard because complete test runs take hours or days.

So we run our build and test runs less frequently, which leaves us exposed to the problem of trying to figure out which change caused a particular test failure.

Sometimes it's obvious, but when it's not, as long as we are careful about submitting small, focused changes in each changelist, Perforce can help us figure out which change caused the test failure.
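The narrowing procedure above is just a binary search over changelist numbers. Here's a sketch of that loop; the BuildCheck interface and the numbers in it are made up, standing in for "submit a Build Farm build at this changelist and see whether the test passes":

```java
public class ChangelistBisect {
    // A stand-in for "submit a build at this changelist and run the test".
    interface BuildCheck {
        boolean testPasses(int changelist);
    }

    // lastGood is known to pass and firstBad is known to fail; returns the
    // first changelist at which the test fails.
    static int findBreakingChange(int lastGood, int firstBad, BuildCheck farm) {
        while (firstBad - lastGood > 1) {
            int mid = lastGood + (firstBad - lastGood) / 2;
            if (farm.testPasses(mid))
                lastGood = mid;   // the failure was introduced later
            else
                firstBad = mid;   // the failure was introduced here or earlier
        }
        return firstBad;
    }

    public static void main(String[] args) {
        // Hypothetical history: everything from changelist 122440 onward fails.
        BuildCheck farm = new BuildCheck() {
            public boolean testPasses(int cl) { return cl < 122440; }
        };
        System.out.println(findBreakingChange(122401, 122455, farm)); // 122440
    }
}
```

Each round of builds halves the suspect range, which is why two builds were enough to shrink 15 changelists down to 5, and one more pinpointed the culprit.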

Friday, July 3, 2009

DERBY-4187 is in the trunk

I've committed DERBY-4187 to the Derby trunk.

This patch was contributed by Eranda Sooriyabandara, who is interning with Derby as part of the Google Summer of Code. Eranda has been working on converting Derby's old test suites to modern JUnit techniques, as well as fixing bugs in Derby. This is the first of Eranda's patches to be committed to the Derby trunk, I believe; it was a large test and quite complicated to convert.

Hopefully we'll have several more patches from Eranda before the summer is complete!

Wednesday, July 1, 2009

The peculiar plateauing of memory sizes

I was struck by a recent interview with Gil Tene of Azul posted on Artima. In the interview they discuss the surprising observation that

the practical size of a single [application] instance hasn't changed since about 2000.

I think this is very interesting; I've seen very similar results, and wondered the same thing. Certainly their observations about the progress in server memory capacity seems true; witness this recent announcement of a machine from Cisco that supports 384 Gb of physical memory on a single server.

The Artima interview suggests that one stumbling block for such large applications involves the fact that many modern applications are written using modern VM-based languages, such as Java or .NET, and that many of these virtual machines don't handle very large memory sizes well:

We know that most VMs tend to work well with one or two gigabytes, and when you go above that, they don't immediately break, but you end up with a lot of tuning, and at some point you just run out of tuning options.

I don't have a lot of experience with trying to run JVMs at that size; our in-house testing systems rarely have more than 4 Gb of physical memory, so I've mostly built test rigs by combining several smaller nodes, rather than exploring the scaling behaviors of a single large node.

I suspect that, in addition to the basic JVM problems, it is also true that applications aren't generally coded well to handle enormous amounts of physical memory. Many Java programmers haven't thought much about how to build in-memory data structures that can scale to stupendous sizes. When your HashMap has only a few tens of thousands of instances, you can survive the occasional inefficiency of, for example, opening an Iterator across the entire map to search for an object via something other than the hash key, but if your collections have millions of objects, your application will slow to a crawl.
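For instance (a made-up example, not code from any real application): if lookups by a non-key field are frequent, it's worth spending some memory on a second HashMap that serves as an index, instead of iterating the whole collection on every lookup:

```java
import java.util.HashMap;
import java.util.Map;

public class IndexedCache {
    static class User {
        final int id;
        final String email;
        User(int id, String email) { this.id = id; this.email = email; }
    }

    private final Map<Integer, User> byId = new HashMap<Integer, User>();
    // A secondary index: costs some memory, but turns an O(n) scan into O(1).
    private final Map<String, User> byEmail = new HashMap<String, User>();

    void put(User u) {
        byId.put(u.id, u);
        byEmail.put(u.email, u); // must be maintained alongside the primary map
    }

    // O(1), instead of opening an Iterator over byId.values() to find a match.
    User findByEmail(String email) {
        return byEmail.get(email);
    }

    public static void main(String[] args) {
        IndexedCache cache = new IndexedCache();
        cache.put(new User(1, "alice@example.com"));
        System.out.println(cache.findByEmail("alice@example.com").id); // 1
    }
}
```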

As a counter-example of the Artima article's claim, however, let me point to the Perforce server. Our internal Perforce server runs on a dedicated machine with 32 Gb of real memory, and Perforce does an excellent job of using that memory efficiently. Over the last decade, we have upgraded our Perforce environment from a 8 Gb machine, to a 16 Gb machine, to the current 32 Gb machine, and Perforce's server software has automatically adapted to the memory in a trouble-free fashion, using the resources effectively and efficiently.

At the time, I thought that was a pretty big machine, but of course Google has been running Perforce on a machine with 128 Gb of physical memory since 2006, and they're probably on a larger machine now, and VMWare have been running Perforce on a machine with 256 Gb of physical memory since 2008 (in VMWare's case, they also give their machine solid-state disk since they apparently couldn't give it enough physical memory).

I seem to recall reading that Google had to build their own physical hardware for their Perforce server, as at that time you couldn't buy a machine with that much memory from a standard vendor. I wonder if this is still true; I just wandered over to HP's web site to see what their systems look like, and it seems like their servers generally max out at 128 Gb of physical memory. However, Sun seem to be advertising a system that can handle up to 2 Terabytes of physical memory, so obviously such systems exist.

Of course, nowadays big companies don't even think about individual computers anymore :)