As Welsh observes, measuring mobile web performance is tricky because the experience will:
depend on what the user is trying to do. Someone trying to check a sports score or weather report only needs limited information from the page they are trying to visit. Someone making a restaurant reservation or buying an airline ticket will require a confirmation that the action was complete before they are satisfied. In most cases, users are going to care most about the "main content" of a page and not things like ads and auxiliary material.
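That "main content" idea has a concrete counterpart in today's browsers: Chromium-based browsers expose Largest Contentful Paint entries through the standard `PerformanceObserver` API, which approximates when the biggest piece of content finished rendering. Here's a minimal sketch (the `pickLatest` helper name is mine, not from the article):

```javascript
// Sketch: approximate "main content" render time via Largest Contentful Paint.
// The final LCP entry reported is the last candidate for the largest element,
// so its startTime is the best approximation of main-content render time.
// `pickLatest` is a hypothetical helper, named here for illustration.
function pickLatest(entries) {
  return entries.length ? entries[entries.length - 1].startTime : null;
}

// Only observe where the browser actually supports this entry type.
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes &&
    PerformanceObserver.supportedEntryTypes.includes("largest-contentful-paint")) {
  const observer = new PerformanceObserver((list) => {
    const t = pickLatest(list.getEntries());
    if (t !== null) {
      console.log("largest-contentful-paint at " + t.toFixed(0) + " ms");
    }
  });
  observer.observe({ type: "largest-contentful-paint", buffered: true });
}
```

Note that this still sidesteps Welsh's harder point: a metric like this says nothing about whether a reservation was confirmed or a ticket purchased, only when pixels landed.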
He then covers a variety of efforts that are underway to make these measurements more concrete and precise, including the tools at http://www.webpagetest.org, with their thorough and detailed documentation, and this great presentation from an O'Reilly conference last spring.
I'm enough of a dinosaur to remember the "bad old days" of performance benchmarking, when the whole field was so volatile and commercialized and fraught with acrimony and hostility that Jim Gray had to publish his seminal database benchmarking paper anonymously.
It's great to see the Google team being so open with their work, sharing not only their results but, more importantly, their techniques, methods, and reasoning. Thinking about performance is hard, and studying how a team goes about considering the problem (defining metrics, establishing benchmarks, compiling results, analyzing findings) is incredibly helpful for becoming a better performance analyst in any area of computer science.
So thanks, Googlers, and please keep on sharing what you're finding!