Friday, June 15, 2012

Stuff I'm reading on a Friday afternoon

A variety of stuff, in a variety of areas (what else did you expect?)

  • It's been a few weeks now since the great LinkedIn password disaster. You might be sick of reading about it, but these particular essays are worth your time:
    • Brian Krebs talks to Thomas Ptacek about password security: How Companies Can Beef Up Password Security
      The difference between a cryptographic hash and a password storage hash is that a cryptographic hash is designed to be very, very fast. And it has to be, because it’s designed to be used in things like IPsec. On a packet-by-packet basis, every time a packet hits an Ethernet card, these are things that have to run fast enough to add no discernible latencies to traffic going through Internet routers and things like that. And so the core design goal for cryptographic hashes is to make them lightning fast.

      Well, that’s the opposite of what you want with a password hash. You want a password hash to be very slow. The reason for that is a normal user logs in once or twice a day if that — maybe they mistype their password, and have to log in twice or whatever. But in most cases, there are very few interactions the normal user has with a web site with a password hash. Very little of the overhead in running a Web application comes from your password hashing. But if you think about what an attacker has to do, they have a file full of hashes, and they have to try zillions of password combinations against every one of those hashes. For them, if you make a password hash take longer, that’s murder on them.
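      The asymmetry Ptacek describes is easy to see with Python's standard library. Here's a minimal sketch (mine, not from the interview; the 200,000-iteration count is an arbitrary assumption) timing one fast hash against one deliberately slow one:

        import hashlib, os, time

        password = b"correct horse battery staple"
        salt = os.urandom(16)

        t0 = time.perf_counter()
        hashlib.sha1(password).hexdigest()   # a fast cryptographic hash (unsalted, as LinkedIn used)
        fast = time.perf_counter() - t0

        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)   # a deliberately slow password hash
        slow = time.perf_counter() - t0

        print(f"SHA-1 took  {fast * 1e6:9.1f} microseconds")
        print(f"PBKDF2 took {slow * 1e3:9.1f} milliseconds")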

    • Steven Bellovin points out some counter-intuitive aspects of the LinkedIn compromise: Password Leaks
      There's another ironic point here. Once you log in to a web site, it typically notes that fact in a cookie which will serve as the authenticator for future visits to that site. Using cookies in that way has often been criticized as opening people up to all sorts of attacks, including cross-site scripting and session hijacking. But if you do have a valid login cookie and hence don't have to reenter your password, you're safer when visiting a compromised site.
    • Patrick Nielsen has a great two-part series of posts in which he discusses some of the differences between cryptographic hashes and password hashes:
      If you create a digest of a password, then create a digest of the digest, and a digest of that digest, and a digest of that digest, you've made a digest that is the result of four iterations of the hash function. You can no longer create a digest from the password and compare it to the iterated digest, since that is the digest of the third digest, and the third digest is the digest of the second digest. To compare passwords, you have to run the same number of iterations, then compare against the fourth digest. This is called stretching.
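      Here's a tiny sketch of the stretching Nielsen describes, with four iterations as in his example (the code is mine, not his, and a real scheme would salt the password as well):

        import hashlib

        def stretch(password: bytes, iterations: int = 4) -> bytes:
            digest = password
            for _ in range(iterations):
                digest = hashlib.sha256(digest).digest()   # each digest feeds the next
            return digest

        stored = stretch(b"hunter2")
        assert stretch(b"hunter2") == stored                  # verify by repeating all four iterations
        assert hashlib.sha256(b"hunter2").digest() != stored  # a single digest no longer matches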
    • Francois Pesce of Qualys took some time to share some statistical observations about the leaked hashes: Lessons Learned from Cracking 2 Million LinkedIn Passwords
      The hashes in the 120MB file sometimes had their first five characters rewritten with 0. If we look at the 6th to 40th characters, we can even find duplicates of these substrings in the file, meaning the first five characters have been used for some unknown purpose: is it LinkedIn that stores user information here? Is it the initial attacker that tagged a set of accounts to compromise? This is unknown.
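      Whatever the prefix meant, it barely hinders cracking: the remaining 35 hex characters of an unsalted SHA-1 digest are more than enough to test candidates against. A rough sketch of the idea (my code, not Pesce's):

        import hashlib

        def matches(candidate: str, leaked: str) -> bool:
            digest = hashlib.sha1(candidate.encode()).hexdigest()  # LinkedIn used unsalted SHA-1
            if leaked.startswith("00000"):
                return digest[5:] == leaked[5:]   # compare only the 35 untouched characters
            return digest == leaked

        # a leaked hash whose first five characters were zeroed out:
        leaked = "00000" + hashlib.sha1(b"linkedin").hexdigest()[5:]
        print(matches("linkedin", leaked))   # True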
  • I must sadly admit that somehow I had never heard of the brilliant young computer scientist Mihai Pătraşcu before his tragic death last month. If, like me, you were ignorant of this young man and his work, read on:
  • Nat Torkington does something unusual. Instead of his normal "Four Short Links", he supplies just One Short Link, because he is so impressed by the results that Marc Hedlund and the engineering team at Etsy achieved with their summer research program.
    With help from all of you, Hacker School received applications from 661 women, nearly a 100-times increase from the previous session.
  • Interesting tidbits about the new FLAME virus are starting to emerge: FLAME – The Story of Leaked Data Carried by Human Vector
    So, how is the memory stick carried between the two systems? Well, here is where the human factor kicks in. So it’s amazing how two instances of Flame communicate with one another using a memory stick and a human as a channel. A private channel is created between two machines and the person carrying the memory stick has no idea that he/she is actually contributing to the data leak.
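    Stripped of the malware, the mechanism is just a dead drop: each machine reads what the other left on the stick and appends its own data, and the courier never notices. A harmless sketch of the pattern (the file name and format here are invented for illustration and have nothing to do with Flame's actual on-disk layout):

      import json, pathlib

      DROP = pathlib.Path("/media/usbstick/.cache.dat")   # hypothetical mount point of the stick

      def leave_data(sender: str, payload: str) -> None:
          """Append a message to the drop file on the removable medium."""
          messages = json.loads(DROP.read_text()) if DROP.exists() else []
          messages.append({"from": sender, "data": payload})
          DROP.write_text(json.dumps(messages))

      def collect_data(receiver: str) -> list:
          """Read everything the other side left on the stick."""
          if not DROP.exists():
              return []
          return [m for m in json.loads(DROP.read_text()) if m["from"] != receiver]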
  • File systems and storage systems continue to evolve:
    • Introduction to Data Deduplication in Windows Server 2012
      Deduplication creates fragmentation for the files that are on your disk as chunks may end up being spread apart and this causes increases in seek time as the disk heads must move around more to gather all the required data. As each file is processed, the filter driver works to keep the sequence of unique chunks together, preserving on-disk locality, so it isn’t a completely random distribution. Deduplication also has a cache to avoid going to disk for repeat chunks. The file-system has another layer of caching that is leveraged for file access. If multiple users are accessing similar files at the same time, the access pattern will enable deduplication to speed things up for all of the users.
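      A toy illustration of the chunking idea (fixed-size chunks and an in-memory store for brevity; the real feature uses variable-size chunks and an on-disk chunk store, so treat this purely as a sketch of the concept):

        import hashlib

        CHUNK_SIZE = 64 * 1024
        store = {}   # chunk hash -> chunk bytes; stands in for the on-disk chunk store

        def dedup_write(data: bytes) -> list:
            """Split a file into chunks, store each unique chunk once, return the recipe."""
            recipe = []
            for i in range(0, len(data), CHUNK_SIZE):
                chunk = data[i:i + CHUNK_SIZE]
                key = hashlib.sha256(chunk).hexdigest()
                store.setdefault(key, chunk)   # a repeated chunk costs no extra space
                recipe.append(key)
            return recipe

        def dedup_read(recipe: list) -> bytes:
            """Reassemble a file from its recipe; this is where a hot-chunk cache helps."""
            return b"".join(store[key] for key in recipe)

        r1 = dedup_write(b"A" * 200_000)
        r2 = dedup_write(b"A" * 200_000)   # an identical second file adds no new chunks
        assert dedup_read(r1) == dedup_read(r2)
        print(len(store), "unique chunks stored for two identical files")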
    • VMFS Locking Uncovered
      In order to deal with possible crashes of hosts, the distributed locks are implemented as lease-based. A host that holds a lock must renew a lease on the lock (by changing a "pulse field" in the on-disk lock data structure) to indicate that it still holds the lock and has not crashed. Another host can break the lock if the lock has not been renewed by the current holder for a certain period of time.
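      The scheme is easy to sketch (names and the timeout value are my assumptions, not VMware's; the real pulse field lives in the on-disk lock structure):

        import time

        LEASE_TIMEOUT = 16.0   # seconds without a pulse before the lock may be broken

        class LeaseLock:
            def __init__(self):
                self.owner = None
                self.pulse = 0.0   # stands in for the on-disk "pulse field"

            def acquire(self, host: str) -> bool:
                now = time.monotonic()
                if self.owner is None or now - self.pulse > LEASE_TIMEOUT:
                    self.owner = host   # free, or stale: break the lock and take it
                    self.pulse = now
                    return True
                return False

            def renew(self, host: str) -> bool:
                """The holder must call this periodically to keep its lease alive."""
                if self.owner != host:
                    return False
                self.pulse = time.monotonic()
                return True

        lock = LeaseLock()
        assert lock.acquire("esx-host-1")
        assert not lock.acquire("esx-host-2")   # held and fresh: cannot be broken yet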
    • NFS on vSphere – A Few Misconceptions
      There are only two viable ways to attempt load balancing NFS traffic in my mind. The decision boils down to your network switching infrastructure, the skill level of the person doing the networking, and your knowledge of the traffic patterns of virtual machines.
  • This game looks like fun: Ninja: Legend of the Scorpion Clan. Here's the BGG page.
  • Ryan Carlson with an intriguing, infuriating post about How I manage 40 people remotely. Why do I say infuriating? I guess it's just that I believe that a setup like this is doomed:
    I’m in the UK with one other person on the Support Team, our main office is in Orlando and the rest of the Team is spread out all around the States.
    Carlson seems to have come, at least partly, to the same conclusion:
    I’ve decided it’s no longer viable to manage the team from another country. We’re still going to operate remotely as a company with everyone spread out around the US, but as the CEO I really need to be on US time.

    So I’m moving my family to Portland, Oregon, where we’re going to set up another office for Treehouse. A lot of the team will still be remote but being closer will really help. My goal is to slowly gather Team Members in our Portland office.

  • Cliff Mass wonders how the success of Space-X will play out for other space-related government activities, such as weather forecasting. Will there be a Weather-X?
    The National Weather Service prediction efforts are crippled by inadequate computer infrastructure, lack of funds for research and development, an awkward and ineffective research lab structure out of control of NWS leaders, and government personnel rules that don't allow the NWS to replace ineffective research and development staff. Lately there has been talk of furloughs for NWS personnel and a number of the NWS leadership are leaving. The NWS has fallen seriously behind its competitors (e.g., the European Center for Medium Range Weather Forecasting, UKMET office, Canadian Meteorological Center) even though the U.S. has a huge advantage in intellectual capital (U.S. universities and the National Center for Atmospheric Research are world leaders in the field, as are several U.S. government research labs--e.g., NRL Monterey).
  • Here's a nice short page at the IETF summarizing the current state of the various HTTP 2.x proposals.
  • People always ask me what I use for my Integrated Development Environment. It's often hard to explain to them that the operating system itself is my IDE. Now I can just point them to this great description: Using Unix as your IDE
    I don’t think IDEs are bad; I think they’re brilliant, which is why I’m trying to convince you that Unix can be used as one, or at least thought of as one. I’m also not going to say that Unix is always the best tool for any programming task; it is arguably much better suited for C, C++, Python, Perl, or Shell development than it is for more “industry” languages like Java or C#, especially if writing GUI-heavy applications. In particular, I’m not going to try to convince you to scrap your hard-won Eclipse or Microsoft Visual Studio knowledge for the sometimes esoteric world of the command line. All I want to do is show you what we’re doing on the other side of the fence.
  • Lastly, what a great conference this must have been: Turing’s Tiger Birthday Party
    Alan Turing earned his Ph.D. at Princeton in 1938 under the great Alonzo Church. This alone gives Princeton a unique claim to Turing. But there are many other connections between Turing and Princeton. Two of the other great “fathers of computation,” John von Neumann and Kurt Gödel, were also at Princeton and promoted his transfer in 1936 from Cambridge University.

    ...

    The meetings were held in McCosh 50. I (Dick) taught freshman Pascal CS101 there years ago with Andrea LaPaugh, while Ken remembers taking Econ 102: Microeconomics there. This is the same hall where Andrew Wiles gave his “general” talk to a standing-room audience on his famous solution to Fermat’s Last Theorem, after he had repaired it.
