It's quiet in the house today; others have to work, but I get to lounge around reading computer science papers :)
If you, too, find yourself with a few quiet hours, you might find these items interesting:
- Analyzing Big Data with Twitter. The UC Berkeley School of Information held a special class this fall with Twitter as the subject. The school kindly taped the lectures and put them up on YouTube for your learning pleasure!
How to store, process, analyze and make sense of Big Data is of increasing interest and importance to technology companies, a wide range of industries, and academic institutions. In this course, UC Berkeley professors and Twitter engineers will lecture on the most cutting-edge algorithms and software tools for data analytics as applied to Twitter microblog data. Topics will include applied natural language processing algorithms such as sentiment analysis, large scale anomaly detection, real-time search, information diffusion and outbreak detection, trend detection in social streams, recommendation algorithms, and advanced frameworks for distributed computing. Social science perspectives on analyzing social media will also be covered.
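Just to give a taste of the first topic on that list, here's a toy sketch of lexicon-based sentiment scoring. To be clear, this is my own made-up illustration, not anything from the course; the word lists and example tweets are placeholders, and real systems use far larger lexicons or trained models:

```python
# Toy lexicon-based sentiment scorer. The tiny word lists below are
# illustrative placeholders only.
POSITIVE = {"love", "great", "awesome", "happy", "good"}
NEGATIVE = {"hate", "awful", "terrible", "sad", "bad"}

def sentiment(tweet):
    """Score = (# positive words) - (# negative words)."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

for t in ["I love this awesome phone", "traffic was terrible and I am sad"]:
    print(sentiment(t), t)   # prints 2 and -2 for these examples
```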
- A team of students at Stanford set out to reproduce various network research findings by setting up and re-performing the experiments discussed in the research literature. They report their results in the paper: Reproducible Network Experiments Using Container-Based Emulation
We report lessons learned from a graduate networking class at Stanford, where 37 students used our platform to replicate 16 published results of their own choosing
Reproducing the research results of others is a wonderful teaching tool; I'm pleased to see it being used successfully.
For their final project, students were given a simple, open-ended request: choose a published networking research paper and try to replicate its primary result using Mininet-HiFi on an Amazon EC2 instance.
Most of the student teams reproduced the results, and several went further and improved upon them:
After three weeks, 16 of the 18 teams successfully reproduced at least one result from their chosen paper; only two teams could not reproduce the original result. Four teams added new results, such as understanding the sensitivity of the result to a parameter not in the original paper.
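If you want a feel for what such an experiment script looks like, here's a minimal sketch using the standard Mininet Python API (the Mininet-HiFi work was, as I understand it, later folded into mainline Mininet). The topology and link parameters here are invented for illustration; it needs a Linux machine, root privileges, and Mininet installed:

```python
# Minimal Mininet experiment sketch: two hosts joined by a switch over
# rate-limited, delayed links, followed by an all-pairs ping.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class MinimalTopo(Topo):
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        # bw is in Mbit/s; delay applies per link (illustrative values)
        self.addLink(h1, s1, bw=10, delay='5ms')
        self.addLink(s1, h2, bw=10, delay='5ms')

net = Mininet(topo=MinimalTopo(), link=TCLink)
net.start()
net.pingAll()   # basic connectivity/latency check across all host pairs
net.stop()
```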
- Lastly, a ten-year-old paper that is new to me: Practical Self-Stabilization for Tolerating Unanticipated Faults in Networked Systems
As the complexity of networked systems increases, their likelihood of experiencing unanticipated faults sooner or later also grows. By unanticipated network faults, we mean not only (a) occurrence of new types of faults that were not explicitly planned for in network design, but also (b) occurrence of well known types of faults but at frequencies that are abnormal.
Stabilization provides one alternative that avoids case-by-case handling of unanticipated faults of sort (b) and can deal with faults of sort (a). We illustrate this point with an abstract explanation of stabilization: Stabilization involves defining “correct” behavior of the network and continually checking whether the network is presently conforming to this behavior. In other words, instead of defining specific patterns of incorrect behavior, stabilization checks for occurrence of any anomalous behavior. Should incorrect behavior be detected, stabilization promises to restore the network behavior so that eventually it is correct.
It's not clear to me whether the team is still working on these ideas, but you can find some of their older work on their research page....
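To make that abstract description concrete, here's a toy sketch of the textbook example of self-stabilization, Dijkstra's K-state token ring (this is the classic illustration, not code from the paper). Starting from an arbitrary, possibly corrupted state, repeatedly firing any "privileged" machine converges the ring to a legitimate state with exactly one token, and it stays legitimate thereafter, with no case-by-case fault handling:

```python
# Dijkstra's K-state self-stabilizing token ring: n machines hold
# counters x[0..n-1]; a machine "holds the token" when its guard is
# enabled. From ANY initial state, the ring converges to exactly one
# token under a central daemon, provided K >= n.
import random

n, K = 5, 7   # K >= n guarantees convergence

def privileged(x, i):
    if i == 0:
        return x[0] == x[n - 1]   # bottom machine's guard
    return x[i] != x[i - 1]       # every other machine's guard

def fire(x, i):
    if i == 0:
        x[0] = (x[0] + 1) % K     # bottom increments its counter
    else:
        x[i] = x[i - 1]           # others copy their left neighbor

x = [random.randrange(K) for _ in range(n)]   # arbitrary "faulty" start
for step in range(30):
    tokens = [i for i in range(n) if privileged(x, i)]  # never empty
    print(f"step {step:2d}  x={x}  tokens={tokens}")
    fire(x, random.choice(tokens))   # daemon fires one enabled machine
```

Watch the printed `tokens` list shrink to a single machine and stay that way: that's the "eventually it is correct" promise in miniature.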