Sunday, May 19, 2019

Various things I've been reading

A little bit of this, a little bit of that.
  • Privacy Rights and Data Collection in a Digital Economy (Senate hearing)
    But there is a second, more fundamental sense of the word privacy, one which until recently was so common and unremarkable that it would have made no sense to try to describe it.

    That is the idea that there exists a sphere of life that should remain outside public scrutiny, in which we can be sure that our words, actions, thoughts and feelings are not being indelibly recorded. This includes not only intimate spaces like the home, but also the many semi-private places where people gather and engage with one another in the common activities of daily life—the workplace, church, club or union hall. As these interactions move online, our privacy in this deeper sense withers away.

    Until recently, even people living in a police state could count on the fact that the authorities didn’t have enough equipment or manpower to observe everyone, everywhere, and so enjoyed more freedom from monitoring than we do living in a free society today. [Note: The record for intensive surveillance in the pre-internet age likely belongs to East Germany, where by some estimates one in seven people was an informant.]

    A characteristic of this new world of ambient surveillance is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. However sincere our commitment to walking, the world around us would still be a world built for cars. We would still have to contend with roads, traffic jams, air pollution, and run the risk of being hit by a bus.

    Similarly, while it is possible in principle to throw one’s laptop into the sea and renounce all technology, it is no longer possible to opt out of a surveillance society.

    When we talk about privacy in this second, more basic sense, the giant tech companies are not the guardians of privacy, but its gravediggers.

  • Solving Puzzles to Protect the Cloud: CTO Taher Elgamal on His Role at Salesforce and the Future of Cryptography
    Elgamal sees interesting challenges emerging in the next decade or so as quantum computing becomes a reality. He’s passionate about agility in cryptography, noting that, currently, when changes need to be made because an implementation has been shown to have weaknesses, it causes a big slowdown for security engineers. We can’t wait ten years, Elgamal says, to start the effort to protect against new technologies.
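
    Cryptographic agility is easier to see in code than in prose. Here is a minimal Python sketch (the names and structure are mine, not anything from Salesforce): if callers reach primitives through a registry instead of hard-coding them, retiring a weakened algorithm becomes a one-line configuration change rather than a hunt through every call site.

      import hashlib

      # Hypothetical registry: callers name a profile, not a primitive, so a
      # weakened algorithm can be retired by editing one entry.
      DIGESTS = {
          "current": hashlib.sha256,  # today's default
          "legacy": hashlib.sha1,     # kept only to verify pre-migration data
      }

      def fingerprint(data: bytes, profile: str = "current") -> str:
          # Hash under the named profile; no caller hard-codes SHA-256.
          return DIGESTS[profile](data).hexdigest()

      print(fingerprint(b"hello"))            # whatever "current" points at
      print(fingerprint(b"hello", "legacy"))  # checking old records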
  • Amazon’s Away Teams laid bare: How AWS's hivemind of engineers develop and maintain their internal tech
    The Away Team model and generally easy access to source code means that investment can easily cross service boundaries to enhance the power of the entire system of services. Teams with a vision for making their own service more powerful by improving other services are free to execute.
  • OPP (Other People's Problems)
    If it’s your job (or the job of someone who reports to you), great. Go to it! Tend your own garden first. Make systems that are as robust as you believe systems should be. Follow processes that you believe are effective and efficient. If you are not leading by example, you have to start there. Stop reading now and go fix the things!

    If there’s no clear owner, do you know why? Is it just because no one has gotten around to doing it, or has the organization specifically decided not to do it? If no one’s gotten around to doing it, can you do it yourself? Can your org do it, just within your org?

    If it’s someone else’s job, how much does it affect your day-to-day life? Does it bother you because they’re doing it wrong, or does it actually, really, significantly make it harder for you to do your job? Really? That significantly? There’s no workaround at all? If it is not directly affecting your job, drop it!

  • Strong Opinions Loosely Held Might be the Worst Idea in Tech
    The idea of strong opinions, loosely held is that you can make bombastic statements, and everyone should implicitly assume that you’ll happily change your mind in a heartbeat if new data suggests you are wrong. It is supposed to lead to a collegial, competitive environment in which ideas get a vigorous defense, the best of them survive, and no one gets their feelings hurt in the process.

    On a certain kind of team, where everyone shares that ethos, and there is very little power differential, this can work well. I’ve had the pleasure of working on teams like that, and it is all kinds of fun. When you have a handful of solid engineers that understand each other, and all of them feel free to say “you are wrong about X, that is absolutely insane, and I question your entire family structure if you believe that, clearly Y is the way to go”, and then you all happily grab lunch together (at Linguini’s), that’s a great feeling of camaraderie.

    Unfortunately, that ideal is seldom achieved.

    What really happens? The loudest, most bombastic engineer states their case with certainty, and that shuts down discussion. Other people either assume the loudmouth knows best, or don’t want to stick their necks out and risk criticism and shame. This is especially true if the loudmouth is senior, or there is any other power differential.

  • People + AI Guidebook: Explainability + Trust
    Key considerations for explaining AI systems:
    1. Help users calibrate their trust. Because AI products are based on statistics and probability, the user shouldn’t trust the system completely. Rather, based on system explanations, the user should know when to trust the system’s predictions and when to apply their own judgement.
    2. Optimize for understanding. In some cases, there may be no explicit, comprehensive explanation for the output of a complex algorithm. Even the developers of the AI may not know precisely how it works. In other cases, the reasoning behind a prediction may be knowable, but difficult to explain to users in terms they will understand.
    3. Manage influence on user decisions. AI systems often generate output that the user needs to act on. If, when, and how the system calculates and shows confidence levels can be critical in informing the user’s decision making and calibrating their trust.
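
    A minimal sketch of what points 1 and 3 can look like in practice (the threshold and names here are hypothetical, not from the Guidebook): show the confidence next to every prediction, and below a cutoff, hand the decision back to the user instead of presenting the output as fact.

      # Hypothetical trust-calibration sketch: expose confidence, and defer
      # to human judgement when the model is unsure.
      AUTO_ACCEPT = 0.90  # assumption: tuned per product, not a universal value

      def present(label: str, confidence: float) -> str:
          if confidence >= AUTO_ACCEPT:
              return f"{label} (confidence {confidence:.0%})"
          # Low confidence: surface the uncertainty and ask for review.
          return f"Possibly {label} (confidence {confidence:.0%}), please review"

      print(present("spam", 0.97))  # spam (confidence 97%)
      print(present("spam", 0.62))  # Possibly spam (confidence 62%), please review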
  • Decision Tree Learning
    The Decision Tree Learning algorithm adopts a greedy divide-and-conquer strategy: always test the most important attribute first. This test divides the problem up into smaller subproblems that can then be solved recursively. By “most important attribute,” we mean the one that makes the most difference to the classification of an example. That way, we hope to get to the correct classification with a small number of tests, meaning that all paths in the tree will be short and the tree as a whole will be shallow.
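
    "Most important attribute" is usually made precise as information gain: the expected drop in entropy from testing that attribute. A small ID3-style sketch of the greedy recursion described above (toy data and all names are mine):

      from collections import Counter
      from math import log2

      def entropy(labels):
          # Shannon entropy of a sequence of class labels.
          n = len(labels)
          return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

      def information_gain(rows, labels, attr):
          # Entropy reduction from splitting the examples on `attr`.
          split = {}
          for row, label in zip(rows, labels):
              split.setdefault(row[attr], []).append(label)
          n = len(labels)
          return entropy(labels) - sum(len(ls) / n * entropy(ls)
                                       for ls in split.values())

      def build_tree(rows, labels, attrs):
          if len(set(labels)) == 1:  # pure node: classification is done
              return labels[0]
          if not attrs:              # no tests left: take the majority label
              return Counter(labels).most_common(1)[0][0]
          # Greedy step: test the attribute with the highest information gain.
          best = max(attrs, key=lambda a: information_gain(rows, labels, a))
          rest = [a for a in attrs if a != best]
          tree = {best: {}}
          for value in {row[best] for row in rows}:
              subset = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
              sub_rows = [r for r, _ in subset]
              sub_labels = [l for _, l in subset]
              tree[best][value] = build_tree(sub_rows, sub_labels, rest)
          return tree

      # Toy data: "outlook" perfectly separates the labels, so it is tested first
      # and the resulting tree is shallow.
      rows = [{"outlook": "sunny", "windy": False},
              {"outlook": "sunny", "windy": True},
              {"outlook": "rain",  "windy": False},
              {"outlook": "rain",  "windy": True}]
      labels = ["yes", "yes", "no", "no"]
      print(build_tree(rows, labels, ["outlook", "windy"]))
      # e.g. {'outlook': {'sunny': 'yes', 'rain': 'no'}}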
  • Awesome decision tree research papers
    A curated list of decision, classification and regression tree research papers with implementations.
  • Video Game Workers See Power in a Union
    This is the first labor-related walkout in the video game industry, but it likely will not be the last. The sector has been under scrutiny for years over exploitative practices, including lack of job security, mass layoffs and “crunch.” Short for “crunch time,” crunch is an industry-wide practice that requires employees, especially developers, to put in extra, unpaid hours—making for 60- to 80-hour work weeks—to deliver a game by its release date. One of the most egregious examples of crunch came in October 2018, when reports surfaced that employees of Rockstar Games were working 100-hour weeks to finish the game Red Dead Redemption 2.
  • This World of Ours
    The worst part about growing up is that the world becomes more constrained. As a child, it seems completely reasonable to build a spaceship out of bed sheets, firecrackers, and lawn furniture; as you get older, you realize that the S.S. Improbable will not take you to space, but instead a lonely killing field of fire, Child Protective Services, and awkward local news interviews, not necessarily in that order, but with everything showing up eventually. Security research is the continual process of discovering that your spaceship is a deathtrap.
