To the woman who wrote A Heartfelt Message of Gratitude:
Thank you for writing, and for sharing your experience.
You are very right; it's just stuff.
But there is a second, more fundamental sense of the word privacy, one which until recently was so common and unremarkable that it would have made no sense to try to describe it.
That is the idea that there exists a sphere of life that should remain outside public scrutiny, in which we can be sure that our words, actions, thoughts and feelings are not being indelibly recorded. This includes not only intimate spaces like the home, but also the many semi-private places where people gather and engage with one another in the common activities of daily life—the workplace, church, club or union hall. As these interactions move online, our privacy in this deeper sense withers away.
Until recently, even people living in a police state could count on the fact that the authorities didn’t have enough equipment or manpower to observe everyone, everywhere, and so enjoyed more freedom from monitoring than we do living in a free society today. [Note: The record for intensive surveillance in the pre-internet age likely belongs to East Germany, where by some estimates one in seven people was an informant.]
A characteristic of this new world of ambient surveillance is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. However sincere our commitment to walking, the world around us would still be a world built for cars. We would still have to contend with roads, traffic jams, air pollution, and run the risk of being hit by a bus.
Similarly, while it is possible in principle to throw one’s laptop into the sea and renounce all technology, it is no longer possible to opt out of a surveillance society.
When we talk about privacy in this second, more basic sense, the giant tech companies are not the guardians of privacy, but its gravediggers.
Elgamal sees interesting challenges emerging in the next decade or so as quantum computing becomes a reality. He’s passionate about agility in cryptography, noting that, currently, when changes need to be made because an implementation has been shown to have weaknesses, it causes a big slowdown for security engineers. We can’t wait ten years, Elgamal says, to start the effort to protect against new technologies.
The Away Team model and generally easy access to source code means that investment can easily cross service boundaries to enhance the power of the entire system of services. Teams with a vision for making their own service more powerful by improving other services are free to execute.
If it’s your job (or the job of someone who reports to you), great. Go to it! Tend your own garden first. Make systems that are as robust as you believe systems should be. Follow processes that you believe are effective and efficient. If you are not leading by example, you have to start there. Stop reading now and go fix the things!
If there’s no clear owner, do you know why? Is it just because no one has gotten around to doing it, or has the organization specifically decided not to do it? If no one’s gotten around to doing it, can you do it yourself? Can your org do it, just within your org?
If it’s someone else’s job, how much does it affect your day to day life? Does it bother you because they’re doing it wrong, or does it actually, really, significantly make it harder for you to do your job? Really? That significantly? There’s no work around at all? If it is not directly affecting your job, drop it!
The idea of strong opinions, loosely held is that you can make bombastic statements, and everyone should implicitly assume that you’ll happily change your mind in a heartbeat if new data suggests you are wrong. It is supposed to lead to a collegial, competitive environment in which ideas get a vigorous defense, the best of them survive, and no-one gets their feelings hurt in the process.
On a certain kind of team, where everyone shares that ethos, and there is very little power differential, this can work well. I’ve had the pleasure of working on teams like that, and it is all kinds of fun. When you have a handful of solid engineers that understand each other, and all of them feel free to say “you are wrong about X, that is absolutely insane, and I question your entire family structure if you believe that, clearly Y is the way to go”, and then you all happily grab lunch together (at Linguini’s), that’s a great feeling of camaraderie.
Unfortunately, that ideal is seldom achieved.
What really happens? The loudest, most bombastic engineer states their case with certainty, and that shuts down discussion. Other people either assume the loudmouth knows best, or don’t want to stick out their neck and risk criticism and shame. This is especially true if the loudmouth is senior, or there is any other power differential.
Key considerations for explaining AI systems:
- Help users calibrate their trust. Because AI products are based on statistics and probability, the user shouldn’t trust the system completely. Rather, based on system explanations, the user should know when to trust the system’s predictions and when to apply their own judgement.
- Optimize for understanding. In some cases, there may be no explicit, comprehensive explanation for the output of a complex algorithm. Even the developers of the AI may not know precisely how it works. In other cases, the reasoning behind a prediction may be knowable, but difficult to explain to users in terms they will understand.
- Manage influence on user decisions. AI systems often generate output that the user needs to act on. If, when, and how the system calculates and shows confidence levels can be critical in informing the user’s decision making and calibrating their trust.
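As an illustration of the first and third points, a product might change how a prediction is presented depending on the model’s confidence, so the user knows when to apply their own judgement. This is a hypothetical sketch; the thresholds and wording are invented, not from the text.

```python
# Hypothetical sketch: present a prediction differently depending on model
# confidence, helping the user calibrate trust. Thresholds are illustrative.
def present_prediction(label, confidence, high=0.9, low=0.6):
    """Return a user-facing message for a prediction with a confidence score."""
    if confidence >= high:
        # High confidence: state the prediction, but still show the number.
        return f"Likely {label} ({confidence:.0%} confident)"
    if confidence >= low:
        # Middling confidence: nudge the user to verify rather than trust.
        return f"Possibly {label} ({confidence:.0%} confident) - please verify"
    # Low confidence: don't influence the user's decision at all.
    return "Not enough confidence to suggest a label - manual review needed"
```

The point is not the exact cutoffs but that the decision of if, when, and how to show confidence is itself a design choice.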
The Decision Tree Learning algorithm adopts a greedy divide-and-conquer strategy: always test the most important attribute first. This test divides the problem up into smaller subproblems that can then be solved recursively. By “most important attribute,” we mean the one that makes the most difference to the classification of an example. That way, we hope to get to the correct classification with a small number of tests, meaning that all paths in the tree will be short and the tree as a whole will be shallow.
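The usual way to make “most important attribute” precise is information gain: the drop in entropy of the class labels after splitting on an attribute. A minimal sketch (my own, ID3-style; the example data is invented):

```python
# Sketch of choosing the "most important attribute" by information gain,
# as ID3-style decision-tree learners do.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attr):
    """Entropy reduction from splitting `examples` on attribute `attr`."""
    n = len(examples)
    remainder = 0.0
    for value in set(e[attr] for e in examples):
        subset = [lab for e, lab in zip(examples, labels) if e[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def most_important_attribute(examples, labels, attrs):
    """The attribute the greedy learner would test first."""
    return max(attrs, key=lambda a: information_gain(examples, labels, a))
```

Testing the highest-gain attribute first is exactly what makes the resulting paths short: each test removes as much uncertainty as possible.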
A curated list of decision, classification and regression tree research papers with implementations.
This is the first labor-related walkout in the video game industry, but it likely will not be the last. The sector has been under scrutiny for years over exploitative practices, including lack of job security, mass layoffs and “crunch.” Short for “crunch time,” crunch is an industry-wide practice that requires employees, especially developers, to put in extra, unpaid hours—making for 60 to 80-hour work weeks—to deliver a game by its release date. One of the most egregious examples of crunch came in October of 2018 when reports surfaced that employees of Rockstar Games were working 100-hour weeks to finish the game Red Dead Redemption 2.
The worst part about growing up is that the world becomes more constrained. As a child, it seems completely reasonable to build a spaceship out of bed sheets, firecrackers, and lawn furniture; as you get older, you realize that the S.S. Improbable will not take you to space, but instead a lonely killing field of fire, Child Protective Services, and awkward local news interviews, not necessarily in that order, but with everything showing up eventually. Security research is the continual process of discovering that your spaceship is a deathtrap.
Cue the line about how a master craftsman needs to thoroughly understand his tools, as we explore another fascinating series from the great Raymond Chen (after we first share one unrelated bonus link).
One of Git's core value-adds is the ability to edit history. Unlike version control systems that treat the history as a sacred record, in git we can change history to suit our needs. This gives us a lot of powerful tools and allows us to curate a good commit history in the same way we use refactoring to uphold good software design practices. These tools can be a little bit intimidating to the novice or even intermediate git user, but this guide will help to demystify the powerful git-rebase.
I take a snapshot of what’s in our internal staging repo and push it to the public repo. All of the intermediate steps are squashed out, so that the public repo isn’t cluttered with noisy history.
I want the commit to be a merge of win10-1507 and the changes specific to that branch. To do this, I use the commit-tree command, but provide multiple parents. The first parent is the previous commit for the branch, and the second parent is the incoming changes from its ancestor branch.
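A sketch of that step in a throwaway repository. Only win10-1507 appears above; the derived branch name win10-1607 is my invention, and the “snapshot” here is simply the branch’s current tree:

```shell
# Demo in a throwaway repo. win10-1507 is the ancestor branch from the text;
# win10-1607 is a hypothetical branch derived from it.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > file.txt
git add file.txt
git commit -qm "base"
git branch win10-1507
git checkout -qb win10-1607
echo change >> file.txt
git commit -qam "branch-specific change"

# Build the commit by hand: its tree is the branch's snapshot, its first
# parent is the branch's previous commit, and its second parent is the
# incoming ancestor branch.
tree=$(git rev-parse 'win10-1607^{tree}')
merge=$(git commit-tree "$tree" -p win10-1607 -p win10-1507 -m "Merge win10-1507")
git update-ref refs/heads/win10-1607 "$merge"
```

Because commit-tree takes any number of -p flags, you can declare whatever parentage you like without running an actual merge.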
Suppose you have a series of commits you want to cherry-pick and squash onto another branch.
The traditional way of doing this is to cherry-pick the series of commits with the -n option so that they all stack on top of each other, and then perform a big git commit when you’re done. However, this mechanism churns the local hard drive with copies of each of the intermediate commits, and if there are merge conflicts, you may end up having to resolve the conflict in the same block of code over and over again.
You could hard reset the master branch back to M1 and redo the merge. But that means you have to redo all the merge conflicts, and that may have been quite an ordeal. And if that was a large merge, then even in the absence of conflicts, you still have a lot of files being churned in your working directory.
Much faster is simply to create the commit you want and reset to it.
Since all of the commits we want to squash are consecutive, we can do all this squashing by simply committing trees.
The point is that we were able to rewrite a branch without touching any files in the working directory.
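A minimal sketch of that no-touch squash in a throwaway repository (branch names invented): since the commits are consecutive, the tree of the last commit already holds the combined result, so we commit that tree directly onto the target branch.

```shell
# Throwaway repo; "target" is where we want the squashed result to land.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > f.txt
git add f.txt
git commit -qm "base"
git branch target
echo one >> f.txt && git commit -qam "step 1"
echo two >> f.txt && git commit -qam "step 2"

# The final tree already contains both steps. Commit it with target's tip
# as the sole parent; no files in the working directory are touched.
tree=$(git rev-parse 'HEAD^{tree}')
squashed=$(git commit-tree "$tree" -p target -m "steps 1 and 2, squashed")
git update-ref refs/heads/target "$squashed"
```

No cherry-picks, no checkout, no conflict resolution: just a tree and a parent.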
This tree-based merge is the trick I referred to some time ago in the context of forcing a patch branch to look like a nop. In that diagram, we started with a commit A and cherry-picked a patch P so we could use it to patch the master branch. Meanwhile, we also want a nop to go into the feature branch. We did it with a git revert, but you can also do it in a no-touch way by committing trees.
For best results, your rename commit should be a pure rename. Resist the temptation to edit the file’s contents at the same time you rename it. A pure rename ensures that git’s rename detection will find the match. If you edit the file in the same commit as the rename, then whether the rename is detected as such will depend on git’s “similar files” heuristic.¹ If you need to edit the file as well as rename it, do it in two separate commits: one for the rename, another for the edit.
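A quick demonstration of the two-commit approach in a throwaway repository (file names invented). Because the rename commit is pure, git reports it as an exact rename (status R100) with no reliance on the similarity heuristic:

```shell
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "original content" > oldname.txt
git add oldname.txt
git commit -qm "add oldname.txt"

# Commit 1: a pure rename, nothing else.
git mv oldname.txt newname.txt
git commit -qm "rename oldname.txt to newname.txt"

# Commit 2: the edit, separately.
echo "an edit" >> newname.txt
git commit -qam "edit newname.txt"

# The pure rename is detected as an exact match (R100).
git diff -M --name-status HEAD~2 HEAD~1
```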
Wait, we didn’t use git commit-tree yet. What’s this doing in the Stupid git commit-tree tricks series?
We’ll add commit-tree to the mix next time. Today was groundwork, but this is a handy technique to keep in your bag of tricks, even if you never get around to the commit-tree part.
The problem is that octopus merges work only if there are no conflicts. We’re going to have to build our own octopus merge.

cat dairy fruits veggies | sort >food
git rm dairy fruits veggies
git add food

The git write-tree command creates a tree from the index. It’s the tree that a git commit would create, but we don’t want a normal commit: we need to set custom parents. So we ask git write-tree for the tree that would have been committed, and use it to build our custom commit.
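Here is one way the recipe might be completed end-to-end in a throwaway repository. I’m assuming (it isn’t stated above) that dairy, fruits, and veggies are branches that each contribute a same-named file; the write-tree/commit-tree finish is the part the text describes.

```shell
# Throwaway demo. Assumption: dairy, fruits, and veggies are branches that
# each contribute a same-named one-line file.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"
main=$(git symbolic-ref --short HEAD)
for b in dairy fruits veggies; do
  git checkout -q -b "$b" "$main"
  echo "$b" > "$b"
  git add "$b"
  git commit -qm "add $b"
done
git checkout -q "$main"

# Hand-build the merged result in the index and working tree.
for b in dairy fruits veggies; do
  git checkout -q "$b" -- "$b"
done
cat dairy fruits veggies | sort > food
git rm -qf dairy fruits veggies
git add food

# write-tree gives the tree a normal commit would use; commit-tree lets us
# attach all four parents, yielding a hand-rolled octopus merge.
tree=$(git write-tree)
merge=$(git commit-tree "$tree" -p "$main" -p dairy -p fruits -p veggies \
  -m "octopus merge of dairy, fruits, veggies")
git update-ref "refs/heads/$main" "$merge"
```

The resulting commit has four parents, something git merge would have refused to produce once a conflict appeared.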
Thank you for writing. I am so very, very sorry.
Mental health issues are, in my opinion, the greatest unsolved tragedy of my time here on earth.
Erik Larson is deservedly famed for his The Devil in the White City, which I enjoyed quite a lot when I read it years ago.
But Larson has written a number of books in addition to The Devil in the White City, and recently I happened to read Thunderstruck.
Thunderstruck is set at the turn of the 20th century, from about 1895 through about 1910, and focuses primarily on the unusual and turbulent life of Guglielmo Marconi, the Irish-Italian inventor who, at a very young age, moved to London and invented a set of devices which could send and receive messages using radio waves, something which quickly became known as the "Wireless Telegraph".
As a narrative device, Larson spins Marconi's story together with another story, that of H.H. Crippen, a physician of sorts and a peddler of the sorts of homeopathic "cures" that were popular at that time.
Alternating back and forth between the two stories, Larson brings things to a climax with a certainly entertaining depiction of a heinous crime and its unraveling by Scotland Yard.
But, overall, I found myself rather unaffected, for a variety of reasons.
Most importantly, the two stories didn't really have anything to do with each other, except for the rather boring observation that Scotland Yard were able to make use of the Wireless Telegraph during their apprehension of the murderer.
Furthermore, neither of the main characters are all that gripping. Marconi certainly had a vivid time in the world spotlight, rushing here and there to demonstrate his invention, build a company to deliver it, and grow it into a worldwide success. But Marconi (at least in Larson's telling) was a quiet, private, almost reclusive man, with not much more than a string of failed relationships to add background and color to the story of his tireless work on refining and improving the wireless transmitters and receivers.
Crippen, meanwhile, is even less appealing, and Larson is forced to tell Crippen's story primarily through the stories of those he came in contact with: wives, girlfriends, business associates, etc.
The most interesting character in the book, I thought, was the completely fascinating John Nevil Maskelyne: magician, entertainer, skeptic, and general gadabout; the best part of the entire book, in my opinion, is the wonderful story of how Maskelyne attempts to disrupt one of Marconi's public exhibitions of wireless technology by commandeering a wireless transmitter of his own and beaming risque limericks into the on-stage receiver. And, we must not forget, Maskelyne is one of those rare creatures whose own book is still read and enjoyed now, 125 years after it was first published!
As he always does, Larson does a fine job of painting the picture of the time, and the book is pleasant enough to read.
But I guess I think he tried to force his material to carry more than its fair share of a story, and the result felt to me like one of those situations where the enormous build-up with which Thunderstruck arrived led to an unavoidable disappointment when its reality sunk in.
... but here's a little snapshot, anyway.
The flight management computer is a computer. What that means is that it’s not full of aluminum bits, cables, fuel lines, or all the other accoutrements of aviation. It’s full of lines of code. And that’s where things get dangerous.
Those lines of code were no doubt created by people at the direction of managers. Neither such coders nor their managers are as in touch with the particular culture and mores of the aviation world as much as the people who are down on the factory floor, riveting wings on, designing control yokes, and fitting landing gears. Those people have decades of institutional memory about what has worked in the past and what has not worked. Software people do not.
In the 737 Max, only one of the flight management computers is active at a time—either the pilot’s computer or the copilot’s computer. And the active computer takes inputs only from the sensors on its own side of the aircraft.
When the two computers disagree, the solution for the humans in the cockpit is to look across the control panel to see what the other instruments are saying and then sort it out. In the Boeing system, the flight management computer does not “look across” at the other instruments. It believes only the instruments on its side. It doesn’t go old-school. It’s modern. It’s software.
The TRITON intrusion is shrouded in mystery. There has been some public discussion surrounding the TRITON framework and its impact at the target site, yet little to no information has been shared on the tactics, techniques, and procedures (TTPs) related to the intrusion lifecycle, or how the attack made it deep enough to impact the industrial processes. The TRITON framework itself and the intrusion tools the actor used were built and deployed by humans, all of whom had observable human strategies, preferences, and conventions for the custom tooling of the intrusion operation. It is our goal to discuss these adversary methods and highlight exactly how the developer(s), operator(s) and others involved used custom tools in the intrusion.
In this report we continue our research of the actor’s operations with a specific focus on a selection of custom information technology (IT) tools and tactics the threat actor leveraged during the early stages of the targeted attack lifecycle (Figure 1). The information in this report is derived from multiple TRITON-related incident responses carried out by FireEye Mandiant.
Using the methodologies described in this post, FireEye Mandiant incident responders have uncovered additional intrusion activity from this threat actor – including new custom tool sets – at a second critical infrastructure facility. As such, we strongly encourage industrial control system (ICS) asset owners to leverage the indicators, TTPs, and detections included in this post to improve their defenses and hunt for related activity in their networks.
(Yes, he was the man who invented Erlang. And did a surprising amount of other stuff that you never knew about.)
Over lunch at an Avenue Matignon café in Paris, I asked a literate media entrepreneur and political expert to explain the Yellow Vests mystery. Who are they, what do they want, who leads them? He started by offering a metaphor. The interior of France, some call it La France Profonde (think of Red States), is like a forest that has been progressively desiccated by climate change. Then gasoline was poured on the trees. Sooner or later a spark would set it ablaze.
I’ve seen the desiccation, the emptying of France. In an October 2017 Monday Note titled The Compostelle Walk Into Another France, I recounted how, with daughter Marie, we walked through empty villages, no café, no bakery, no garage. Barking dogs roamed the streets during the day, the inhabitants having rushed to their distant jobs. I had had a similar impression years before, driving small country roads in Northern Burgundy, but walking the Compostelle (Camino de Santiago) made it more vivid.
Over the past three or four decades, La France Profonde has been slowly hollowed out as jobs moved out of the country or to larger urban areas. This desiccation isn’t unique to France, but gasoline was poured on the forest in the form of an accumulation of laws and regulations that exasperated the remaining France Profonde population, creating a schism between “real people” and “those people in Paris”.
This year’s developer festival will be held May 7-9 at the Shoreline Amphitheatre in Mountain View, CA
We all know about catastrophic headline-generating failures like AWS East-1 region falling apart or a major provider being down for a day or two. Then there are failures known only to those who care, like losing a major exchange point. However, I’m becoming more and more certain that the known failures are not even the tip of the iceberg - they seem to be the climber at the iceberg summit.
There are some cases when a person really does want control. If the person wants to determine their own path, if having choice is itself a personal goal, then you need control. That’s a goal about who you are not just what you get. It’s worth identifying moments when this is important. But if a person does not pay attention to something then that person probably does not identify with the topic and is not seeking control over it. “Privacy advocates” pay attention to privacy, and attain a sense of identity from the very act of being mindful of their own privacy. Everyone else does not.
Overall, it’s a nice design. They have adopted a conservative process node and frequency. They have taken a pretty much standard approach to inference by mostly leaning on a fairly large multiply/add array, in this case a 96×96 unit. What’s particularly interesting to me is what’s around the multiply/add array and their approach to extracting good performance from a conventional low-cost memory subsystem. I was also interested in their use of two redundant inference chips per car, with each exchanging results each iteration to detect errors before passing the final plan (the actuator instructions) to a safety system for validation before the commands are sent to the actuators. Performance and price/performance look quite good.
An important optimization for Git servers is that the format for transmitted objects is the same as the heavily-compressed on-disk packfiles. That means that in many cases, Git can serve repositories to clients by simply copying bytes off disk without having to inflate individual objects.
But sometimes this assumption breaks down. Objects on disk may be stored as “deltas” against one another. When two versions of a file have similar content, we might store the full contents of one version (the “base”), but only the differences against the base for the other version. This creates a complication when serving a fetch. If object A is stored as a delta against object B, we can only send the client our on-disk version of A if we are also sending them B (or if we know they already have B). Otherwise, we have to reconstruct the full contents of A and re-compress it.
This happens rarely in many repositories where clients clone all of the objects stored by the server. But it can be quite common when multiple distinct but overlapping sets of objects are stored in the same packfile (for example, due to repository forks or unmerged pull requests). Git may store a delta between objects found only in two different forks. When someone clones one of the forks, they want only one of the objects, and we have to discard the delta.
Git 2.20 solves this by introducing the concept of “delta islands“. Repository administrators can partition the ref namespace into distinct “islands”, and Git will avoid making deltas between islands. The end result is a repository which is slightly larger on disk but is still able to serve client fetches much more cheaply.
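The partitioning is configured with regular expressions on ref names. A sketch of what that might look like for a server hosting forks (the refs/virtual/<fork-id>/ layout is an example of my own, not something Git prescribes):

```shell
# Illustrative delta-island setup (Git 2.20+), in a throwaway repo.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"

# Each pack.island regex partitions refs into islands; a capture group
# names one island per match, so every fork gets its own island here.
git config --add pack.island 'refs/heads/'
git config --add pack.island 'refs/virtual/([0-9]+)/'

# Repack honoring islands: deltas are only computed within an island.
git repack -ad --delta-islands
```

After this, serving a clone of any single fork never requires discarding a cross-fork delta.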
It had been six months, so I picked up the next episode of The Expanse: Babylon's Ashes.
I'm not actually sure how many series I've read this far, i.e., all the way to Book Six. Patrick O'Brian's series of Jack Aubrey and Stephen Maturin stories is the only other that comes to mind.
Of course, O'Brian's books were one of the great literary achievements of the 20th century; that's a high bar!
But clearly The Expanse has something going on.
I agree with those who say that Babylon's Ashes was not the strongest of the series so far. Book Five was much better. The Free Navy are not very interesting, and I'm not sure where I stand on the investigation into the soul of Filip Nagata.
However, one of the strongest parts of the overall series is the way that characters with dark pasts are developed into rich and fascinating stories. Think of Clarissa Mao, or Basia Merton, or even Joe Miller from Book One (Leviathan Wakes).
And we get some great space battles, and a very interesting new character in Michio Pa.
My pattern of late has been to take a bit of a break between books. After all, I have much else to read.
But I'm sure I'll be back for Book Seven. I suspect that, just as summer turns into fall, Persepolis Rising will be finding its way into my hands...