Megan McArdle making a great point, writing for The Atlantic:
“If you see a person — or a company — doing something that seems completely and inexplicably boneheaded, then it’s unwise to assume that the reason must be that everyone but you is a complete idiot who is blind to fairly trivial insights such as ‘people desire inexpensive and conveniently available movie services, and will resist having those services made more expensive, or less convenient’. While it’s certainly true that people do idiotic things, it’s also true that a lot of those ‘idiotic’ things turn out to have perfectly reasonable explanations.”
Everyone has their reasons — and everyone else has a reason for why those reasons are dumb. In leadership, as in writing, it’s important to remember the latter, and recognize the patent absurdity of the former.
In a draft titled Starting Over, I try to condense a decade’s worth of outdoor gear experience. The 8,000-word missive, started in 2017, highlights “The Gear I Would Buy if I Had to Do it All Over Again”. After thousands of dollars spent searching for the best of the best, it tries to help those just starting out avoid some of the expensive lessons I had to learn. That post has not left my drafts folder, though, in part because the list keeps changing. Two weeks in Arizona made me reconsider my chosen boot, the Vasque St. Elias GTX, plus I need to try the new Phantom 50 I bought while there to go with my Matador Freerain 24. I want to move to a more lightweight and season-agnostic setup, and I will have to see how that plays out before I publish the be-all, end-all list for aspiring adventurers. Until then, take a look at these sites. I do a lot of research before I buy, and that research starts here.
“As pre-Covid life fades into history, large sections of the professional classes face a version of the experience of those who became former persons in the abrupt historical shifts of the last century. The redundant bourgeoisie need not fear starvation or concentration camps, but the world they have inhabited is evanescing before their eyes. There is nothing novel in what they are experiencing. History is a succession of such apocalypses, and so far this one is milder than most.”
As far as disasters go, COVID-19’s direct effects have been far less impactful than its indirect ones. Once we have some distance from the event, and can look back on it with less bias, it will become clear that the panic that led to a national shutdown and then economic stagnation caused far greater pain and suffering than the disease ever could have, even if we had taken no action. This should be the legacy of COVID-19: not that of a deadly physical disease, but rather a panic-inducing mental one.
Every time I see an article like this one, I think back to something Horace Dediu said in Making rain:
“I propose a way to think about [the Facebook Home and the Google Fiber issue] as: Google tries to make a business succeed through having a huge amount of flow in terms of data, traffic, queries and information that is indexed. So think about this idea of them tapping into a vast stream. The more volume that is flowing through the system the more revenue they generate.”
Another interesting piece, among several others, on encouraging writing within an organization. As I prepare to move on to a new role, I’m happy to report that my team’s efforts to publish an internal written product on a regular schedule have gone well so far. I hope they continue in my absence, and I look forward to starting a similar project with my next team.
Today I announce a new project: Swig. Swig is a monolithic, multithreaded, micro web framework designed for an air-gapped intranet environment. Aside from Python 3, it has zero dependencies; just download and deploy. Out of the box, Swig supports IPv4 and IPv6, HTTP and HTTPS, block and chunked responses, and gzip compression. I encourage you to go through the README for more information, and to check out the code in the GitHub repo.
If you looked at First Crack’s internal commit history, you would see that most features take at most a few days to write. Even the entirety of First Crack’s rewrite happened over the course of a couple weeks, in the mornings before work and on the weekends. I have stuck with monthly releases since June of last year, though, and today I want to explain why.
I happened across The Law of Requisite Variety the other day, which states that a system facing D possible disruptions requires R countermeasures to keep itself stable, where R >= D. Having spent some time on my projects’ more theoretical side lately, I found this idea at once interesting and familiar. Today, I want to talk about the simple way I apply this concept to my code as a way to architect more reliable programs.
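In code terms, one minimal way to read the law goes like this (a sketch of my own; the disruption names and countermeasures below are illustrative): enumerate the disruptions a program anticipates, and verify before it runs that each one has a registered countermeasure, so R >= D holds by construction.

```python
# Disruptions the program anticipates (D), and the countermeasure
# registered for each (R). Both sets are illustrative examples.
DISRUPTIONS = {"missing_file", "bad_permissions", "malformed_input"}

COUNTERMEASURES = {
    "missing_file": "create a default file",
    "bad_permissions": "fall back to read-only mode",
    "malformed_input": "skip the record and log it",
}

def uncovered_disruptions(disruptions, countermeasures):
    """Return the disruptions that have no registered countermeasure."""
    return disruptions - countermeasures.keys()

# Stability requires R >= D: every disruption must map to a countermeasure.
missing = uncovered_disruptions(DISRUPTIONS, COUNTERMEASURES)
assert not missing, f"unhandled disruptions: {missing}"
```

The useful part is the failure mode: the program refuses to start with a known disruption left uncovered, instead of discovering the gap in production.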
I spent most of my development time in April working on a project that I can, at best, call tangentially related to First Crack. After fighting with Flask, Bottle, and then Python’s own http.server library, I decided to write my own web framework. I won’t spend much time on this now, since I plan to deploy it in an Intranet soon and then release it after some real use, but I will say this: I liked Flask, but it has far too many dependencies to work in my target environment. I liked Bottle even more, since it mirrors most of Flask’s functionality without any dependencies, but it lacks the ability to handle concurrent connections. The surprisingly capable http.server library has zero dependencies and supports concurrent execution, but is ill-suited for building out an entire web application. My project, Swig, solves all of these problems. For now, though, let’s talk about First Crack — a day late, yes, but I hope not a dollar short.
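As an aside, the zero-dependency concurrency http.server offers out of the box looks something like this. This is a hedged sketch, not Swig or First Crack code; the handler and response body are illustrative.

```python
# ThreadingHTTPServer (stdlib, Python 3.7+) runs each request handler
# on its own thread: concurrent connections with zero dependencies.
import threading
import urllib.request
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a worker thread"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; serve in a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    reply = resp.read()
server.shutdown()
```

The catch, as noted above, is that everything beyond this — routing, templating, response encoding — is yours to build.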
I opened Tobias Pfeiffer’s article expecting something along the lines of Your configs suck? Try a real programming language. Tobias focused not on configuring the environment, though, but rather best practices for configuring the control flow in the program itself. “Early Validation” was a particularly good point. Tobias has some sound advice, most of which I incorporated into the dev projects I started during the shelter-in-place period. I hope to talk about them more soon.
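To make the “Early Validation” point concrete, here is a hedged sketch of my own (the keys and rules are illustrative, not from Tobias’s article): check the entire configuration up front and fail fast with every problem listed, rather than discovering a bad value halfway through a run.

```python
# Illustrative schema: each required key and the type it must have.
REQUIRED = {"host": str, "port": int, "debug": bool}

def validate_config(config):
    """Collect every problem up front and fail before any work starts."""
    errors = []
    for key, expected in REQUIRED.items():
        if key not in config:
            errors.append(f"missing key: {key}")
        elif not isinstance(config[key], expected):
            errors.append(f"{key} must be {expected.__name__}")
    if errors:
        raise ValueError("; ".join(errors))
    return config

# Valid config passes through untouched; a bad one fails immediately,
# with all of its errors reported at once.
validate_config({"host": "localhost", "port": 8080, "debug": False})
```

Reporting every error in one pass, instead of the first one found, saves the fix-run-fail-fix loop.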
Cal started strong: his recommendation that experts improve the decentralized distribution of critical information by moving beyond Twitter, the original microblog, to their own blogs hits the nail on the head. I cannot agree more. I wish I could say he finished strong as well, but he completely missed the mark. In closing, he played up the importance of institutional backing for these sites as a way to lend them credibility. As Ben Thompson explained in Zero Trust Information, though, not only have most of the institutional players put forth bad information throughout this crisis, they actively sought to suppress critical (factual) information as well. Some did so unknowingly; others knowingly sought to suppress dissenting voices. Either way, the idea that we can fix Twitter’s shortcomings with greater institutional oversight is unbelievable. Read the first half of Cal’s piece and then close your tab; the rest is just ridiculous.
“With over 20 years experience at the time I recognized the first obvious flaw [with test-driven design]; writing tests prior to coding is mindful of the old adage about no battle plan surviving contact with the enemy. ... The second problem here is that TDD presumes that developers should write their own tests. This is supremely ridiculous. I’ve seen this many, many times; the project appears solid to me, I can’t break it, but someone else can break it in less than a minute. Why? Because the same blind spots that I had in design will appear in my tests.”
I have avoided test-driven design for similar reasons. For my projects, I like to start by defining performance goals; I write a few test cases for each feature I finish as I finish it, then move on. This has all the benefits of test-driven design — defining requirements up-front, clear success criteria, objective verifiability throughout the development process and at its conclusion — without the downsides Chris highlights: wasted effort and lack of actual code coverage.
I cannot stress the importance of unit tests enough: when I wrote a socket web server in Python, they gave me an easy way to make sure each change did not unintentionally break a key performance objective I had already checked off. This saved me hours of periodic, manual, boring checks. I have no such affinity for test-driven design, and I would encourage you to reevaluate your loyalty if you do.
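For a simplified picture of the kind of guard I mean, here is a sketch; the handler and the one-millisecond budget are illustrative, not my socket server’s real numbers.

```python
# A unit test that guards a performance objective: once the objective is
# checked off, this test fails any later change that silently regresses it.
import time

def handle_request(path):
    # Stand-in for a real request handler.
    return f"200 OK {path}"

def test_latency_budget():
    """Fail if average handling time exceeds the (illustrative) 1 ms budget."""
    start = time.perf_counter()
    for _ in range(1000):
        handle_request("/index.html")
    average = (time.perf_counter() - start) / 1000
    assert average < 0.001, f"average {average:.6f}s exceeds 1 ms budget"

test_latency_budget()
```

Run alongside the functional tests, a check like this replaces the periodic, manual timing passes mentioned above.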
The Internet succeeded in no small part thanks to the humble hyperlink. The link enabled it to flourish as a network rather than languish as a series of closed silos, which led to its widespread adoption and the prevalence it enjoys today. Although a disturbing trend of centralization has emerged in recent years, many people have made great efforts to combat it; they may yet succeed. Their efforts have relied on the link to bring users together, re-focusing the spotlight on this unassuming yet important tool and highlighting the importance of attribution as both the currency and the lifeblood of the Internet.
Modern computers have gotten so complex that the prospect of trying to understand them intimidates a lot of people. Unfortunately, many use that fear as an excuse not to even try. Nelson Elhage has some great advice for tackling this gargantuan task.
From Paul Rascagneres and Vitor Ventura at Cisco Talos Intelligence:
“Our tests showed that — on average — we achieved an ~80 percent success rate while using the fake fingerprints, where the sensors were bypassed at least once. ... The results show fingerprints are good enough to protect the average person’s privacy if they lose their phone. However, a person that is likely to be targeted by a well-funded and motivated actor should not use fingerprint authentication.”
“In Notes on the Synthesis of Form, Christopher Alexander points out that design always speaks of form and its context. A good design is not just a property of the form, but it is a matter of fit between the form and the context. The reason why we cannot evaluate an isolated form is not because we are unable to precisely describe the form itself, but because we are unable to precisely describe the context with which it will interact. ... Exactly the same limitations exist in the world of programming. No matter how precisely we can talk about programs, we also need to exactly understand the environment with which they interact. This is the hard part.”
The idea that suitability determination involves a multi-dimensional performance assessment on a scenario-by-scenario basis, rather than a simple checklist, is a departure from the process I often see. This approach is more complex, and perhaps more fraught, but it may also be the one that delivers something usable.
And then on the unintended consequences of avoiding maintenance, later:
“When discussing maintenance, [Stewart] Brand mentions the cautionary tale of vinyl siding, which is used to avoid problems with peeling paint. Rather than repainting a wooden wall, you cover it with a layer of vinyl siding, which is durable and weather resistant. The problem is that vinyl siding blocks moisture and the humidity behind it can cause structural damage to the building. Many traditional materials have the attractive property that they look bad before they act bad and, furthermore, the problems with traditional materials are well understood. ... The lesson about using traditional materials has a relatively easy parallel. If you build software using tools whose problems you understand, you will be able to expect and resolve those problems. If you are using a new material, you will not anticipate where problems might occur.”
“By learning to use frameworks instead of the tools and protocols they implement, developers not only miss out on foundational knowledge that will help them become better at their job, but also hamstring themselves to the subset of features the frameworks’ creators felt important enough to enable. Expanding a project beyond that expected use case will require diving into those low-level tools and protocols.”
“You see, frameworks exist to offload repetitive work from you. They do not exist so that you can not care at all what’s going on under the hood and rely on the fact that it’s all magic. The first time you choose a framework like React or Angular for your projects should be when you’re confident that you can create that project without React or Angular too.”
You should learn C not to use it, but because it will make you a better programmer: it will help you understand the code that underlies your usable high-level language. node.js vinyl siding is cool, but do not ignore the old-fashioned tools it “replaced”: we used — and still use — them for a reason.
Steve and I came up hearing the same refrain: “Learn C, it’ll make you a good programmer — it’s how the computer works behind the scenes.” As he points out in his three-part series, though, using C just means having a thinner abstraction layer between your code and the hardware on which it runs. That layer still exists. I do believe new developers should learn to work in this environment: understanding the code that underlies usable, high-level languages like Python will help you write better Python for the same reason that understanding assembly will help you write better C; I do not believe they should use it, though, and here’s why:
I have an innate distrust of writing advice that seems to come from a large organization. I blame academia, and all those years I spent reading and writing the most rigid, boring prose known to man. I prefer advice from actual writers — and that’s what I found in Harry Guinness’s recent article for The New York Times, How to Edit Your Own Writing. He offers some great advice, and makes some excellent book recommendations. To add to that list, I also recommend William Zinsser’s On Writing Well and Stephen King’s On Writing: Harry’s books focus on the mechanics, while these concern themselves with the craft.
I agree with Kev that readers should have the ability to view my work in whatever format they please, but I still truncate the posts in my RSS feed. Here’s why:
Whenever I find a new writer, I go through everything they have ever written — turns out, good writers write good things often; this helps me find great works from their past. RSS feeds with, say, the site’s ten most recent posts make this much harder than ones with every article in them, so when I restarted this site, I took the latter route. Over a thousand posts would make my feed a hefty __ MB, though, so truncating the individual posts allowed me to strike a nice balance between the two.
Mozilla’s mistakes — although concerning — worry me less than Google’s methodical Internet takeover. Indulge me while I bring the uninformed up to speed: after eviscerating its competitors, over half the Internet now uses Chrome. This dominance gives its creator the power to force sweeping change across this decentralized system: although superficially optional, failure to comply means longer load times, lower search ranking, and lost revenue. Over the last few years in particular, the company has shown an increased willingness to wield that power with more and more aggressive mandates. Publishers who do not expose their content through Accelerated Mobile Pages lose viewers and income. Users who prefer other browsers, perhaps because they value their privacy, either cannot access many of Google’s popular services, or have to live with a degraded user experience. It all leaves a bad taste in my mouth. So while I find Mozilla’s mistakes concerning, I like the thought of supporting Chrome even less, so I decided to give Firefox another try.
“Some of the key management systems — 5 out of 73, in a Citizen Lab scan — seem to be located in China, with the rest in the United States. Interestingly, the Chinese servers are at least sometimes used for Zoom chats that have no nexus in China. ... The report points out that Zoom may be legally obligated to share encryption keys with Chinese authorities if the keys are generated on a key management server hosted in China.”
This just makes an adversary’s easy job even easier, though, thanks to the weak encryption scheme those keys facilitate:
“A security white paper from the company claims that Zoom meetings are protected using 256-bit AES keys, but the Citizen Lab researchers confirmed the keys in use are actually only 128-bit ... Furthermore, Zoom encrypts and decrypts with AES using an algorithm called Electronic Codebook (ECB) mode, ‘which is well-understood to be a bad idea, because this mode of encryption preserves patterns in the input,’ according to the Citizen Lab researchers. In fact, ECB is considered the worst of AES’s available modes.”
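A toy illustration of the problem the researchers describe (this is not real AES, just a hash-based stand-in for a block cipher): because ECB mode processes every block independently, identical plaintext blocks always produce identical ciphertext blocks, so patterns in the input survive encryption.

```python
# Toy "ECB mode": each 16-byte block is transformed independently,
# exactly the property that makes real ECB leak input patterns.
import hashlib

BLOCK = 16

def toy_ecb_encrypt(key, plaintext):
    """'Encrypt' each block on its own, with no chaining between blocks."""
    out = []
    for i in range(0, len(plaintext), BLOCK):
        chunk = plaintext[i:i + BLOCK]
        out.append(hashlib.sha256(key + chunk).digest()[:BLOCK])
    return b"".join(out)

key = b"not-a-real-key"
msg = b"ATTACK AT DAWN!!" * 2  # the same 16-byte block, twice
ct = toy_ecb_encrypt(key, msg)

# The repetition in the plaintext is visible in the ciphertext:
assert ct[:BLOCK] == ct[BLOCK:BLOCK * 2]
```

Chained modes like CBC mix each block with the previous ciphertext block, so the same plaintext block encrypts differently each time; ECB skips that step, which is why it is considered the worst of AES’s modes.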
Bill Marczak and John Scott-Railton’s study, Move Fast & Roll Your Own Crypto, goes into more detail, and concludes with this key takeaway: “As a result of these troubling security issues, we discourage the use of Zoom at this time for use cases that require strong privacy and confidentiality”. Unfortunately, though, most of Zoom’s competitors don’t do much better.
Most articles focus on the negative side of giving anyone the power to publish: in recent years, it has enabled massive disinformation campaigns so effective that even its own citizens now question the democratic underpinnings of the world’s premier superpower. In Zero Trust Information, though, Ben Thompson argues that while this power did lead to an increase in misinformation, it also led to the proliferation of much more valuable information. For proof, one need only look to the Seattle doctors who defied a government gag order to share their findings on COVID-19.