In Defense of Slow Communication

I love computer science, though I don’t necessarily love computers. That itself is a blog post for another day. What I don’t love about computers — or more specifically, our current use of them — is their use as a constant interruption device. There are many great advantages to the speed and precision of communication with technology today; the collateral damage of our current use of this superpower is our own intention and focus.

There is plenty of evidence to show us that interruptions kill our focus. Being reactive is hardly energizing, but there’s a constant incentive for people and apps to be grabbing our attention. This stresses not only our personal wellbeing but also our relationships, as it becomes more commonplace in our personal and work communications to expect an immediate response.

I am not alone in this proposal, but I shall reiterate it here: We can change this norm. Say ‘NO!’ to interruptions! The idea that all texts should be replied to within five minutes is garbage, as is the idea that emails should be replied to within the hour. There are always exceptions, but the problem I see is that too many forms of communication fall into the “emergency” bucket.

Here are some ways you can try to communicate slowly:

  • When you’re engaged in an activity (a project at work, watching a movie with a friend), put your phone on Do Not Disturb, close Slack, close your email tab, and just be there and focus.

  • Do not apologize for responding ‘late’, especially if ‘late’ means a few hours or even (gasp) two days.

  • Send a handwritten letter or card to a friend.

  • Hold a dinner party or salon with friends to discuss an issue you’re interested in, instead of discussing it on social media.

There are many advantages to using social media, texting, phones, Slack, and email. There are certainly times when these are the best tools for the job. But before you communicate your next item, just consider — can I communicate this slowly? Go ahead, be a rebel.

Cybersecurity And Babies

I wrote this blog post in July of 2020:

Preparing for a child so far has been both exciting and overwhelming. There are so many details to consider and decisions to make. I try my best not to worry about each step or let my anxiety grow to the point where every choice seems like a potential death trap. I have faith that my husband and I have good problem-solving abilities and that our baby will be brought into a safe environment. But there is one choice I’m certain I am justifiably paranoid about: baby monitors.

The proliferation of “smart” home devices (a wildly unapt adjective, in my opinion) has been a concern to me since my time in the security industry about a decade ago. I was trained to see each connection in a physical space as a potential eye and ear of an intruder in your private environment, one that can often go undetected. There are those in the “privacy is dead” camp, who might throw up their hands and say “it doesn’t matter anyway”, or, worse yet, “I’ve got nothing to hide” (as if hiding wrongdoing were the sole purpose of privacy). I also know those in the security field who will lock down every device they have, share as little as possible online, and have personal disinformation schemes in place to protect themselves.

I take a more nuanced, middle-of-the-road approach, but to do that I think one thing must come first: full awareness of the risks. Once someone is aware of the security risks, and clear on their own priorities, they can make informed trade-offs. For example, maybe someone is OK with giving up some of (or a lot of) their personal information to engage with friends or build a presence on Instagram. Others share their personal opinions and information on Twitter to be able to engage in debate or discussion. The challenge is that informed trade-offs often require a level of technical understanding that most don’t have access to. We might think, “What does it matter if I add my fingerprint to this profile?”, without considering that the same company saving your biometrics also has a rich set of emails, photos, and other data points on you that it can aggregate and apply machine learning to, building complex profiles of your history, personality, and anticipated actions. Now that full knowledge is accessible the next time you use your fingerprint to sign onto a device or access a physical location.

So, it’s important to consider two things first: 1) What’s your main objective in using this technology? And 2) What are the risks, surface-level and bigger-picture, of using it? We can’t always have a clear answer to #2, but taking the time to consider the possibilities puts you in a much better position than the one most apps and tech companies try to program their users into, which is passively accepting an impression of a product that belies its own risks. Considering some possibilities is better than considering few to none.

And here we come to a seemingly unlikely application of this framework: baby monitors. The objective of a baby monitor is to give parents an audio and/or video signal of what’s happening in the baby’s room. From the comfort of their own adult bedrooms, parents can get the information they need to get up and check on the baby, time cries for sleep training, or make sure the baby is not in distress. This is the basic objective. Several monitors offer additional features, like a WiFi connection so that you can check on your children while traveling or at work, while they are under the care of someone else. Some offer data tracking that helps you optimize your baby’s sleep patterns. On the surface, these can be features that are particularly useful or a great fit for some parents. But are they worth the trade-offs? To me, the advantages are minimal compared to the weight of the risks. Being able to see your child while you’re traveling is nice but not necessary, because presumably they are already under the care of someone you trust. And building a data profile on your child from infancy? The sleep-pattern optimization may be nice, but is it worth the risk of that data getting into the wrong hands? (Not to mention considering whether our persistent optimization culture is something you truly want to pass on.)

My objective for using a baby monitor is simply to be able to hear and see my baby, and possibly the video isn’t even all that crucial, though the option is worthwhile (and nice). I definitely don’t need WiFi to do that, and I don’t need any other fancy features that necessitate data collection. I’m not going to share the specific monitor I chose, because it’s not important to you. My encouragement is to consider the purpose and risks before choosing any technology; those trade-offs are going to be unique to you. There are several baby gear recommendations from friends I am taking on without much research, because I simply don’t have the time and I trust their experiences. But technology choices are ones I consider more carefully. My child will enter a world driven by a mountain of data, and I want to give that child the chance to build their data profile on their own, and to be protected from creepy hackers, because the technical features I really need are pretty minimal. Instead of poring over data or worrying about WiFi intruders, I hope I’ll be able to save some of that energy for bonding directly with my baby. As Cal Newport says in Digital Minimalism, “humans are not wired to be constantly wired.” I hope that in considering the intention and risk behind your technology choices, you can feel empowered by your selection, and that any little humans you’re keeping alive can too.

A Luddite Computer Scientist's Blog

Ned Ludd is widely believed to have never existed, but a group of textile workers in England in the 19th century took on his name as a symbol of insurrection as they prepared to demolish the machines they feared would put their jobs, their skills, their very identities at risk.

A luddite computer scientist might seem like a contradiction in terms. And I’m not interested in smashing my computer or taking down the Internet (on most days). But I do believe we need to reinvent the way we relate to technology — how we understand it, how we make it, how we use it. Computer science and technology are two different things; most people understand how to use technology far better than they understand the computer science behind it. We are constantly bleeding data without an understanding of how it is being used. An industry full of intelligent people is busy making apps that can put a mustache on your face, or inventing unusable futuristic cars, when this brain power could be harnessed to address problems that affect real human lives today. That industry is losing out on more great minds because of stereotypes, discrimination, and harassment. Our favorite apps make us sadder, and we spend so much time looking at screens that we’re getting bad at looking at faces. I’m not a luddite despite my interest in computer science; it’s more accurate to say I’m a luddite because of it.

But where I might break with my machine-smashing forebears is this: I see tremendous hope and potential in what humanity could do with computer science. A reimagined relationship with this discipline — which is fascinating, beautiful, philosophical — might give us the tools we need to solve, or help solve, some of the world’s greatest problems.

I intend to use this blog and my newsletter to explore these questions and the tension, or occasional harmony, between humanity and computer science. Here, I plan to share some of my work, a bit of my personal life, an occasional (?) rant, or just whatever I am thinking about at the time — which may go slightly off-topic, but I intend to always return to this primary issue. I also plan to consolidate my thoughts on the state of these issues in the world biweekly over on my Substack newsletter, in a more organized and consistent form (probably). I hope that you enjoy these offerings.