Turning
Turning things over—in your head or in your hands—can be contemplative. You might be accused of doing the same thing over and over again and expecting different results, but I don’t think that’s quite right. It is about expecting something, but expecting something more, not different. It’s about allowing time for things to seep in.
Some things respond easily to that kind of examination, like old furniture or hand tools: weathered and witness to use and carelessness and care. Some new physical things do, too, but many are purposefully designed to be abstract and minimal—merely portals for designed information. There aren’t a lot of obvious features to turn over and examine, because they’re not supposed to be there: they’re supposed to be magic, like a sleek, black, glossy hat.
And yet, when my phone is off (“off-off”: powered all the way down, so it can’t be roused with a touch or a lift) I can better appreciate its understated aesthetic. (After all, I would never eagerly dismiss or disparage the beauty of a smooth stone.) It bears repeating: it’s not supposed to draw a lot of attention to itself—it should simply be what you need, when you need it. It should offer its functions so that you naturally reach for them when you want them but don’t otherwise notice them. And I can still examine the details of its elegant design and the places where dust and lint find its tiny seams and pockets. Tiny imperfections and limitations; an almost microscopic wabi-sabi.
My phone and my coffee cup have more in common than I thought. Both are attractive and well designed; both show signs of wear; and my attention is ordinarily drawn more toward what they hold than toward their affordances.
Three Laws
Isaac Asimov wrote in his science fiction stories about laws of behavior programmed into robots to prevent them from harming humans or destroying humanity entirely. They formed a hierarchy based on human safety and utility:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Many of Asimov’s robot stories are interesting because they deal with situations whose ambiguities and unknowns cause the robot to fail to act appropriately. He later wrote that these concepts were obvious and applicable to any human tool:
First Law: A tool must not be unsafe to use.
Second Law: A tool must perform its function efficiently unless this would harm the user.
Third Law: A tool must remain intact during its use unless its destruction is required for its use or for safety.
Many of our tools are now embodied by software. While some of these are, indeed, mission-critical and/or safety-related, most of them are pretty ordinary. I propose here a set of usability laws for software and related devices that we all must use regularly:
First Law: Software must not automatically or by default piss off the user, or through some inexplicable delay or aborted function cause the user to become pissed off.
Second Law: Software must perform its functions efficiently unless it produces a surprising, nonsensical, counter-productive or useless result that would surely piss off the user.
Third Law: Software must perform updates as needed in order to maintain and/or improve its functions, unless the update actually degrades its functions; or the timing, duration and sheer frequency of those updates would cause the user to become pissed off.
FutureAnimal
For as much as we might wish to, we cannot un-invent things. The popularity of a given product or technology may wane due to fashion or regulation; it may be “lost” or “forgotten” because something better takes its place. But inventions address a need; they solve a problem, and so they seem to persist. Vinyl records and vinyl jackets will probably always live on, for better or worse, even as they are eclipsed by bits streaming down from the Cloud and by Gore-Tex.
Which really isn’t the point. The point is we can’t go back to a time when we didn’t have recorded music or clothing made out of synthetic materials. We also can’t go back to a time when we didn’t have cars or microwaves or smartphones.
One of my main rants is that technology can unfortunately replace almost every form of effort that humans can engage in. Since there’s not really a biological force that notices this and says, “Whoa, pony! Let’s not use that labor-saving device!” we end up indiscriminately letting technology solve every problem we have, whether for survival, convenience or entertainment.
A Luddite might rail against technology and sabotage manufacturing (for all the reasons that we think Luddites did this but actually didn’t). A common-sense approach might be to emphasize the tenets of adequate physical exercise and moderation in consuming all things. A philosopher might illuminate and encourage greater discernment between things that truly enrich our lives and help us achieve a greater, fuller expression of our humanity and those things that merely amplify our fears and flatter our ignorance, comfort and status.
But as I’ve hinted, the Luddites would fail. And moderation is for chumps (until we change the culture). And philosophy is stuffy and boring (even though it isn’t: people just don’t recognize when they’re doing it or watching it in action).
My perspective now is that since technological “progress” is inexorable, we might simply find ourselves these days in an uncanny valley: technology is good enough to give us nearly everything we want, but at the cost of our long-term animal needs like musculoskeletal health, good sleep habits and a liver and pancreas that are not waging open rebellion. It feels a little risky, but my bet is on better, more insightful design and more developed technology that allows us to be the human animals we are without needing to go backwards in time.