Three Laws
In his science fiction stories, Isaac Asimov wrote about laws of behavior programmed into robots to prevent them from harming individual humans, or humanity as a whole. The laws formed a hierarchy based on human safety and utility:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Many of Asimov’s robot stories draw their interest from situations where ambiguities and unknowns cause a robot to fail to act appropriately. He later wrote that the underlying concepts were obvious and applied to any human tool:
First Law: A tool must not be unsafe to use.
Second Law: A tool must perform its function efficiently unless this would harm the user.
Third Law: A tool must remain intact during its use unless its destruction is required for its use or for safety.
Many of our tools are now embodied in software. While some of these are, indeed, mission-critical or safety-related, most are pretty ordinary. I propose here a set of usability laws for the software and related devices we all must use regularly:
First Law: Software must not automatically or by default piss off the user, or through some inexplicable delay or aborted function cause the user to become pissed off.
Second Law: Software must perform its functions efficiently unless doing so would produce a surprising, nonsensical, counterproductive, or useless result that would surely piss off the user.
Third Law: Software must perform updates as needed in order to maintain and/or improve its functions, unless the update actually degrades those functions, or the timing, duration, and sheer frequency of the updates would cause the user to become pissed off.