The best thing about technology is that we can use it to do what we want. But if we’re not careful, technology can end up taking over our lives.
The key question is: how do you make technology not suck? The answer is twofold: one part is making the right design choices, and the other part is understanding how people actually use these technologies.
Designers of technologies build a model of how they expect their users will behave. For example, when Google first launched Gmail, they tried different patterns to see which ones would work best. They’d start with a small number of users, watch them using the product, then roll out changes to the rest of their user base.
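The gradual-rollout pattern described above, showing a change to a small, stable slice of users before everyone sees it, is commonly implemented by hashing a user ID into a bucket. A minimal sketch of that technique (the function name, the feature keys, and the 100-bucket scheme are my own illustration, not anything Google has published):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically assign a user to a rollout bucket.

    Hashing user_id together with the feature name gives each
    feature its own independent 0-99 bucket per user, so the same
    users are not always first (or last) to see every experiment.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Start by watching, say, 5% of users, then widen the rollout
# by raising `percent`; each user's bucket never changes.
```

Because the bucket is derived from a hash rather than stored state, the assignment is stable across requests and servers without any database lookup.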
So what is this “right” design? To find out, you have to look at how people actually use technology in real life: not just what they say they do, but what they really do.
Technology as a whole is getting better and better, but somehow our tech products are actually becoming harder and harder to use. How can we make technology not suck?
We’ve already seen the answer from a hundred directions. If you want to make something that people can use, you have to think about what they’re doing while they use it. You need an idea of how they behave. At the moment we’re building software the way you’d build a house without knowing who’s going to live in it.
You don’t have to make everything yourself. Many of the projects I work on involve lots of other people. But if you’re designing something, you need to be able to imagine how it will fit into your users’ lives. For example, if you’re writing a web app, it doesn’t really matter that there are still some people using IE6; what matters is who those people are and why they haven’t upgraded yet. It’s possible their browsers are so old because their computers are so old that upgrading would be too expensive for them. Then again, maybe those users just don’t care about having a modern browser and all of this is irrelevant. You have to figure out which is true by talking to real users and thinking hard about what you hear.
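One cheap way to find out who those users are is to look at your own server logs rather than guessing. A minimal sketch, assuming an Apache-style “combined” log where the user-agent is the last quoted field; the family labels and the "MSIE 6" marker are illustrative assumptions:

```python
import re
from collections import Counter

# Apache "combined" log lines end with a quoted Referer and a
# quoted User-Agent; grab the final quoted field on each line.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def browser_share(lines):
    """Tally rough browser families from access-log lines."""
    counts = Counter()
    for line in lines:
        match = UA_PATTERN.search(line)
        if not match:
            continue
        ua = match.group(1)
        if "MSIE 6" in ua:
            counts["IE6"] += 1
        elif "Gecko" in ua or "WebKit" in ua:
            counts["modern"] += 1
        else:
            counts["other"] += 1
    return counts
```

Ten minutes with a script like this tells you whether the IE6 question is about 20% of your audience or 0.2% of it, which is the difference between a real constraint and an imaginary one.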
A lot of technology companies are run by programmers. This is a mistake.
Programmers should be running programming teams, or maybe technology teams, and they should report to someone else: a CEO who has a broad and ruthless view of the company’s strategic interests.
A friend was recently asked to take over a software project that had gone off the rails. It was two years late and the code didn’t work. My friend asked how long it would take to finish if it had been done right. Three months, he was told.
What had gone wrong? The team was run by engineers. They made up for not knowing how to do their jobs by working insane hours. It’s hard to say whether this is pathological altruism or pathological selfishness, but either way it’s bad for the company. A CEO would never have let them get away with it.
Do you really want your company run by someone who thinks that having programmers work 80-hour weeks is normal?
The most dangerous thought you can have as a creative person is to think that you know what you’re doing.
Because once you get into a groove, once you think you’ve got it all figured out, that’s when someone else comes along and blows your whole game away.
It’s happened over and over again in the technology industry. The company that thinks they’ve got it all figured out? They’re the ones who get complacent. They stop innovating. They stop creating new things. And then some tiny startup comes along with a whole new perspective, and they blow them away with a better product.
The thought that you know what you’re doing? That’s death for an artist. The moment you get arrogant about your skills is the moment that someone else starts to catch up to you.
So embrace the beginner’s mind. Embrace uncertainty and doubt and insecurity, because that’s where all the best stuff happens. It happens on the edge of chaos, on the edge of confusion, when we don’t know what we’re doing anymore – that’s when great art gets made.
The most common response I’ve gotten from people who read this essay is, “That’s all well and good for you, but I need to make my pages work in Netscape 4.”
My response to this is: The Web isn’t about making pages that work the same way on all browsers. It isn’t even about making pages that look the same on all browsers. The Web is about communication and collaboration between authors and readers. Your goal should be to create pages that work well for the vast majority of your users, not a few fringe readers with specialized needs.
You have to design your site for your audience, not for yourself or for some abstract ideal of interoperability. When I redesign my site I test it in multiple browsers and multiple versions of each browser, because I have a large number of visitors who use those browsers. But I don’t worry about supporting Netscape 4, and neither should you.
When people design their site so it can be viewed only in their favorite browser and their favorite version of that browser, they are saying “Only people like me are important.” If you want your site to grow, you need to realize that other people’s preferences are as valid as yours.
In the last years of the nineties, it seemed that everyone who was anyone had to have a website. As more and more people got online, it became less of a novelty and more of a utility. It was nothing special that you were online; what mattered was what kind of site you had. Was it cool? Did it tell people something about your taste, your opinions, your interests?
The web was turning into a kind of mirror for the self. It reflected not only our vanity but also our ambivalence about that vanity: we wanted to put our best face forward, but we also wanted to broadcast our flaws and foibles. The web was an expression of who we were as individuals: it was personal. For those who created sites in this era, there was the thrill of having an audience, but also the dread of having one. People could react to what you wrote or designed; they could link to you and send you email in response to what you posted on your site. You had readers—and critics—as well as friends, family members, or colleagues who might drop by.
The web was becoming social before social networks existed: it wasn’t just publishing technology; it was communication technology too. Because people could read any site, link to it, and write back to its author, every page was a potential conversation.
The web is a wonderful thing, with the potential to bring knowledge and entertainment to people all over the world. It’s also a medium that can be used to steal money from these same people, in ways they won’t notice until it’s too late.
The problem is this: the two things that keep the web honest are the scarcity of good URLs and the fact that you can browse anonymously. If you want to steal money from someone on the web, you need a good URL so people don’t confuse your site with someone else’s, and you need to be able to get credit card numbers from your victims.
If you could be exposed by putting up a bad site at “www.citibank.com”, no one would do it; but there is no such risk, because “www.citibank.com” is already taken by Citibank. If you wanted to steal credit card numbers, but had to do it without ever knowing who your victims were, you probably wouldn’t bother; but as things stand now, you can ask for the numbers all you like, and if someone gives them to you they’re fair game.
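The confusion attack above depends on a URL that looks almost, but not exactly, like the real one. A crude way to flag lookalikes is edit distance against a list of domains you trust; this is a sketch of that idea (the trusted list and the threshold of 2 are illustrative choices):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(domain, trusted, threshold=2):
    """Return the trusted domain a suspect one imitates, if any."""
    for real in trusted:
        if domain != real and edit_distance(domain, real) <= threshold:
            return real
    return None
```

A domain one or two keystrokes away from “citibank.com” is exactly the kind of URL a scammer wants, and exactly the kind this check would flag; real phishing defenses also have to handle lookalike Unicode characters, which plain edit distance misses.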
I suggest that we split these two functions apart by creating a new internet layer underneath HTTP (the current protocol for transferring web pages).