Blog Post: What Barriers Will New Input Methods Remove?


Remember how ludicrous it seemed when you first heard how many advertising messages you receive in a day? Thousands, perhaps hundreds of thousands, of different brands and ideas are projected at you every day for consumption. Now we just try to make sure we understand all of them and process the ones important enough to us. We even flock to content curators to help with this task.

While the numbers are not comparable yet, the number of ways we can input data into computers is growing just as fast.

Almost 30 years ago, the fine folks at Apple and IBM introduced us to the first two methods of input, both of which depended on our fingers: the keyboard and the mouse. Crazy to think that I have been perfecting my typing and clicking skills for that long. But before we chuckle, think of the computing ability both methods gave us. We could write, play games, create art, read, research, publish, and do countless other activities with just our hands.

It makes sense that there wasn't a need to iterate past those two. Fortunately for me, I was born at the right time. In the time it takes me to finish this post, my father can bang out a paragraph or two. You know I love you, Dad, but it's true.

My, how times have changed.

Mobile devices and the multi-touch interface started a revolution. Now that we can take computers wherever we want, with data connectivity to multiply our productivity, why limit our imagination?

That question came into sharper focus after rewatching a talk Luke Wroblewski gave at the October 2013 dConstruct conference. It's worth the time to watch, because he walks us from 2007 to today through all the various methods. Just to name a few, here are some of the ways we can currently provide our devices input:

  • Light
  • Motion
  • Your voice
  • Temperature

And this is just for starters!

In the future, you will be able to use your heartbeat to unlock and start your car. A special grip could open your house door. Soon you will surely be able to turn any flat surface into a touch interface, or just project one into the air. The Starbucks table I am working on will certainly need a good cleaning before that happens.

The problem is that processing all of this seems overwhelming. Not only will specialized apps be collecting and curating this data, but we will also need to consume it.

Nobody dared come up with different input methods once we had the keyboard and mouse, because usage patterns solidified. If the QWERTY keyboard has kept its current configuration for this long, how are we to process all of these new input methods?

Just because we can create data all of these ways, does that mean we should?

Removing hurdles will be the most important challenge for input creation over the next 10 years. It's no big deal to attach a device that automatically uses my heartbeat (which is as unique as a fingerprint) to authenticate my identity, so the price of those devices will be the key barrier. And if the price is right but a heartbeat is too easy to fake, how viable will it be?

The greatest example is what gesture typing is doing on the Android platform. We tap out letters on our mobile devices just as we would on a physical keyboard, simply because it's what we are used to. When apps like Swype arrived a year or two ago, though, you could type an entire word in a single swipe. Even the hard ones. I will admit, it's one of the reasons I bought a Nexus 7 instead of an iPad. One barrier is removed, allowing for easier input.

What will be the next barrier to fall? Can’t wait to see.
