The end for keyboards and mice?


You’re in fast moving traffic on a busy motorway approaching a complicated junction with just seconds to get into the right lane. Your phone, sensing that now is not the moment to disturb you, diverts an incoming call straight to voicemail. Later, when you are in a more relaxed state, it plays the message back and offers to ring the caller back.

Even if you are packing an iPhone 5 or the latest Samsung, it is fair to say that your phone is still a long way from doing this. Despite the impressive array of features offered by today’s handsets – including voice commands – most people still interact with their phones by pressing buttons, prodding a screen or making the occasional swipe or pinch.

It is a similar story with computers. Take Microsoft’s new Windows 8 operating system, due to be launched later this week. Its colourful, tile-laden start screen may look startlingly different to older versions of Windows, but beneath the eye candy it’s still heavily reliant on the keyboard and mouse.

In fact, with one or two notable exceptions, it is striking just how little the way we interact with computers has changed in the last few decades.

“The keyboard and mouse are certainly a hard act to follow,” says George Fitzmaurice, head of user interface research for US software maker Autodesk. But, despite the apparent lack of novelty in the interfaces of most of today’s mass-market devices, there are plenty of ideas in the pipeline.

Take, for instance, technology that can monitor your stress levels. One technique being developed is functional near-infrared spectroscopy (fNIRS), which monitors oxygen levels in the blood, and therefore activity, in the front of the brain. “It measures short term memory workload, which is a rough estimate of how ‘busy’ you are,” says Robert Jacob, professor of computer science at Tufts University, near Boston, Massachusetts.

The technology is currently used in medical settings, but could one day be used to help filter phone calls, for example. Today fNIRS works via a sensor placed on your forehead, but it could also be built into baseball caps or headbands, allowing the wearer to accept only important calls. “You could tell your phone only to accept calls from your wife if you get busy beyond a certain gradation of brain activity,” adds Jacob. Perhaps more immediately, it could also help organisations assign workloads efficiently. “If a machine can tell how busy you are it can tell if you can take on an additional workload, or it could take away some of your work and assign it to someone else if you are over-stretched.”
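
As a rough illustration of the rule Jacob describes, the sketch below shows how such a call filter might look in Python. The workload scale, threshold and contact list are invented for the example and do not come from any real fNIRS system.

```python
# Minimal sketch of the call-filtering idea: divert calls when a
# (hypothetical) fNIRS workload estimate crosses a threshold.
# The sensor reading and contact list are illustrative, not a real API.

WORKLOAD_THRESHOLD = 0.7   # 0.0 = idle, 1.0 = fully occupied (assumed scale)
PRIORITY_CONTACTS = {"wife", "boss"}

def handle_incoming_call(caller: str, workload: float) -> str:
    """Decide what to do with a call given the current mental workload."""
    if workload < WORKLOAD_THRESHOLD:
        return "ring"          # user has spare capacity
    if caller in PRIORITY_CONTACTS:
        return "ring"          # important callers always get through
    return "voicemail"         # too busy: divert and replay later

# Example: a busy driver gets a call from an unknown number
print(handle_incoming_call("unknown", workload=0.85))  # -> "voicemail"
```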

Other forms of “brain-computer interface” are already being used and developed for a growing number of applications. Electroencephalography (EEG) picks up electrical signals generated by brain cell interactions. It has long been used to diagnose comas, epilepsy and brain death in hospitals and in neuroscience research. Variations in the frequencies of these signals can be used to identify different emotions and other brain states. Recent years have seen the launch of simplified EEG headsets that sell for as little as $100.

For example, a British company called Myndplay makes interactive short films, games and sports training software that users control via these brainwave-measuring headsets. Those who can successfully focus their minds or mentally relax sufficiently when required can influence film plots and progress to higher levels in games.

Similar technologies are increasingly being used to help the disabled. Two years ago an Austrian company called Guger Technologies released a system designed to help paralysed patients type by highlighting letters on a grid one by one until the EEG signal associated with the desired letter is detected. Spanish researchers have developed EEG-controlled wheelchairs and are working on using the same method to control prosthetic arms.
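
For those curious how such a letter-grid speller works in practice, here is a simplified sketch of the selection loop, with the EEG detection step stubbed out; the real signal processing in Guger Technologies’ system is, of course, far more involved.

```python
# Simplified sketch of an EEG letter-grid speller: letters are highlighted
# one at a time, and a letter is chosen when the EEG response to its
# highlight is detected. detect_eeg_response() is a stand-in for the real
# signal-processing step, not any vendor's actual API.
import itertools

GRID = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def detect_eeg_response(letter: str) -> bool:
    """Placeholder: return True when the user's EEG reacts to this highlight."""
    return letter == "H"   # pretend the user is focusing on 'H'

def select_letter() -> str:
    # Cycle through the grid, highlighting each letter in turn
    for letter in itertools.cycle(GRID):
        # (in a real system the letter would flash on screen here)
        if detect_eeg_response(letter):
            return letter

print(select_letter())  # -> "H"
```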

Other researchers are working on “emotionally intelligent” interfaces that use cameras to read users’ emotions. By analysing facial expressions, these systems can spot characteristic signs of anger, confusion or other feelings. Emotions give rise to very similar facial expressions across different cultures, so such systems could be used anywhere, says Peter Robinson, professor of computer technology at the University of Cambridge.

Call centre staff are routinely given training or scripts to help them deal with angry customers, and teachers use facial expressions to understand how well their students are coping with lessons. Researchers are developing systems that provide computers with similar information using algorithms that analyse the position of features such as the mouth and eyebrows in images of a user’s face. “If a computer can tell that the student is confused then it could adopt the same techniques as a human teacher, perhaps by presenting information differently, or by providing more examples,” says Robinson.
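
A toy version of the feature-position idea Robinson describes might look like the sketch below, where the landmark names, coordinates, thresholds and labels are all illustrative stand-ins for what a real face detector and classifier would provide.

```python
# Illustrative sketch of emotion estimation from facial feature positions:
# compare landmark coordinates (eyebrows, mouth corners) against a neutral
# baseline and map the differences to a coarse label. Landmark extraction
# is assumed to come from an external detector; thresholds are in pixels
# and purely for illustration.
from typing import Dict, Tuple

Landmarks = Dict[str, Tuple[float, float]]  # name -> (x, y) in image coordinates

def estimate_state(current: Landmarks, neutral: Landmarks) -> str:
    # y grows downward in image coordinates, so a larger y means lower on the face
    brow_lowering = current["inner_brow"][1] - neutral["inner_brow"][1]
    mouth_raise = neutral["mouth_corner"][1] - current["mouth_corner"][1]
    if brow_lowering > 5:
        return "confused or frowning"   # brows pulled down toward the eyes
    if mouth_raise > 3:
        return "smiling"                # mouth corners lifted
    return "neutral"

# Example with made-up coordinates
neutral = {"inner_brow": (120.0, 80.0), "mouth_corner": (100.0, 150.0)}
frowning = {"inner_brow": (120.0, 88.0), "mouth_corner": (100.0, 151.0)}
print(estimate_state(frowning, neutral))  # -> "confused or frowning"
```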

Air kick

While these kinds of systems try to make the interface almost invisible, another option is to integrate existing technologies like cameras, sensors, screens and computers into everyday objects. This approach – known as “tangible computing” – would allow anyone to interact with computers using physical “things” rather than through special input devices. For example, you could indicate your chess move to a computer by moving a piece on a physical board rather than by entering your move with a keyboard and mouse.

Already, researchers at the Massachusetts Institute of Technology in the US are beginning to explore the extremes of this notion, proposing everything from building blocks and trays of sand to sheets of malleable material and levitating balls as ways to control on-screen experiences.

“The mouse and keyboard won’t go away completely as they are an extremely fast and efficient way of interacting with computers, but we are going to see a lot more manipulating and placing of real life things,” says David Kurlander, formerly of Microsoft’s User Interface and Graphics Research Group. “We’ll also see more pointing, speech, and combinations of these.” He also predicts that flat surfaces such as tabletops, walls or windows will be used as display screens, with images projected from personal projectors mounted on clothing or worn around the neck.

Gamers have become used to controlling games consoles with physical movements using devices such as the Kinect – Microsoft’s motion-sensing device that can track the movement of objects in three dimensions using a camera and depth sensor. The software giant has even created software that could allow people to control a Windows 8 machine with Kinect. Inevitably, comparisons are drawn with the futuristic user interface used by Tom Cruise’s character in the film Minority Report to manipulate vast swathes of information and swipe it around multiple virtual screens with extravagant arm movements. However, interface experts are sceptical of Hollywood’s take on the future.

“People tend to be lazy and large arm movements are very tiring, so I think it is very doubtful that people will ever be communicating with computers using dramatic gestures when they could achieve the same results with a mouse or their voice, or some other way, such as tiny finger movements,” says Kurlander.

This last idea is the concept behind Leap, a small 3D motion-sensing device that sits in front of a computer and allows users to browse websites, play games or use other software using finger and hand movements. An impressive promotional video released earlier this year by its creator, San Francisco-based Leap Motion, shows examples including someone navigating around a satellite image of a city with mid-air swipes and pinches, and then using thumb movements to fire a gun in a first-person shoot ’em up game. The $70 gadget is due on the market early next year.

But if all of this sounds too energetic – or perhaps dangerous – for an open-plan office, there are other, less exhausting ways of communicating with your computer. For example, systems that track eye movements are already used to help those with disabilities, both to control wheelchairs and as a substitute for a traditional keyboard when typing. Steven Feiner, professor of computer science at Columbia University and a leading augmented reality researcher, believes eye-tracking interfaces will become commonplace. “If you look in a particular place or direction, that could be the signal to call up certain information,” he says. “For example, you could look into the top right of your field of vision to call up the time, or perhaps your latest email message.”
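
The gaze-triggered behaviour Feiner imagines boils down to mapping where you are looking to an action. The sketch below shows that mapping in its simplest possible form; the regions and actions are assumptions for illustration, not any particular eye tracker’s interface.

```python
# Sketch of a gaze-triggered overlay: map where the user is looking
# (as a fraction of the field of view) to an action. The gaze source
# and the actions are placeholders.

def gaze_action(x: float, y: float) -> str:
    """x, y in [0, 1]: (0, 0) is the top-left of the field of view."""
    if x > 0.8 and y < 0.2:
        return "show_clock"          # glance to the top right -> show the time
    if x < 0.2 and y < 0.2:
        return "show_latest_email"   # glance to the top left -> latest message
    return "no_action"

print(gaze_action(0.9, 0.1))  # -> "show_clock"
```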

Under the skin

This approach – known as augmented reality – does away with the need for separate screens by overlaying information from computers directly onto a user’s field of vision using glasses, contact lenses or some form of head-mounted visor. “What we are trying to do is integrate information with the world around you, so that instead of looking at information in a separate place to a thing, you see it in the space of the thing,” he says.

The field was first envisaged as far back as the 1960s, but only now is the technology becoming available to make it practical. Some recent innovations have come close. Earlier this year Google demonstrated Project Glass, a system which projects information such as incoming calls and emails onto the user’s glasses. This isn’t really augmented reality because the content is unrelated to what the wearer is looking at. Some argue that mobile phone apps that provide information related to tourist attractions or the night sky when these are viewed through the device’s camera also fall short because they track the position and orientation of the camera rather than the user’s eye movements.

True augmented reality will allow a maintenance engineer to look at an engine, read repair instructions and see specific components highlighted all within his field of vision. It could also be used in corporate meetings and even at dinner parties to call up people’s names and other information about them just by looking at them, for example.

The logical conclusion of this form of augmented reality might be a system embedded within the user’s eyes. While such technology is far enough off to belong in the realm of science fiction, embedding other interface devices into other parts of the body, such as just under the skin, may not be. For example, tiny embedded computers or sensors that interface with the human body itself could prove extremely useful or even lifesaving. Some researchers have already begun to explore this notion, implanting tiny chips under their skin to control doors and lights in a building. Imagine red LEDs lighting up beneath your skin when you’ve drunk too much to drive, or a device that warns you when your blood pressure or cholesterol level becomes dangerously high.

Of course, these specialist applications are a far cry from something that would replace the keyboard and mouse, says Autodesk’s Fitzmaurice. “You could have buttons to control an MP3 player, for example, and the benefit would be that they would always be with you and would never get lost, but would you really want them under your skin?”

In most cases, the answer is likely no. Perhaps best to hold on to that QWERTY keyboard for a while longer yet.

– BBC
