February 12, 2019 · 4 mins

Previously on UnPacked… I argued that the big tech companies were in trouble because their flagship products have nowhere left to go. The technology is now pretty much unimprovable; markets are becoming saturated; and how far they can go in monetising our data is subject to political and other constraints. Unless they can come up with new game-changing tech – something with as much potential as, say, the iPhone had ten years ago – investors will suspect that the tech giants have peaked or plateaued.

Where might this new direction come from? Batteries that charge instantly and last ten times longer would be nice, but then so would teleportation – and that’s not happening anytime soon either. When trying to spot the next big thing, we should focus on tech that already exists, but isn’t quite ready for mass-market take-up. Last year, I took a look at the potential of augmented reality (AR) – which might just fit the bill.

However, what the example of the smartphone teaches us is that game-changing IT products emerge from the coming together of several technologies, not just one. The smartphone is the perfect combination of a high-definition touchscreen interface; 3G/4G networks; social media and other mobile apps; plus the operating system and chip architecture to make it all work. The team matters as much as its members; indeed, a smartphone is greater than the sum of its parts.

In a must-read post on her blog, Sakunthula looks at what might be the next constellation of star technologies:

“A few different technologies are finally starting to reach maturity for commercial use — HD sensors, predictive artificial intelligence, and 3D displays — and combined they can be used to reshape the way people interact with computers.

“This new way of putting together software and hardware can be called a ‘3D human-machine interface’.”

These 3D displays will include “haptic feedback” – the sense-of-touch equivalent to the visual feedback of a screen or the audio feedback of a speaker system. (The vibration mode on your smartphone is a very simple haptic interface.)

What would be so revolutionary about this new human-machine interface? The author starts by explaining what’s so limited about our existing interfaces:

“Most computing, especially web and mobile, is mostly for consumption. The user provides very minimal input into the computer, but gets a lot of information out. With a few thumb swipes, the user can endlessly scroll through pages and pages of images, video and text.”

It’s true. When you think about the flow of information from machine to human, it is usually much richer, and certainly much faster, than anything that moves in the opposite direction. What we input via a keyboard, a mouse or a touchscreen is a mere dribble of data compared to the multimedia torrent we get in return.
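To get a feel for the scale of that asymmetry, here is a rough back-of-envelope sketch. Every figure in it is an assumption – a casual typing speed of 40 words per minute set against a typical 1080p streaming bitrate – but the gap it illustrates is the point:

```python
# Back-of-envelope comparison of human-to-machine vs machine-to-human bandwidth.
# All figures are rough assumptions for illustration, not measurements.

TYPING_WPM = 40                 # assumed casual typing speed
CHARS_PER_WORD = 5              # conventional average word length
BITS_PER_CHAR = 8               # plain ASCII, ignoring compression

VIDEO_BITRATE_BPS = 5_000_000   # assumed ~5 Mbit/s for 1080p streaming video

typing_bps = TYPING_WPM * CHARS_PER_WORD * BITS_PER_CHAR / 60  # about 27 bits/s

print(f"Input  (typing): {typing_bps:,.0f} bits/s")
print(f"Output (video):  {VIDEO_BITRATE_BPS:,} bits/s")
print(f"Asymmetry: roughly {VIDEO_BITRATE_BPS / typing_bps:,.0f}x")
```

Even on these generous assumptions, the machine sends us around five orders of magnitude more data than we send back.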

Sakunthula argues that the hi-def sensors, 3D displays and predictive AI software making up the human-machine interfaces of the near future could greatly speed up our interactions with computers. She gives the example of a system capable of displaying a 3D object and tracking the eye movements of someone designing or modifying it:

“It can tell which part of the 3D model you’re focusing on, and maybe automatically zoom in. It might use a trained AI to guess from your eye fixation behavior what menu you want to open, and automatically click through the menus to get to the action you want.”
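To make that concrete, here is a minimal sketch of what such a gaze-driven menu predictor might look like. Everything in it is hypothetical – the fixation features, the dwell threshold and the menu actions are stand-ins – and a real system would replace the crude heuristic with a trained model, as Sakunthula describes:

```python
# Hypothetical sketch of a gaze-driven menu predictor. The feature set,
# threshold and action names are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # where on the 3D model the eye is resting
    y: float
    duration_ms: float

def predict_action(fixations: list[Fixation]) -> str:
    """Guess the menu action from recent eye-fixation behaviour.

    A real interface would feed these features to a trained model;
    a crude dwell-time heuristic stands in for it here.
    """
    dwell = sum(f.duration_ms for f in fixations)
    if dwell > 800:    # long dwell on one spot: the user wants detail
        return "zoom"
    return "rotate"    # brief scanning: the user is still orienting

recent = [Fixation(0.40, 0.60, 500), Fixation(0.41, 0.59, 450)]
print(predict_action(recent))  # -> "zoom", so the UI opens that menu itself
```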

For an architect, an engineer or an artist this could supercharge the creative process. However, I think that most people would use the next generation of human-machine interfaces to work on themselves. A system capable of watching what we do and providing appropriate feedback (both audio-visual and haptic) sounds a lot like a personal trainer to me – but much cheaper and infinitely patient. Imagine a world full of people learning how to paint, play musical instruments or deadlift without putting their backs out.

But perhaps we’d be more interested in convenience than creativity. A human-machine interface capable of anticipating our needs and organising our lives for us would be very convenient indeed.

In fact, the first such systems already exist. Take the example of Amazon Go – Amazon’s fully automated, real-world retail outlet. Using a panoply of sensors, it allows shoppers to take what they want from the shelves and, er, go. The store’s technology identifies you, records what you take, works out your bill and deducts the payment from your account. Shop-and-go systems could be commonplace within a decade (assuming they can deal with the problem of shoplift-and-go).
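For illustration, here is a minimal sketch of that shop-and-go flow as a simple event pipeline: identify the shopper on entry, record shelf events, then total and charge on exit. The class and event names are invented; Amazon has not published how Go actually works:

```python
# Hypothetical shop-and-go pipeline: enter, pick up items, walk out, get billed.

class ShopAndGoStore:
    def __init__(self, prices: dict[str, float]):
        self.prices = prices
        self.baskets: dict[str, list[str]] = {}

    def on_enter(self, shopper_id: str) -> None:
        # entry sensors (cameras, app scan) identify the shopper
        self.baskets[shopper_id] = []

    def on_shelf_event(self, shopper_id: str, item: str, taken: bool) -> None:
        # shelf sensors report items picked up or put back
        if taken:
            self.baskets[shopper_id].append(item)
        elif item in self.baskets[shopper_id]:
            self.baskets[shopper_id].remove(item)

    def on_exit(self, shopper_id: str) -> float:
        # on exit, total the basket and charge the linked account
        bill = sum(self.prices[i] for i in self.baskets.pop(shopper_id))
        print(f"Charged {shopper_id}: £{bill:.2f}")
        return bill

store = ShopAndGoStore({"milk": 1.10, "bread": 0.95})
store.on_enter("shopper-42")
store.on_shelf_event("shopper-42", "milk", taken=True)
store.on_shelf_event("shopper-42", "bread", taken=True)
store.on_shelf_event("shopper-42", "bread", taken=False)  # put back
store.on_exit("shopper-42")  # charges £1.10
```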

Would we want to have similar technology in our homes? A system that observes our movements and organises our lives might strike many of us as creepy. But if it’s useful, I suspect that we’ll get used to the idea. However, I wonder if we can be persuaded to deal with the fiddliness of filling our homes and augmenting our bodies with the various sensors, displays and feedback systems that these human-machine interfaces are physically made out of.

Amazon Go works because the company sorts out the tech and customers just have to shop. But if we’re talking about the next revolution in personal IT, then the hardware needs to be as simple (from the user’s point of view) as a smartphone.

It also has to be intuitive. We took to smartphones so quickly because they looked like the mobile phones we had before, which were themselves modelled on fixed-line telephones. Similarly, desktop and laptop computers weren’t so different from the typewriters and television screens we were already familiar with.

But perhaps, in preparing us for the next tech revolution, the tech companies are using stealth tactics. After all, one by one, they’re persuading us to put their sensors into our homes (e.g. smart speakers) and onto our bodies (e.g. fitness monitors) – and linking these up via increasingly intelligent networks.

If the process continues there must come a stage at which the emerging interface between machine and human encompasses the latter – at which point the most important thing inside the machines we use will be us.

Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.

peterfranklin_