What CES got me thinking about.

Kris Hoet
Jan 24, 2018

The Consumer Electronics Show earlier this month was all about drones, robots, smart devices, self-driving cars, gadgets, artificial intelligence, TVs and some of the craziest technology you'll ever see. But CES is also about discovering the big themes that will shape the next few years, and often it's the smaller things, and the connections between them, that shed light on what's to come. Having attended this year's show, I've had some time to think about what those themes are for me:

Smarter interconnectivity

There are a few things at play here. First, we are seeing the arrival of truly smart devices. Most of what we call smart devices right now aren't all that intelligent — they are usually just gear outfitted with sensors that, directly or via your phone, let you look at nice visualizations of your data in a (hopefully) secure online account. Not that there's anything wrong with that. I happily use my Garmin Fenix 5 watch to count steps, track my runs and check incoming messages; I use the smart tag on my keys (way more often than I should) to find out where I last put them. But there has to be more than that.

The interconnectivity of devices has changed a lot over the last few months, and CES 2018 proved it. Smart devices are really becoming smart, and the rise of AI has definitely had a hand in that. For instance, my Google Home can distinguish who's asking for today's agenda, so that the right person's information is returned. Tech assistants, likewise, are demonstrating that they can handle real-time human conversation and follow-up requests, rather than requiring users to repeat commands each time.

The biggest reason our gear is finally getting smarter has to do with better device interconnectivity. In the past, our "smart" and "connected" devices were connected to the Internet individually. My TV, for instance, is connected to the Internet, as are several other devices in my home, but very few of these gadgets are connected to each other.

Now, that's all changing, which clearly adds to the smartness of devices. Case in point: the "Samsung City" experience at CES. Rather than showing off its new product lineup, the tech giant bet big on SmartThings, the company's take on how to connect different devices so they all play nicely together, and showcased the benefits of such an environment. The use cases shown for many of these interconnected devices weren't all that ambitious yet, but the potential is clearly there.

Imagine your devices becoming clever enough not only to use sensors to surface your personal data, or to carry out household commands, but to recognize who you are and understand what you're doing or want to do. That's the power of truly smart, interconnected devices: tech gadgets communicating, interacting and collaborating with one another to deliver a seamless consumer experience. Your car turning up your home thermostat as you get close to home, your fridge letting your phone check whether you need anything… the possibilities are limitless.

And of course Samsung wasn't alone in demonstrating, and pioneering, this glimpse of the future. Google also bet big on its Assistant, making a clear case for how it can help bring about a richer, more connected experience. So did Sony, LG and many others.

I, for one, am very excited about the idea of smarter interconnectivity. But I'm also curious to see how open the different ecosystems will be, and whether they will play nicely with each other. It's easy to envision a perfect world when all the systems come from one manufacturer, but reality works differently.

Minimalistic UI for maximum UX

Apple introduced its smartphone, the iPhone, with just one button, a huge change in UI versus all of its predecessors. And it didn't stop there. First came the fingerprint sensor, and then, just recently, the button disappeared altogether in favor of facial recognition to unlock the phone. At CES 2018 we did see some under-glass fingerprint sensors, so fingerprint unlocking doesn't seem to be gone for good; either way, efforts are underway both to simplify the UI of our gear and, at the same time, deliver better experiences.

There’s currently a lot being done to minimize the UI of our technology, while maximizing the UX. Smart speakers and the importance of voice for digital assistants are big drivers of that change — by using voice, we can completely eliminate screen interactions. Image recognition is another key driver of that evolution, and so is the growing intelligence of all the technology behind it. When you can operate or activate things just by being near them — or when they recognize your voice/face — using data to personalize the experience means that less input is required, and the result is a user experience that feels better as a whole.

Amazon has been testing the concept of shopping without a classic checkout ("Amazon Go") and just launched the first such store to the public. Start-ups like Aipoly presented similar technologies at CES 2018 as well. From ordering an Uber with your voice assistant, to unlocking your home with a face scan, to paying without cash or a credit card, to calling without a phone: the UX of much of what we do is about to change significantly, thanks to new technologies that massively simplify things.

Experience everything the way you want to

Call it "ultrapersonalization." When there are no longer limits on how much content and data we can capture, nor restrictions on how and where to store it all, we no longer have to decide in advance what to capture. And if we capture everything, we can offer consumers a nearly unlimited set of possibilities for personalization.

Sound abstract? Let's take video, for instance. The GoPro Fusion (launched a few months ago) is a great example of what's coming. This camera captures 360° video, from which the user can then easily produce a regular video, from any point of view, out of that full 360° capture. But this still requires a lot of editing work.

At CES 2018, however, Intel announced its partnership with Ferrari and showed how it will let the latter film races with drones, using real-time image recognition to identify each car automatically. That, in turn, will allow consumers to define the output the way they want: a video feed of any car in the race, not just the leader.

Ultrapersonalization can be even more extreme, as Canon illustrated with the launch of its Free Viewpoint Video technology. This uses a series of cameras to capture footage at a stadium or during a big gaming event; and afterwards, users can view that video — from all viewpoints and angles — using a virtual camera. Literally any viewpoint you want, no matter where the actual cameras were. How’s that for ultrapersonalization?

If the innovations unveiled at this year’s CES are any indication, technology of the future will allow consumers to create immersive personal experiences that are still hard to imagine, as we break through the boundaries of what we think is possible today.