Predictions for the Year Ahead

Thought Leadership

Huge strategists, technologists, UX designers, and creatives weigh in on what’s really relevant in 2019.

Written by: XXX XXX

Raise your hand if you’ve recently heard or said the following: “We can automate that.” “We’ll just get an algorithm to do that.” “Machine learning will solve it.”

As a designer and data expert, I love the acceptance and enthusiasm of algorithms because all of those statements are probably true. The joy of my job is to remind (or educate) people that algorithms are designed. They don’t just “autoMAGICALLY” materialize, meaning we still need some humans (that’s us) to design them.

Designing data-driven products without humans at the center leads to a pernicious cycle of exploiting tech and losing the trust of consumers. This is bad for people and it’s bad for business. Data privacy, transparency of proprietary black box systems, and lack of regulation are all issues exacerbated by the problematic data product and service design we’ve seen in recent headlines. And when businesses or platforms misuse customer data, it creates an environment of mistrust, which is harmful to all our progress. This lack of trust can make clients unwilling to share data externally, choosing instead to keep work in-house and contributing to an echo chamber of biased datasets.

It’s not magic — it’s hard work. But when done right, the results can be magical.

Here’s what you need to know about human behavior in order to design ethical, intelligent systems:

  1. Humans are terrible at predicting the future, but we are excellent at building it.
  2. Without fail, we always go too far.
  3. But once we do, we find clarity, try to correct, and move forward.

The Paradox of Trust

At the crux of the data issue is the “paradox of trust,” which essentially posits: you can’t trust something until you understand it or know it, but you can’t get to know something without first trusting it.

We are currently moving away from the physical machines people know and trust (computers or devices we can see and turn on/off), to intelligent, digital systems that are harder to “see” and understand—and therefore trust. As the general public has begun dipping their toes into the data-driven products and services offered today, there have been a slew of bad actors that could set us back years with their breach of customers’ trust. In order to root out the bad actors and put the power back in customers’ hands, we must prioritize designing with humans in mind.

The golden utterance of voice-driven AI has presented a beautiful and juicy challenge to designers, and we are all navigating these uncharted waters together. Should we design voice syntax to be so perfect that I’m unaware it’s not a human? If an algorithm has so perfectly profiled my preferences that experiences have effectively been designed only for me, does it remove the possibility of choice? It’s the wild west, for now, as we all work to build this new language of interface and expectation.

Here are a few tools we have as we design intelligent, trustworthy “machines”:

  • Reveal what’s human vs. what’s machine: If humans don’t know they’re talking to a machine, how can they trust it?
  • Design all intelligence to fail: Intelligent systems cannot be impenetrable. They need to be designed with fail-safes (like the on/off buttons of yesterday’s machines) or small intentional errors.
  • Design intelligent processes, not just personas: Move beyond “artificial intelligence” as a single destination and consider that AI can be augmenting intelligence or even ambient intelligence.
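
The first two principles above can be sketched in code. In the hypothetical snippet below (every name, including the confidence threshold, is invented for illustration), each answer is labeled as machine-generated, and a low-confidence answer triggers the fail-safe path of handing off to a human rather than guessing:

```python
# Illustrative sketch only: "reveal what's machine" + "design to fail."
CONFIDENCE_FLOOR = 0.75  # below this, the machine declines to answer

def respond(answer: str, confidence: float) -> str:
    """Return a disclosed machine answer, or escalate to a human."""
    if confidence < CONFIDENCE_FLOOR:
        # The intentional "failure" path: admit uncertainty instead of bluffing.
        return "[bot] I'm not sure about that - connecting you to a person."
    # Every answer is labeled as coming from a bot, never passed off as human.
    return f"[bot] {answer}"

print(respond("Your order ships Tuesday.", 0.92))
print(respond("Maybe?", 0.40))
```

The disclosure prefix and the escalation path are the modern analogues of yesterday’s on/off button: visible seams that let people see, and therefore trust, the machine.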


Beyond designing with humans in mind, we must also confront the larger issue with “artificial intelligence”: the algorithms are in fact human-designed, and thus flawed, which means we must design with these flaws in mind.

Bias Begets Bias

Data on its own is not a great storyteller. While data points may seem objective, the truth is they often lack context, are full of organizational bias, or are otherwise limited in scope. Biased data will only lead to biased outcomes. Ultimately, algorithms are no substitute for a conversation, no matter how much we want them to be. While there is a temptation to offload knowledge to mathematical logic, this is not useful long-term. Bias begets more bias, and only further contributes to the cycle of violating consumer trust.
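
To make “bias begets bias” concrete, consider this toy sketch (the dataset and group labels are invented): a “model” that merely memorizes historical approval rates will reproduce the skew of its training data exactly, laundering past practice into a seemingly objective score.

```python
# Toy illustration: a frequency-based "model" inherits its data's bias.
from collections import defaultdict

# Historical decisions, skewed against group "B" by past practice.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def fit(rows):
    """'Train' by memorizing the approval rate per group."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, outcome in rows:
        approved[group] += outcome
        total[group] += 1
    return {g: approved[g] / total[g] for g in total}

model = fit(history)
print(model)  # the historical skew survives "training" intact
```

Nothing in the arithmetic is wrong, which is exactly the problem: without a human interrogating where the data came from, the bias is simply formalized.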





Karin Giefer is a seasoned design strategist, specializing in the human, business and infrastructure systems of both our physical and digital world. Part designer, part data-driven business strategist, part implementation specialist, Giefer helps organizations determine what to make and do, why to do it and how to innovate contextually, both immediately and over the long term.

Illustrations by Doug Chayka


