CASE STUDY
You know what you love, you love what you know
A research story about how brand familiarity can shape the way your brain makes choices
______________________________
Part 1
You know what you love
______________________________
Visual designers aspire to simplicity and memorability. Car companies seem to have hit those marks spectacularly -- when was the last time you looked at one of the logos below and couldn't remember which brand it represented? Yet, closer inspection reveals design conventions across logos that actually work against memorability:
Based on similar shapes
Usually monochromatic
Similar balance of positive + negative space
Occupy a similar amount of your visual field
So why do we recognize these brands almost instantaneously?
Of course, you do not have to travel far to see that we do not recognize every car logo that quickly. Many more brands exist that are unfamiliar to us, and more than a few break at least one of the dominant design conventions. See Tata and Perodua below, from India and Malaysia respectively:
You would need to see these out in the real world for quite a while before reaching that same sense of instant recognition. It turns out that a lifetime of exposure to company logos has tuned your brain to these visual cues. On its own, this is not surprising -- we experience the same thing every day with the faces of our loved ones. Let's rephrase our original question, then, and explore the link between this common psychological phenomenon and our experiences with visual products.
Can familiarity with visual brands influence the brain's future decisions?
Updating our mental model for making choices:
The classic drift diffusion model proposes that we combine our current senses and our past experience to accumulate evidence toward one of our available options.
When we exceed the decision boundary, our choice is made.
But that assumes our options are perceived by the brain at equal speed. What if our choices were equally strong, but one of them was processed slower?
Before we've even realized we had an alternative choice, we've already committed to Option 1!
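If you like code, here is a toy simulation of that idea -- my own sketch, not the model used in the study, and with made-up parameter values. For simplicity it treats the two options as racing accumulators: both gather evidence at the same rate toward the same boundary, but Option 2's evidence stream only starts after a processing delay.

import numpy as np

rng = np.random.default_rng(0)

def race_trial(drift=0.5, noise=1.0, boundary=10.0, delay_steps=0, n_max=10_000):
    """Race two equally strong accumulators toward a shared decision boundary.

    Option 2 only starts accumulating evidence after `delay_steps`,
    standing in for a stimulus that the brain processes more slowly.
    Returns (winning option, decision time in steps).
    """
    e1 = e2 = 0.0
    for t in range(n_max):
        e1 += drift + noise * rng.normal()
        if t >= delay_steps:                 # Option 2 joins the race late
            e2 += drift + noise * rng.normal()
        if e1 >= boundary or e2 >= boundary:
            return (1 if e1 >= e2 else 2), t
    return 0, n_max                          # no decision reached

# With no delay the two options win about equally often; with a head start
# for Option 1, we commit to it before Option 2 even gets going.
for delay in (0, 15):
    winners = [race_trial(delay_steps=delay)[0] for _ in range(2_000)]
    print(f"delay={delay:>2} steps -> Option 1 wins {winners.count(1) / len(winners):.0%}")

Even with identical evidence strength, the delayed option rarely gets a say -- which is exactly the intuition behind the hypothesis below.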
Hypothesis:
Familiarity tunes the brain's representations of brand logos, making them faster and more likely to affect future decision-making.
What does faster mean?
When even the simple action of identifying this flying orange thing as a basketball can be broken down into hundreds of sub-steps within the human brain, the term "faster" can have a very specific meaning.
Let's gain some clues by breaking down how we "see basketball"
We used our understanding of visual neuroscience to help fill in the gap from when photons from the basketball hit our eyes to when our brain actually goes "whoa -- basketball!"
A division of labor thought to occur within the human visual system is that color and form are processed by different brain regions. In this toy example, let's assume that each feature processing sub-step has a similar duration:
Imagine trying to identify that same ball in motion. The color will not change much, but the lines that help define form will be degraded by motion blur. Perceptual processing of form takes longer as a result.
Insight:
Lifetime familiarity with basketballs must speed up the processing of even basic visual building blocks like color and form.
How do we make an already fast process even faster?
Gestalt processing:
A.K.A. parallel processing for us definitely-human, non-robot folks!
We rely on classic psychology once more to give us some clues! Gestalt processing refers to our ability to holistically perceive all of the features of an object simultaneously. In computational terms, we call this ability parallel processing -- but does it happen in people? Using a mathematical psychology approach called Systems Factorial Technology, we can start to understand what actually happens to the little sub-processes and how they affect our recognition time for this basketball in motion. The sub-process diagrams you have seen so far assumed serial processing. Let's look at how the diagram changes if the basketball's features were processed in parallel instead:
This is my very educated guess: the first basketball you ever saw was processed in serial, but the 1000th or 100000th time you saw a basketball, your brain was able to do it in parallel. But how do we actually find evidence for this shift?
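Before hunting for evidence, it helps to pin down what the two architectures predict about timing. In this sketch (my own shorthand, with hypothetical durations), a serial system's recognition time is the sum of its sub-process durations, while an exhaustive parallel system only takes as long as its slowest sub-process:

# Toy timing rules for the two candidate architectures (illustration only).
def serial_time(durations_ms):
    # Sub-processes run one after another: total time is their sum.
    return sum(durations_ms)

def parallel_time(durations_ms):
    # Sub-processes start together and all must finish: the slowest one
    # sets the total time.
    return max(durations_ms)

color_ms, form_ms = 1, 3                      # hypothetical sub-process durations
print(serial_time([color_ms, form_ms]))       # 4 -> serial recognition time in ms
print(parallel_time([color_ms, form_ms]))     # 3 -> parallel recognition time in ms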
Think of the brain as a decision factory:
And each sub-process as a highly specialized worker
A steady stream of spherical orange objects goes into this factory, and a decision of recognition -- "aha, that's a basketball!" -- is the output. If everything stays the same, then the time between input and output -- the brain's reaction time -- will be completely consistent. Here is the sub-process diagram again, but this time we'll really dive into how long things take in total.
(A) In the ideal case, both the worker processing color ("Carl") and the worker processing form ("Fred") are at peak performance. Good night's sleep, strong coffee, all that. Carl completes his task in 1 arbitrary time unit (the orange circle = 1 millisecond for simplicity's sake), while Fred takes 3ms. Processing form is tough stuff! The Serial factory churns out decisions 1ms slower than the Parallel factory. Interesting -- let's keep going.
(B) What happens to each decision factory if the basketballs start showing up with their color all faded and hard to see? If Carl starts taking twice as long to complete his task, the Serial factory takes 1ms longer than usual to output a decision. Over at the Parallel factory, though, nothing seems to be amiss; Fred's not waiting on Carl to start work, so decisions are still coming out every 3ms. Awesome.
(C) Conversely, the basketballs are being delivered with the perfect shade of orange but now they're half-deflated! Carl's happy again but Fred is taking twice as long, slowing the decisions coming out of both factories by 3ms.
(D) In the worst case scenario, the basketballs are showing up off-color AND misshapen. Can we even really call them basketballs? Bravely, Carl and Fred set to work, though they are now both taking twice as long. The Serial factory is slowed by 4ms, but the Parallel factory is slowed by 3ms.
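The four scenarios above boil down to a few lines of arithmetic. This sketch applies the sum rule for the Serial factory and the slowest-worker rule for the Parallel factory, using the same hypothetical 1 ms and 3 ms durations and doubling whichever feature is degraded:

# Reproduce scenarios A-D: degrade neither feature, color, form, or both.
scenarios = {
    "A (intact)":        {"color": 1, "form": 3},   # durations in ms
    "B (faded color)":   {"color": 2, "form": 3},
    "C (deflated form)": {"color": 1, "form": 6},
    "D (both degraded)": {"color": 2, "form": 6},
}

for name, d in scenarios.items():
    serial = d["color"] + d["form"]          # Carl works, then Fred
    parallel = max(d["color"], d["form"])    # Carl and Fred work simultaneously
    print(f"{name:<18}  serial = {serial} ms   parallel = {parallel} ms")

# Serial totals:   4, 5, 7, 8 ms  (slowed by 1, 3, and 4 ms relative to A)
# Parallel totals: 3, 3, 6, 6 ms  (slowed by 0, 3, and 3 ms relative to A)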
Put each of these 4 scenarios on top of one another, and it is easy to tell that Serial architectures respond to sub-process impairments differently from Parallel architectures.
If we show people hundreds of pictures of basketballs and introduce selective impairments of color and form within different subsets of these pictures, we can compare their reaction times to the patterns on the right. Do they more closely match Serial or Parallel processing?
A nice way to visualize these patterns is to subtract the reaction times from each scenario from one another:
(B - A) - (D - C)
For you data science folks, we are taking the survivor curves (one minus the cumulative distribution) of the participants' reaction times from each scenario and manually calculating the interaction effect (basically a manual ANOVA).
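Spelled out in code, the calculation looks something like this -- a minimal sketch in which the simulated reaction times and parameter values are placeholders of my own, standing in for real data. It estimates the survivor function of the reaction times from each scenario, then takes the double difference at every time point:

import numpy as np

def survivor(rts, t_grid):
    """S(t): proportion of trials whose reaction time is still 'surviving' at time t."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def interaction_contrast(rt_A, rt_B, rt_C, rt_D, t_grid):
    """(B - A) - (D - C), computed on survivor curves at every time point."""
    S = {name: survivor(rts, t_grid)
         for name, rts in dict(A=rt_A, B=rt_B, C=rt_C, D=rt_D).items()}
    return (S["B"] - S["A"]) - (S["D"] - S["C"])

# Placeholder reaction times (ms) just to exercise the functions.
rng = np.random.default_rng(1)
fake_rts = {name: rng.gamma(shape, scale=50, size=500)
            for name, shape in zip("ABCD", (8, 9, 11, 12))}
t_grid = np.linspace(0, 1500, 300)
contrast = interaction_contrast(fake_rts["A"], fake_rts["B"],
                                fake_rts["C"], fake_rts["D"], t_grid)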
We end up with extremely characteristic curves that reveal what kind of architecture we are dealing with. If it is "flat", we have a Serial architecture. If it contains "one negative lobe" or negative deflection, it is Parallel.
This is a quantitative, statistically rigorous method for determining whether any process is performed serially or in parallel.
Stay tuned!
In Part 2, we will apply this method to car company logos.