The Effect of Iconography on a User’s Experience – Part 1: The Psychology of Symbolism in UI


My family and I visited the Crossroads of Civilization exhibit at the Milwaukee Museum the other day.  If you haven’t had a chance to visit the exhibit, you should, because it really speaks volumes about our centuries-long attempt to comprehend complex ideas.

If we think about how we interpret the material world around us, it quickly becomes apparent that almost everything represents some form of information. Once we grasp this notion, it isn’t a far stretch to consider that, in order to better understand the world we live in, we’ve come up with compact ways to represent complex ideas.



Obviously a lot has changed since the pyramid builders of old, but by all accounts the same visual mechanics are at work. Whether it is letters constructed into words on a page that phonetically sound out the idea of a BIRD, or a pictographic representation of a physical BIRD, the same synaptic and ocular phenomena are taking place.

One doesn’t have to put on a fedora and leather jacket to see hieroglyphics; in fact, we see their ghosts at work every time we type emoji in our messages.  Why write “happy” when you can send someone an icon?
Happy, feliz, счастлив, 开心,快, Gelukkig, srećan, Heureux


This decoded pictogram transcends race and language and represents a multitude of complex ideas, all while our minds simply register a smiling face.  What’s even more amazing is that our minds register an emotional response to the iconography.

“Iconography, good iconography, strives to convey invisible reality in a visible form.” 
― Peter Pearson

In Part 1 of this blog post we are going to discuss how the use of icons can positively affect a user’s experience through the concepts of metaphorical affordance and conditioned response.

Like we’ve talked about in previous blog posts, building a log of macro user personas that represent groups of individuals helps us understand not only who we are talking about and which generalized behaviors they share, but also what makes particular groups unique and special.

Questions like:

  • If a user gets confused who will they contact?
  • What is something that would make a person angry or upset if the interface didn’t do it well?
  • How long will a typical user be interacting with the application?

The human behavior documented in a workshop focused on current state is then used to help dictate how an interface will ultimately fulfill needs in a future state.  This is in part because, from a UI practitioner’s perspective, an application has less to do with colors, themes, and branding and more to do with fulfilling needs through usability.

One of the principles that drives good usability is called affordance.
Usability based on affordance is fairly simple to see working incorrectly in the wild when we find nicely designed doors in an office building.

Here is an example of a “Norman Door”:


These types of doors are found everywhere, and as you can see this one has pull handles, indicating a pull-to-open interaction, while the actual interaction is push-to-open.  The basic affordance of the door is broken, requiring the word PUSH to inform the user what they must do.

Much the same way the broken affordance of a Norman Door works counterintuitively, the icons, symbols, and pictograms of a user interface carry a similar psychology when used incorrectly.

This is because while icons and pictograms hold aesthetic value, they also carry an inherent usability because of the affordance they represent.

You can imagine that, for each behavioral question captured in a user persona, there is the possibility of representing symbolic characteristics that bridge communication barriers.


It’s fairly safe to say what complex ideas or information this icon represents, as we’ve all been classically conditioned to respond to it.  So we can say that the metaphorical affordance of this icon is high: upon clicking it, the user can assume their mail-related needs will be fulfilled.

But what about this icon?


Aside from representing the concept of climbing into a rocket ship and blasting off to the moon for outer-space adventures… you could imagine, much like the Norman Door example, that in order for this icon to express meaning in a user interface it would need an explanation or title.
The big takeaway here is that the aesthetic value of the rocket icon is really nice, but it becomes just a pretty picture because it lacks metaphorical affordance in the current context of use.

But what if we only know generalized information about a group of users…?  What if our Q/A sessions haven’t produced enough behavioral information for selecting usable icons in our UI?

To understand how much liberty is taken, and the risks run, when designing a UI without user persona data, let’s take a look at a simple interface.

Spend about 5 seconds looking over the UI below…


If you are someone who deciphered meaning from the icons inside the interface without an explanation, you’ve just experienced a small positive emotional response.  But if you could not, imagine the assumptions I just made in thinking any reader would be able to understand the interface above…

The idea that the perceived usability of any interface is tied to how quickly we ascertain meaning puts real weight on deciphering pictographic representation.

OK, cool… So we pick icons everyone has some preconceived notion about, and we’re good to go… Right?

In theory, yes… The big takeaway here is that as soon as the complexity of an interface increases without behavioral understanding, the metaphorical affordance degrades and the UI starts to function like a Norman Door.
Let’s take a look at a more complex interface.
There are 24 visible icons surfaced in the UI layer.


There are some pictographic or icon problems to look for, flagged in yellow:

  • Is the Home tab navigation option the same as the Home Button with icon?
  • If the icon next to the Lists title section is indicating that there is a list below, what does the icon do to the right of the Lists?
  • If the right List icon in the Lists section is for the functionality of sorting, why does the Table section not have the same List icon for sorting?
  • Is that a table icon or a calendar icon?
  • What is the difference between Reports and Forms, as they share the same icon?
  • What is the difference between the book Forms icon and the Forms icon in the top navigation area?

After looking over the dashboard above, we come full circle and have the grounds to effectively understand the subtle psychological effects of icon degradation.
While it is true that it is possible to interact with the dashboard above, it is also true that the UI contains subtle pictographic inconsistencies that cause friction in its usability.

Think about the subtle psychological effects of a Norman Door: it still functions to open and close, but on some level it has a negative impact on the overall user experience.

If the ultimate goal of a UI practitioner is the least amount of friction possible, then rendering an experience with even minor flaws hinders user adoption on a measurable level.

“Simple is hard. Easy is harder. Invisible is hardest.” – Jean-Louis Gassée

In Part 2 of this blog series we will discuss custom icons and a few extremely handy tools that aid in the creation process.  We will also cover creating WebFonts and symbol contact sheets, which can be used to identify which character codes to use in your CSS.  We will then show the created icons in use in a responsively designed SharePoint wiki project.


ReBranding The Harris Tweed Digital Experience – Bringing an Old Brand Into The New World

I love everything that has to do with Harris Tweed, so it was with great pleasure and old-school inspiration that I tackled this project. The main design goal was to make something as traditional as the Harris Tweed brand feel right in a modern user experience. I went with a pretty basic UI layout using big blocks: single strip, 4-up, and 2-column blocks. The look is very minimal, so the focus is less on the page design and more on letting the grittiness and textures of the tweed seep into view. The bright colors of the buttons are meant to be overpowering, attracting the eye throughout the UX almost like climbing spikes helping you navigate the long scrolling page. The type has the characteristics of a digital magazine layout, with large titles set over classic Harris Tweed script, mixed with Helvetica Condensed in about three variations of smaller point sizes.

The page has persistent navigation that scrolls with the user, providing a quick means to move up and down the long page.




A Momentary Break From Digital – Our Perception & The Way We See The World – Old Post

I was lucky, a month or so ago, to escape into the north woods of Wisconsin.  My entire immediate and extended family convened at the cabin for our once-a-year summer get-together of adventure.

The mornings were full of communal breakfasts, with coffee pots continuously brewing and kids blurting out cerebral fantasy about finding giants and ogres, strange bugs and poisonous mushrooms.

To see children in this state of euphoria, and how tired they are at the end of the day, is a wondrous sight to behold, and it makes you realize how much our perceptions of the world play a key role in our happiness.  Kids have no idea why at 9pm they cannot lift their arms anymore; for them, time doesn’t exist, and therefore the physical toll and bodily change come as an unpredictable onset of sleep.

Our bodies go through natural adaptation like primitive man’s: adaptations like the bug-bite acclimation our fragile bodies undergo once the multitude of insects have drawn enough of our blood and deemed it distasteful because of caffeine and foreign chemicals. Changes like your ears becoming more attuned to the sounds of your environment, and your mind picking up on coincidences that in a fast-paced city have become overlooked.  Mundane things like the patterns of leaves, the direction water flows, and the sound the wind makes as it blows through the trees.

There is a magic that happens when you’re able to sit around a campfire at night under the stars; it brings out a sort of deep ancestral understanding of the world and each other.  Whether it is the smoke from the fire, the circular position everyone assumes, or the darkness and its effect on the conscious mind, each person becomes closer.

The awe and rapture of the nighttime sky keeps everyone periodically looking up as though expecting to see something unexplainable to the human eye, and every so often some distant spirit of the heavens gives off its own firework show.

When you’re older and you’re taking all of this in ( I can only imagine what my mom is thinking ), you realize that Einstein’s notion of the relativity of time is not only true, but that so much of the relative aspect of time is based on our perception of our immediate environment.  When people are so happy to be around each other, nothing else exists in the world.

When it’s time to go, everyone packs up their cars, gives big hugs, and sets off towards civilization again; time sets back in motion in its normal syncopation, and cell phones come back out.  It’s then, at that moment of reflection, that some of life’s secrets are revealed.

The small things that express greater meaning: the idea that, locked inside our perception of the world, our surrounding environment is telling us to see with our eyes and our hearts.

I wrote this post months ago and it stayed in my drafts folder because it needed to be spell-checked and proofed ( which says something, because there are probably loads of grammatical mistakes still ).  But I woke up early this morning and had a moment seeing the small stuff… and I caught it on camera… and thought: this is the kind of small stuff that should be in this post.


UPDATE Voice Command Running Jacket Prototype – Phase 2 – adding the voice command module

I have an update to my Voice Command Running Jacket Prototype.

The Arduino UNO finally has the Phase 2 parts added.  The last time I posted about the project on WordPress, I had the two 8×8 matrix displays set up to receive scrolling directions based on text commands sent through the Serial Monitor.


Building upon that code, the functions that were being called on text input are now set up inside a switch-case block in the voice capture and recognition event.  The great part about starting the project off with text input through the Serial Monitor is that the Voice Command Module also operates through the serial port, making the original code for this project very scalable.
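That dispatch can be sketched roughly as follows. This is a simplified, host-side illustration of the switch-case idea rather than the actual jacket sketch; the command codes and names are placeholders of mine, since the real values depend on how the module was trained.

```cpp
#include <cstdint>
#include <string>

// Hypothetical command codes -- the real values depend on the order in
// which phrases were trained into the Voice Command Module.
const uint8_t CMD_LEFT  = 0x11;
const uint8_t CMD_RIGHT = 0x12;
const uint8_t CMD_STOP  = 0x13;

// Map a recognized command byte to the text the LED matrices should scroll.
// On the Arduino this would be called from the serial-event handler, and the
// resulting string handed to the Parola scrolling routine.
std::string commandToScrollText(uint8_t code) {
    switch (code) {
        case CMD_LEFT:  return "<<< LEFT";
        case CMD_RIGHT: return "RIGHT >>>";
        case CMD_STOP:  return "STOP";
        default:        return "?";  // unrecognized command byte
    }
}
```

Because the text-input version already routed everything through the serial port, swapping the Serial Monitor for the module mostly means swapping the source of that one byte.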

There are a couple of really good Classes/libraries that I am using in this project. The first is the Parola Library:

This is a pretty robust scrolling library. The ability to scroll text isn’t too hard to build from scratch, but the creators of this library have added easing, inverted text, and support for fonts, to name a few features.

* If you are going to use this library, there are a couple of pitfalls to keep in mind. The library was originally designed to work with a specific 8×8 LED matrix display; it has since been updated to work with any generic 8×8, but you have to update the .h file.

The next library on the list is SoftwareSerial, which is actually an internal Arduino class that you import. Using this class lets you assign RX and TX to other pins on the board.
* This is another pitfall area that I want to call out. All Arduino boards have dedicated TX and RX pins. The catch is that the dedicated pins don’t allow interrupts back and forth. This was for the most part undocumented until I found an Arduino forum page online that explains in detail which pins on which boards support interrupts.


The Voice Command Module is really easy to use.  It has 4 connection points ( GND, VCC, TX, RX ) which fit as they should on the Arduino board, and you can get up and running within 45 minutes… if you have the information in the next two paragraphs before you start.

* The catch here is that in order to interact with the module you have to use a specific serial monitor app that ships with the device, which isn’t the best piece of software… and the manual lacks a specific explanation of the exact format of the hex codes the Voice Command Module needs in order to be set up.  The manual has a detailed table showing all the commands ( 0x36 ), but it doesn’t explain that the required format isn’t what’s in the table; it looks like ( AAx36 ) instead…
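In other words, the “0x” prefix from the manual’s table gets replaced with “AAx”. A tiny helper like the one below captures my reading of that format; the exact protocol is whatever the module’s manual and bundled tool actually accept, so treat this as an assumption rather than a spec.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Format a command byte the way the module's serial tool expects it:
// the table's "0x36" must be entered as "AAx36" (AA prefix instead of 0).
// This mirrors the pitfall described above; verify against your manual.
std::string toModuleFormat(uint8_t cmd) {
    char buf[6];
    std::snprintf(buf, sizeof(buf), "AAx%02X", static_cast<unsigned>(cmd));
    return std::string(buf);
}
```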

* Another undocumented pitfall was NOT flipping the TX and RX connection points on the USB TTL adapter.  In almost every Arduino project I have worked on you flip RX and TX, but for recording and saving voice commands from your PC to the Voice Command Module, you don’t…  It took me 2 hours of trial and error to find this bug.  It was really hard to find because the software is so bad that the issue looks as though it is your RX display settings.

All in all this project has been really fun, and it shows the power of prototyping with Arduino and its plethora of compatible modules.

The previous video and older Blog post can be found here:



Arduino UNO:

8×8 Matrix Displays:

Voice Command Module:



If you like this, comment and let me know what you think.



UPDATE: HTML5 Game – First Level – Ready For a Little Player Testing – Mothra and The Windy Day

The first level is about 80% done and is ready for the first round of beta testing.

The game is currently being tested on:

  • iPads 2, 3, 4
  • Desktop: FF, Safari, Chrome and IE 9, 10, 11

The fun and, hopefully, interesting aspect of Mothra and The Windy Day is that the main character is a moth larva… Let’s face it ( larvae are essentially non-moving ), which translated into a game setting means ( so slow that it could be bothersome ).  This simple aspect of nature fascinates me: how does a moth larva actually manage to survive long enough to do anything, let alone spin a cocoon and turn into a moth?  Think about it.  Given the stacked odds against something intensely slow, with no arms, no sharp teeth, etc., in order to survive a moth larva must have something close to Yoda force skills… I had to spin Star Wars and Yoda into this blog post somehow…

click to play



Because Mothra is really slow, there is a sort of “fight or flight” concept at work in which the player has to take damage in order to beat the clock… but how much and when is determined by the player.  If you spend too much time dodging the Bird, you’ll run out of time.  Likewise, if you try to beat the clock too early by not dodging the Bird and taking damage, you’ll die too soon.  By touching the screen and breaking the floating leaves you feed Mothra, making him gain weight, which eventually gives him the ability to Super Jump, which can defeat the Bird’s attack once and for all.
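That risk/reward loop can be sketched abstractly. The game itself is JavaScript running inside Adobe Edge, so the C++ below is only an illustration of the logic; every number and name here is a placeholder of mine, not a value from the actual game.

```cpp
// A rough sketch of the fight-or-flight loop described above.
// All thresholds and costs are made-up placeholder values.
struct Mothra {
    int health = 100;
    int weight = 0;
};

enum class Outcome { Playing, OutOfTime, Dead, SuperJump };

// Each tick the player either dodges (spending extra time) or presses on
// (spending health). Eating a leaf adds weight toward the Super Jump.
Outcome tick(Mothra& m, int& timeLeft, bool dodging, bool ateLeaf) {
    if (dodging) {
        timeLeft -= 2;        // dodging the Bird burns the clock faster
    } else {
        m.health -= 10;       // taking the hit saves time but costs health
        timeLeft -= 1;
    }
    if (ateLeaf) m.weight += 1;

    if (m.health <= 0) return Outcome::Dead;       // pushed too hard too early
    if (timeLeft <= 0) return Outcome::OutOfTime;  // dodged for too long
    if (m.weight >= 5) return Outcome::SuperJump;  // heavy enough to win
    return Outcome::Playing;
}
```

The tension comes from the two resources draining at different rates: pure dodging or pure tanking both lose, so the player has to mix them while grabbing leaves.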








During the past year and a half, I have come to the realization that, with tools like Edge Animate, Google Web Designer, and Flash CC exporting to Canvas and CreateJS, JavaScript and HTML5 are here to stay.  With that said, I feel that, as part of the old Flash vanguard, it’s essential that the things we used to build in Flash on a regular basis now have a place in this new online world…

You have to start somewhere.  You can’t be afraid of building something that may fail. Mothra and The Windy Day is based on Adobe Edge’s DOM-based animation framework.  So the game is not using HTML5 Canvas; instead it is actually pushing div elements around on the screen…

Whether DOM or Canvas is better in this situation is, I feel, a question of the scale of the game.  Mothra is essentially an experiment to see how far I could push Adobe Edge before the framework buckles under the stress of JavaScript DOM computations.

If you’re reading this article… play the game, crack open the console window, and watch the game’s console.log output unfold.  There are intervals, counters, an attempt at randomization, keyboard key capturing, dynamically set text-field properties, animation, cookie setting, and HTML5 video.


Please give me some pointers, comments, suggestions and responses.





Voice Command Running Jacket Prototype – Arduino Mixed With Voice Commands and Scrolling LED Matrix

Here is a fun project that I have been working on over the past month.

I have blogged about wearable technology a few times now, and I am happy to say that this Prototype is coming along nicely and would work very comfortably with a NIKE+ FUEL BAND.

The basic idea is that a runner could wear this “smart jacket” and it could be trained to respond to simple voice commands.  The runner could say “RIGHT” or “LEFT” or “EAT MY DUST”, and the jacket’s on board LED Matrix displays would act accordingly.

A possible design flaw you could call out is building this into a jacket vs. building an apparatus that could be attached to any jacket… but for now, let’s just say it is a jacket…

The overall parts list for building this prototype is fairly inexpensive; the whole thing could probably be achieved for under $100.  If you wanted a super cool Nike running jacket, it could add to the cost 🙂

The Arduino compliant voice command shield is here.






Let me know what you think: comment and respond.






UX UI Presentation app Idea – Show Off Your User Interaction From Different View Points

I had a great app idea today while looking at Pinterest. First off, a quick shout-out to Pinterest… Pinterest is such an amazing way to get a graphical and visual experience of just about any flavor of life. I am a magazine junkie, so to have something digital that finally takes the place of the pure visual inspiration of a design magazine is wonderful.

OK, back to the topic of apps… When you look at design magazines and visual design from places like Pinterest, you not only get a sense of the design, but more often than not the design is also put into context in a layout like this:

or like this:

While this is great, it doesn’t help with actually interacting with the UI design itself. Wouldn’t it be great to have an app where you could upload your .ipa, .app, .xap, etc.?

The app would take the uploaded user interface, place it in perspective on a device in the wild, and then display it in motion while you interacted with a close-up layer of the UI.

The problems this app would solve could be:

– A sense of up-close-and-personal with the navigation.
– Seeing the UI from far away. I think this is something that classic UI and UX people leave out. What does the UI look like from the other side of the room? It sounds silly, but compare it to logo design: a good logo looks good shrunk down to postage-stamp size or blown up to the size of a building.
– It could help the application fit into many-to-many scenarios during a single viewing (hospital, manufacturing, business, etc.).
– A great presentation tool.




Prequel to HTML5 Mothra Game – Idea Character Development

I’ve been working on a series of simple nano touch games. Games are fun, and there is no better way to rough out the edges of your development skills than to build a game…

There are so many levels to game development, from trying to think of creative ways to make the game interesting to simply solving the “if-this-then-that” situations that arise when dealing with something complex.

Here is the link to the first-round rough draft of Mothra. Keep in mind that this will ultimately be the second game.

The sketches below are concepts for the game meant to be the prequel to the moth flying. The potential animation at the end, of a FAT caterpillar/larva spinning a cocoon and then emerging from it as a beautiful moth, could be so awesome; it would have such an Eric Carle sort of feeling that you can’t help but think of reading “The Very Hungry Caterpillar.”