An important part of design is understanding the contexts of use, as products are rarely used in perfect or isolated settings. I had an experience that got me thinking about the effects of time on one's interactions.
In particular, I opened a door in my office building that only swings one way. I pushed it open, and it occurred to me that, knowing I had just pushed the door to get in, I could infer that to get out, I should pull it.
Doors are a common target for Don Norman; designers seem to get them wrong so often. I noticed that the handle was identical on both sides of the door, so it couldn't communicate different messages on each side. That is to say, if both sides are identical, barring other factors, how do you (uniquely) communicate "Push" vs "Pull"?
Duration between uses
Other factors clearly play an important role, be it the position of the hinges, stoppers, fire safety standards, window placement, and so on. These fall under physical traits or cultural conventions. I would like to highlight the temporal factors: the effects of time on your interactions. It can be as simple as being unable to perform a task because you haven't done it in a year.
These processes are quite common: filling in forms that are only required once a year (taxes, perhaps), or every 4-5 years (voting). Appropriate design considerations need to be taken with this (in)frequency in mind. Fire extinguishers and defibrillators are used few and far between, but when it comes to it, you have to know how to use them or die trying.
Sometimes you repair something that breaks infrequently enough that you've forgotten your original solution by the next time. People who make a living performing repairs benefit from this effect. You may take a few hours to figure out how to repair a burst pipe, and more to get the required tools and materials, but a plumber has everything on hand, and is skilled enough to do the job once, and do it right, in less time.
User, experienced
Naturally, you can also design interactions around the assumption that the user is going to be experienced and practised. Some things come to mind, like cars and keyboards. There's a reason why keyboard enthusiasts may opt for difficult configurations that improve typing speed.
Some products are terrible, apart from being ugly, because they aren't designed with the user's growth in mind. Those desk mats with Excel shortcuts just don't look right.
![]()

*Not only is it ugly, it's a product that gets more useless the more you use it!*
While it is typically good design to offload the user's memory load onto the environment (as described in Molich and Nielsen's (1990) heuristics), this comes at an aesthetic cost. Correct me if I'm wrong and this sort of product actually helps, but I'll still dislike it on aesthetic grounds.
Just-done actions
Back to the anecdote about the door. There's a case for designing interactions around the method of elimination. The thought process of the user could go something like this: "I just pressed this button to do X task, so I don't have to press that again. If I want to do Y task, it should be one of these other buttons."
The consequence is that you can compromise on the design if there are constraints in budget or space. For tasks that require interacting in specific sequences, exhausting certain options helps reduce the cognitive load as well.
I should offer an example. My portable monitor (Arzopa brand) uses a single switch to adjust both the brightness and the volume. The switch goes up or down, which translates to adjusting the brightness or volume up or down. Which one you adjust depends on the first input: move the switch up to start adjusting the brightness, and down for volume. Or vice versa. I don't remember, but that's my point.
I seem to always get it wrong. But having made a mis-input, my next move is usually correct. I'll want to adjust the volume, move the switch up, and it starts adjusting the brightness instead. I lower the brightness back to its original value, then wait a while for the window to time out. Then I move the switch down, which starts the volume adjustment.
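The interaction above can be sketched as a small state machine. This is my guess at how the monitor behaves, not its actual firmware: the first press opens an adjustment window and selects the mode (I'm assuming up selects brightness and down selects volume, which may well be backwards), later presses adjust that value, and a timeout closes the window.

```python
# A sketch of a modal single-switch control, assuming: first press selects
# the mode (hypothetical mapping: up -> brightness, down -> volume),
# subsequent presses adjust the selected value, and a timeout resets the mode.

class ModalSwitch:
    def __init__(self):
        self.mode = None      # None means no adjustment window is open
        self.brightness = 50
        self.volume = 50

    def press(self, direction):
        """direction: +1 for up, -1 for down."""
        if self.mode is None:
            # First press only selects which value the switch will adjust.
            self.mode = "brightness" if direction > 0 else "volume"
        elif self.mode == "brightness":
            self.brightness += direction
        else:
            self.volume += direction

    def timeout(self):
        # The on-screen window closes; the next press selects a mode again.
        self.mode = None
```

This also models the mis-input story: press up hoping for volume, land in brightness mode, undo the change, wait for `timeout()`, then press down to get volume. The design trades one extra (recoverable) step for a second physical control.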
This interaction is very similar to plugging in connectors like the USB-A. If you get it wrong the first time, you'll get it right the next 1-2 tries (1 try if it's an execution error, 2 if it's an evaluation error).
In the case of my portable monitor, the switch has a dual purpose probably due to hardware constraints. The tradeoff is, in my opinion, worth it, as volume and brightness adjustment are rarely critical operations that require the user to get it right the first time. Anyway, there are labels on the side which I never look at.
"There are no simple answers, only tradeoffs" - Norman, 1983
Related: States in design
I thought about this while operating my air conditioner's remote control. The power button is responsible for sending on/off signals to the aircon. Based on my own conceptual model of the system, I don't think it sends a "toggle" signal, but distinct "on" and "off" signals. That means if you send an "on" signal to an aircon that's already turned on, it won't toggle off. It might still change its state, because it seems the signal also carries other information, like the target temperature.
From a hardware designer's perspective, it is intuitive to have a single power button that acts as a toggle, because that's how devices traditionally worked. Both your TV and its remote have one button that toggles power on and off. If you point the remote at your TV and press the power button, it toggles the state, unlike the aircon remote. It can thus be jarring when you press a button on the aircon remote and the aircon doesn't react, or worse, responds in the opposite manner of what you want.
The aircon remote seems to work differently, because (I assume) it changes the state stored on the remote itself, then sends a whole payload of information to the aircon, rather than separately adjusting parts of the aircon. That is to say, the remote itself keeps track of its current state and overwrites the aircon with it when the signal is received. The TV remote doesn't seem to keep track of any state, and just sends discrete instructions.
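The two conceptual models above can be sketched side by side. Both classes are my assumptions about how these remotes behave, not real protocols: the TV remote is stateless and sends a "toggle" instruction, while the aircon remote keeps the state itself and overwrites the aircon with a full payload each time.

```python
# A sketch of two remote-control models (both hypothetical):
# a stateless TV remote that sends toggle commands, and a stateful
# aircon remote that sends its entire stored state as one payload.

class TV:
    def __init__(self):
        self.on = False

    def receive(self, command):
        if command == "toggle_power":
            self.on = not self.on  # TV flips its own state


class Aircon:
    def __init__(self):
        self.on = False
        self.target_temp = 25

    def receive(self, payload):
        # The aircon overwrites its state with whatever the remote sends.
        self.on = payload["on"]
        self.target_temp = payload["target_temp"]


class AirconRemote:
    def __init__(self):
        self.on = False        # the state lives on the remote itself
        self.target_temp = 25

    def press_power(self, aircon):
        self.on = not self.on
        aircon.receive({"on": self.on, "target_temp": self.target_temp})
```

One consequence of the stateful model: if a signal is missed, the remote and the aircon fall out of sync, and the next press "corrects" the aircon to the remote's state, which can look like the opposite of what you asked for.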
I wonder why these two remotes, similar in usage and ubiquitous in our lives, differ so much. I still recall a discussion with my friend about our conflicting conceptual models of how aircon remotes work. I'll probably write about it another time if there's any merit in doing so, because there's probably one correct answer that I would need time to research.
Anyway, it sometimes boils down to constraints. You can't always fit all the information you want on a physical interface, so you have to make do with what you have, and make some assumptions about the user and their environment.