2025-05-18

Temporal effects on design

An important part of design is understanding the contexts of use, as the use of products rarely occurs in perfect or isolated settings. I recently had an experience that got me thinking about the effects of time on one's interactions.

Specifically, I opened a door in my office building that only swings one way. I pushed it open, and it struck me that, having just pushed the door to get in, I could infer that to get out, I should pull it.

Doors are a common target for Don Norman; designers seem to get them wrong so often. I noticed that the handle was identical on both sides of this door, so it couldn't have communicated different messages. That is to say, if both sides are identical, barring other factors, how do you (uniquely) communicate "Push" vs "Pull"?

Duration between uses

Other factors clearly play an important role, be it the position of the hinges, stoppers, fire safety standards, window placement, and so on. These fall under physical traits or cultural conventions. I would like to highlight the temporal factors, or the effects of time on your interactions. It can be as simple as being unable to perform a task because you haven't done it in a year.

Such processes are quite common, like filling in forms that are only required once a year (taxes, maybe), or every 4-5 years (voting). Appropriate design considerations need to be taken with this (in)frequency in mind. Fire extinguisher and defibrillator uses are few and far between. When it comes to it, you have to know how to use them or die trying.

Sometimes a repair comes up so infrequently that you forget your original solution. People who make a living performing repairs benefit from this effect. You may take a few hours to figure out how to repair a burst pipe, and more time to gather the required tools and materials, but a plumber has everything on hand, and is skilled enough to do the job once, and do it right, in less time.

User, experienced

Naturally, you can also design interactions around the assumption that the user is going to be learned and practised. Some things come to mind, like cars and keyboards. There's a reason why expert keyboard enthusiasts may opt for difficult configurations that improve typing speed once mastered.

Some products are terrible, quite apart from being ugly, because they aren't designed with the user's growth in mind. Those desk mats with Excel shortcuts just don't look right.

Not only is it ugly, it's a product that gets more useless the more you use it!

While it is typically good design to offload the user's memory load onto the environment (as described in Molich and Nielsen's (1990) heuristics), this comes at an aesthetic cost. Correct me if I'm wrong and this sort of product actually helps, but I'll still dislike it on aesthetic grounds.

Just-done actions

Back to the anecdote about the door. There's a case for designing interactions around the method of elimination. The thought process of the user could go something like this: "I just pressed this button to do X task, so I don't have to press that again. If I want to do Y task, it should be one of these other buttons."

One consequence is that you can compromise on the design when there are constraints in budget or space. For tasks that require interacting in specific sequences, exhausting certain options helps reduce cognitive load as well.

I should offer an example. My portable monitor (Arzopa brand) uses a single switch to adjust the brightness and volume. The switch goes up or down, which translates to adjusting the brightness/volume up or down. Your selection depends on the first input: move the switch up to start adjusting the brightness, and the other way for volume. Or vice versa. I don't remember, but that's my point.

I seem to always get it wrong. But having made a mis-input, my next move is usually correct. Say I want to adjust the volume: I move the switch up, and it starts adjusting the brightness instead. I lower the brightness back to its original value, then wait a while for the adjustment window to time out. Then, I move the switch down, which starts the volume adjustment.
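For the curious, here is a minimal sketch of how I imagine the firmware behaves. The mode mapping, step size, and timeout are my own assumptions, not Arzopa's actual implementation:

```python
import time

TIMEOUT_S = 3.0  # assumed: the on-screen window closes after a few idle seconds

class SwitchController:
    def __init__(self):
        self.mode = None                 # no adjustment window open yet
        self.last_input = float("-inf")
        self.brightness = 50
        self.volume = 50

    def on_flick(self, direction):       # direction: +1 for up, -1 for down
        now = time.monotonic()
        if self.mode is None or now - self.last_input > TIMEOUT_S:
            # The first flick after a timeout picks the mode: "up" selects
            # brightness and "down" selects volume (or vice versa; hence my errors).
            self.mode = "brightness" if direction > 0 else "volume"
        if self.mode == "brightness":
            self.brightness = max(0, min(100, self.brightness + 5 * direction))
        else:
            self.volume = max(0, min(100, self.volume + 5 * direction))
        self.last_input = now
```

The failure mode falls out of the sketch: nothing about "up" naturally maps to brightness, so the first flick is a coin toss, but every flick after it is unambiguous.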

This interaction is very similar to plugging in connectors like USB-A. If you get it wrong the first time, you'll get it right within the next 1-2 tries (1 try if it's an execution error, i.e. you fumbled the action; 2 if it's an evaluation error, i.e. you misread the result).

In the case of my portable monitor, the switch probably serves a dual purpose due to hardware constraints. The tradeoff is, in my opinion, worth it, as volume and brightness adjustments are rarely critical operations that require the user to get it right the first time. Anyway, there are labels on the side, which I never look at.

"There are no simple answers, only tradeoffs" - Norman, 1983

Related: States in design

I thought about this while operating my air conditioner's remote control. The power button is responsible for sending on/off signals to the aircon. Based on my own conceptual model of the system, I don't think it sends a "toggle" signal, but rather distinct "on" and "off" signals. That means if you send an "on" signal to an aircon that's already turned on, it won't toggle off. It might still change its state, though, because the signal seems to carry other information too, like the target temperature.

From a hardware designer's perspective, it is intuitive to have a single power button that acts as a toggle, because that's how devices worked traditionally. Your TV and its remote each have one button that toggles power. If you point the remote at your TV and press the power button, it toggles the state instead of acting like the aircon remote. It can thus be jarring when you perform an action at the aircon and it doesn't react, or worse, responds in the opposite manner to what you want.

The aircon remote seems to work differently: (I assume) it changes the state stored on the remote itself, then sends a whole payload of information to the aircon, rather than separately adjusting parts of the aircon. That is to say, the remote keeps track of its current state and overwrites the aircon's state with it when the signal is received. The TV remote doesn't seem to keep track of any state; it just sends discrete instructions.
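Here's a minimal sketch of the two conceptual models as I understand them. The field names and payload layout are illustrative assumptions on my part, not any real remote's protocol:

```python
from dataclasses import dataclass

@dataclass
class AirconRemote:
    """Stateful: the desired state lives on the remote, which broadcasts all of it."""
    power: bool = False
    temperature: int = 25
    fan_speed: int = 2

    def press_power(self):
        self.power = not self.power       # the remote flips its own record...
        return {                          # ...then sends the entire payload,
            "power": self.power,          # overwriting whatever the unit had
            "temperature": self.temperature,
            "fan_speed": self.fan_speed,
        }

class TvRemote:
    """Stateless: each press is a discrete instruction; the TV holds the state."""
    def press_power(self):
        return {"command": "TOGGLE_POWER"}
```

The key contrast: with the aircon, the source of truth lives on the remote; with the TV, it lives on the TV.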

I wonder why these two remotes, so similar in usage and so ubiquitous in our lives, differ so much. I still recall a discussion with my friend about our conflicting conceptual models of how aircon remotes work. I'll probably talk about it another time if there's any merit in doing so, because there's probably one correct answer that I would need time to research.

Anyway, it seems it sometimes boils down to constraints. You can't always fit all the information you want on a physical interface, so you have to make do with what you have, and make some assumptions about the user and their environment.

2025-05-11

The case for playing music out loud in public

Recently, I've been seeing some discourse on Twitter about people using their devices in public areas, like on public transport or in restaurants, with audio playing out of the speaker instead of using headphones or muting the device.

This is widely viewed as antisocial behaviour, and one of the annoying things that have been exacerbated by the prevalence of short-form video content... Having to hear the same repetitive, pitched-up or slowed-down audio sample multiple times a day can drive one crazy.

How technology drives this behaviour

Aside from everyone using TikTok nowadays, there's also the fact that wireless headphones are now the default audio device. I deplore the fact that phone manufacturers no longer include an audio jack in their newer models, and it's still something I seek out when shopping for a phone. See, wired headphones don't run out of battery — they don't have one. Wireless headphones do, and that could be one reason why many just use their speakers, not to mention the occasional difficulty of connecting Bluetooth audio devices...

It's not just playing audio and video out loud. Before that, the gripe was about people taking calls on loudspeaker in public, or even taking calls at all. This is an older faux pas, and Twitter user @ratlimit puts it well:

I don't get it. I'm still one to prefer texting over calling; it's just far more convenient and legible, although it lacks real-time responsiveness. Anyway, it's not so much the action itself that's antisocial, but the person. If you're taking a call or listening to something, the least you can do is bear other people in mind and keep the volume down.

Some people have chimed in on this discourse with the counterpoint that this line of thought is fascist and disproportionately targets poorer people, who tend to ride public transport and may not even be able to afford earphones. I still think one has an obligation not to create noise pollution. If you don't have earphones, you can at least go without sound when you scroll your short-form video platform of choice... or put the damn phone down for once! (Maybe... the boomers were right?)

The case for cyclists

I don't know if it's just me, but just about every cyclist (rather, PMD user) seems to feel the need to play their music out loud. Now that delivery riding is a common job, I've been noticing more and more of them in public. Hard not to, anyway.

I cycle sometimes to get to places that aren't worth a short bus ride (~$1 one way), like going to the market. I've also used my bike for longer commutes, like 30 minutes one way to the gym or to my part-time job. Those days when I had more free time were the best, and it was plenty good for my health too. My bike, which I bought in 2018, cost me $300, so I think I've more than made up for the cost of petty public transport fares over the past few years.
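A quick back-of-envelope check, assuming that ~$1 fare each way: $300 / $2 per round trip = 150 round trips to break even, which over the seven years since 2018 works out to only about one ride every two to three weeks.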

I like to ride on sidewalks. I'm not too big on riding on the road, and I try not to be "that" entitled cyclist; I give way to pedestrians and slow down when I have to. It was when I lost my bicycle bell (actually a squeaky rubber duck) and a spare one broke that I realised how much they aided my commute. The hardest part of not having a bell was getting around people on shared pathways. Most of the time, you don't notice how little space there is, or how much of it you're taking up.

I'm rather soft-spoken, so it was hard for me to call out to people in front of me. I did sometimes, but even then, some had their headphones in. What I did instead was cycle over drain covers, and the rapidly approaching rattling noise would signal to people that there was a bicycle behind them. That did the trick 90% of the time. The other times, I just slowed down and waited for an opportunity to go around. I eventually got the bell replaced.

Music on the bike

Having had these experiences, I see the point of blaring music out loud while cycling. It's a constant signal of your approach, and thanks to phenomena like the Doppler effect, a pedestrian can judge not just how far away you are, but also the direction you're approaching from. This sets it apart from bicycle bells: you can't exactly tell where a short ring is coming from; one just assumes it's from behind.
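For reference, the textbook Doppler relation for a source moving straight toward a stationary listener, where $f$ is the emitted frequency, $c$ the speed of sound, and $v_s$ the rider's speed:

$$ f' = f \cdot \frac{c}{c - v_s} $$

The perceived pitch $f'$ sits above $f$ on approach and falls below it the moment the rider passes; that audible change is exactly the directional cue a single bell ring lacks.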

And the delivery riders who are always playing music commute for a living, so it makes sense that their music is on all the time; they must encounter pedestrians far more often than the average cyclist. I don't quite like their taste in music, which is usually Manyao, but I get it. These genres, EDM at their core, typically come in hours-long playlists designed to be played in clubs with a consistent underlying beat. That consistency helps pedestrians register the rider's presence. Of course, I still think the main reason they play it is that they enjoy it.

I enjoy music and singing along in the car or on the bike, so I get it. I'm in no position to judge. If I'm spending that much time alone on the job, the least I could do is to enjoy it. And, fine, I do enjoy my fair share of EDM on the bike, particularly Dom Whiting's Drum and Bass on the Bike.

2025-05-01

Human-Redundant Design

In contrast to Human-Centered Design (HCD), a design approach that focuses on the users' needs and context (you may refer to the ISO 9241-210 definition), I would like to discuss what I term Human-Redundant Design (HRD): an approach to designing systems where the users, with all their differences and variations, have little to no effect on the design of the system.

I don't believe this approach runs completely counter to HCD; perhaps the two are complementary. In scenarios where consistency is needed, or where users vary widely, the right solution could be a "one size fits all" approach that seeks to factor out individual differences as much as possible.

HRD already exists in certain systems, which I shall use as examples. I hope that writing my analysis down helps formalise it.


An Illustration - McDonald's

The "redundant" aspect implies that as far as the user is concerned, they are only there to "push buttons", and their unique traits do not affect processes. This is not to be read with pessimism. In certain tasks, the is little need for personalisation.

Take, for example, the process of making fast food. In my teenage years, I spent a few months working at McDonald's to earn a little pocket change. Their processes are quintessentially human-redundant; anyone, from a child, to a person with Down's syndrome, to an elderly person, could perform the tasks... and you may very well find such a combination of people working the same shift in the kitchen.

While the kitchen looks a little different now, this scene from The Founder (2016) captures the secret sauce that made McDonald's the definition of fast food today. Even a proprietary tool is used to deliver a "precise shot of ketchup and mustard". Not much is left to the user's discretion, unless you ask for "no salt" on the fries.

Acknowledging User Error

HRD is complementary to the saying: "There is no such thing as user error." Don Norman addresses the concept of error in detail in his works, but by and large, I agree with this saying. Many failures can be boiled down to design errors; errors in design are what enable errors in usage.

Acknowledging that users can and will be forgetful, intoxicated, distracted, and so on, one approach to designing around "unsuitable" users is to decrease the impact any one user has on the system. If you have heard of fast food workers showing up high on marijuana or hungover, you should by now agree with me that fast food kitchens are good examples of human-redundant systems.

Universe of Users

HRD is just as important when your possible users are ill-defined. Because you still go through a hiring process to be placed behind the grill at McDonald's, there is a degree of control over the possible users of the system. But there are times when the user is just "anyone, anywhere". Take, for example, an ATM. Placed on any given street, it could encounter a regular person (already ill-defined from the get-go), an elderly person, a disabled person, someone hearing- or sight-impaired, a tourist, and so on. How do you make sure everyone knows what to do?

The focus of design research in this case should then be less on the users and more on the process of interaction itself. There is also no distinction between an expert user and a new one: ATMs walk you through each step, pausing each time and leaving no instruction implicit.
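To illustrate the pattern, here's a minimal sketch; the prompts and steps are made up, and real ATMs differ by bank and region:

```python
# A minimal sketch of the "walk the user through every step" pattern.

STEPS = [
    "Please insert your card.",
    "Please enter your PIN, then press ENTER.",
    "Select a transaction: 1) Withdraw  2) Deposit  3) Check balance",
    "Enter the amount, then press ENTER.",
    "Please take your cash.",
    "Please take your card. Thank you!",
]

def run_atm():
    for prompt in STEPS:
        print(prompt)
        input()  # the machine pauses; nothing proceeds until the user acts

if __name__ == "__main__":
    run_atm()
```

Every step is spelled out and gated on the user's action, so prior experience with the machine confers almost no advantage.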


Non-human Agents

I foresee that processes in which humans are redundant could just as well be automated, which also implies that the "user" becomes irrelevant to the equation. As a matter of fact, in scenarios like factory work and even food preparation, certain tasks are things machinery will only keep getting better at. I do want to know: what has stopped McDonald's from being fully automated?

As artificial intelligence continues to advance, innovation is happening not only in automating tasks, but also in re-designing systems to facilitate automation. That is to say, the domestication of technology[1].

Take, for example, the re-modeling of code documentation from a human-readable manual into a machine-scrapeable, tokenizable document for large language models (LLMs) to parse, with humans expected to interface through the model instead[2].
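As a hypothetical illustration, here is the same documentation fact in two shapes; the structure and field names below are my own invention, not any established standard:

```python
# Hypothetical: the same documentation fact in two shapes.

human_manual = """
To resize an image, call resize() with the target width and height.
Note that the aspect ratio is not preserved unless you pass
keep_aspect=True.
"""

machine_doc = {
    "symbol": "resize",
    "kind": "function",
    "params": [
        {"name": "width", "type": "int"},
        {"name": "height", "type": "int"},
        {"name": "keep_aspect", "type": "bool", "default": False},
    ],
    "behavior": "Resize an image; aspect ratio is preserved only if keep_aspect.",
}
# The second form has a uniform structure: easy to scrape, chunk,
# and tokenize for an LLM to parse.
```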

What about us?

Automation is slowly being injected into processes worldwide. In a sense, we are designing ourselves out of the equation. When searching for a job, it is no longer recommended that you make a flashy or non-linear résumé; first contact is no longer made by human eyes, but by automated systems that benefit from an intuitively structured, semantically labeled document.

We should seek balance in design and be careful not to write ourselves out completely. We are at the stage of technological domestication where a human touch is cherished: a handwritten postcard instead of an email, the barista asking "the usual?" instead of an ordering interface, a free beer because you've been a regular at the bar. It's the little things that make us human.

"The opposite of love is not hate, it's indifference."

Footnotes

[1] I recall having read some articles where this topic is discussed in detail, but I don't have any sources right now. Perhaps some of the works cited here could be enlightening.
[2] While I personally do not use generative AI or LLMs as much as the next person, I acknowledge the advancements made thus far, especially in the realm of programming.