Jeffrey Sun's CS476a Blog

CS476a - Reading Response 5

Posted at — Oct 18, 2020

In Chapter 5 of Artful Design, Ge discusses interface design as an aesthetic that emerges from human-technology interaction.

Again, the core motivation is that while computers are extremely powerful, the logical, mechanical functionality they provide does not automatically yield interfaces that make sense to a human. When we play traditional instruments, we feel connected because the interface feels natural, easy, and elegant to operate, as if it were part of us. How to preserve that experience when creating new tools and interfaces with computers is therefore an important question.

Ge defines the concepts of “mutualization” and “de-mutualization” on pg. 224 - 225. When we interact with acoustic instruments, or with any non-computer-based interface, there tends to be a direct correspondence of form and function – strings have to be laid out on a guitar that way to be physically plucked and produce sound. That is what we call a mutual relationship. These interfaces are upheld as the “gold standard” because they offer users a natural and aesthetic experience.

Contrast that with computer-based design. In computers, we can manipulate the input-to-output mapping however we want, so form can be decoupled from function in every possible way. We gain expressiveness, but at the risk that the interface might not make sense. This mapping power is the double-edged sword of computers, and the rest of the chapter discusses how to wield it properly.

I would like to react, in particular, to one such weakness of computer-based design that can impede good user interaction: Ge’s observation that the computer GUI is limiting in that it makes use of only a single finger, a tiny portion of our body. This is undesirable because, as Principle 5.4 puts it, “Bodies Matter!” Humans like to interact and express themselves with all sorts of bodily actions, yet with a mouse we are confined to simulating those actions within the realm of a single finger. Ge then asks: can we do better? It seems clear to me that while computer-based musical instruments keep inventing myriad innovative interaction patterns, computers and tablets are lagging behind in the built-in GUIs they offer, and have perhaps also shaped our mindset about interaction detrimentally.

The Auraglyph

For example, take the wonderful music-making interface Auraglyph (pg. 221), with a live YouTube demo here. In it, users create nodes for wave signals on an iPad and connect them into a sound-synthesis patch. Watching the demo, though, I realized the primary mode of interaction is still a single finger, dragging bars and shapes one at a time, and it seems to cause some struggle when, say, the user tries to drag a slider to a specific value within that tiny area! I wonder, can we do better? While we can’t use our whole body, we could at least leverage multitouch to put some idle fingers to work, assigning them auxiliary controls: one finger could twist a large, globally placed knob to adjust a value while another finger selects the variable, rather than a single finger struggling to hit pixel-accurate slider positions. Yes, the same music is being made, but I’m sure the user would feel much better off.
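To make the two-finger knob idea concrete, here is a minimal sketch of the math such a control might compute. Everything here is my own hypothetical illustration, not Auraglyph’s actual API: the function names, the `sensitivity` parameter, and the idea of unwrapping the angular difference are all assumptions.

```python
import math

def knob_angle(cx, cy, x, y):
    """Angle of a touch point (x, y) around the knob center (cx, cy), in radians."""
    return math.atan2(y - cy, x - cx)

def knob_twist_to_value(value, angle_prev, angle_now,
                        sensitivity=0.25, lo=0.0, hi=1.0):
    """Map an angular twist on a large on-screen knob to a bounded parameter change.

    One full turn (2*pi radians) changes the value by sensitivity * (hi - lo),
    so fine adjustments no longer demand pixel-accurate dragging on a tiny slider.
    """
    # Unwrap the angle difference into (-pi, pi] so crossing the +/-pi
    # boundary doesn't cause a jump.
    d = angle_now - angle_prev
    while d > math.pi:
        d -= 2 * math.pi
    while d <= -math.pi:
        d += 2 * math.pi
    value += d / (2 * math.pi) * sensitivity * (hi - lo)
    # Clamp to the parameter's range.
    return min(hi, max(lo, value))
```

In use, one touch would select the parameter while a second touch orbits the knob center, feeding successive `knob_angle` readings into `knob_twist_to_value`; a quarter turn at the default sensitivity nudges the value by just 1/16 of its range, which is the kind of fine control a tiny slider makes so hard.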

Microsoft Surface “Dial”

Sketchpad

One fine example is the Surface Dial, a physical knob Microsoft ships for the Surface, one use of which is tuning parameters while painting. It hints that the mouse is perhaps overrated, and that we should bring in more low-cost hardware input devices to enhance interaction across a WIDE RANGE of software on our household computers and tablets. In fact, looking at Sketchpad, the first GUI, invented in 1963, even then buttons and both hands were used to assist the stylus. Have we perhaps regressed from the ideals of interface design in today’s commercial GUIs? Maybe one day we’ll finally embrace multi-input GUIs and expand the “mental model of the user” to incorporate more of the body. Perhaps we’ll discover that plucking strings and twisting knobs beats a mouse for many software tasks, who knows! At the very least, a nice side effect would be that users could build their own whimsical musical instruments directly from their computer’s control hardware!