Principles of UI, A Thread:
1. natural mapping
2. visibility of system state
3. discoverability
4. constraints and affordances
5. habits and spatial memory
6. locus of attention
7. no modes
8. fast feedback
9. do not cause harm to a user's data or through inaction allow user data to come to harm
10. prefer undo to confirmation boxes. For actions that can't be undone, force a "cooling off" period of at least 30 seconds.
11. measure using Fitts' Law, Hick's Law, GOMS, etc., but always test with real users.
12. don't assume that your skills or knowledge of computers as a designer or programmer in any way resemble the skills or knowledge of your users.
13. Consider the natural order of tasks in a flow of thought. Verb-Noun vs. Noun-Verb. Dependency->Dependants vs. Dependants->Dependencies.
14. Instead of having noob mode and advanced mode, use visual and logical hierarchies to organise functions by importance.
15. Everything is an interface, the world, learning new things, even perception itself
16. Consider the psychology of panic. Panic kills scuba divers. Panic kills pilots. Panic kills soldiers. Panic loses tennis matches. Panic leads to stupid mistakes on a computer.
more at: https://www.asktog.com/columns/066Panic!.html
17. Consider the 3 important limits of your user's patience:
0.1 second, 1 second, 10 seconds
18. An interface whose human factors are well considered, but looks like butt, still trumps an interface that looks slick but is terrible to use. An interface that is well considered AND looks good trumps both, and is perceived by users to work better than the same exact interface with an ugly design.
19. Don't force the user to remember things if you can help it. Humans are really bad at remembering things. This includes passwords, sms codes, sums, function names, and so on. My own personal philosophy is to consider humans a part of your system, and design around our shortcomings instead of thinking of users as adversaries. Software should serve humans, humans shouldn't serve software.
24. many jokes are made about the “save” icon looking like a floppy disk. it’s very appropriate, since the command as a concept is built around the technological limits of floppy disks, limits that are comically irrelevant in the 21st century. drag your app out of the 1980s and implement autosave and version control already.
25. consistency consistently consistent. there are few things more fun than designing your own custom ui widget toolkit, css framework, or interaction paradigm. however, please strongly consider *not* doing this. custom UI is like ugly baby photos. instead, stick as closely as you can to the HIG guidelines and conventions of the platform you are on, so users can use what they’ve already learned about where things usually are, and what the fuck the weird molecule icon does.
26. try to imagine ways to use your shiny new software to abuse, harass, stalk, or spy on people, especially vulnerable people. ask a diverse range of people to do the same.
then fix it so you can’t. if you cannot figure out how to do your special software thing without opening vulnerable people to abuse, consider not making it available to anyone.
27. UX is ergonomics of the mind (and also body). Where traditional ergonomics considers the physical abilities and limits of a human body, UX considers the limits of the human mind: attention, memory, response time, coordination, emotions, patience, stamina, knowledge, subconscious, and so on. If you ever find a UX practitioner sacrificing accessibility on the altar of so called “good experiences”, you are dealing with incompetence.
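principle 10 above (prefer undo; gate irreversible actions behind a cooling-off period) can be sketched in code. a minimal toy model, with entirely hypothetical names, not taken from any real framework:

```typescript
// Sketch of principle 10: instead of a confirmation box, record the intent
// to delete, offer undo for free, and only destroy data after a cooling-off
// period. Times are passed in explicitly to keep the logic testable.

type PendingDeletion = { id: string; requestedAt: number };

const COOLING_OFF_MS = 30_000; // the thread's suggested minimum: 30 seconds

class DeletionQueue {
  private pending = new Map<string, PendingDeletion>();

  // The user "deletes" something: record the intent, show an undo toast.
  requestDelete(id: string, now: number): void {
    this.pending.set(id, { id, requestedAt: now });
  }

  // Undo is free any time before the cooling-off period elapses.
  undo(id: string): boolean {
    return this.pending.delete(id);
  }

  // Called periodically: only items whose cooling-off period has passed
  // are truly destroyed.
  collectExpired(now: number): string[] {
    const expired: string[] = [];
    for (const [id, p] of this.pending) {
      if (now - p.requestedAt >= COOLING_OFF_MS) {
        expired.push(id);
        this.pending.delete(id);
      }
    }
    return expired;
  }
}
```

the design choice here is that "delete" never blocks the user with a dialog; the danger window is handled in the background, which also satisfies principle 9.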
expanding on 1. Natural Mapping:
user interfaces typically “map” to the system they control, each button and dial corresponding to some element of the system. Natural mapping is when the interface forms an obvious spatial relationship to the system, such as 4 stovetop dials that are in the same arrangement as the stovetops. the anti-pattern is arranging controls in an arbitrary order with no spatial correspondence to the system.
2. Visibility of System State:
Software typically has state (to state the obvious), such as “where” you are in the software’s menu system, what “mode” you are currently in, whether your work is safely stored on disk or has “unsaved changes”, what stage of a process you are up to and how many steps are left. Failure to effectively communicate system state to the user is inviting them to get lost and make mistakes. counterexamples: setting the time on a digital wrist watch, programming a VCR.
3. Discoverability:
this is about making the possible actions in a system visible, or if not immediately visible, the mechanism of their discovery should be visible and consistent. For instance, the menu items in a GUI system are discoverable. the available commands in a unix system are not. the opposite of this principle is “hidden interface”. examples of hidden interface are rife in iOS: tapping the top of the screen for “scroll to top”, shake to undo, swipe from the edge for browser back, etc.
4. Constraints and Affordances.
A constraint is something that is not possible in a system. an affordance is something that is possible to do. which is which should be communicated clearly, and the nature of this communication breaks down into three subcategories:
a. physical: visually obvious from the shape of objects in a system. two lego bricks can only snap together in a limited number of ways.
b. logical: what’s possible or not makes sense logically: e.g. color coding,
constraints and affordances are at the heart of the “flat design” vs. “skeuomorphism” debate. the benefit of skeuomorphic interfaces is that replicating the look of real world objects, like buttons, provides a natural way to communicate interactions. where skeuomorphism went wrong was communicating false affordances: a detail in the iOS 6 calendar app hinted that pages could be torn out, when no interaction supported it.
flat design throws the baby out with the bathwater. we still need real buttons.
5. Habits and Spatial Memory
this is mostly about not arbitrarily moving buttons around in an interface. people are creatures of habit, and if you fundamentally change the method of performing a task for no good reason, it’s not a “UI revamp”, it’s pointlessly frustrating your existing users.
for spatial memory, millions of years of evolution have left us with mental machinery for remembering exactly *where* something is physically. you can take advantage of this in UI with persistence of space.
an example of this persistence of space concept is the meticulous way some people curate their phone’s launch screens. even better would be if iOS allowed a different wallpaper for each page, and let icon grids have gaps anywhere instead of forcing icons to sort left to right, top to bottom. the different look of each screen could then be very personal and memorable, and finding an app becomes a matter of finding the page with the right color and shape.
6. Locus of Attention
this is a recognition of the fact that human consciousness is single threaded: while parallel processes permit us to do things like walk and chew gum at the same time, there is only one thread of processing that represents our conscious awareness. therefore, interfaces that expect our attention to be fully present in the status bar, the cursor, the flashing banner ad, the address bar, the lock icon, the autoplaying video, and the notifications, all at once, are misguided.
7. No Modes
A Gesture is an action (a keystroke, a mouse move) expected to result in some effect (a letter being added to a document, a cursor moving).
A mode changes the effects associated with some or all gestures. caps lock is a mode. “apps” are modes. Modes are bad when they result in modal error: the user being unaware that a mode has been activated, getting unexpected effects, and possibly not knowing it *is* a mode, or how to get out of it. VIM is a prime offender. so are modern TVs.
modes are typically employed as solutions to the situation of the number of functions in a system far exceeding the number of available external controls. this can happen either as a result of featuritis, or an apple-esque fetish for small numbers of buttons.
suggested remedies include quasimodes like the shift key, that activate a mode only while a button is being held down. another approach is developing composable UI conventions like GUI menus, or search, that can scale without modes.
another way of looking at this is examining how much context a user needs to understand what effect a gesture will have, and how effectively that context is being communicated. can I write a step by step guide to doing a task on a computer, for a computer novice, that doesn’t begin with determining where in the operating system you are, whether the correct application is open, and which of the many methods of getting into that application apply in that situation?
this is what was nice about the “home” button on iphones: it doesn’t matter where you are in the system, there’s a physical hardware clicky button that will always bring you back to the start, and it cannot be overridden by third party software.
apple ruined it with the iphone X swipey home gesture. not only is it hidden interface, it’s modal now: which edge you swipe depends on the orientation sensor, and is only sometimes visually indicated by a line that may or may not be correct.
8. Fast Feedback
this connects back to 17, the 3 important limits of your user's patience:
0.1 second, 1 second, 10 seconds
why is this important? because without fast and constant communication, the UI will feel broken. it’s why a chattering CLI log *feels* faster than a crawling progress bar. the GUI might, on stopwatch time, be faster than the CLI, but time *perception* works differently: it runs on feedback and delays.
@zensaiyuki I find it interesting that these benefits & drawbacks can vary a lot between different configuration options.
e.g. letting users set the (default?) font for websites can both help accessibility, and is trivial to implement because it's just a "magic number" used during rendering.
@alcinnz also, that option doesn’t otherwise change the behavior of gestures. the example raskin uses is the configurable toolbars of some 1990s versions of MS Word. convenient if you’re a power user, but now you can’t document those shared installations (households, libraries, schools) of msword for novices because the toolbars could contain literally anything.
@alcinnz after reading raskin’s book I became hardline against configuration: especially since it would be a topic of argument whenever apple would change something in OSX (“just add a configuration switch!”). and so, upon approaching a stranger’s macbook you now have no idea which way the scroll gesture will scroll.
however I’ve softened now that I’ve realised some configuration options are essential for accessibility.
@alcinnz on the opposite end of the spectrum, the gnome project is now discovering that too much configurability can be a curse. there are so many theme options in gnome now that it’s impossible to write an app and test for every possible configuration. most of the devs are forced into the unsatisfactory tradeoff of testing only with default configurations. (it seems that, taken as a whole across all software, default settings become a de facto platform. changing them puts you in weird bug land.)
@alcinnz so I guess the lesson here is: if you’re gonna add a configuration option, make sure you have a good testing plan for it.
@zensaiyuki That GNOME case does show something interesting: It may be useful to have behind-the-scenes options to allow different platforms to share code, whilst not exposing it to end-users because it might/will break stuff. Also makes it easier for *some* apps to target a selection of those platforms.
But in terms of UX this essentially comes out to the same thing as you're saying.
@zensaiyuki In the case of browsers, the question seems to have always been not whether to have configuration but who should be configuring these settings.
The standards now say that webdevs have ultimate say whilst browsers provide defaults. The problem though is that the defaults are no longer reasonable, and webdev's final say can't always be trusted for reasons you've described in other toots.
Fairly trivial to fix when I'm not worrying about breaking JS...
@alcinnz it is certainly possible, and even easy, to write webpages and even web apps that leave browser accessibility settings available and working. it’s an education problem though: if I, for instance, wrote a guide on how to do it, everything in it would fly in the face of currently fashionable practices, which seem to view accessibility as “old fashioned”.
@alcinnz i am a huge fan of js, as you know. but it’s like salt. it shouldn’t be the main ingredient.
@alcinnz i suppose the problem it solves is, if the html “document” actually represents the UI of some kind of application, it’s a bad experience to require a full page refresh for every meaningful interaction, especially form validation. forms are especially complicated for screen reader accessibility as well.
@alcinnz in the less extreme case, it kind of sucks to need to listen to all the menu options over and over again on every page navigation. thankfully most screen readers are smart enough to let you skip it, so long as the page is marked up correctly and includes a “skip to content” link.
@alcinnz but by and large it’s bad if your ui buttons arbitrarily shuffle around and disappear and reappear and move up and down the screen as you navigate
@zensaiyuki Rhapsode is amongst those screenreaders: <nav> is silenced & it automatically skips to <main>.
And UI buttons arbitrarily shuffling around is especially bad when navigating the page with a TV remote! I can't let it happen! Though I will be happy to allow partial page refreshes akin to Intercooler.js.
@alcinnz which is of course better for sighted users, but it causes issues for screen readers: there are aria attributes you must use on the part of the screen that is refreshed, to ensure the screen reader is notified that part of the screen has changed. it kind of sort of works?
@zensaiyuki Ideally for an auditory browser you invoke a link, and that would be enough context to understand the update to the page! Without being told where that update happened.
@alcinnz in theory yes. in practice, since the screen reader is typically a separate piece of software from the browser, interacting with the browser via an OS level accessibility api, if the page doesn’t “refresh” the screen reader doesn’t know you activated a link.
@zensaiyuki I actually went the path of implementing my own browser engine! Within the constraints of what a screenreader can easily handle, it's not actually that hard.
Pretty much as soon as I finished implementing CSS I was spitting out SSML files eSpeak could use to give an interesting performance!
@zensaiyuki Forms meanwhile are a very interesting design space for me!
I might struggle to verbalize some/many forms in the wild, but with some minor HTML extensions there's opportunity to build Alexa-level conversational UIs! Though not as many as some may think due to HTML5.
TVs meanwhile needs those forms rendered to their own menus.
In either case I have to separate forms out into their own mode & disallow styling to make them function well in these mediums.
@alcinnz but, assuming they did, they fulfil the promise of a self-describing API. all a client needs to do is understand the html form, and it can present any sort of UI it needs to fully interact with the api the form describes. it’s really underappreciated how great a design that is.
@zensaiyuki Absolutely! Now can I make it a reality?
The struggle ofcourse is the majority of forms which like to define their own widgets, there's no way I'll be able to handle that...
@alcinnz there’s spambots that do. which of course is another factor working against you. many websites, in the effort to stop spambots, make it difficult to realise this vision on purpose.