41 Years in UX: A Career Retrospective (uxtigers.com)
135 points by ravirajx7 on Feb 4, 2024 | 69 comments



20-ish years ago I ended up with a UX gig.

Best experience of my work history. Detouring into site structure, information design, and doing actual USABILITY testing (behind a two-way mirror, watching real people use your app) was amazing.

Jakob Nielsen was a blowhard even then. His "all links must be blue and underlined" mantra was already tired. It takes a lot for me to say this, but his pedantry at the time put peak Richard Stallman to shame!

They are apparently still dancing around the edges of this topic: https://www.nngroup.com/articles/clickable-elements/

Now, older and wiser, I candidly think a lot of folks would be well served by default blue links, OG HTML submit buttons, and tables for layouts. A fair bit of modern UI is complete trash: it's the product of a designer and a product person putting the next bullet point on their resumes.
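To be concrete, here is roughly the zero-CSS baseline I mean (a sketch with made-up URLs; the browser supplies the blue underlined link and the native submit button for free):

    <!-- No stylesheet at all: default browser styling throughout -->
    <form action="/subscribe" method="post">
      <label for="email">Email address</label>
      <input id="email" type="email" name="email" required>
      <button type="submit">Subscribe</button>
    </form>
    <p>Questions? Read the <a href="/faq">FAQ</a>.</p>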


> Now, older and wiser, I candidly think a lot of folks would be well served by default blue links, OG HTML submit buttons, and tables for layouts. A fair bit of modern UI is complete trash: it's the product of a designer and a product person putting the next bullet point on their resumes.

If I were emperor of the world I’d make every consumer program pass a battery of tests that included demonstrating sufficient usability for a panel of users from a nursing home, a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time, blind users, deaf users, etc.

I expect the outcome would be a hell of a lot fewer twee “on brand” UI elements and a lot more leaning on proven design systems and frameworks, including, fucking crucially, for appearance. And also a lot less popping shit up on the screen without user interaction (omg, those damn “look what’s new!” pop-ups or focused tabs: congrats, some of your users are now totally lost).


If you aren't making a consumer product for nursing home patients with sub-90 IQs, then you'd be wasting your time, and the feedback you got from the exercise wouldn't be useful. In fact, any decisions you made based on it could be wrong. The point isn't to design for the lowest common denominator, but for the users you will actually have, and usability test participants should be recruited with that in mind.

There is some merit to what I assume is your underlying argument, but the way you phrase it isn't helpful.


>The point isn't to design for the lowest common denominator, but for the users you will actually have

Keyword: situational disability

Even a perfectly fit and educated target audience sometimes suffers from conditions, or operates in environments, that significantly reduce their mental or physical capacity: stress, injury, pregnancy, too many beers, very long nails, terrible weather, a toddler trying to grab your phone, being a non-native speaker, etc. You may even know the user personally, but you never know what’s going on in their lives when they use your app. So, general advice: ALWAYS follow accessibility guidelines. Even bad copy may drop your usage by a significant percentage, because there are plenty of people with dyslexia.
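As a minimal sketch of what following the guidelines buys you (hypothetical markup; the point is the explicit label, the error text tied to its field so assistive tech announces it, and the alt text):

    <label for="qty">Quantity (1-10)</label>
    <input id="qty" type="number" name="qty" min="1" max="10"
           aria-describedby="qty-error">
    <!-- role="alert" makes screen readers announce the message -->
    <span id="qty-error" role="alert">Enter a number from 1 to 10.</span>

    <img src="q3-sales.png" alt="Chart: sales rose 40% in Q3">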


Pick your favorite programming language. Do you think it should be tested on people in a nursing home? I'd argue that's the wrong audience. (A programming language isn't a graphical user interface, but it is a user interface!)


A programming language is not a user interface; it is a way to describe commands. The UI in this case would be the way to enter the program, e.g. punch cards, a text editor or IDE, or an AI copilot. People who can write code are a very broad audience, and of course all accessibility requirements must apply.


A programming language is absolutely a user interface. The error messages and diagnostics emitted by the language are the feedback mechanisms. The syntax and semantics are the design of the interface. The documentation is the user manual. Text editors, IDEs, punch cards, and AI copilots are all separate UIs on top of whatever programming language you happen to be using.

After all, TUIs are a thing and nobody debates that they are also user interfaces. Just because a programming language is all text doesn’t mean that usability metrics don’t exist for it.


>The error messages and diagnostics emitted by the language are the feedback mechanisms.

The error messages and diagnostics are emitted by tools like the compiler, linker, or interpreter, and are part of their interface. A language standard may codify some error messages, but the language itself cannot present them to you, because a language is not a program.

>Just because a programming language is all text doesn’t mean that usability metrics don’t exist for it.

Just because some usability metrics can be applied to a programming language, that does not make it a UI. Interface implies interaction. You do not interact with a language - it cannot receive UI events and react to them; you interact with the tools that understand it.


You’re being pedantic to a fault. Here’s the definition Wikipedia gives for UI:

> a user interface (UI) is the space where interactions between humans and machines occur[0]

Further, Wikipedia lists a ton of different kinds of user interfaces. Included among those is:

> Batch interfaces are non-interactive user interfaces

And further, here’s a better explanation of how a programming language is a user interface than I can provide here[1]. It really is as simple as the programming language being an interface to the machine, and the programmer being the user of that interface. I don’t understand why you’re arguing so much against a widely accepted fact. When computers were first made, there was no such thing as a mouse or keyboard; there were punch cards. The only way for a user to interface with the machine was to insert a program with punch cards. Nowadays we have all sorts of input devices giving us new ways to interface with machines, but the most basic way we can interface with a machine is by writing a program expressing our intent for it.

And if you want to be so pedantic, then is a pure HTML/CSS website a UI? There’s no program there, just markup. The only program that runs is the browser. So is the website nothing, and the browser the only user interface? Or how about the steering and brakes/accelerator in a car? Those are purely mechanical; are they not a user interface just because they don’t have a program? Or how about the original arcade games like Pong? They were soldered directly onto the board. There was no program, just a circuit. There were no instructions being executed. So does that make those games not a user interface?

[0]: https://en.m.wikipedia.org/wiki/User_interface

[1]: https://faculty.washington.edu/ajko/books/user-interface-sof...


> You’re being overly pedantic to a fault...

> And if you want to be so pedantic...

Using labels does not make your arguments any stronger; on the contrary. Speaking of which, you quote Wikipedia, but neither the article you refer to nor the article "Programming language" says that a programming language is an interface. Languages by definition are merely syntax and semantics; they are used in interactions, but they do not define an interface themselves - it is not an "is" relationship but an "is used by" relationship. You can write a program on a sheet of paper and put it in a frame on a wall, so that your friends can read it and enjoy the beauty of the algorithm after a couple of bottles of wine, or you can print it on a t-shirt communicating your identity. In neither case is there an interaction between a human and a machine.

An interface is always about interaction: a keyboard to write the command or the program on, a display presenting an IDE or command interpreter, etc. So, looking at your examples: HTML is not an interface, and an HTML file is not, but a static website opened in the browser is, because the browser has downloaded the site and now knows how to interface with you. A steering wheel is of course an interface because, as I said in my previous comment, it allows interaction. The example with arcade games is actually the same as for the first computers, which did not have an interface for programming (punch cards came later) and had to be re-assembled to run a new program: they did have user interfaces for data inputs and outputs.

Your second reference is clearly written for beginners and simplifies things to the point where it becomes nonsense, even saying that "Programming, therefore, generally involves reading and editing code in an editor, and repeatedly asking a programming language to read the code to see if there are any errors in it". Do you still think it was worth quoting?

Now, if you feel that I'm over-pedantic with this response too,


Okay, then maybe a better example would be a command-line program like `grep` or `sed`. Should those be tested in a nursing home? I'd argue that's the wrong audience, and testing there would cause you to simplify these tools to a point where they're no longer useful.

(I do think it's notable that you can combine the inputs and outputs of such programs into a shell script, which feels a lot like using a programming language—but this is beside the point I was trying to make.)


Horrible advice for expert tools. If you can assume that the end user is going to learn the tool, you can design it for peak effectiveness after a learning curve. If you have to consider retards and hostage-situation-level panic, you can't do that, and you create a worse product overall.


I think the point is that you can design for peak effectiveness while considering usability, and that makes the tool more effective. There’s a lot more scrutiny on edge cases when designing expert tools.

On “Expert Tools”, I’d argue it’s imperative to consider high-stress interactions, because the stakes of the outcome outweigh the convenience of the expert using it.


Good luck designing an aircraft cockpit with this kind of thinking.


You are missing the point; it's obvious that a cockpit needs to account for stress or a crisis. Extending this to CAD software, for example, is nonsense.


I like your confidence, but it also betrays a lack of experience and understanding of what engineering is. Expert tools have much lower tolerance for user mistakes because there is big money at stake (or sometimes the lives of other people). A typo in an Instagram post is not the same as a wrong number in CAD. I have personally seen a construction project where incorrect input in CAD resulted in several dozen foundation piles for a 16-story building being installed outside the site boundary, just because an architect responsible for aligning the building on the site made a mistake while working in a hurry, confusing two fields in the UI. Of course, there was a chain of failures, each step costing more than the previous one, but it could have been prevented if the software cared about the user and did not assume they are superhuman.

It is so easy to squeeze as much functionality as possible onto a screen in an attempt to optimize productivity, but then label quality is sacrificed, click zones become too small, and feedback is reduced to a barely visible message in the status bar. It takes one sleepless night or a family argument for the user to get distracted and make a very expensive mistake.


Why? Is it because you think CAD can't be stressful and under pressure or that people shouldn't do CAD when stressed or under pressure?


You know very well that “retards” isn’t an acceptable word. It’s not even used in scientific literature anymore in the way that you’re using it.


I do not care about your opinion of acceptable language.

Attempting to insert your off-topic opinion is a character flaw.

Keep it to yourself.


Shouldn't it be up for grabs then?

According to the logic of "you don't call retarded people retarded. You call your friends retarded when they're acting retarded."


A whole lot of us will be mentally impaired sooner or later.

Our experience with software when that happens is sad for everyone but bad UI designers. For them, it’s justice.


A UI designer does a good job if the person who's paying them thinks they did a good job, not if they actually followed best practices, unless that's how the work gets approved. A frontend developer does a good job if their tickets are done and their boss likes them, which may or may not include actual quality work that's accessible or usable. That's the secret I wish I'd known when I started working; it could have saved me the extra personal cost of trying to produce quality results despite there being no incentive structure for it.


Just like morality and law are not the same, the objective quality of a UI designer's work doesn't necessarily have anything in common with their employer's preferences.

You're right that only one of those is paid well, but that's not what GP was talking about.


> You're right that only one of those is paid well, but that's not what GP was talking about.

I didn't say anything about how much someone is paid, just that it is often a job, and whether you have a job depends overwhelmingly on whether the person paying you is convinced that you're doing it well, which may or may not relate to the objective merit of the work. It doesn't matter if you're making $150k or $20k; it's not wise to prioritize things that nobody paying you asked for.

The exceptions are of course things that don't pay at all, in which case your goal is still probably to do the best job you can under the constraints provided. If those are too tight, things get cut, or you don't sign up for it.


So what if we will? That does not mean we will be users of the products we are designing the UI for at that point. Design for actual disabilities that you can reasonably expect your users to have, such as color blindness, not the full spectrum of the human condition.

That said, I do think products should be as simple and clear as possible for a given level of essential complexity.


Countless apps do not even accommodate the users they actually have, and very obviously don't test accordingly. The non-lowest common denominator is far lower than you seem to assume.

If you think that a fancy UI rework or "please pay our subscription" screen is only confusing to people in nursing homes you are very wrong. They can be nontrivial obstacles to users who work every day, organize conferences etc.


Then you shouldn't need to test on nursing home residents.


>>> users from a nursing home, a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time, blind users, deaf users, etc.

Or we could just give the product manager, designer, and JS engineer a 5-year-old laptop with 8 gigs of RAM, at least 10 browser plug-ins, and every corporate security package...

We have gone from "move fast and break things" to moving at the speed of stupid. Slowing these three groups down might help.


Make it 4 gigs and a 1st-gen i3 and we’ve got a plan. Most of my family has old Toshiba Satellite laptops from the early 2010s that they don’t throw away because they cost a grand when they bought them.


Funny that you think that 8 gigs of RAM would be low end 5 years ago...


Why not just come out and say you hate JavaScript :)


> a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time

So long Vim or Emacs ;)

I understand that your example is somewhat tongue-in-cheek, serving illustrative purposes, but I think good UX is about trade-offs more than a one-dimensional spectrum of convenient vs. inconvenient, and you can't optimize it for the sake of stressed-out sub-90 IQ users without hurting usability for some other groups.


I kind of like this idea of making sure that any UI passes some sanity checks before it is released.


Too strict. I implore you to reconsider.


> a panel of users with sub-90 IQ who were in a stressful environment and trying to complete other tasks at the same time, blind users, deaf users, etc.

For me, the first task would be to make absolutely sure that I block any apps designed by you. Such lack of empathy in your wording proves that you cannot possibly be a decent, half-decent, or even mediocre UI designer.


“Think of how stupid the average person is, and realize half of them are stupider than that.”

— George Carlin

You get that when I was on the other side of that two-way mirror, one of the qualifications to be a test user was "can you use a mouse"? As late as 2005, getting people to navigate a basic web form was quite the challenge.

Find someone far removed from tech and ask them if they have used ChatGPT.

That IQ-90 user stressed out of their mind isn't that far from reality. Go look at the "bad/poor/low quality" content on Facebook, or YouTube, or, if you're brave, TikTok. That is the person you're writing an app for.


I don't think that people "far removed from tech" are "sub-90 IQ" (a "delicate" euphemism for "stupid").

And I don't think that people on Facebook, YouTube, or TikTok are "sub-90 IQ".

Thanks for making clear that it's what was implied by the original commenter.

That's lack of empathy coupled with sheer, crass ignorance.


I know people with PhDs who can run rings around you on "their topic" and can't cross the street without support.

I know teachers who are smart and phenomenal at their jobs whom I had to arm-twist into playing with ChatGPT, because it's not in their wheelhouse/on their radar.

The not-so-bright person who is stressed out is likely a user of your app. The same goes for the absent-minded PhD, or the teacher who is too overworked to care about your new tech widget.

These are real people. And there are a LOT OF THEM (for the average intelligence to be 100, there have to be a fair number of people UNDER IT). You can go look on Facebook/YouTube/TikTok and find them. "Stupid" people exist, and there are a lot of them. Making sure your app works for them is good for them, for your company, for your customer support costs... The whole point of usability is to get average, less technical people in and get them testing your app.

To put a fine and final point on it: after 90 IQ, the author went on to talk about blind people. Candidly, your app working for everyone with a disadvantage is just good business and shows a LOT of empathy. A point you seem to have missed in your indignation.


I'm with you on all these examples, but why is it wrong if someone is not interested in ChatGPT?


The only thing that is wrong is when UI doesn't account for these sorts of user contexts (disengaged, disinterested, distracted, disabled ...)

ChatGPT is about highlighting a breadth of experience. The world has stupid people, and smart people who will be baffled by bad UI. It has people who might not have the context that we, the tech community, have around emergent tools.

Many of us live so far inside the tech bubble that we forget how people who do "other things" approach technology. This is the gap that UI design should be bridging (and is doing a horrid job of it these days).


"sub-90 IQ", when interpreted literally, means the dumbest 25% of the population. I think it's awfully important to remember not to ignore 1/4 of the population.


… lack of empathy? Where?


> Now, older and wiser, I candidly think a lot of folks would be well served by default blue links, OG HTML submit buttons, and tables for layouts. A fair bit of modern UI is complete trash: it's the product of a designer and a product person putting the next bullet point on their resumes.

Material Design, and a lot of the related nonsense, was one of the worst things to happen to modern computing. It sure _looks_ pretty, in the way an expensive, glossy brochure looks pretty, but the usability is garbage. A good UX should signal intent and function at all times. Anything less is doing a disservice to your users.


>Jakob Nielsen was a blowhard even then. His "all links must be blue and underlined" mantra was already tired. It takes a lot for me to say this, but his pedantry at the time put peak Richard Stallman to shame!

And that'd be an understatement! I feel Nielsen spent most of his professional career calling himself a guru, mostly to boost his consultancy firm. He also reminds me of Norman, in that their contributions to the field are lofty aphorisms that don't provide any actual solutions.


So many critical websites are horribly broken, especially if you have an older computer.


Nielsen's usability heuristics, arguably his most impactful work, still hold up surprisingly well after more than 30 years. It's hard to overstate the impact he has had on my own career, as well as on the UX field as a whole (together with Don Norman, of course).

It's also sobering to read how much of his career seems obvious in hindsight, yet was shaped so much by randomness and chance (such as not taking the job at Apple).


> The opportunity cost of going without industry experience for multiple years will hinder your advancement for decades.

I suspect that goes for most creative/engineering vocations.

However, one aspect of formal education that is often missed in OJT is a very broad base and an early understanding of “the basics.”

Also, people with formal education are often able to work in a very formal, structured manner early in their career. This (IMO) is pretty important in engineering and research.

That said, I’m a high school dropout, with a GED. I ended up doing OK, but YMMV.


41 years ago, "UX" did not mean what it means today: <https://www.youtube.com/watch?v=9BdtGjoIN4E&t=9s>


His book made a big impression on me back in the day. And, to be honest, I still prefer the minimal and predictable style of website that he recommended.


Which book? There seem to be quite a few, and I wouldn’t mind picking one up, if you have one to recommend.


The one he mentions in the article: "Designing Web Usability".


Amusingly, that book had very poor usability. I forget exactly what it was about it; possibly printing the text too close to the binding?

In practice I recall it being mostly a digest of his free content on useit.com


Yes, this was the one. I think it was my first exposure to UX - probably around 2000, 2001.


Congratulations Jakob on your Liberation Day :)

Are you saying the UI is your fault ;-)

Serious question though: how come, over the last 15 to 20 years, UIs have gotten considerably worse?


They haven't; the median application or website is significantly better today than it was 20 years ago. Without any additional information, my guess is that you're thinking of some specific examples of good apps from 20 years ago and bad apps from today, and incorrectly generalizing from a selection bias.


IMHO a lot of this comes from:

1.) The move to the Web, where the browser's interface gets in the way once you start doing something fancier than just viewing basic HTML documents.

2.) Trying to make a program work well both on a small touchscreen and on a large screen + mouse + keyboard: this is literally impossible without the result being a worse experience for both.


Regarding 1: I remember the internet of the '90s and find our current internet comparatively boring in terms of UX. Back then, websites were created manually, without "abstractions" like WordPress or Wix. This forced us to think freely about UX and to hack HTML+CSS as much as possible. The browser's interface you mentioned is quite powerful. And with HTML6 on the way, it becomes even more powerful.

Regarding 2: A first-rate program utilizing responsiveness at its finest won't make compromises for different screen sizes, but it takes longer to develop. Compromises like these usually emerge from the economic circumstances of software development.


2. You will quickly end up with two programs with different features, or maybe a single program that only offers a fraction of its features depending on input/output capabilities:

Screen size is just one aspect of this: consider how different programs might be optimized for a vertical screen layout, or for the use of a smartphone's accelerometers, or might work best with precise mouse movement, or rely on the user learning keyboard shortcuts (ideally with an easy on-ramp phase) for effectiveness, or lean heavily on hovering your mouse over features for popups (which translates very poorly to press-and-hold on touchscreens - consider how rare hover text for pictures has become).

(A more extreme example would be also being able to use the software from, for instance, a monochrome, low-refresh-rate watch with only a few buttons, at which point you get the option of vibration feedback notifications that you can't expect to have on a desktop.)
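On the hover point specifically, CSS can at least detect input capabilities nowadays, so something like this sketch (selectors made up) keeps hover popups for mouse users without stranding touch users:

    /* Hover affordances only where a precise, hover-capable pointer exists */
    @media (hover: hover) and (pointer: fine) {
      .thumb .caption { display: none; }
      .thumb:hover .caption { display: block; }
    }
    /* Touch screens: show the same information without requiring hover */
    @media (hover: none) {
      .thumb .caption { display: block; }
    }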


I would be interested in the long perspective on constantly reinvented GUI toolkits, and what he thinks is progress vs. reinvention.

Early HTML was surely reinvention, but CSS gave us the first fine-grained presentation customization mechanism.

Material design? If I were a Soviet despot... Well anyway.

UI is, alas, more fashion than function. I remember the original iPhone and its realistic apps, which angered UI designers so much that we now have buttons with no visible feedback as to whether they have been clicked, and anonymous unicolor squares everywhere.


>The background is that Terry Winograd, a professor of Human-Computer Interaction at Stanford University in Silicon Valley, had invited me to lecture on some of my work in 1998. After my talk, Terry invited me to tour his lab and meet some of his graduate students. One of the Ph.D. students was a bright young fellow named Larry Page, who showed me his project to enhance the relevance of web search results.

Many of those lectures are online. I was not able to find the 1998 one he mentioned, but here is one that Jakob Nielsen gave on May 20, 1994, called "Heuristic Evaluation of User Interfaces, Jakob Nielsen, Sunsoft".

https://searchworks.stanford.edu/view/vj346zm2128

He gave another one on October 4, 1996, entitled "Ensuring the Usability of the Next Computing Paradigm", but I can't find it in the online collection, although it exists in the inventory of video recordings. However, I can't find any 1998 talks by Jakob Nielsen in this list:

https://oac.cdlib.org/findaid/ark:/13030/c82b926h/entire_tex...

Here is the entire online collection (it's kind of hard to search the 25 pages of the extensive collection, thanks to bad web site design!):

https://searchworks.stanford.edu/catalog?f%5Bcollection%5D%5...

The oldest (most interesting to me) ones are at the end (page 25):

https://searchworks.stanford.edu/?f%5Bcollection%5D%5B%5D=a1...

Here are some of the older ones that I think are historically important and especially interesting (but there are so many I haven't watched them all, so there are certainly more that are worth watching):

R. Carr, GO, "Mobile Pen-based Computing", October 21, 1992:

https://searchworks.stanford.edu/view/jm095fy2355

Cliff Nass, Computers Are Social Actors: A New Paradigm and Some Surprising Results [November 4, 1992]:

https://searchworks.stanford.edu/view/jh333ht2903

Unistrokes: Pen computing for experts, David Goldberg, Xerox PARC [November 5, 1993]:

https://searchworks.stanford.edu/view/gw943dj4628

Putting "Feel" into "Look and Feel": Interaction with the Sense of Touch, Margaret Minsky, Interval Research [October 1, 1993]:

https://searchworks.stanford.edu/view/kk938rh3332

Harry Saddler & L. Alba, Apple & Albert Farris, "Making It Macintosh: Interactive Media, Interpersonal Design" [October 15, 1993]:

https://searchworks.stanford.edu/view/gs214qy7233

Design of New Media Interfaces, Joy Mountford [May 12, 1993]:

https://searchworks.stanford.edu/view/rm437wv9779

Animated Programs, Ken Kahn, Stanford CSLI [December 3, 1993]:

https://searchworks.stanford.edu/view/fk686sy4072

The Magic Lens Interface, Eric Bier, Xerox PARC [December 9, 1994]:

https://searchworks.stanford.edu/view/ss855db5288

Proactive and Reactive Agents in User Interface, Ted Selker, IBM [April 29, 1994]:

https://searchworks.stanford.edu/view/pv655pr7635

Computing in the Year 2004, Bruce Tognazzini, Sunsoft [February 18, 1994]:

https://searchworks.stanford.edu/view/nf237zt2615

Andy Hertzfeld, General Magic, "Magic Cap and Telescript" [January 21, 1994]:

https://searchworks.stanford.edu/view/mp885xf4366

An Academic Discovers the Realities of Design, Don Norman, Apple Computer [December 2, 1994]:

https://searchworks.stanford.edu/view/dd753rg7554

Interfacing to Microworlds, Will Wright, Maxis [April 26, 1996]:

https://searchworks.stanford.edu/view/yj113jt5999

I was working with Terry Winograd at Interval Research at the time of this talk, which he invited me to attend. I asked Will some skeptical questions, and his amazing in-depth answers convinced me to go to Maxis to work with him on the "Dollhouse" game he demonstrated. I uploaded the video to YouTube, proofread the closed captions, and updated my description of the video:

Will Wright - Maxis - Interfacing to Microworlds - 1996-4-26:

https://www.youtube.com/watch?v=nsxoZXaYJSk

>Video of Will Wright's talk about "Interfacing to Microworlds" presented to Terry Winograd's user interface class at Stanford University, April 26, 1996.

>He demonstrates and gives postmortems for SimEarth, SimAnt, and SimCity 2000, then previews an extremely early pre-release prototype version of Dollhouse (which eventually became The Sims), describing how the AI models personalities and behavior, and is distributed throughout extensible plug-in programmable objects in the environment, and he thoughtfully answers many interesting questions from the audience.

>This is the lecture described in "Will Wright on Designing User Interfaces to Simulation Games (1996)": A summary of Will Wright’s talk to Terry Winograd’s User Interface Class at Stanford, written in 1996 by Don Hopkins, before they worked together on The Sims at Maxis.

Will Wright on Designing User Interfaces to Simulation Games (1996) (2023 Video Update):

https://donhopkins.medium.com/designing-user-interfaces-to-s...

>A summary of Will Wright’s talk to Terry Winograd’s User Interface Class at Stanford, written in 1996 by Don Hopkins, before they worked together on The Sims at Maxis. Now including a video and snapshots of the original talk!

>Will Wright, the designer of SimCity, SimEarth, SimAnt, and other popular games from Maxis, gave a talk at Terry Winograd’s user interface class at Stanford, in 1996 (before the release of The Sims in 2000).

>At the end of the talk, he demonstrated an early version of The Sims, called Dollhouse at the time. I attended the talk and took notes, which this article elaborates on. [...]

Bringing Behavior to the Internet, James Gosling, SUN Microsystems [December 1, 1995]:

https://searchworks.stanford.edu/view/bz890ng3047

I also uploaded this historically interesting video to YouTube to generate closed captions and make it more accessible and findable, and I was planning on proofreading them as I did for the Will Wright talk, but haven't gotten around to it yet (any volunteers? ;):

https://www.youtube.com/watch?v=dgrNeyuwA8k

This is an early talk by James Gosling on Java, which I attended, appearing on camera to ask a couple of questions about security (44:53, 1:00:35). I also spotted Ken Kahn asking a question (50:20). Can anyone identify other people in the audience?

My questions about the “optical illusion attack” and security at 44:53 got kind of awkward, and his defensive "shrug" answer hasn't aged too well! ;)

No hard feelings of course, since we’d known each other for years before (working on Emacs and NeWS) and we’re still friends, but I’d recently been working on Kaleida ScriptX, which lost out to Java in part because Java was touted as being so “secure”, and I didn’t appreciate how Sun was promoting Java by throwing the word “secure” around without defining what it really meant or what its limitations were (expecting people to read more into it than it really meant, on purpose, to hype up Java).


Nitpick: it’s interesting how a UX website doesn’t let me resize its text, since the font size is likely set via viewport units.


Huh, which browser? Scales flawlessly for me in Firefox. Actually one of the smoothest-scaling websites I've encountered in ages, wow. Nice!


iPhone Safari.

Judging by the comments, they set the text via viewport units only on smaller viewports.
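If so, the pattern would be something like this (a guess, not their actual stylesheet). Text sized purely in vw tends to stay the same apparent size under full-page zoom, because zooming shrinks the viewport’s CSS width by the same factor:

    /* vw-only font size: user zoom/text-size controls barely affect it */
    @media (max-width: 600px) {
      body { font-size: 4vw; }
    }
    /* Mixing in a rem term keeps the text user-scalable, e.g.: */
    /* body { font-size: calc(0.75rem + 2vw); } */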


Looks like px if my inspector isn't lying to me. Ctrl-+ and Ctrl-- work fine for me on desktop/chrome.


Firefox has always been even better because you can easily scale only the fonts.


If the website uses viewport units, Firefox can’t do it either, IIRC. My guess is that the website uses vw only on “mobile”.


Nice! Thanks for sharing your wonderful journey in UX.


All you need is Nielsen, Norman, and Tufte.


How can you be happy that you did not join Apple in 1990?





