
Maybe the common approaches and models are what's wrong. To me, if Emacs could render HTML the way a browser can, it could cover almost every desktop user interface use case.

There have been other systems with similar approaches to simplifying desktop user interfaces; Enso comes immediately to mind. But I'm not aware of any system that has completely replaced the notion of sandboxed applications running in their own windows, each with its own menu system and widget layout.

With web applications, things have become many times worse. I absolutely hate all the different UI paradigms I must deal with on a daily basis.

There's empirical evidence that I'm not in the minority, either. I don't have the reference offhand, but I was reading that a very large percentage of the Chinese population uses WeChat for many common daily tasks other than chat. My claim: this is because, consciously or not, they don't like dealing with multiple, varying user interface paradigms.




"To me, if Emacs could render HTML the way a browser can, it could cover almost every desktop user interface use case."

I mostly browse the web using emacs-w3m in a terminal. It works great, with the exception of JavaScript, which it can't handle. For that, I reluctantly switch to a traditional browser.

emacs-w3m is definitely not up to handling "every desktop user interface use case," but it's very nice for browsing the web in "plain text mode," as it were, viewing the occasional image (or more, if you're using GUI Emacs), and having full integration with Emacs.


Yeah, I'm aware of w3m, but it would be nice to be able to just render HTML payloads in an Emacs buffer. About a year ago I saw a demo on YouTube of an embedded WebKit in Emacs, but I'm not sure what the progress is there.

What I envision is something very close to Gmail's interface, but instead of just email, the interface presents a list of items. Each item is of a particular type, and each type has an associated renderer. A command box facilitates command entry, and the result of each command is a list of these items. When an item is selected, a view of the item is rendered. Items can also have tags associated with them, so that we can filter item sets by tag. A resulting set of items can be piped to a subsequent command to produce another set of items.
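The model above can be sketched in a few lines. This is purely illustrative; all the names (Item, RENDERERS, render, filter_by_tag, pipe) are hypothetical, not any real API:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    type: str                          # determines which renderer is used
    payload: object                    # the underlying data (text, a file, a mail, ...)
    tags: set = field(default_factory=set)

# One renderer per item type; the registry would grow as types are added.
RENDERERS = {
    "text": lambda item: str(item.payload),
}

def render(item):
    """Render an item using the renderer registered for its type."""
    return RENDERERS[item.type](item)

def filter_by_tag(items, tag):
    """Filter an item set by tag, as described above."""
    return [i for i in items if tag in i.tags]

def pipe(items, command):
    """Feed one command's result set into the next command."""
    return command(items)
```

The key property is that every command consumes and produces lists of items, so commands compose the way shell pipelines do.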

Item renderers don't need to be read-only, either. For example, I could issue a file-search command that returned a set of files. By selecting one, if an editor renderer existed, I could edit the result in place and issue a save command on the item.
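A read-write renderer along those lines might look like the following sketch, with items as plain dicts; the function names (file_search, edit, save) are hypothetical:

```python
from pathlib import Path

def file_search(directory, pattern):
    """A file-search command: return a list of file items matching a glob."""
    return [{"type": "file", "path": p} for p in Path(directory).glob(pattern)]

def edit(item, new_text):
    """The editor renderer: change the item's content in place."""
    item["content"] = new_text

def save(item):
    """The save command: flush the edited content back to disk."""
    Path(item["path"]).write_text(item["content"])
```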

I could take the paradigm even further. I could issue something like a project command that returned a set of projects, where a project is really just a tag on an item whose metadata points to a directory path. After selecting one of these project items, I might be presented with a rendered view of the directory's contents. From there I might execute a build command on the item, and the command would use the item's metadata to build the project. The output would be a list of one item containing the build results.
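That project/build flow can be sketched the same way. Again everything here is hypothetical (items as dicts, a stand-in builder function instead of a real toolchain):

```python
def project_command(items):
    """The 'project' command: select the items tagged as projects."""
    return [i for i in items if "project" in i["tags"]]

def build_command(item, builder):
    """Build using the path in the item's metadata; the result is
    itself a one-item list, so it can be piped onward."""
    output = builder(item["payload"]["path"])
    return [{"type": "build-result", "payload": output, "tags": set()}]

# Usage, with a stand-in builder in place of make or a compiler:
items = [
    {"type": "project", "payload": {"path": "/src/app"}, "tags": {"project"}},
    {"type": "note", "payload": "todo", "tags": set()},
]
selected = project_command(items)
result = build_command(selected[0], builder=lambda path: f"built {path}")
```

Because the build output is just another item list, tagging, filtering, and rendering apply to it unchanged.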

I've been thinking about this for a long time, and the only apps I can think of that don't fit this paradigm are those that actually require a mouse or pen tablet for input, like Photoshop or AutoCAD.

There are various technologies that nibble around the edges of such a system, but nothing that really implements it fully. Emacs comes close, the command line in a terminal comes close, Gmail exhibits aspects, Enso exhibited aspects, but nothing puts all the pieces together.





