Emacs-w3m is tied with lynx as my favorite browser (with telnet and wget both in second place).
Using emacspeak with it allows me to keep my eyes on the lab table (in case something goes exothermic too quickly) while still having the ability to pull up references. Same about reading [hearing] HN while in sketchy areas.
The other post about invoking w3m from lynx is worth investigating if you are not familiar with it. Look for "EXTERNAL" in your .lynxrc. I especially like having it "git clone" the page or link I'm on. Reality is that I heavily abuse the EXTERNAL stuff.
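For anyone curious, the directives look something like this (on many setups they live in lynx.cfg or a personal file included from it; the particular commands here are hypothetical examples):

```
# Format: EXTERNAL:<url-prefix>:<command, %s = the URL>:<TRUE to allow on the current link>
EXTERNAL:http:w3m %s:TRUE
EXTERNAL:https:git clone %s:TRUE
```

Once defined, pressing the comma/period EXTERNAL keys on a page or link offers these commands.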
I've been noodling on adding functionality to w3m and lynx so there is a separate fetch-page func that reports a different User-Agent header (e.g., "Mozilla"). I've encountered many pages that don't allow access until I change the "lynx-*" header (bastards).
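For what it's worth, both browsers can already spoof the header without code changes; a sketch, with the UA string being just an example:

```
# ~/.w3m/config
user_agent Mozilla/5.0 (X11; Linux x86_64)

# or per-invocation:
#   w3m -o user_agent='Mozilla/5.0 (X11; Linux x86_64)' <url>
#   lynx -useragent='Mozilla/5.0 (X11; Linux x86_64)' <url>
```

A separate fetch command would still be nicer than flipping the setting globally, of course.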
Semi-OT: I'm addicted to lynx's multi-bookmarks feature (26 different bookmark files to ease organizing your links), and about 15 years ago I wrote some elisp so emacs-w3m has the same functionality (and the same bookmark files).
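The lynx side of that is a one-line toggle, if anyone wants to try it (goes in lynx.cfg):

```
# lynx.cfg -- enable the A..Z multi-bookmark files
MULTI_BOOKMARK_SUPPORT:TRUE
```

After that, the bookmark commands prompt for which of the lettered files to use.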
> I've been noodling on adding functionality to w3m and lynx so there is a separate fetch-page func that reports a different User-Agent header (e.g., "Mozilla"). I've encountered many pages that don't allow access until I change the "lynx-*" header (bastards).
I have the ability to pop into w3m from lynx[0] to view tables when necessary. I also have the option to call the x-www-browser script from lynx for the full graphical/js/css treatment, such as posting this comment from my account. But most of the time, lynx is more than adequate for my modest needs...
Something like Plan 9's Mothra browser for the Unix console would be great. Support for images, mousing, a small command language and good shell integration.
I like and use w3m a lot (I found the link and posted it here :), but maybe it already has a few too many settings to mess with. Then again, the lynx browser has more than 100, so YMMV.
It has a few too many settings/features for my taste. Unfortunately, removing things doesn't attract users; adding things does.
I never used Mothra, but I've heard a lot of good things about the Plan 9 tools. They have the advantage that they could start afresh. I'm curious how much work it would be to port it to a *nix system.
Yes, the defaults are a bit strange. Might be a little late to change them now.
IMHO, the most important thing to improve the user experience is to enable link numbers. With link numbers enabled you can[0] jump to a link by typing the number and then LINK_BEGIN (default '[').
[0]: Technically you can do this without enabling link numbers, but you need to count the links yourself.
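The toggle lives in the options panel (press o), or can be set directly; this is how it looks in my w3m config, assuming your build has the option:

```
# ~/.w3m/config -- show [1], [2], ... before each link
display_link_number 1
```

With that on, e.g. typing 12 then [ jumps straight to the 12th link on the page.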
How does this do with sites that won't render without JavaScript? I browse with NoScript and everything blocked by default. I'm constantly having to turn on JavaScript for sites that are blank until you do.
It's a nice sentiment, but these days it's almost impossible to conduct business with JavaScript disabled. At least business in the parts of the economy I work in.
The moment somebody sends me a Google Drive link (about three times a week), w3m goes back on the shelf and I have to pick up a modern standard-compliant browser.
I wrote that remark, and I was not. I’m not even using w3m right now, and observe that the comment I replied to was more general, relating to all browsing without Javascript.
Extreme positions and interpretations are for nutters.
In the specific case of Google Drive access, I like the Ruby client.
The Ruby Client is great for data storage and retrieval, but I meant editing Drive docs (sheets, docs, etc.). Colleagues don't appreciate it when I drop their doc into another editor instead of commenting / recommending corrections in-place
Oh, for sure. I don't perceive those as websites, though. Sheets and Docs et al are basically thick-client office applications that happen to be written in Javascript. In this circumstance I just use Chrome since it's Google's official runtime. Or the "native" apps on a tablet, which I'd wager are just a packaged variant of the same code.
I see this as congruent to the old-school Lotus Notes/Domino architecture, and not really about the web at all.
There's no value in being an ideologue about it. My reasons for browsing with JS disabled-by-default are fourfold: 1. to defeat many/most active tracking methods, 2. as a sort of passive ad blocker, 3. because it's very often much faster, and 4. for written content JS dependence is moderately correlated to a poor S:N ratio. None of that has much bearing on an office application.
Actually, the idea that I so can easily launch a full browser in context is what has me wanting to give this a shot.
I would love a less cluttered and less resource-intensive web experience. I also know it is unrealistic in 2022 to have a normal day of browser activity without needing JavaScript and/or CSS.
If I can use a text browser much of the time and easily switch to a full browser only when I need it, that sounds like the best of both worlds. Lots of what I browse would likely be even nicer text only. This very site comes to mind.
> If using a regular modern browser is a superset of the experience of w3m, why do I want to add the extra level of inconvenient indirection?
Because using a 'regular modern browser' is not a superset; it also lacks certain things, such as freedom from JavaScript and freedom from some forms of tracking, and it is incompatible with the terminal or other text-based environments. Some folks find those things to be worthwhile.
If it is an inconvenient indirection for you, then w3m is probably not the right tool.
The linked article mentions some reasons. I use it because it fits my console-focused work style. It's blazingly fast, removes all the distractions like unnecessary images and ads, doesn't require me to take my hands off the keyboard, works very well if I'm working on a mobile hotspot, can be used while being ssh'd into a server... and because opening the site in a regular modern browser is only a key press away.
Getting docs into an efficient "IDE" is still nice. Just yesterday, I was using nov.el to occasionally reference Blandy et al.'s Rust book in Emacs, faster than I could the PDF or in a dedicated ebook reader program.
One example: When I open this page here, in eww all comments have the same left margin and I have no idea what is a reply to what, but with emacs-w3m the nesting structure is visible.
Strange, I'm trying out standalone w3m right now and also don't get nesting. Maybe the emacs version is different. I'm guessing HN must indent with CSS (which w3m pointed out it does not attempt to support) even though the comments are in a table.
What I really want is something inbetween this and a full fledged browser. Something that will basically allow me to browse the web in Reader mode, with support for images and native video, but without any JS involved.
I want a graphics terminal (as opposed to a text terminal) and a browser that opens in that terminal. The terminal should not run a windowing system. But the terminal itself should be able to run inside a windowing system.
I have been thinking of using a full headless Chrome or Firefox (so all pages work) and sharing that with a command-line / lite browser. So we render and then re-render the page. The full browser would run on a remote computer somewhere and the lite browser would connect there. A Chrome/Firefox proxy, basically.
I like this idea. How do you know what to render, though? Do you simply copy-paste the page source (but then what’s the point?) or do you filter the source (but then how do you know what to filter out?)
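A crude local sketch of the idea, assuming chromium and w3m are installed: let the headless browser execute the JavaScript, then hand the resulting DOM to the text browser. A real proxy would run the first stage on the remote machine, but the data flow is the same.

```shell
#!/bin/sh
# render.sh <url> -- hypothetical sketch, not a finished proxy.
# Headless Chromium loads the page, runs its scripts, and prints the
# final serialized DOM; w3m then re-renders that DOM as text.
url="$1"
chromium --headless --disable-gpu --dump-dom "$url" | w3m -T text/html
```

The filtering question remains open, though: --dump-dom gives you the post-JS page, but deciding what to strip before re-rendering is the hard part.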
You could use uBlock Origin and disable JavaScript by default; it's available on mobile. When necessary you can also re-enable it for specific sites. Works pretty well for me.
Also neat: you can disable media elements over a certain size (default is 50kb I think), which is a godsend if like me you have limited mobile data.
I'd recommend qutebrowser. It's not exactly what you're looking for out of the box, but it is very customisable. There's a learning curve, but once you're used to it there's no need to ever reach for the mouse.
uMatrix on Firefox gets one pretty close to that ideal, and makes it easy enough to enable JavaScript (and similar stuff) selectively until webapps and broken-by-design websites work.
I strongly recommend it to the technically-inclined.
Just trying this out for the first time... Posted from vim inside w3m. Just took an 'apt install w3m' and 'w3m news.ycombinator.com'. Many of the default key bindings are vim-like; took me a while to realise I had to use 'enter' to edit a form field rather than 'i' though.
Only disadvantage I can tell for HN is lack of comment indentation beyond one level... which might be a deal breaker.
> Only disadvantage I can tell for HN is lack of comment indentation beyond one level... which might be a deal breaker.
HN indentation works with spacer gifs, so you'll have to enable inline images for that to work. (Press o and tick YES for the Display inline images option.)
EDIT: if you're using Debian you'll also want to install the w3m-img package. Or change the Inline image display method to img2sixel (and install libsixel) if your terminal supports sixels, or to kitty (and install imagemagick) if you're using kitty.
(If your w3m version supports Inline image display method, that is. It's a relatively new feature.)
The option was already enabled but not working; the w3m-img package fixed it... I was only expecting placeholders and was very surprised to see real images rendered. Not sure how it's doing this since I'm using urxvt, which doesn't support images AFAIK... is this layered on top via Xorg?
Yeah, on Xorg it just draws a window on top of the terminal. It is a hack, but one that works pretty well; it also supports the framebuffer and supposedly even Windows.
In fact many other terminal tools use w3m-img for image display on the terminal. Though both sixel and kitty protocols are nicer (kitty being as close to ideal as it gets), they're unfortunately not as widely supported.
The picture gallery is apparently under construction still. I wonder if it will ultimately contain a notice advising visitors to proceed by using a normal browser.
w3m displays images using w3mimgdisplay by default, which is kind of a hack but works pretty well with xterm. It also supports both the kitty and sixel protocols;[a] I wouldn't call either of those a hack.
In fact I find it very strange how they're introducing the only terminal web browser with inline image support as one incapable of displaying inline images, then writing an entire paragraph about why you don't need inline images in the first place.
[a]: Only in the Debian fork, mind you. The website seems to link to an unmaintained version for some incomprehensible reason.
Some versions and forks of links have JS support, which differentiates them.
Lynx and w3m are extremely similar. w3m originated in Japan and in the 90s had much better support for the language. Lynx has Unicode now, but I'm not sure whether its Japanese text rendering is fully on par yet.
I'll also throw in browsh as another sub-category: it uses a headless instance of Firefox (packaged separately, allowing independent updates) in the backend for full web support but presents the page in text mode.
There is a we! It's all of us here now, the audience and/or potential users. We may not all make the same decision in the end but it'd be exceedingly odd if none of us were expected to have a shared reason backing our choice. You still have a valid question if you ask just for "I" but you'll end up with a thread about your specific case not a thread about the target users for each.
But no one individual need represent the group; in fact the group need not be represented at all. The question posed is why we should choose, not why we must choose for everyone and announce the choice here. E.g., we should choose links over lynx/w3m if JS support is a hard requirement. That doesn't mean we all have JS support as a hard requirement or should all agree on a single choice.
I have now done so. Unless you were hoping I'd notice something else, could you perhaps clarify what you meant by "not one any individual can accurately represent to any reasonable degree of confidence", if that's where you feel the disconnect was? I'm not quite sure I followed what you meant there. If not, maybe you could clarify the particular part you felt I needed to reread?
I've also got a reply to the original question by the GP that might clarify what I've been talking about by example, if that's what was unclear.
Worth mentioning that both w3m and lynx are horribly insecure. Although I guess it’s very unlikely that anyone would actually bother to exploit such niche software.
They both parse untrusted content without any sandboxing.
I typically send content through rdrview[0] before piping through w3m-sandbox[1], which should be pretty safe. I also only browse one site per w3m instance.
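Concretely, the pipeline looks something like this; the flags are from my memory of rdrview's man page, so treat them as assumptions, and w3m-sandbox is the wrapper script shipped alongside w3m:

```shell
# Fetch, extract the readable article as HTML (-H, with -u setting the
# base URL for relative links), then render it in the sandboxed w3m.
url='https://example.com/article'
curl -s "$url" | rdrview -H -u "$url" | w3m-sandbox -T text/html
```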
Do you know of any current vulnerabilities in w3m? The last one I saw was in 2010, and it was fixed. Is it just not studied enough, so there could be many undiscovered vulnerabilities? Brings up an interesting question of how to evaluate unknown vulnerabilities to determine whether something is horribly insecure.
It’s not studied because nobody cares about w3m or Lynx. I’m just making these statements based on my own experience with fuzzing both a few years ago.
Not that difficult, if still laborious, when following the construction of IETF HTML and W3C HTML 4.x as an SGML vocabulary ab initio, which already covers the vast majority of what people coming from XML have trouble understanding, such as tag inference (e.g., automatically ending span-level markup on block-level content). Plus some irregularities introduced with URLs, and with CSS and JavaScript to keep those from being displayed as content in old browsers. Then add some more oddities introduced with WHATWG HTML 5.1+, arguably by accident and due to ignorance and the lack of a formal grammar.
If you want to try w3m in Docker (although without support for images), I found it packaged in only 14 MB:
docker run -it --rm corbinu/alpine-w3m https://news.ycombinator.com
Press [tab] to navigate from link to link, [enter] to follow a link, [⇧+B] to go back, and [q] to exit. You can also run it with --help instead of a URL for a help page.
From the last commit:
> So far, I've been removing tons and tons of stuff (and will continue), but now I can begin shaping up what I'll replace this with.
Unfortunately he only removed stuff and stopped there.