Dillo: A Tiny Graphical Web Browser (dillo.org)
124 points by vmorgulis on Oct 20, 2015 | hide | past | favorite | 69 comments



I love Dillo. It reminds me of when I was ~14 back in the mid-to-late 90s and learning C++ with a copy of Visual Studio 97 my dad "got me from work". Back then things were very different and teenage me wrote a pretty nice browser from scratch with the exception of some networking code.

Back then a browser was just another application on your computer, almost like a calculator or something. It was just a tool. Now the web browser is probably the most complex bit of software on your computer. Crazy how far things have come in the past 20 years.

Also it is great to see FLTK in use; it is a nice little toolkit, albeit not the prettiest :)


Personally I dislike this trend: it pushes the onus of application development into the domain of web developers, who have shoddy tools (javascript?) and have to program for multiple platforms (Chrome/Firefox/IE), and, at the end of the day, nobody is signing your code (not like native programs).

At least we finally have a system that is multiplatform, but my web browser is seriously hungry. Added to that, most websites dictate how they will display their information to me; I seem to get less choice year on year. (And yes, I know I can plug my browser full of add-ons which give me back some control, but if JS doesn't run, for instance, I don't even see the page in many cases, and this only makes the browser hungrier.)


You're right, the trend has its downsides. But there are some pretty big upsides too. Distributing a web app is such a liberating and lovely experience compared to trying to share native code. No crap for the recipient to install, no installers to write, no figuring out library and platform dependencies only to have everything break anyway. Sharing my C++ or python projects, even with small groups of friends, even with other developers, even on a single platform, has always been a complete nightmare, not to mention larger audiences on a variety of platforms.

> and they have shoddy tools

I don't intend to argue with you, but my personal experience is one of discovering recently that javascript is a decent language and the browser dev tools, especially Chrome, are bigger and better than most of the C++ dev tools I've ever used. And they come with the browser, no crazy developer packages to install, and you can always debug on anyone's machine anywhere.

It is true that web developers usually have to make browser-specific adjustments, but it's fairly minor these days, an occasional polyfill you can find online, and I don't think this even compares, not even a little, to the issues you have to deal with when writing native cross-platform code (mac/win/linux).


If I could have my ideal setup, rather than web applications we'd have a sandbox installed on our computers that would run applications that could be downloaded on the fly. Something similar to Java Webstart but without proprietary nonsense that plagued Webstart. That way, the web could stay the web and distributed applications could stay distributed applications and we wouldn't have to deal with the complexity or the compromises needed in using a document language to define application GUIs.


I'm happy for web-apps, they're clearly a good idea.

I'm less happy that everyone treats their webpage as an app, and thus crams it full of huge amounts of code, when really they're mostly serving text and images, with only little bits of javascript needed.


I love the web but I hate to work in the web. It is a shitty experience. Getting my data in and out can be a pain in the ass and slow. There is very little proper integration between services, for obvious reasons. On the desktop I can pump data around in memory to other applications lightning fast, and if I need to do something I can't do locally then I pump it into a cluster (cloud) service to crunch it.

Sure there are some desktop apps which suck as much as their web alternatives (Outlook for email is a great example) but overall I would much rather work with a dozen desktop apps than a dozen web services.

And then all of the other things you mentioned come into play such as code signing, no control over updates which might break my process, etc.


added to that most websites dictate how they will display their information to me

Yet you say you miss native apps, which absolutely dictate everything about how they work?


For native apps:

* CLI tools present in my terminal's specified fonts and colours. Near total control. The main exceptions are ANSI escapes, and even those I can limit to black-and-white -- usually normal, bold, and reverse-video.

* Most desktop apps integrate into some desktop environment (or environments) for font size(s), colours, etc. OSX is actually something of an exception, which I realised when trying to see how I might set that up for a visually disabled friend.

* Even where desktop apps don't have systemwide configuration tools, you're generally seeing the same toolkit (or limited set of toolkits) in use. Most of the experience is highly uniform.

* Straight-up text presentation is typically quite good. Far superior to Web tools.
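To make the ANSI point concrete, here's a minimal sketch of the three attributes mentioned (normal, bold, reverse-video); the escape codes shown are standard, and \033[0m resets back to the terminal's defaults:

```shell
# Print the three ANSI attributes mentioned above: \033[1m is bold,
# \033[7m is reverse-video, and \033[0m resets to the terminal default.
printf 'normal \033[1mbold\033[0m \033[7mreverse\033[0m\n'
```

Since the terminal itself interprets these, the user's configured fonts and colours still apply, which is the "near total control" point above.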


A lot of native "apps" have either very flexible configuration, or some sort of scripting/plugin framework. Maybe not iTunes, but things like foobar2000 or conky.


A few do. But most do not offer the incredible user presentation freedom the web does.


In choosing between presentation freedom and a sane GUI programming model, I'd choose the latter, especially given that the 'presentation freedom' is problematic when I want my desktop to have a common UI theme. Taking into account that modern GUI toolkits actually provide presentation freedom similar to the web without giving up native widgets, I don't see why anyone would prefer web GUIs from a programming perspective.


> modern GUI toolkits actually provide presentation freedom similar to the web

They let you change colours, typefaces and shapes slightly. You could do that with Windows 1.

That's not the same as what CSS allows (complete reconfiguration).



you're right!

I live in the land of UNIX and commandline, I was not talking about GUI applications. But I see why you interpreted it that way, I apologise.



And in my limited tries Netsurf is faster and much much better at displaying modern websites (not perfect of course).


Weird, I felt a huge latency gap between the two. Of course Dillo renders things naively, so it's almost an immediate mode, but it also regularly displays things under 0.5s[1], which feels impressively good.

[1] if not lower... I can't time it, but to describe it I'd say, rendering is done when the Return key is up. Seriously nice.



surf provides a minimalist interface but isn't really in the same class as Dillo and NetSurf. It's a full-fledged WebKit engine, while these two use their own custom, lightweight rendering engines.


Good point. I should give Dillo a try.


Got the stable release from the Ubuntu repositories with apt-get.

Most websites work, but are visually broken. Google seems to be working okay.

But the speed is super. It is like Firefox, Chrome and IE are these big slow monsters and Dillo is the little hero running around them.

Dillo shows pages; others process them.


That's their objective: http://www.dillo.org/funding/objectives.html

It does not work for "apps", but for reading documents (the initial purpose of the web) it's excellent.


Dillo by default doesn't show pages if they're being served up using SSL:

http://www.dillo.org/FAQ.html#q12

> First of all, beware that this is a prototype in alpha state. It will only provide for very basic web page retrievals, POST and GET. There is no certificate caching and NO AUTHENTICATION performed.

> It's disabled by default.

In my mind that's a pretty big defect.


Something funny: even typing a URL without suggestions felt good. The locality frees my mind. It's "just" an HTML renderer. Nothing else.


The stable release from the Ubuntu 15.04 repositories just gives me a segmentation fault. It's a shame that the bug tracker is currently broken. Gonna try with the Ubuntu devs.


Also don't forget Links browser with the '-g' option (text-based with graphics!). Super fast, with simple keyboard controls, or mouse if you really need to.


I find lynx/terminal style browsers easier to read. But one thing I did notice was that they were slower than some of the graphical ones (the last time I tried). So I ended up modding my GUI web browsers for readability.

I'd like to see a speed comparison.

You'd hope a browser where you disable JS, CSS and images would be smart enough to not fetch or process the unneeded assets.

I have a 1.4Mb connection at home, and many sites are painful to use, to the point I just don't bother, because of lavish asset loading, with advertising etc.

Most of my web reading is suspended to the future: add to queue -> process page to text -> send to e-reader, partly as it's more comfortable, and also because of sluggishness in sites/pages.
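A crude shell sketch of the "process page to text" step, assuming the HTML is already fetched; the sed one-liner here is only a stand-in for a proper article extractor (Readability, pandoc, etc.), since naive tag-stripping breaks on scripts, comments and entities:

```shell
# Strip tags by replacing them with spaces, then squeeze runs of spaces.
# Placeholder for a real extractor; assumes already-fetched HTML.
html='<html><body><h1>Title</h1><p>Some article text.</p></body></html>'
printf '%s\n' "$html" | sed -e 's/<[^>]*>/ /g' -e 's/  */ /g'
```

The output is the bare text ("Title Some article text."), which could then go to whatever send-to-e-reader step the queue uses.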

On faster connections/faster computers it's less of a pain. Mobile is generally a world of pain (for me at least)!


Do you use custom scripts for that reading workflow, or something more generally available? Could be a nice partial solution for procrastination.


I didn't really want to flag a specific e-reader. And was hinting at something more general. As you say I'm sure you could bolt something together, like I have been meaning to.

At the moment it's as boring as a readability bookmarklet, which then sends the parsed HTML to a free Kindle address. Readability will accept URLs to process in emails, but this doesn't work for me; I forget why. Also, readability sometimes borks and I never get the page. And sometimes readability can't read something; it just comes out blank. Probably a JS-heavy page or something. Managing readability archives and searching through them isn't that great. I'm not really a fan of the readability web browser interface. I haven't tried the apps. But it does the job for free, so I mustn't grumble.

I'd rather email/bookmark links, to process later, to then read at my leisure. And also text index them. Something to improve searching. Also the ability to rate or summarise what I've read.

I get articles that, upon reading, I don't like, so chaff does get through. I could do with applying feedback: flagging bad (as in I don't like) sites, bad authors etc. One problem is that I can't do the feedback there and then from my e-reader when I'm in the moment. But actually it might be a good reinforcement tactic to come back and evaluate later. It would be nice to then share good reads.

An email address for reads is good, as you can always ask others to send you stuff. Or perhaps forward an email containing a link to your reading address.

I have cognitive difficulty reading from a browser, but am fine with e-ink. Plus it doesn't require an internet connection. It's comfortable. I do tend to neglect my book reading though, which is a bit of a shame. Oh and I have a mountain of articles to read, currently about 1200.

With what comes through with readability, sometimes the author is missing. Probably meta data missing. And not having dates and other meta data available can be annoying. Having a datestamp of when I bookmarked it for reading might be nice.


Oh, there is Calibre, that I have been meaning to try and use. But it's nice to be able to just turn on the kindle (as I'm in the shower or something), and have it download latest reads before I leave the house. Rather than fussing about connecting the machine to another machine etc.


If you can leave Calibre running, it will process RSS subscriptions in the background on a schedule.


I use pandoc to turn html into epub. Something like the following script could be used.

#!/bin/sh

curl -sL -- "$@" | pandoc --standalone --smart -T '' -o - -f html -t epub


What does the $@ stand for? Readability strips sidebars, comments etc from web pages. But I'm sure it's leaning on free libraries. I assume pandoc doesn't do that?


$@ expands to the command line arguments of the script, that is, the URL you pass to it.

Pandoc parses the HTML, converts the parts it can to its internal document model, and converts this into EPUB. It doesn't do any intentional stripping, but some things get thrown out because they are not part of its document model.
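A quick illustration of this, assuming POSIX sh (`set --` here just simulates invoking the script with two URLs):

```shell
# "$@" expands to every argument the script was invoked with; quoting it
# keeps each argument as a separate word, even ones containing spaces.
set -- 'http://example.com/a' 'http://example.com/b'   # simulate script args
printf '%s\n' "$@"                                     # one URL per line
```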

Pandoc is not a single purpose application, but rather the document format swiss-army knife. [0]

[0] http://pandoc.org/


I used to use this as my documentation browser. It's ideal for the purpose: opens instantly, very fast and responsive, and typically HTML docs don't need anything fancier than Dillo can handle.



Looks like they have their own plugin framework. I've been watching the Dillo project for some time; in 2005 I had feared it was dead, but it has been remarkably unkillable, thank goodness!

Their change log is quite comprehensive:

http://hg.dillo.org/dillo/file/tip/ChangeLog

Refreshingly for me, they have tried to do a lot of documentation, which can be found here:

http://hg.dillo.org/dillo/file/devdoc


This reminds me of Midori[1], which I loved using while I was on ElementaryOS[2]. It was such a refreshing change from years of using bloated Chrome and Firefox (which they still are, even more so now).

[1] http://midori-browser.org [2] https://elementary.io/


Which os do you use now? Why did you change?


I use OSX now.

That's what I used at home for the last ~4 years, but at my last job I was using Ubuntu and Windows. In my current job (~5 months) I was initially given a Dell laptop, on which I had set up Elementary, and after some time I was given a MacBook Pro. So that's how I changed. Also, Elementary wouldn't really be called stable (and they don't claim so), at least as of a few months ago, but I still liked it.


One letter away from something completely different. Anyone ever thought about renaming this? I had to look twice...


Actually, it's two letters away. PilloW


You're right. I too was disappointed. AFAIK, not a single pillow has made it to the front page of HN. And we're all still waiting.

They should definitely rename to avoid further confusion (not to mention the false sense of hope).


https://en.wiktionary.org/wiki/dillo ?

Seems reasonable, not like it's called "the gimp" or something. Perhaps it's from "armadillo"?


Gimp is a thing anyway, you dirty minds. http://www.gimp.org/


About 10 years ago I got an old, discarded MIPS-based HP workstation from work. Dillo was the only browser I could get to compile on that box's Gentoo distribution at the time; I remember it being really fast but not being able to render a lot of stuff. I'm surprised the project is still going!


I used to help port Linux to various devices like the HP Jornada [0] or the Ben Nanonote [1], both of which have (up to) 32MB of RAM available.

Finding a suitable web browser with reasonable support for "modern" web standards (basically CSS2) and a lightweight footprint was terribly hard... the better ones, as I remember them, were:

1) Dillo[2], which was one of the most lightweight graphical browsers under active development, albeit a bit light on features as well... FLTK is a great toolkit, and runs well on resource-starved devices like the ones mentioned above.

2) Netsurf[3], a relative newcomer, had one of the best rendering engines out there considering its lightweight footprint. It too is under active development, and is moving towards HTML5 and CSS3 compatibility. Has GTK2 and framebuffer backends, the latter of which is better in terms of memory footprint.

3) Konqueror-Embedded[4], which hasn't seen development since the mid-2000s, was actually the only browser with reasonable support for web standards as well as Javascript. Built against Qt 2 (a massive chore in itself), it runs fast and has a low memory footprint.

4) Links-hacked[5], which again hasn't seen any development in more than 10 years, worked pretty well in graphical mode. It's a mix between elinks code (before Javascript support was gutted out) and links2 code (for the graphical parts).

Some failed experiments were:

Firefox version 1.0.8, last release with GTK 1.2 support, was, unfortunately, too slow to run in any reasonable way. Startup time was around 1:30 minutes on the HP Jornada, and navigating to any web-page took more than 3 minutes.

Hv3[6], a browser and engine built in TCL/TK, looked promising but was a nightmare to compile and never worked correctly.

Finding modern software that can run well in such memory-constrained environments was hard enough, let alone something as complex as a web rendering engine.

[0]: https://deuill.org/page/1/jlime-vargtass [1]: https://deuill.org/page/3/jlime-muffinman [2]: http://www.dillo.org/ [3]: http://www.netsurf-browser.org/ [4]: https://konqueror.org/embedded/ [5]: http://xray.sai.msu.ru/~karpov/links-hacked/ [6]: http://tkhtml.tcl.tk/hv3.html


Am I right in thinking that Dillo doesn't load any JS by default? I'd imagine your user-agent would be quite a giveaway in this case, but still, perhaps nice to use in that respect.


Used this way back when, but somehow I still use Firefox these days. Dillo had some javascript issues, but overall it's usable.


I know this is an extremely naive question, but what is the advantage to using a nonstandard browser over one of the big four?


I tried a while back to get this to build for OSX but gave up after a few rabbit holes.


Dillo is part of Pkgsrc, and can easily be installed on OS X.

Bootstrap this: https://pkgsrc.joyent.com/install-on-osx/

And then it's as easy as: pkgin install dillo


I'll give it a whirl, thanks. :)


hmmm.. looks like dillo isn't an available package?

  pkgin avail | grep browser


Haven't checked 2015Q3 yet, still using 2015Q2 where it's available. It might be broken or failed to build in 2015Q3 then :(


netsurf or uzbl over dillo any day of the week, with xxxterm receiving an honorable mention. Vim user currently using iceweasel with vimperator after trying many minimalist WebKit browsers.


>Bug Tracker

>[currently broken]


Yes because what we need are more browsers with less function parity.


links -g for life.


It's lightning fast. Makes you feel what you've lost with all the css+js.


Browsing could never be slower than what we had without modern CSS/JS: desktop applications. If you think we've lost anything (speed, convenience, whatever) with CSS/JS, then you don't remember (or willfully forgot) what the web was like in the 90s.


No, I remember the web from the modem days, and I'd be quite happy with pages from the modem era but loaded over 100Mbit Ethernet.


But that was because we were all using dial-up; now the problem is the processing power required to crunch the overly-abstracted CSS/JS output of our huge and complex toolchains.


It's not just processing power, it's all the round-trips and timeouts and social buttons delays.


And advertising.


Dumb banners or text adverts would be pretty fast, but all that script-loaded, dynamically retargeted, real-time-bid crap is pretty slow.


Except dumb banners and adverts are NOT pretty fast, and they're a frequent cause of the spinner still running long after the page loads.


Meaning they are not dumb enough.


Without which we wouldn't need all the processing power and 640K of memory really would be enough.



