
My solution is to write Jinja templates of C source files: https://github.com/mcinglis/libarray

Libarray (and my other C libraries, Libvec etc.) has served me very well on a 20k LOC project. With ~150 source files, the entire project builds from fresh in 20 seconds on an i7-2620M. Rebuilds are super-fast; with a properly specified Makefile, there's no need to rerender or recompile the templated files.
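To give a flavour of the approach (this is just an illustrative sketch, not libarray's or libvec's actual template), a Jinja-templated header might look something like this, rendered once per element type by the build system:

    /* vec.h.jinja -- illustrative sketch only; {{ T }} is substituted with a
     * concrete element type (e.g. int) when the template is rendered. */
    #include <stddef.h>

    typedef struct {
        size_t length;
        size_t capacity;
        {{ T }} * elements;
    } Vec_{{ T }};

    Vec_{{ T }} vec_{{ T }}__with_capacity(size_t capacity);
    void vec_{{ T }}__push(Vec_{{ T }} * vec, {{ T }} element);

Rendering with `T = int` yields an ordinary `vec_int.h` that the compiler consumes like any hand-written header, and the Makefile only rerenders it when the template itself changes.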


AGPL? Never. Ever.


My reasons are made clear in the README. I don't want to contribute to nonfree software. It's pretty simple.

Businesses who want to use my work in nonfree software are welcome to get in touch to negotiate such a license.

IMHO, for anyone who has seriously considered the morality and ethics of software licensing options, the AGPL is the obvious choice. I'm surprised it isn't more popular. It was a shame that large donors strong-armed the FSF into splitting it off from the GPL, and that the FSF kept pushing the GPL as its go-to license.


I would suggest working on improving your Python tooling; Vim with Jedi and PyCharm are both great options.

I think Python's module semantics are harder to learn and understand, but once you know how to wield them (or you're reading the code of someone who does), they serve you far better than other systems. Python's module semantics are closer to Java's (explicit packages) than to Ruby's or JavaScript's (global namespace), but unlike Java, Python's modules are derived from the file structure rather than `package` statements.

The consequence of this is that most Python developers throw everything that should be in the same module into the same file, because that's "easiest". It stops being easy when the file grows large.

What you can do to address the problem of large modules in a single file is to turn the module into a directory of the same name, create an `__init__.py`, then put separate files in that directory to hold sections of the module (e.g. a file for each class if that's appropriate), and then in the `__init__.py` import the classes and whatever else you want to export from the module. Done - and client code needn't change.

Python's module system (and namespace system in general) is one of the largest reasons I prefer it over other dynamic languages. Knowing where every identifier came from in a file makes it so much easier to learn a codebase.


To me, this sounds like a failure of your tooling rather than the code you're reading. Vim with Jedi, PyCharm and PyDev can all jump straight to definitions. What's your development environment?

I'll use `from <module> import <identifiers>` when the identifiers' names express their purpose, and aren't dependent on the module's name. Importing identifiers directly makes the code using them less noisy [0], and makes it possible to replace the source module for the identifiers later on. In Django projects, I'll often need to rename modules or move identifiers between modules, and so using `from` imports makes that a lot easier.

I agree that `as` imports should be used sparingly.

[0]: your suggested solution is even more noisy than using the qualified name, because now you have a variable hanging around and readers have to work out if `now` is going to be reassigned or used later.


If we accept:

- that there's only one class of "humanic" intelligence;

- that we can approximately represent instances in this class of intelligence as vectors of {memory, learning speed, computation speed, communication speed};

- that any AI that could be created is merely a vector in this n-dimensional intelligence space, lacking any extra-intelligent qualities;

- that productivity and achievement increase exponentially with the intelligence of the being you devote to a problem, but only logarithmically with the number of beings you devote (e.g. a being with intelligence vector {10,10,10,10} might be as productive as 10000 {1,1,1,1} beings);

then this doesn't exclude the possibility of us creating an AI with an intelligence vector twice an average human's, which could suggest improvements to its algorithms, datacenter and chip designs to become 10x as intelligent as a human; from there it could quickly determine new algorithms, and eventually it's considering philosophy (and what to do about these humans).

The point is: viewing intelligence the way you suggest doesn't help us decide what to do about "super" artificial intelligence.


This is really cool (I've been wondering how to do a string-switch for a while now), but I don't think getopt is a great use case, because getopt still results in imperative argument parsing, and that always leads to pain.

I've been working on libargs for the past year; it's a declarative argument-parsing library for C: https://github.com/mcinglis/libargs

The main idea is that each argument is by default parsed and stored as a string, or you can optionally specify a function of the type `void f(char * name, char * arg, void * dest)` to parse the argument string and store it in a well-typed destination. This way, you can have an `int` argument by passing `int__argparse` as the parser, and if the user passes a value outside the range of `int`, then an appropriate out-of-range error is printed to the console. Similarly with `uchar__argparse` or something like `point__argparse` (e.g. taking some format like `{x,y}`).
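As a rough sketch of what such a parser might look like (illustrative only; this is not the actual libargs implementation of `int__argparse`), an `int` parser matching that callback shape could be:

    /* Parse `arg` as an int and store it in `dest`; print an error naming the
     * argument and exit if it isn't an integer or is out of range for int.
     * Sketch only -- not the real libargs code. */
    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    void int__argparse(char * name, char * arg, void * dest)
    {
        errno = 0;
        char * end;
        long const x = strtol(arg, &end, 10);
        if (end == arg || *end != '\0') {
            fprintf(stderr, "%s: '%s' is not an integer\n", name, arg);
            exit(EXIT_FAILURE);
        }
        if (errno == ERANGE || x < INT_MIN || x > INT_MAX) {
            fprintf(stderr, "%s: '%s' is out of range\n", name, arg);
            exit(EXIT_FAILURE);
        }
        *((int *) dest) = (int) x;
    }

The library would then call the parser with `dest` pointing at whatever well-typed destination you declared for that argument.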

libargs is quite flexible and has worked well for me so far. Automatic help text generation can be added in future while maintaining (non-ABI) backwards compatibility.

The main disadvantage is that it depends on other libraries I've developed that are essentially Jinja-templated C source files that function as makeshift generic types / typeclasses in C. Your inclination towards this approach depends on taste; personally I much prefer deferring the pain to the build system, as opposed to the source code.


Fun fact: C++ reserves identifiers containing a double underscore anywhere in the name for the implementation. It's highly unlikely that you'll run into anything colliding with your names in the wild, but if someone wants to use your library in C++, it's technically bogus.


It's similar in C. C11 Standard chapter 7.1.3:

>All identifiers that begin with an underscore and either an uppercase letter or another underscore are always reserved for any use.
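Concretely (declarations here are just for illustration, reusing a name from upthread):

    /* Fine in C: the double underscore isn't at the start of the identifier.
     * Technically reserved in C++, which reserves a double underscore
     * anywhere in a name. */
    void int__argparse(char * name, char * arg, void * dest);

    /* Reserved in both C and C++ per the rule quoted above: a leading
     * underscore followed by an uppercase letter, or a leading double
     * underscore. */
    /* extern int _Reserved_example; */
    /* extern int __also_reserved;   */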

I doubt the library is usable in C++ though, since longjmp doesn't play well with the destruction of local objects.


> getopt still results in imperative argument parsing

That's one of the reasons I use it, actually. For simple utilities it may be adequate to set flags and store values for command-line parameters, but sometimes you need the flexibility of being able to run whatever code you need when an option arrives.
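For instance, a minimal getopt(3) loop (the option letters here are made up) where arbitrary code runs as each option arrives:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char * argv[])
    {
        int opt;
        while ((opt = getopt(argc, argv, "vo:")) != -1) {
            switch (opt) {
            case 'v':
                puts("verbose mode on");             /* set a flag, or... */
                break;
            case 'o':
                printf("output file: %s\n", optarg); /* ...run whatever code you need */
                break;
            default:                                 /* getopt returns '?' on error */
                fprintf(stderr, "usage: %s [-v] [-o file]\n", argv[0]);
                return EXIT_FAILURE;
            }
        }
        return EXIT_SUCCESS;
    }

Each `case` is just ordinary code, so you can do whatever you like at the moment an option is parsed, rather than collecting everything up front.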


>To accept notable contributions, I'll require you to assign your copyright to me.

Okay, but why?


Among other things, it allows the project owner to relicense the project without having to contact every contributor requesting permission to relicense.


And usually makes accepting contributions from people outside the US a mess.

Ask the FSF how long it took them to finally sort out all the legal issues and prepare new forms.


Also, people are generally more inclined to sign that for a non-profit organization like the FSF or the OSGeo Foundation than for an individual person, although even that is more accepted than signing over to a company.

CLA to non-profit > CLA to individual > CLA to company


Why would a Canadian programmer have trouble accepting contributions from outside the US, specifically?


Not outside the US specifically, but the US (and apparently Canada) have the concept of copyright assignment; many other countries don't, so it gets interesting what value and consequences a copyright assignment by a programmer from such a country has, especially in local courts and when the contributor isn't on your side.


Don't forget that "moral rights" may not be assigned at all, in at least some countries, so you'll need a per-country agreement that the contributor won't enforce them ... in countries where it is possible to agree not to enforce them, which is not all countries.

https://en.wikipedia.org/wiki/Moral_rights#In_Europe


What the web is increasingly unable to do today: provide text content without requiring a code execution environment. This site is another example of that.

All non-application websites should provide all their content in semantic HTML at appropriate HTTP endpoints, with CSS styling (in as few requests as possible) as required per the design, and JavaScript (in as few requests as possible) that takes the semantic HTML and makes it interactive (potentially adding and removing elements from the DOM) as required per the design. The CSS should not depend on mutations resulting from the JavaScript, nor should the JavaScript assume anything of the applied styles (as the user agent should be able to easily apply custom user-styles for your site; e.g. Gmail only providing a limited set of styles that are managed server-side is laughable).

Thus, all content is readable and styled properly without requiring an arbitrary code execution environment. That is what the web was meant to be. Unfortunately, most "web developers" have made the web worse over the past 10 years because simple, functional, minimal technology is not impressive, and hipsters love to show off.

Nor does it help that there are few capitalist incentives for the web being open and malleable -- e.g. so users can easily use a different front-end for Facebook, or users can easily choose to avoid analytics or advertisements, or users might prefer to use the website rather than the app (providing access to personal details, contacts, location, tracking, etc).

The state of the web is emergent and I'm not sure what anyone could do about it (perhaps make a better browser?), but it really irks me when web developers pretend that they're actually doing something good or useful, or that the web is actually in a healthy state. In my experience, it's the people who don't talk about web development who are the best web developers; these are the people who don't wince when they write an HTML document without a single `<script>`.


You're talking about "progressive enhancement". It's a romantic idea, but it never happened, probably because it's too hard and the cost is not justified given most users run with their browser's default settings.

The precursor of the web made by Tim Berners-Lee dates back to 1980, but it was not based on HTML or HTTP. These happened later in 1990 and early 1991. But then CSS happened in 1994. And Javascript happened in 1995 at Netscape, but then Javascript was completely useless until Microsoft came up with the iframe tag in 1996 and then with XMLHttpRequest in 1999, which was later adopted by Mozilla, Safari and Opera. And people still couldn't grasp its potential until Google delivered Gmail in 2004 and Google Maps in 2005.

Not sure what "the web was meant to be"; we should ask Tim Berners-Lee sometime. But in my opinion the web has been and is whatever its developers and users wanted it to be, with contributions from multiple parties such as Netscape, Microsoft, Mozilla, KDE/KHTML, Apple, Google and many others, making it a constantly evolving platform.


It was Microsoft with Outlook Web Access that really was the first big example of what was possible. I worked with several folks around that time and we were doing "rich" web apps in IE5 with XML data delivered from the server via xmlhttp and building the presentation in the browser with XSLT. It was really slick, especially at the time, to be able to filter and sort, do summary and detail views, etc. all without a page refresh.


CSS happened in 1996; for the first couple of years of the web we had no CSS, and all styling was done with inline attributes up to that point.

The frustrating bit is that CSS was supposed to separate content/structure from presentation, but now we have pages with markup but no content, where the content is loaded after the fact. This really overshot the mark.


Also note that the separation of content and structure is mostly nonexistent as it's currently practiced. People like to talk about it as a Golden Rule, a holy virtue, and then proceed to write even worse code than they did with tables. Generally, if you find yourself writing a tangled mess of divs in order to support CSS tricks, you're not really separating content from structure. HTML5 semantic tags helped a little, but I still rarely see them used on live sites.

(I'm not really convinced that separation of form and content is a feasible goal anyway - some elements of form are also content at the same time - but it's still a good goal to have.)

> now we have pages without content but with markup where the content is loaded after the fact

This is absolutely ridiculous and it, combined with the idea of routing everything through the cloud in IoT / home automation applications, makes me wonder what happened to some good old-fashioned engineering sanity. It's like people are trying to create wasteful and insecure systems on purpose. ("And what would that purpose be," - the cynic in me asks - "maybe monetizing people's data?").


Have you considered that there may be an actual reason why people write a tangled mess of divs? Could it be because the entire model is crappy and people don't know what to do to make it show things the way they (or their client) want(s)?


In my experience, as a front-end developer, the usual reason I need a mess of divs and spans is to support a design that's not a good idea for a web page to begin with.


What is a web page, and why would a particular design not be a good idea for it?

If a web page is not a good idea, then what other technology should be used to achieve that design, as well as remain equally easy to distribute?

This mindset is precisely why native mobile applications continue to exist on the market, and why articles like this one fail to convince me. Nobody (except perhaps Facebook with React) tries to really fix the web to meet the demands it's been given. Instead, everyone insists that the demands should change to meet the original vision and limitations of the web.

Regarding the HTML/CSS interplay, flexbox gives me some new hope that reconciliation is possible (getting both semantic markup and powerful / precise styling).


It's easy to find designs that are bad ideas for web pages. The most obvious culprit is a design that's excellent for a magazine layout. Such designs typically don't take into account the document flow nor the idea that different people open their browsers to different resolutions and browser window sizes. Heck, some won't even take into account different browsers with different capabilities. I think I understand where you're going with the "what is a web page?" bit, but it doesn't necessarily apply. Especially since a web page can be whatever anyone wants within the limitations of a browser, but that doesn't mean a design works based on some person's idea of what they think a web page is. In most cases a design is limited by the browser, not necessarily by the code of the page.

I have no idea what you mean by your second sentence.

And then, are you referring to my mindset or some other mindset? Because I fail to see how you can know my mindset on the matter. If it's the other, I would agree. Except I would say that "fixing" something, in terms of making it do something it wasn't intended to do in the first place, may create more problems than it solves. I would suggest attempting new things to see if they work, and there are some people doing just that. I would point out that flexbox is an example of this.

I'm excited about flexbox and have started pushing to make use of it more often, when warranted. It's hard to switch current code over to it, but I think it's worth it. But, in the end, someone will eventually start complaining about its limitations and that it needs to be "fixed". Then we'll be back where we started; it's inevitable.


My second sentence was the entire point. If a web page is not a good idea because it can't support certain designs, then what would you suggest instead (that also has the other properties of the web such as easy distribution and is ubiquitous)?

I was referring to the whole mindset of "the web was not meant to do that". Well, yeah, it wasn't. But it's already doing that, so we have to do something to make it properly rise to the challenge.


> This mindset is precisely why native mobile applications continue to exist on the market

And long may they live, because there is little about the mobile experience that's more infuriating than a web app that should have been made native. The assumption of being always online is baked too deep into the web stack; I have yet to see a well-made web app that could not be made significantly better UX-wise by simply going native.


The UX problem isn't a matter of being online, but of not having a proper mechanism to express application UIs and styles (rather than document UIs and styles).

There are many PhoneGap applications there, and they don't make any "onlineness" assumptions. Their UI does however still suck and does not behave as expected.


> it's too hard

It's only hard if you want it to be hard. Your tools should be handling most of it for you. If they don't, pick tools that aren't broken or badly designed. When I was writing websites in Rails 2.x, progressive enhancement was usually automatic (same views are rendered as a page or a dynamically-loaded partial).

Saying "It's hard because I want to write over-complicated pages with badly designed tools" isn't good engineering or good design.

> most users run with their browser's default settings

How, exactly, do you know this? If the answer is "analytics", you are missing an increasing amount of data.

> constantly evolving platform

Which is why you progressively enhance pages. Those of us that disable javascript for safety usually get blamed when this topic is brought up, but the main reason for progressive enhancement is that it's defense in depth. You don't know what the browser is, what options it has set, what bugs it may or may not have, or if extra page assets even made it successfully over the network.

Not bothering with progressive enhancement is shoddy programming for much the same reason you shouldn't skip the test for NULL after calling fopen(3).
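For what it's worth, the fopen(3) analogy in code (the filename is hypothetical):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE * const f = fopen("config.txt", "r"); /* hypothetical file */
        if (f == NULL) {                           /* the check you shouldn't skip */
            perror("config.txt");
            return EXIT_FAILURE;                   /* degrade gracefully rather than crash */
        }
        /* ... use the file ... */
        fclose(f);
        return EXIT_SUCCESS;
    }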

edit: grammar


I don't know a single person IRL who turns off JavaScript. It's the browsing equivalent of running only RMS-approved software: possible in theory, but not very practical, and definitely rare. The business needs of modern commercial websites are hard to meet with progressive enhancement, and web apps basically require JavaScript to deliver a good user experience. Yes, you can build a personal website using progressive enhancement, and at one point I did, but I had to give up my progressive enhancement ways when I became a professional web dev because it was just not practical to do otherwise.


Web apps can be excused, but most web sites? Not really. What exactly happened that made it not practical to write simple sites? Did browsers suddenly stop rendering HTML unless you generate it in JavaScript?

It's not business needs, it's cargo-cult web development: going with the latest trends without stopping for a moment to ask whether it makes sense or is actually what the users want.


What exactly is the issue with having the HTML generated with javascript? You can still run "view source", overridden CSS or Greasemonkey on it...


Why do you need to generate HTML with JS in the first place? There's rarely a need to do so on a website. There's definitely no need for, e.g., loading the content of a blog post dynamically just to achieve a fade-in effect [0]. And yet, most modern web practice is silly stuff like that.

[0] - https://news.ycombinator.com/item?id=10646025


It's not only people who actively disable JS, though. See the "How many people are missing out on JavaScript enhancement?" blog post [0] by the UK's Government Digital Service (aka GOV.UK).

They calculated that 1.1% of users don't have the JS enhancements activated, and only 0.3% of those were browsers where JS execution was disabled.

[0] https://gds.blog.gov.uk/2013/10/21/how-many-people-are-missi...


I would guess a majority of that traffic is going to be crawlers. It's probably more like a small fraction of 1% who are actually missing out.

EDIT: Looks like they covered this in comments. Even that doesn't convince me for some reason.


Crawlers load images from <noscript> tags? Some might. As googlebot runs Javascript, would an image inside <noscript> be indexed into google image search?

While that's an interesting question, one of my points was about this common type of claim:

> guess

You admit you don't actually know. I don't either, which is why I program defensively and test for any feature I want to use.

A lot of people seem to be projecting what they want to see, reinforced by confirmation bias. Choosing Javascript based analytics is a great way to conclude that almost nobody uses Javascript.


Good points.

https://addons.mozilla.org/en-US/firefox/addon/noscript/

https://chrome.google.com/webstore/detail/ghostery/mlomiejdf...

These are some very popular plugins, so my thinking is that maybe the answer lies somewhere in between, where security-conscious users are whitelisting sites they want to run scripts on, rather than running with JS completely disabled. Even though it's a subtle difference, I think it's relevant to the strategy one goes in with regarding the noscript tag.

So this would make sense to me. If that's right, that most noscript users are just running these plugins, then those folks know that they're going to miss out with some sites, or they'll selectively enable javascript on a case-by-case basis.

This would probably require more extensive review of logs, to see if the person who originally downloaded the noscript image eventually came back to the site with javascript enabled. The likelihood is this would only happen if the site was not functional when they visited with javascript disabled.


I don't know a single person who doesn't run an ad blocker anymore.


I happen to know 3 (two of whom I convinced to do so, and they're not even what I'd consider "advanced users"), and I am one myself.

The majority of informational sites are actually quite usable without JS. I'd say "the browsing equivalent of running only RMS-approved software" would be never allowing JS, but I'm more pragmatic and only enable it if necessary, for the sites I trust and must use.


The web is whatever we want it to be, but it's sad that the lack of agreement on standards and openness of platforms severely limits what it can be now without a massive sea change. Because collective agreements tend towards entropy, those with the power in the agreements hold on tight to prevent decline. When we become more conservative and restrictive in order to keep collectives in place (open web >> Facebook), it limits our freedom to innovate, and development becomes "a fight to maintain" that benefits the few rather than "a forward-thinking, step-by-step process" that benefits more and more people.


> What the web is increasingly unable to do today: provide text content without requiring a code execution environment. This site is another example of that.

I was about to argue that this website is actually an excellent example of what you seek -- each link has a separate URL associated with it that returns a page containing that content. The links point to these real URLs so they work with "open in new tab" and "copy link" and in browsers without JavaScript enabled, while the JavaScript that runs when you click it changes the page content via AJAX (possibly saving a few round-trips) and updates the current page URL so that back/forward history and the address bar both work just like you're navigating between real webpages.

And this works perfectly in Firefox (with JavaScript) and almost perfectly in Lynx (the table of contents still fills the first screenful, but that's hard to fix since Lynx doesn't support CSS). But it completely fails if you have JavaScript disabled in Firefox.

Every page starts with the table of contents visible and the content collapsed (through CSS). The page then seems to assume that JavaScript will be able to immediately switch the page to the correct view (i.e. the site is broken if you have working CSS but not JavaScript). Navigation to a given page directly should start the other way by default, and to make that happen is just 21 missing characters (` class="page-feature"` on the <body> tag). However, this unfortunate error completely ruins this otherwise beautiful example of progressive enhancement.


> I was about to argue that this website is actually an excellent example of what you seek

> But it completely fails if you have JavaScript disabled in Firefox.

I'm not sure what point you're trying to make.

We've known for a long time that progressive enhancement IS possible; it's just that very few sites bother to design for it. Are you just saying it's difficult and that this site "almost" made it?


The issue when JavaScript is disabled is now fixed, thanks for pointing this out!


It's the Flashification of the web. People who wanted to show off, or code web apps, used to use Macromedia Flash. People complained about it, in part because sometimes if you accessed a site without the Flash plugin you'd see a blank page.

But Flash was great in many ways, and it was self-contained in objects, so websites were mostly still websites. JavaScript had been around for a long time, but there was still a cultural norm that most people respected about not requiring JavaScript. This was mainly because a lot of browsers still didn't fully support it, or people had it turned off. It was also when some people had cookies disabled.

Then once Adobe bought Flash, and then Apple blocked Adobe Flash, it really killed the Flash way, and all that spilled over into HTML with HTML 5 and the new cultural norm of kids who are more concerned with showing off socially than the meat and potatoes of hypertextual information.

It should've been obvious that there was a need for a new web, for code and multimedia. But in the .com boom nobody would dare try to start with something unpopulated, since it'd risk losing their chance at fortune.

Today, instead, maybe we should go in the opposite direction and create a new old web: a hypertext network that specifically only works with HTML, so people can keep this one to morph into an app network, and we won't lose the text-linking place we've grown accustomed to.


> we'll not lose the text-linking place we've grown accustomed to.

It's just a minority of users who are accustomed to the old web.


Rad idea. How would one initiate a new network of this kind separately from what exists today? And what kind of interoperability would it have? Can't imagine it could go cold turkey!


> Thus, all content is readable and styled properly without requiring an arbitrary code execution environment. That is what the web was meant to be.

In other words, it was supposed to be a worldwide hyperlinked document library --- and we have mostly achieved that goal, although it is a library wherein you are constantly tracked and bombarded by books flying off the shelves at you, screaming at you to read them, and most of the books consist solely of ads with very little useful informational content.

> In my experience, it's the people who don't talk about web development who are the best web developers; these are the people who don't wince when they write an HTML document without a single `<script>`.

Agreed completely. The ones who write information-dense HTML pages, often by hand, would not be considered "web developers" nor would they consider themselves to be; but they are what the web needs most. I've done that, and I don't consider myself a "web developer" either.

> it really irks me when web developers pretend that they're actually doing something good or useful, or that the web is actually in a healthy state

I wouldn't doubt that they genuinely feel like what they're doing is good or useful; I've noticed the appeal of "new and shiny" is especially prevalent in the web development community, with the dozens of frameworks and whatnot coming out almost daily, proposals of new browser features, etc. Very little thought seems put into the important question of whether we actually need all this stuff. It's all under the umbrella of "moving the web forward", whatever that means. But I think we should stop and look back on the monstrosities this rapid growth has created.


Against what platform should we compare the web to decide what it should look like? I think everyone has different criteria.


Strong opinions on how things should be, but no arguments for why.

That is never going to convince me.


It's because his post is entirely opinion, albeit one that frequently shows up at the top of HN comments sections because it's such a popular opinion in the hacker community that we forget it's not actually the majority's opinion.


I tend to think rants like this are just being luddites, but all this JavaScript and all these external resources really are ruining the experience of the web. Holy crap, is everything slow now. On desktop we've all gotten into the habit of opening new stuff in tabs and waiting for the load while doing something else, but on mobile, where that workflow isn't as easy and you have to watch a page load? I'd say HN is one of the very few sites I can stand anymore.


I encourage you to try the "NoScript challenge": disable JS, enabling it only for sites you often use and which need it, and browse the Web for a week. If you tend to indulge in rich multimedia you're probably going to give up soon, but if, like me, you're just after text and image content the majority of the time, you might actually prefer it. Pages load nearly instantly (or not at all), and there are no more intrusive popups/popunders/slideovers/etc., nor other annoyances like disabled right-click or text selection, or sites stuffing text into your clipboard.

The "almost all sites have JavaScript and will break if you disable it" statement is common, and while I agree that the vast majority of sites do have JS on them, whether or not the stuff that will "break" on them if you disable JS is actually of any use to you is questionable.


I think this is because so many websites try to shove ads and tracking crap down your throat.

It's not even particularly difficult to pull off for certain websites. For example, my blog uses React and it does server-side rendering, so it'll work without JS at all. However, if you do have JS enabled, it'll let you avoid doing full-page reloads to navigate around. I totally agree that for content-focused websites, you tend to get a better experience by limiting gratuitous JS abuse.

However, this isn't very feasible when you're building highly interactive applications. In the case of something like Facebook, they do have a version of their app that works without JS... But how many people can afford to maintain multiple versions of their applications?

I think a good compromise is achievable with well-documented APIs, or even better, a public GraphQL schema! If we don't use magic APIs to build our frontend app, you can build a different frontend that's tailored for your needs.


What we need is a new protocol: something that lets an author write and publish text documents, marked up with basic styling, with "hyperlinks" to other text documents. The protocol could allow embedding of simple inline figures as well. Users would run a "browser" whose function was limited to requesting and displaying these documents.


You mean like AMP?

https://www.ampproject.org/

And yes, it's pretty silly that this is a thing.


> Unfortunately, most "web developers" have made the web worse over the past 10 years because simple, functional, minimal technology is not impressive, and hipsters love to show off

No, it's because when I go into work, someone tells me to make it a certain way, and if I want to get paid, I have to. If you want to blame anyone, blame designers who see proof-of-concept stuff from developers and throw it into designs.


This comment reminds me of Maddox and his 90s' style html blog: http://www.thebestpageintheuniverse.net http://maddox.xmission.com/


To read this without JavaScript, you can get the post text from the (current) top of the RSS feed:

http://blog.achernya.com/feeds/posts/default

The source code of the (unfinished) shell is at:

https://github.com/achernya/xv6-shell.rs/blob/master/src/mai...


For these Blogger templates, I just pull the relevant page from Google Cache. E.g.

http://webcache.googleusercontent.com/search?q=cache:http%3A...


I didn't find this video helpful at all. You have to skip 10 minutes ahead to get to the actual talk. The talk is stunted and, as always with Rust documentation, dwells only on the basics. The audio isn't clear, and neither is the speaking.

I don't know of a succinct overview of Rust for experienced system programmers, and this video certainly isn't that.


I used Thunderbird and Enigmail for a year or so. I later moved to Evolution for the better GPG experience (no flickering on opening the message as there is with Enigmail) and better GNOME integration. I used Evolution for another year, but its rough edges wore me down; I had to get in the habit of closing it if I wasn't using it, because it would noticeably slow my system down. It would take at least 5 seconds to open. Parts of the UI would often lock up and need a reset by switching between the panes.

I've been using Claws Mail for the past few months. It's simple, fast and stable, and GPG support is fine. That's all I need. It sucks for reading and writing HTML emails, but I'm okay with that. Parts of the UI are still rough (e.g. the main window isn't responsive while the "sending" dialog is open), but it's less bad than Evolution.


Your electricity usage patterns are enough to determine your schedule, especially if you have electric water heating. Browse through some systems at PVOutput.org until you find one that publishes power usage data; you can usually see when the occupants wake up, when they do their washing on the weekends, when they turn on the TV in the evening, and when they go to sleep.

With higher precision usage readings, you can determine what they're watching on their TV [0] [1].

I agree that the internet-of-things is another concerning layer of risk for privacy and security, though.

[0]: https://nakedsecurity.sophos.com/2012/01/08/28c3-smart-meter...

[1]: https://www.youtube.com/watch?v=YYe4SwQn2GE

