
I don't see much of a difference. There still is a popularity game and it's played with corporations, not people. Though I agree that it's not necessarily a bad thing in comparison.


Yes, but your popularity doesn't affect your experience of buying bread or your ability to reach a destination. In Mali you may get more expensive and shittier products for being disliked. You may have a hard time finding a good guide to accompany you somewhere.

Your life literally depends on what others think about you.

It's still true for us, but an order of magnitude less so after high school.


>Yes, but your popularity doesn't affect your experience of buying bread or your ability to reach a destination

Your credit score does. In some places and situations, more directly than others.


Your credit score is a lot more straightforward and reliable than something like social popularity.


See what China is doing with the concept.


That's not the same thing. China has some kind of scoring system, but it's much more totalitarian and popularity based than the credit score you were referring to in the US and other Western countries which is fairly straightforward. It's not helpful to conflate the two.


Why not use a restricted subreddit and have submitters simply email links to you?


I don't want to restrict the subreddit, for one. I have around 295k followers on Tumblr; I can't hand-approve all of them. Plus, restricting the sub would keep new people from finding it.

Plus what the other commenter said about formatting and poor workflow. The mods can just barely handle the current workload of reviewing ~30-40 posts per day, choosing 20 to post, and adding tags as needed. If we had to open an email, then open a link (and hope the link was functional and not malicious), then review the submission, then copy the submission, then open the "submit a post" dialogue, then paste, then format, then post... yeah, that's way too much.

There's an IFTTT workflow for sending Tumblr posts to Reddit, but unfortunately the API it uses doesn't preserve any formatting, so you end up with a giant text blob. I don't think it could handle image posts. If that worked, I'd happily start sending things to a subreddit.


Restricted subreddit means that only approved users can post, but anyone can view. Private means that only approved users can view.

Another option is to set all posts by non-approved users to be automatically flagged (hidden to non-moderators), and moderators can un-flag the posts. Which, I think, is the workflow you're looking for? All you have to do is set the spam filter to "All" in the subreddit settings.

You still run into the problem that a lot of people might not want to use reddit, and you'll just end up fragmenting the people following your blog and lose a lot of readers.


The flag workflow is the closest I've seen on any platform to Tumblr's submissions, for sure.

The concern about people not wanting to use reddit is definitely a big part of this. Reddit has a very different culture than Tumblr, and I agree that it's likely I'd fragment my readers and lose most of them. As much as I love my blog, I don't think the effort of trying to make the switch to reddit is worth the harm and disruption it would cause.


Tumblr posts are multimedia, have tags, etc. There's no straightforward way to package all that in an email.

Also, it's just not a simple workflow to copy/paste from email. On Tumblr you just push a button and the post goes up.


Those kind of corporate bullying tactics are not particularly welcome in open-source communities.


The attribution relaxation in this is a nice step forward for trivial bits of code. I've seen too many Javascript applications that use 100+ small MIT-licensed dependencies, so the copyright statements end up being a significant portion of the minified code!


I'm actually a little disturbed that attribution is removed. What's the point then? People who write software for free generally at least want basic credit for it.


Writing code that is helpful to and/or maintained by other people. Attribution does seem like a rather low bar, but I would understand if not everybody cares.


I see it covering the same use case as CC0, which is trivial bits of code that benefit mostly from mass adoption. In the case of this license it would be useful for code fitting that description which also may be covered by patents.


It doesn't make it inherently safe, but if you are attempting to prove your builds are safe then it is impossible for anyone else to verify that without the source. See the thread on Debian reproducible builds from earlier this week for more discussion on this topic: https://news.ycombinator.com/item?id=19310638

Code signing is something you can do with both open-source and closed-source software, but it doesn't prove anything other than that a particular build was made by a certain person.


"but it doesn't prove anything other than that a particular build was made by a certain person."

But that's what trust actually is. This IRL person or identity, that I trust, vouches for the non-maliciousness of this application.


Except the core problem is key propagation: anyone can have a key, paid or free, if you don't already know the source. A signature says the package is from Globe Software and matches the provided key; it doesn't tell you whether they really are Globe Software, let alone whether they're a trustworthy company in the first place.


I don't see this as an incentive specific to FOSS. Most customers of any software tend to demand increasingly complex featuresets as time goes on.


The complexity is not specific to FOSS. The incentive as described, however, is.


I still chalk this behavior up to the walled-garden nature of the publisher platforms (mobile, consoles, Steam, etc). When customers have no recourse against companies doing things they don't like, they resort to mob behavior. Remove the friction from customers moving from one clone to another and you will see this disappear. However, I have no idea how you would get there.


>someone to make a decision about the one true way to do things.

Things don't really work this way in free and open source development. There is no one person to make decisions; consensus is reached when the quality of something rises "above the bar" and actually improves things for all involved parties. If someone wants there to be an über-library that serves everyone's use case, then it's up to them to go and do the work to build it.

And it has been getting better in this regard. For example KDE and GNOME used to have their own IPC, multimedia & audio mixing backends, but now both have converged on DBus, GStreamer and PulseAudio, in part because these were intentionally built to be flexible low-level solutions. I'm sure there are more examples of this too but those are the first that come to mind.


You're absolutely right. I wonder if something like DBus and PulseAudio could happen with my UNC pain point.

With the assumption that the goal is for "vi //server/share/file.txt" to work the same as "notepad.exe \\server\share\file.txt" does on Windows, here are my thoughts.

First off, notepad.exe doesn't really care about the fact that it's a UNC path. It just opens the file with CreateFile (either CreateFileW or CreateFileA).
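Roughly, and off the top of my head (a minimal sketch, error handling mostly omitted, server/share names made up), the Windows side is just:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* The caller doesn't treat UNC paths specially; the redirector
           behind CreateFile handles the network part. */
        HANDLE h = CreateFileW(L"\\\\server\\share\\file.txt",
                               GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        return 0;
    }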

There would need to be replacements for the libc file functions. These could be a shim in front of libc, or baked right into libc. Note, there's a LOT more needed than "just" new file functions - any functions that do anything with paths need to be looked at. Shells would likely need some changes to work properly, though it's not like the Windows shell can truly do much with UNC paths - copying files to/from works, but you can't cd into them.
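To sketch what I mean by a shim (very hand-wavy and purely hypothetical: this assumes //server/share/... just gets rewritten onto some automount point like /net/server/share/..., which is where all the hard parts actually hide):

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Intercept open() via LD_PRELOAD; rewrite //server/share/... to
       /net/server/share/... and let an automounter do the real work. */
    int open(const char *path, int flags, ...) {
        static int (*real_open)(const char *, int, ...);
        if (!real_open)
            real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

        char buf[4096];
        if (strncmp(path, "//", 2) == 0) {
            snprintf(buf, sizeof buf, "/net/%s", path + 2);
            path = buf;
        }

        mode_t mode = 0;
        if (flags & O_CREAT) {      /* mode argument only present with O_CREAT */
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t)va_arg(ap, int);
            va_end(ap);
        }
        return real_open(path, flags, mode);
    }

Even a toy version like this shows why baking it into libc (or lower) is attractive: open() is only one of dozens of path-taking entry points (openat, stat, opendir, exec*, ...), and anything statically linked or making raw syscalls bypasses the shim entirely.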

How does it ask for credentials? If it's via DBus, a desktop environment can provide the authentication prompts, but what about a pure command-line system? Maybe the transport is just SSH and relies on the existing public key authentication? But what if you're just doing a one-off thing and don't want to set that up? Using SSH is probably a decent idea since it already has authentication, security, and a file transfer protocol built in.

On top of all of this, when you open //server/share/file.txt for writing, what does that actually mean? Is there a file descriptor? How does that work with the kernel? Does libc now manage all file descriptors with only a subset corresponding to kernel file descriptors? Could a pure user-space solution fake this well enough to actually work? Would this need to be a FUSE filesystem along with some daemon to automatically unmount the remote servers when the mount is no longer needed? Would it be something like the automounter, just a lot better? Does a kernel need changes for any of this to work?

This is one of those things that touches many layers and interacts with many parts of the system, potentially all the way down to the kernel.

My guess, and I don't actually think this will happen, is that Apple will do something like this on Mac OS X and have a reasonable mapping to the BSD world underneath, then someone in the Linux community will come along and do something similar in a way that's better suited for Linux. As a parallel, Apple came out with launchd in 2005 to replace init scripts, systemd made an appearance in 2010 - both do very similar jobs, with launchd tailored to the needs of MacOS and systemd tailored to the needs of Linux. Maybe something similar could happen with UNC-like file sharing.


All that has been doable for quite some time; you could mount SMB shares like that with smbfs since early releases of Samba, and later with the CIFS fs driver. You do need root to mount things that way, so it isn't ideal.
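Something along these lines (the share, username and mount point are just placeholders; exact options vary a bit between distros and mount.cifs versions):

    # as root; the share then behaves like any local path
    mount -t cifs //server/share /mnt/share -o username=alice,uid=1000
    vi /mnt/share/file.txt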

For the more complicated stuff it can be done, but not everything is available via a simple GUI. GNOME and KDE have their own virtual filesystem layers in userspace, GVfs and KIO. I don't know what KIO does, but GVfs supports a bunch of network backends and has a FUSE driver that can mount its own virtual filesystems and expose them to outside applications. So the features are there, but I don't think they are well presented right now; maybe someone can prove me wrong, though.

It would have been nice if the kernel had better support for fine-grained control over filesystems like HURD or Plan 9 do. But instead it was decided that it was better to handle those things with userspace daemons, so that's where we are now.


These aren't the same thing though. The GNOME and KDE VFS layers only apply for applications written for those APIs. It's not a universal thing.

Being able to mount a CIFS filesystem is fine, but it's not the same thing. In Windows, you can basically use a UNC path anywhere because CreateFile knows how to deal with it. The point is that you don't need to mount the remote filesystem (the Windows-equivalent being mapping a network drive).

What I'm really looking for is the user experience, not the underlying protocol. On Windows, I can just go "notepad.exe \\server\share\file.txt" and edit the file, on Linux I need to either use a KDE application or go through the ceremony of mounting the remote filesystem. It's the fact that the feature is silo'd into GNOME and KDE (and the fact that it doesn't even exist on Mac OS, but that's another issue) that bugs me.


There is currently no kernel interface that I know of to do that. I don't think it would be too hard to hook into open() on an invalid path and try to do something (mount a network fs, call out to GVfs or KIO, etc.), but I can tell you that you will meet resistance if you try, because things like "//stuff" and "smb://stuff" are already valid local file paths on Linux. So I leave it up to you to figure out how to do this without breaking things.


Yeah, this is definitely not an easy problem to solve given the design of Linux.

I don't know why I didn't remember this earlier, but I actually explored this a number of years ago and came up with two things that are close, but not quite there:

First was to use a systemd automount unit[0], but I didn't really get anywhere with it. From the looks of it, you have to know in advance every path you might want to automount; it can't do wildcards. Being able to do some kind of pattern matching on the requested path and translate that into a mount command would go a long way toward making this work.
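For reference, the shape of it is roughly this (made-up server/share and credentials path; you need both units, and every single mount point has to be spelled out explicitly, which is exactly the no-wildcards problem):

    # /etc/systemd/system/mnt-share.mount
    [Unit]
    Description=Example CIFS share

    [Mount]
    What=//server/share
    Where=/mnt/share
    Type=cifs
    Options=credentials=/etc/smb-credentials

    # /etc/systemd/system/mnt-share.automount
    [Unit]
    Description=Automount for the example share

    [Automount]
    Where=/mnt/share

    [Install]
    WantedBy=multi-user.target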

I also explored the good old automounter[1][2], but it has a lot of the limitations that systemd's does. It does have the advantage of supporting host maps, which gets me a bit closer to what I'm looking for. The unfortunate thing that remains is that this is NFS instead of a modern protocol. If this were somehow backended on sshfs, I suspect it would be quite useful. Of course, sshfs is missing the concept of shares but that's not a showstopper by any means. Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.
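If I remember right, the hosts map is just one line in auto.master, and then every export of every reachable host shows up under /net on demand:

    # /etc/auto.master
    /net  -hosts

    # then, for example:
    ls /net/server                   # lists that host's NFS exports
    vi /net/server/share/file.txt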

I have no idea how well either will work in practice. Modern Linux on the desktop is a very different environment than the one the automounter and NFS were built for. The systemd automounter looks like it serves a very specific purpose and can't currently do what I want.

Maybe all we really need is a modernized automounter and/or some extra features in systemd's automounter. These could lead to "vi /net/server/share/file.txt" working as expected, which, quite honestly, is basically the same as what I suggested earlier.

[0] https://www.freedesktop.org/software/systemd/man/systemd.aut...

[1] https://linux.die.net/man/8/automount

[2] https://linux.die.net/man/5/auto.master


> I also explored the good old automounter[1][2], but it has a lot of the limitations that systemd's does. It does have the advantage of supporting host maps, which gets me a bit closer to what I'm looking for. The unfortunate thing that remains is that this is NFS instead of a modern protocol.

What limitations affect you?

(At home, I have Linux running on an HP MicroServer as my NAS; it exports filesystems via NFS. Other machines run autofs with the hosts map, so for example my wife's desktop - and mine for that matter - auto-mounts NFS shares on-demand and she can open any file directly in any application by accessing /net/$hostname/$path).

NFSv4 is pretty modern ...

I believe this should also work for CIFS, if the server side supports the Unix extensions (to do user mapping on a single connection), but I haven't had a chance to try it yet in my limited time at home.

> Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.

If you have Kerberos setup, NFSv4 does the right thing ...

If you don't have Kerberos setup, then you're probably ok with just normal NFS user mapping.


Interesting, I'll have to give automount another look.

The last time I tried it was years ago, so I can't remember what limitations I found. If I get a chance to do this in the near future I'll report back.


gvfs does some of what you ask. I guess you could trick open() with LD_PRELOAD.
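If memory serves, it looks something like this (share name is a placeholder; older systems use gvfs-mount instead of gio):

    gio mount smb://server/share
    # the FUSE bridge then exposes it to ordinary applications under
    # /run/user/$UID/gvfs/, e.g.:
    ls /run/user/1000/gvfs/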

For the dbus/polkit authentication prompts, I've seen it work on the command line but have no idea how it works. If anyone wants to donate, I'll spend a day and half a bottle of good whiskey and come out with a blog post.


If all their users are threatening to switch to some AWS managed service because it is better for them, then perhaps the software should fall into disrepair and the developers should get other jobs. There isn't anything wrong with this, it happens all the time.

It's probably true that Amazon is able to accomplish this only through strong-arm tactics, but it also seems unfortunate that Redis Labs feels the need to resort to similar strong-arm tactics in retaliation. It doesn't invite anything besides more pressure from Amazon and companies like it. Nobody wins in this scenario, especially not the customers, who now have no refuge because both competitive offerings are trying to rope them into participating in a turf war.


It's a free-rider problem - if large cloud companies fork every successful open source infrastructure project, don't contribute back, and pull away a significant chunk of users, then there's a lot less incentive for other companies to invest in future open source development, because they'll have fewer users and get fewer code contributions.

Potentially that shifts things from a good equilibrium where everyone reaps the benefit of many people contributing to the same projects, to a bad equilibrium where every major cloud player develops closed-source products separately.

I don't think it affects all open-source projects, particularly not small ones or ones where someone is scratching their own itch. But it's hard to build certain types of complex, production-quality infrastructure without full-time employed developers working on it.


What jobs? How can the developers get other jobs if no one is paying for their open source software in the first place?


I don't understand, there are still lots of companies that fund FOSS. Publishing FOSS doesn't mean you can't go out of business, it means customers get an additional layer of protection from you going out of business or deciding you don't want to be in a certain market anymore, as Redis Labs has clearly done here.


Sure, those companies make money, which is exactly why people want to be able to make money while working on FOSS.


You may consider that WSL is to Cygwin as Wine is to Winelib.

