
I'm glad the post states that Roundcube will stay an independent product.

Roundcube is on a whole other level in terms of stability and robustness compared to Nextcloud.

I'm also glad that the current Nextcloud client will be replaced, because it's not very good right now.




12-18 months before that changes.

No company in the world is going to maintain two separate software products that compete with each other. They will be merged; my prediction is 12 to 18 months.


Which sucks, because Nextcloud is an ugly, heavyweight beast of a suite to run, whereas Roundcube I'm happy running on my lightweight el cheapo VPS.


Nextcloud looks clunky but it certainly isn't heavyweight


Isn't it PHP-based? I worry about anything written in that turd of a language.


I guess you're still thinking of PHP as it was at version 3 or 4... we're at version 8.3 now and it's definitely not the same thing, both in terms of the language and its performance.


I remember looking at some new features and was so hopeful the language would get better, but they had some super cargo-culty take on something borrowed from another language that completely missed the point. It's a horribly done me-too language whose developers don't fully understand what they're "me too"-ing.

7.0 had some issues so bad that it's almost impossible to find it anymore; it seems like they tried to erase it from the internet. The language itself is an attack vector.


But so is Roundcube.

OT: please (re)read the HN Guidelines. https://news.ycombinator.com/newsguidelines.html


You saying I'm flamebaiting? I doubt anyone is going to be offended by PHP being trash, especially anyone that has spent any serious time working with it.


Then I wouldn't use Roundcube either.


What?

You said "i'm happy running it on my ...vps."

And now "Then I wouldn't use Roundcube either." So you're not running roundcube then?

Also, love how you said you loved roundcube because it works on "lightweight el cheapo VPS", and then backtracked once you found out it uses PHP.


I think you think I'm someone else.


You're correct, I'll retract my statement.

Both start with coll/cooll, I've got to increase my zoom apparently.


Honestly it's a mistake I would make too.


Heavyweight? I run it (with all the collab-suite features, photo galleries, and AI integrations) on a single RPi4 alongside several other applications. But then I only have a handful of users...


Nextcloud Mail won't be replaced by Roundcube!

"Neither will Roundcube replace Nextcloud Mail or the other way around. ... Nextcloud Mail will evolve as it is, focused on being used naturally within Nextcloud."


Will it be like Microsoft saying that VS Code wouldn't replace Atom after the merger? Not that I like Atom, quite the opposite.


Atom was my first proper Editor. I miss it, even though there were a lot of bugs. It was so much fun finding all the cool community-made packages and trying them out.


Thanks for the reminder. I still expect Roundcube to become a well-maintained alternative for E-Mail clients within Nextcloud, right?


[flagged]


Please ... don't do that anymore.

When I want to "enjoy" memes, I'll go reddit, not HN. Let's keep it clean.


That's a relief.

We just swapped out our old webmail system (made from twigs, mud and spit) for a nice and elegant Roundcube install with custom plugins and I was already dreading having to change it.


I really tried to make Nextcloud work for me, but it was too much. I'll pay the enshittified Dropbox premium soon.

Some bugs I encountered within a few hours of testing and trying to make it work:

The Mac auto-update installs a version incompatible with my OS; the website offers only the new, incompatible version and an old one that also doesn't work (the OS cannot scan the app). The solution is to find a suitable version on a hidden FTP server, which is user-unfriendly.

Some files had modification timestamps of 1970-01-01, which causes obscure sync issues on Mac. Either run some arbitrary database scripts to fix this, or, more simply, 'touch' all affected files.
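The 'touch' workaround can be sketched like this (demonstrated on a scratch directory; point `find` at your real sync folder instead, and note the path and the 1971 cutoff are illustrative):

```shell
# Files whose mtime is stuck at the Unix epoch get a fresh timestamp
# so the sync client stops tripping over them.
demo=$(mktemp -d)
touch -d '1970-01-01 00:00:01 UTC' "$demo/stuck-file"

# re-stamp anything with a pre-1971 modification time to "now"
find "$demo" -type f ! -newermt '1971-01-01' -exec touch {} +
```

This avoids poking at the database at all; the client just sees a sane, recent modification time on the next scan.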

The Windows client consistently shows random negative byte counts, hangs, and freezes.

The Windows client gets stuck in a loop of calculations and transmissions. Also, reinstallation is impossible because the AppData folder isn't deleted during uninstallation.

A successful complete reinstall re-downloads all existing files individually, creating conflicts between identical(!?) local and server files. Why is the file hash not checked before the download? It's frustrating and seems poorly designed.

All of these bugs have open GitHub issues I didn't bother to include. Some have had open PRs for years; the last bug has been open for five years now.


If Dropbox is all you need you might be satisfied with Syncthing. I have used it for a week now, it works well and I have the warm fuzzy feeling that nobody is using my data to make a few bucks (I'm self hosting it on a home server).


Syncthing is great, and I use it at home. For a robust multi-user alternative to Dropbox (or *cloud), I can also recommend Seafile. I replaced Dropbox with a self-hosted version of Seafile and have never looked back. Also, for a fantastic mail server solution with a great webmail client, look at Axigen. Their free version is more than enough for a personal server, and you can use Amazon SES for outbound mail to avoid reputation issues. I host mine at Linode and love it. If you have a business need or are larger than the limits of the free version, their license costs are quite reasonable.


Not only that, you don't even need to "host" Syncthing. Being P2P in nature, it can just run on whatever computers you want to sync to, directly.

That's pretty cool.

The only thing I'd really want for Syncthing is some kind of simple interface for my desktops (all running SwayWM.) There's a GTK app that I use on my Pinephone, but it's a little janky. I mainly just want to be able to know that I'll get notifications when there's a conflict or error. (Dropbox style file emblems in file explorers, showing the state, would be nice, too...)


You still need a node somewhere that is always available, otherwise your device cannot sync when your other devices are offline.

My wife and I had such a setup for years with Resilio Sync. But life is busy enough without maintaining yet another thing, so we're happy to fork over the monthly fee for Dropbox Family.

Ideally I'd switch over to some other sync solution, because Dropbox is somewhat overpriced. But we've had bad experiences with Google Drive and OneDrive for local sync in the past.


Well, you only need an always-online node if you actually need syncing to happen all of the time. Not everyone does; often it's enough to sync opportunistically. An always-on node mainly matters for data that is mutable and active; for me, that's my KeePassXC database, which I keep in a Syncthing shared folder. For that, I use my NAS, although obviously not everyone has a NAS.

But that's the thing. Especially compared to Nextcloud, Syncthing is not like most "self-hosted" software. Because a node is a node is a node, and because it's relatively lightweight, it literally doesn't matter what you run it on. You can use a Raspberry Pi, an old phone, or an old laptop; anything you can attach sufficient disk and a network to can be a Syncthing node. And if it catches fire, it doesn't really matter, since every node is equal: you can just add another node at any time.

So a lot of people think Syncthing is another thing you'll have to worry about and maintain, but it's not. It's one of the few pieces of software that I expected to have to deal with a lot of extra work to use, but then it wound up being dramatically easier and more flexible than I expected. I worry about robustness when it comes to something as complex as cross filesystem syncing, but Syncthing has never lost my data. I have backups turned on on most nodes for the important folders, but I've never consulted them before, because I've never needed to.

Surely it is possible to lose data with Syncthing, or otherwise create a headache. However, from my point of view, it certainly seems to be among the most reliable and lowest effort ways to sync stuff across devices. I haven't had to spend almost any time maintaining Syncthing, and I don't have to worry about limits. I just need one device with a big enough disk, then I can create however many shared folders are needed to get the granularity I want.

Syncthing also has a pretty cool encryption feature. It is considered "beta" still, so I only use it in "trusted" scenarios, but it works great.

When I started using Syncthing, I only intended to share some document files between my desktop and my laptop. Now I use it to sync my Keepass database, files between some servers (think seedbox etc.,) multiple different documents folders including some for collaborative projects, and even a couple of other things. So it really wound up over-delivering for me.

I'd strongly recommend people, even people who already feel like Resilio Sync wasn't a good fit, to just try to set up Syncthing before resigning to Dropbox. Comparatively, I think Syncthing is simpler to use and more robust than basically any other solution that isn't Dropbox.


If you run it on a Pi, do you run it on Portainer or anything like that?


I use an old phone (with Resilio) as such an always-on node.


I might have been too stupid to figure it out, but I found Syncthing unusable because I couldn't set up a basic "backup" style sync. That is, anything I added to my phone folder would get synced one-way to my computer... and anything I deleted from that folder would also get synced one-way to my computer. This forced me to maintain my entire photo library on my phone, which of course is exactly what I was trying to avoid.


Would it work to set the folder type to "Send Only" on the phone and to "Receive Only" on the server?

See https://docs.syncthing.net/v1.26.0/users/foldertypes


No, the issue is that send-only sends all modifications, including file deletion. I wanted it to only send new files, but couldn't find a way to do that.


https://docs.syncthing.net/advanced/folder-ignoredelete.html

It works well enough in a backup system, where the issue explained on the page isn't relevant.
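For reference, the option is set per folder; in the GUI it's tucked away under the folder's advanced settings, and in config.xml it ends up looking roughly like this (the folder id, path, and type here are placeholders for a receive-only backup target):

```xml
<folder id="phone-photos" path="/srv/backup/photos" type="receiveonly">
    <!-- adds and changes still sync in; remote deletions are ignored -->
    <ignoreDelete>true</ignoreDelete>
</folder>
```

Per the linked docs, ignoreDelete belongs on the receiving device, which matches the backup-server setup discussed here.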


Nice, I guess it does work. Annoying that what feels to me like a basic option is hidden away for power users. I don't understand what your comment is saying, though?


Well that option is hidden because of the very real problems it causes in "normal" syncthing use, as explained on the page.

I'm saying that you can disregard the warning in the case of a backup system (as opposed to the normal use which is full sync between two devices that both modify files).


I installed Nextcloud twice and quickly hit early bugs as well. I'm certainly not moving my 15 TB of client photos to it anytime soon.


Just use Apache2, its WebDAV implementation, and DAV clients.

Linux has davfs2, Android has FolderSync.

Apache2 is super streamlined for this, and has done DAV stuff for at least 16 years.
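A server-side setup of the kind described can be sketched as follows (a minimal example, assuming mod_dav and mod_dav_fs are enabled; the paths, alias, and auth file are placeholders):

```apache
# Lock database used by mod_dav_fs
DavLockDB /var/lib/dav/DavLock

Alias /dav /srv/dav
<Directory /srv/dav>
    # Expose this directory over WebDAV
    Dav On
    AuthType Basic
    AuthName "WebDAV"
    AuthUserFile /etc/apache2/dav.passwd
    Require valid-user
</Directory>
```

A davfs2 client can then mount the share like any other filesystem, e.g. `mount -t davfs https://example.com/dav /mnt/dav`.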


I was thinking of trying something low-level... maybe that? Does it support lazy selective syncing? Updating only ranges of files? Does it handle broken connections gracefully and recover without data loss?


WebDAV is a protocol, a standard. Read the standard if you're wondering; many things use DAV behind the scenes.

And apache2 is a very well established implementation of it.

Clients handle partial snags.

I wouldn't rely upon anything that syncs like this, without backups. Any protocol at all.

Of course, the same may be said for anything at all. Backups are king.


What are you using for backup?


I always wondered why photographers held on to negatives/raw files for so long. How often does the need to return years later come up, and can that justify the cost of storing all that in a way that's somewhat reliable? I'm not saying there aren't valid reasons to do so, and throwing the pictures on a few external drives isn't terrible, but to do it "right" seems like it would be super expensive!


Oh, I'd throw them away in a blink, if I weren't lazy:

After almost every shoot, people come back: "remember that one photo where I smiled at something? I'm very sentimental about that, because it's [some important thing to them]", which means holding on to every photo taken in the session. So no real deletes here, even if a shot came out technically wrong (blurry, blown out, etc.).

Those requests lessen, but don't die down completely. Especially with cyclic events, organizers have a habit of asking for things done exactly a year ago.

Some just say: "hey, I remember you taking a photo of me then and then" for their dancing portfolio in my case.

Especially for videos, which can bring a constant flow of editing requests for supercuts and the like.

Now, if I were really smart, I'd have some good way to archive after two years and delete after, let's say, three years. In practice, though, there are so many unforeseen circumstances that a "never delete anything" habit forms really easily.

It's just a lot easier and cheaper to buy another drive than to cull 10k photos every once in a while, especially if external confirmation is involved.


Totally understandable! As a service provider, you want to be able to fulfill those requests because it will make you their go-to person for life. Pretty cheap compared to the benefits you can get.

Despite constantly crowing at researchers in my past life that they would lose all their data ... it only happened once or twice, and both times it was related to theft, not drive failure.

I wonder if you could sell a type of "archive protection plan" as an add-on to your work. It's like $70 a year to store 500GB on Glacier. I am sort of assuming each shoot is 500GB? You could guarantee access for those customers who want it.

If we're being honest with each other, I would do the exact thing you're doing and focus more on my business. :)


You could also back up to something like AWS Glacier. The cheapest tier (access less than once a year) is $1/TB/month. Maybe if you kept thumbnails locally, you could push all the data up and only pull it as and when you needed it.


Have fun paying a fortune if you need to get those files again.


If you need all of them, and can wait 5-12 hours, that appears to be free to request and transfer? Or am I misreading[0]?

[0] https://aws.amazon.com/s3/glacier/pricing/#Retrieval_request... <- under "Bulk"


This does not include data transfer pricing: https://aws.amazon.com/de/s3/glacier/pricing/#Data_transfer_...


Free retrieval pricing, but not free transfer. Transfer is starting at $0.09 per GB :-)
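Using the figures quoted in this thread (the cheapest archive tier at ~$1/TB/month, transfer out at ~$0.09/GB; actual AWS pricing varies by region and tier), the back-of-envelope math for the 15 TB archive mentioned above works out like this:

```python
# Illustrative cost sketch, not official AWS pricing.
storage_per_tb_month = 1.00   # USD/TB/month, cheapest archive tier as quoted
egress_per_gb = 0.09          # USD/GB, data transfer out as quoted

archive_tb = 15               # the 15 TB photo archive from upthread
annual_storage = archive_tb * storage_per_tb_month * 12
full_retrieval = archive_tb * 1000 * egress_per_gb  # decimal TB -> GB

print(f"storage: ${annual_storage:.0f}/year, full egress: ${full_retrieval:.0f}")
```

Storage is cheap, but pulling everything back out in one go costs several times a year's worth of storage, which is the "fortune" being discussed.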


Aaaah I see :-) still, if it's only very occasional "do you remember that photo?" queries, that shouldn't add up to any significant cost. But a full retrieval - yeah. Interesting how the price ramps up!


For "expedited" retrieval it is $0.01 per request plus $0.03 per GB. Doesn't seem like a fortune. And there are retrieval options for 1/10th the cost.


Transfer is starting at $0.09 per GB.


The exact same as transfer out of normal S3? Don't get me wrong, I am as big of an AWS pessimist as one is likely to stumble across.

I guess I could reinterpret your original comment as "Have fun paying a fortune if you need to get those files [out of AWS] again."

instead of my original interpretation "Have fun paying a fortune if you need to get those files [out of Glacier] again."

Agree!


Because better hardware gets cheaper, and one's technique and software improves over time.

I was able to recover a 13-year-old photo I took with a D70s, which was extremely noisy. Using what I've learnt and state-of-the-art software (which is Darktable), I was able to extract a very nice photo out of it.

Also, as your style improves and experience piles up, you look at your "bad" photos and say "Aha, there's a nice angle here. Let's process this."

You can see some of my "Remastered" photos at [0].

[0]: https://www.flickr.com/photos/zerocoder/albums/7215770242956...


I wonder how you would react to DeepPRIME from DxO:

https://www.dxo.com/fr/technology/deepprime/

I have a license for some older version, if you want to throw a .nef at me


That looks pretty nice. I'll try to find some noisy files to send you. What's nice about Darktable is its "profiled denoise" feature.

Contributors send in calibrated RAW files per camera, taken at every possible ISO setting, so Darktable denoises your file according to your camera's profile at that particular ISO. The result is pretty impressive.

I have uploaded that particular image to [0]. Taken in 2006 and processed in 2020, after 14 years!

[0]: https://www.flickr.com/photos/zerocoder/53363865806/in/datep...

Edit: EXIF says 2005, but it should be 2006.


Have you tried Immich? I don't have much experience with it (I installed it yesterday evening), but it looks and runs great.


I second Immich. I used it for some group sharing (with non-registered users) and it worked really smoothly, though I had some random upload failures from the iOS app.


I spun it up once and it looked really good. Never got around to thoroughly testing it for that case. Might take another look.


I run it on TrueNAS Scale. Yes, it is all quite fragile, but I managed to get it into a state where it works as long as I don't touch it. It's all backed by a ZFS volume; barring significant hardware failure, I will be able to access my data locally. I use Storj as a remote mirror. Rebuilding would be a bit of a pain, but I have reached the E2EE personal-cloud ideal. If [better than Storj] cloud storage services offered BYOPK E2EE, I wouldn't need to jump through such hurdles.


There are services that host those for you, as an alternative to self hosting. I've been using hosted Seafile for a while and very happy with it.


> I'll pay the enshittified Dropbox premium soon.

I have been using Maestral for Dropbox sync on Mac for years now and it works great. The primary downside is that it doesn't have block-level sync because it's not supported by the API. The flip side is that you don't get the memory-hungry Dropbox app that embeds a web engine for some unfathomable reason.

I wish that Dropbox would bring back their old client that just did sync and not all the crap that I don't need.


The first and only time I had a server hacked was due to a RoundCube vulnerability in 2006.

Just a month ago there was news of a RoundCube XSS zero-day that was widely exploited (https://cyberpedia.medium.com/state-sponsored-cyberattacks-l...).

Don’t use RoundCube!


Can you recommend a webmail client which hasn't had 2 vulnerabilities in the last 15 years?


I can recommend not to host your own webmail in general in 2023 unless you are a Fortune 50 company.


Or you can just host it behind a VPN if you're not confident in your ability to manage/patch/monitor it


That wouldn't really solve XSS vulnerabilities (of course depending on what the vulnerability is).


Alright, can you recommend a webmail host who hasn't had 2 security issues the last 15 years then?


By this logic, you should recommend that people don't use computers.

All software has vulnerabilities. The trick is to install it in a way that mitigates most of the typical ones: use VMs, SELinux/AppArmor, containers, chroots, user separation, etc.


None of what you mentioned protects against XSS, which the parent mentioned. Things like a proper CSP might, but only if the application is built so that it doesn't depend on insecure eval/inline, and you can properly disallow fetches/connections to outside sources.

Everything you mentioned is about protecting things the app should not have access to. Many vulnerabilities are about intent (did the admin user really mean to truncate the db they have permission to truncate) or target (did the user really mean to export all my emails in an archive to h4x0r@yahoo.com).

If you at all store any sensitive data within the application either serverside or clientside you need to consider the security of the application itself, not just the sandboxing/isolation.
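To illustrate the kind of policy meant here (an example header, not any particular webmail's actual policy), a CSP that rules out inline script and eval would look something like:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'; frame-ancestors 'none'
```

Such a header only helps if the app's own JavaScript avoids inline handlers and eval, which is exactly the caveat about how the application is built.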


Agreed, this is a nasty bug in the software, which leaves it open to manipulation by anybody on the internet who can send you an email. It's a big failure of the Roundcube project; the developers probably don't care very much about the security of user data. The response to the bug report was "did something to fix this, closed", with no comment on what will be done to prevent this stuff in the future. That's disappointing for a flaw of such severity. I wouldn't be surprised if similar attacks on Roundcube are still possible.


Email is no longer in the class of "software you install"; it is in the class of "services you outsource to reputable companies who won't fuck it up".


Many people don't see it that way, as they outsource to Microsoft, who can't do e-mail properly and eff it up all the time.

It's more of "outsource to someone else" than "who won't fuck it up".


The old product is ok, yes. The company is awful, though, with their Roundcube Next scam.


I thought the problem with Roundcube Next was that it wasn't run by the Roundcube org but by separate devs from Kolab? They just made the mistake of endorsing the project.

Edit: Yep that's the official position as per this post on the roundcube github https://github.com/roundcube/roundcubemail/issues/6030#issue...



