
Your reply is (respectfully) analogous to: "Why do I care about this SSH security thing? I have always used telnet and it works fine."

In X11, the underlying expectation is that no program will misbehave evilly - and THAT has changed in Wayland, which is why a lot of Wayland is much more complicated and (some of it) unfinished.

A lot of "traffic copping" added, ensuring that the user is queried before privileged access is granted, the ability to capture raw video output of a window for example (through desktop portals) - or the ability to capture raw key strokes from another process windows (think password prompt).

I have been running Sway (on Wayland, obviously) for 18 months with almost zero issues.


I understand your idea and like it, but...

The resulting headlines (at least) are pretty misleading.

Examples:

https://newshavn.duarteocarmo.com/482308c4511050464f22b4f40d...

The AI produces the headline "Courses in crisis are being cancelled"; the correct headline would be "Courses in crisis are selling like hot-cakes".

https://newshavn.duarteocarmo.com/a416902f09c27c7411907a519b...

LLM: "... (danefæ refers to good weather or fine days)" Me: really not, it refers to the law that all significant finds (historically or financially) from the ground belongs to the state, but the finder receives a finder's fee.

I haven't read the articles line by line, just skimmed the headlines, and the two above really stand out as clearly wrong.


Oh yeah, and the "diapers" one is also not quite right:

https://newshavn.duarteocarmo.com/5be0716b00833a04bbd1406880... Inger Støjberg with new concept: One will be able to smell the dirty diaper "decidedly"

I can't read the article due to the paywall, but the correct word in the headline would be "sawdust" (not that it makes much sense without context; it is not a common Danish saying).


I would use Syncthing, which is open source: https://syncthing.net/.

After minimal setup, it just works(tm).

You have a normal directory in your filesystem that is synced to the other peers (which you set up in the "minimal setup").

I have been using it for years, and it works well. It has no problems crossing OSes (e.g. Windows -> Linux, Linux -> Mac).

For Windows I usually recommend https://github.com/canton7/SyncTrayzor, but vanilla Syncthing works fine too (just don't try to mix them!)
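If you want to script a quick health check once it is running, Syncthing also exposes a local REST API. A minimal sketch (assuming the GUI/API listens on the default 127.0.0.1:8384 and you have copied the API key from the web GUI's settings; the key below is just a placeholder):

    # Minimal sketch: ask the local Syncthing REST API how things are going.
    # Assumes the default listen address and an API key from the web GUI.
    import json
    import urllib.request

    API_KEY = "your-api-key-here"   # placeholder - copy from the GUI settings
    BASE = "http://127.0.0.1:8384"

    def get(path):
        req = urllib.request.Request(BASE + path, headers={"X-API-Key": API_KEY})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    status = get("/rest/system/status")            # local device id, uptime, ...
    print("My device ID:", status["myID"])

    connections = get("/rest/system/connections")  # which peers are connected
    for device_id, info in connections["connections"].items():
        print(device_id, "connected:", info["connected"])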


Have you had any concern exposing your syncthing on the internet?


No.


In Denmark, we have a common login system called MitID (translated: MyID), which is used by all banks, insurance companies, the governmental digital mail system (not email, but PDFs in a vault) and its still-alive, now-commercial-only predecessor. I believe this is required by law.

The system is 2FA, with either your phone or a hardware dongle proving your identity. It strongly authenticates you as a precisely identified person (the services only get a token, but it can also validate your person-number - think SSN in a US context).

It is quite strict about device security, recently failing on beta versions of Android - on top of, afaik, always failing on rooted devices...

The phone version also requires you to scan a continuously changing QR code (shown in an iframe when you need to identify yourself) twice to proceed. This is to ensure you are "physically" present where you are being authenticated (i.e. to block some phone scams).

Works pretty well and is reasonably secure, whilst still having some flaws.

In the future, I believe this system will work in some/all of the EU due to the coming eIDAS legislation...


That is really hard, as there are no such things as columns in PDFs, only text starting at different (x,y) positions.

Hence most (if not all) programs export the text in the order it appears in the file.

And if it is scanned, there is no text at all (but you could OCR it).
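A minimal sketch of what an extractor actually sees (assuming PyMuPDF, i.e. pip install pymupdf, and a hypothetical two-column file): every word comes back as a bounding box plus text, and it is up to you to guess which boxes belong to which column.

    # Minimal sketch: PDFs only give you words at (x, y) positions, never columns.
    # Assumes PyMuPDF (pip install pymupdf) and an example file "two_columns.pdf".
    import fitz  # PyMuPDF

    doc = fitz.open("two_columns.pdf")
    page = doc[0]

    # Each entry: (x0, y0, x1, y1, word, block_no, line_no, word_no)
    words = page.get_text("words")

    # Naive reading order: top-to-bottom, then left-to-right.
    # On a two-column layout this interleaves the columns - exactly the problem above.
    for x0, y0, x1, y1, word, *rest in sorted(words, key=lambda w: (round(w[1]), w[0])):
        print(f"({x0:7.1f}, {y0:7.1f})  {word}")

    # A crude column heuristic: split on the page's horizontal midpoint.
    mid = page.rect.width / 2
    left = [w[4] for w in words if w[0] < mid]
    right = [w[4] for w in words if w[0] >= mid]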


First of all, I find it kinda funny that you call banking, retail and insurance "legacy industries".

I would rather be without Netflix and Google than without banking and food... but to each their own.

While some of it is inertia (mostly because rewriting truly large applications is hard and expensive), there is also the point that most of those industries cannot easily handle "eventually consistent" data.

Not all transactions are created equal; the hardest usually come with a set of requirements called ACID.

ACID in the classic RDBMS is not a random choice, but driven by real requirements of their users (the database users, i.e. applications in the business sense - not the users as people). The ACID properties are REALLY hard to do at scale in a distributed system with high throughput. Think of the transaction rate of the Bitcoin system (500k/day with many, many "servers") vs. Visa (500M+/day) - the latter was basically driven by two (!) large mainframes (about 50 km apart) the last time I heard any technical details.

None of the companies you mention need strict ACID, as nobody will complain if different users see slightly different truths - hence scaling writes is fairly easy.


I have no expertise in this area, but two counterarguments popped into my head:

1: I wonder how many transactions the largest Postgres clusters (or other classic RDBMSes) handle per day. 500M+/day doesn't seem that incredibly high?

2: Google Spanner, which I would classify as cloudy, promises ACID guarantees at globally distributed scale. Couldn't that be used?

I've listened to a Swedish developer podcast where they interviewed an old school mainframe developer in the banking sector. He brought up similar points about the scale and correctness of database transactions, and it didn't feel convincing to me.

What do PayPal, Klarna, or maybe even Amazon give up by not using mainframes? Does any company founded in the last 10-15-20 years use mainframes? If not, does that mean "modern" companies can't compete in high-demand industries like retail or insurance?

I think it's much more the inertia point: the cost of rewriting these enormous applications is simply too large.


> What do PayPal, Klarna, or maybe even Amazon give up by not using mainframes?

Simplicity. Instead of having an enormous team to maintain multi-million-line Kubernetes configuration files that automatically spawn thousands of servers, plus an enormous team that tries to work around the CAP problem, you go grab IBM, tell them "if this mainframe goes down, you're dead; here's half a billion in cash", and you run all of your software on a single, massive machine.

Sure, it adds other problems, but to be fair I'd rather deal with a mainframe than yet another microservice that doesn't work because Omega Star doesn't handle ISO timestamps and it blocks Galactus.

> Does any company founded in the last 10-15-20 years use mainframes?

The biggest issues with mainframes are:

- High initial costs (although that's been changing)

- Nobody knows how to work on IBM Z mainframes

- The current zeitgeist about having a billion servers spread throughout the world because it's really important to have an edge CDN server for your cat trading card game

These industries didn't care about that because they could absorb these high initial costs, had the knowledge of the people building the mainframes, and were already highly centralized. Decentralization just adds more problems, it doesn't fix anything.

> I think it's much more the inertia point: the cost of rewriting these enormous applications is simply too large.

Rewriting just because it's not following the latest trend is garbage. These applications work. They're not going to work better because you're using Spanner now.


Remember, inertia goes both ways. As soon as you have a large system "in the cloud" or otherwise "distributed" (even if only in your shed), rewriting that system to a mainframe architecture is equally expensive.

Let's say that you save a hundred megabucks per year by going cloud (I am really not convinced that you save any money at all, but let's say). That is, what, 1500 man-years of work? No way you can rewrite even a simple "banking" system in that amount of work, and the second problem is that you need to feature-freeze the old system (and hence your business) while you convert. So maybe 5 to 10 years of lost competitive power on top of that.

Also, remember these are not "begin work; select * from table where id = 123; commit;" transactions. These have maybe 50 to 200 queries (selects and updates) in each transaction (does the paying party have the funds, is the receiving party blacklisted (a terror organisation, for example), does this look like money laundering... etc... plus a very detailed logging requirement by law). All of these MUST usually be in the same "snapshot" (in the RDBMS sense).
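A hedged sketch of what one of those transactions could look like against a plain RDBMS (Python with psycopg2; the accounts/blacklist/audit_log tables and column names are made up for illustration, and real systems do far more checks): the point is that every step, from the balance check to the legally required audit entry, has to commit or roll back as a single unit against one consistent snapshot.

    # Hedged sketch: a "real" transfer is many reads and writes that must all
    # see the same snapshot and commit (or roll back) together.
    # Assumes psycopg2 and a hypothetical schema (accounts, blacklist, audit_log).
    import psycopg2

    def transfer(conn, payer, payee, amount):
        with conn:                       # commit on success, roll back on exception
            with conn.cursor() as cur:
                # 1. Does the paying party have the funds? Lock the row while checking.
                cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                            (payer,))
                (balance,) = cur.fetchone()
                if balance < amount:
                    raise ValueError("insufficient funds")

                # 2. Is the receiving party blacklisted (sanctions lists etc.)?
                cur.execute("SELECT 1 FROM blacklist WHERE account_id = %s", (payee,))
                if cur.fetchone():
                    raise ValueError("payee is blacklisted")

                # 3. Move the money.
                cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                            (amount, payer))
                cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                            (amount, payee))

                # 4. Detailed audit logging, required by law, in the SAME transaction.
                cur.execute("INSERT INTO audit_log (payer, payee, amount) "
                            "VALUES (%s, %s, %s)", (payer, payee, amount))

Doing those four steps against eventually consistent replicas, at Visa-like rates, is where the distributed story gets painful.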

It makes no sense to talk about "transaction rates" without intimate knowledge of what a transaction actually does, especially as marketing departments have a tendency to use the simplest possible "transactions" to get a big number.

And in the end, it is only "money" they might lose, and any choice you make makes some future choices easier (and some harder). That is called path dependency.


I agree, it's obviously very hard to compare transaction rates, and I also agree that I have a hard time seeing companies currently using mainframes recouping the cost of migrating. If it works, it works.

But.

> Rewriting your system to a mainframe architecture is equally as expensive.

There was a new bank mentioned in this thread that actually started using mainframes from scratch, but other than that I've never heard of any "modern" fintech (or really any) company introducing mainframes. Organisations actually rewriting functioning systems TO mainframes must be almost unheard of (in the last 10-20 years at least).

If System z, COBOL and DB2 are so obviously superior, why do so many successful new competitors - in industries where they are the norm among older companies - choose not to use them?

I'm not saying banks should rewrite their stuff in Node.js (or Deno - even better, of course); it makes sense for them to stay.

I just have a hard time believing that mainframe systems are so technically impressive, to the point where some people claim it's almost impossible to build a similar system on non-mainframe technologies.


The software on mainframes only shines in reliability and in the fact that the machines have been built for money transactions from the start. For example, doing "decimal" math (think Python's decimal module) is as inexpensive as doing float math, thanks to hardware support.
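A tiny standard-library illustration of why money code wants decimal arithmetic at all - binary floats cannot represent amounts like 0.10 exactly, while decimal can; the mainframe angle is simply that its hardware makes the decimal variant as cheap as the float one:

    # Why money handling wants decimal arithmetic: binary floats drift.
    from decimal import Decimal

    print(0.1 + 0.2)                        # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)                 # False

    print(Decimal("0.10") + Decimal("0.20"))                     # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True

    # Summing a million one-cent postings:
    print(sum(0.01 for _ in range(1_000_000)))             # ~10000.000000018848
    print(sum(Decimal("0.01") for _ in range(1_000_000)))  # 10000.00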

The machines themselves are impressive hardware-wise and reliability-wise; for example, you can swap mainboards one by one in a full frame without ever taking down the machine (think RAID at the mainboard level, RAIMB?).

But the high start-up cost makes most startups go the other way. I am not convinced that scaling vertically is cheaper than scaling horizontally if you need the ACID guarantees... but it is hard to say.

The reason us old dogs say it is hard (not impossible) is the single-image and ACID requirements. There is no good way to do that in a distributed system (look up the CAP theorem).

So having a massive computer (with double-digit terabytes of memory AND cache, and truly massive I/O pipes) just makes building the needs-to-work stuff simpler.

As an example, a few years ago I attended (on my own money) a mainframe conference (not doing mainframe work in my day job). At that time the machine had more bandwidth to its roughly 200 PCIe adapters than a top-of-the-line Intel CPU had between the L1 cache and the computing cores - which meant that, given enough SSDs, you could move more data into the system from disk than you could move into an Intel CPU from cache...

Also, two mainframes can run in lockstep (as long as they are less than 50 km apart), meaning that if one of them dies during a transaction (which in itself is extremely rare), the other can complete it without the application being any the wiser. Try that in the cloud :)


I've worked at three banks and it's not about cost. It's because they aren't stupid.

Young developers often think that banks, insurance companies etc. should just rewrite these "legacy" systems because it will bring all of these magical benefits with no risk. Whereas older developers who have worked on (a) mission-critical applications, (b) major re-platforming efforts and (c) projects in a highly regulated industry know the score.

Doing just one is hard. Doing all three at the same time is suicidal. The chance of project success is basically in the single digits. And the risk of failure is billions in lost revenue and your future prospects in the company and within the broader industry ruined.


Not so sure; experienced tech managers are also very wary of vendor lock-in and tech debt, which mainframes give you in spades.


The vendor lock-in is a feature to them. They're not tech companies. They're banks. They're insurance companies. Lock-in means they can send a lot of cash to someone and the problem gets fixed, which is the only thing they care about. And the good news is, cash is also a thing they have a lot of. The cost of their tech infrastructure is a blip on the radar compared to payroll and the cost of their physical spaces.


I assume from this comment that you've never worked in the enterprise.

Because (a) major decisions like choosing a mainframe are not made by tech managers and (b) every company is built around vendor lock-in.

Who do you think companies like Atlassian, Oracle, Salesforce etc. sell to?


It's a little off to think that a mainframe was "chosen". Software and the companies that write it and support it "chose" the hardware for you.


Google internally announced a while back that Bigtable (which powers Spanner etc.) hit 1B queries/second -- there definitely exist systems with far larger scale (though admittedly this is with lower atomicity requirements and probably includes reads etc.).


VISA does 500m+ transactions per day and Spanner does 1B queries per day, but it's quite unlikely that what a transaction means on VISA is the same as what a query means on Spanner.


Spanner does over 1B queries per SECOND (but your other point still stands of course)


If it is read-only, shared-nothing queries, I am sure the hardware I have in my flat can do that as well.

The hard parts are updates in a shared system with a single consistent "view" requirement.


The hardware in your flat definitely cannot do 1 billion queries per second -- that requires a massive, global system, which can probably also support a shared system with ~100,000x fewer queries.


That sounds so impressive I googled it. It's actually 6 billion queries per second: https://id.cloud-ace.com/how-youtube-uses-bigtable-to-power-...

But then I mulled it over for a while, and it occurred to me that SQLite likely does orders of magnitude more than that planet-wide.

Spanner's 1 billion per second is more impressive: https://cloud.google.com/blog/topics/developers-practitioner..., assuming it's returning a consistent view across many tables. But the SQLite comparison still stands.

Visa claims 24,000 TPS, but in reality runs at a tenth of that. It would be interesting to see if Spanner could process the same 2,000 transactions per second. SQLite definitely can't.
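For a rough sense of scale, converting the per-day figures from upthread into average per-second rates (plain arithmetic, nothing vendor-specific):

    # Back-of-the-envelope: daily volumes vs. per-second rates.
    SECONDS_PER_DAY = 24 * 60 * 60            # 86_400

    visa_per_day = 500_000_000                # the "500M+/day" figure from upthread
    print(visa_per_day / SECONDS_PER_DAY)     # ~5_787 transactions/second on average

    print(24_000 / 10)                        # ~2_400 TPS, roughly the "tenth of peak" above

    bitcoin_per_day = 500_000                 # the Bitcoin figure from upthread
    print(bitcoin_per_day / SECONDS_PER_DAY)  # ~5.8 transactions/second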


Amazon does use mainframes. They aren't a payment processor and don't handle that part of the processing. That's why the likes of VISA get to charge their fee.


Google Cloud Bigtable and DynamoDB both appear to have ACID -- I don't see why mainframes would be better for this than cloud.

Bitcoin is slow because of the many servers, not in spite of them. Because of the design of the network, all servers need to receive every transaction, and servers need to be able to be pretty small, which limits the transaction rate.


Both BigTable and DynamoDB only support eventual consistency. That is a big asterisk in ACID for those technologies.


I don't think that's true?

Bigtable: https://cloud.google.com/bigtable/docs/replication-overview

> When using replication, reads and writes to the same cluster are consistent, and between different clusters, reads and writes are eventually consistent. If an instance does not use replication, Bigtable provides strong consistency, because all reads and writes are sent to the same cluster.

DynamoDB: https://docs.aws.amazon.com/amazondynamodb/latest/developerg...

> Both tables and LSIs provide two read consistency options: eventually consistent (default) and strongly consistent reads

> Eventually consistent reads are half the cost of strongly consistent reads
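To make the DynamoDB side concrete, a minimal sketch (assuming boto3, configured AWS credentials, and a hypothetical table named "accounts" with partition key "account_id"): the consistency level is chosen per read, and the default is the eventually consistent one.

    # Minimal sketch: DynamoDB reads are eventually consistent unless you ask otherwise.
    # Assumes boto3, AWS credentials, and a hypothetical "accounts" table.
    import boto3

    table = boto3.resource("dynamodb").Table("accounts")

    # Default: eventually consistent - may return a slightly stale item.
    stale_ok = table.get_item(Key={"account_id": "123"})

    # Strongly consistent - reflects all prior successful writes
    # (and costs twice as much, per the docs quoted above).
    fresh = table.get_item(Key={"account_id": "123"}, ConsistentRead=True)

    print(stale_ok.get("Item"), fresh.get("Item"))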


I heard that mainframes have redundancy built-in. /s


It's not a technical problem. IBM and whoever owns the other "undead" platforms aren't dumb; they price the stuff high enough to print money, but low enough that migrating away is a poor return.

In a big enterprise, the mainframes give CIOs leverage for other stuff too - they sell at high margin and IBM will “give away” or subsidize other services by moving money around in the backend.


"Legacy industry" is sort of a standard term; not sure why you found it funny. It is not meant to say they are less important.

From a quick google: "Legacy industries are those that have been around for a long time. These industries dominate a specific market and have not always had a positive approach to innovative ideas."


Legacy is mainly a marketing term for someone wanting a piece of an existing, big market. It also usually implies that you expect it to be closed down and replaced...


Well, this is Hacker News; when you label something "legacy", like code, it's never a nice thing or a sign of respect for the product or its creators - rather the contrary. TBH it's also the first time I'm hearing this term; it's simply not common, not even here, and much less among the general population.

And I have to strongly agree with OP: I couldn't care less about the fate of today's FAANGs, but I do care about those 'legacy' businesses tremendously.

As for the original topic - if it has worked for 3-4 decades, don't be the stupid guy who changes it. Tremendous risk to the core business with little to gain.


Your comment doesn't make sense. Why, on Hacker News, should I show more respect to Walmart and JP Morgan than to Google or Apple by calling the former "legacy industry" and the latter "big tech", and why are you so worked up about it?

Relying on the existence of one vendor with highly unportable and unmaintainable code carries its own risk and my post is asking whether it justifies the cost.


It also gives you the opportunity to use the mainframe's advantages to the very max - not just the "common feature set", but the very "best features" you can get out of your tech choice.

I do the same: I have run PostgreSQL for more than 2 decades now, and I don't care about portability to any other database, all of which I consider inferior (and yes, I do follow most of their releases).


"These industries dominate a specific market and have not always had a positive approach to innovative ideas"

Google is itself starting to sound a bit like that!


I'm not GP, but I found it funny because the term is misusing the word legacy. It doesn't fit other usage of the word or the dictionary definition of the word. I didn't look it up because I didn't think to; it looks like a normal use of an adjective, not a term.


Legacy means highly valuable and proven. Non-legacy means unreliable and highly unlikely to ever make money. Legacy software runs the world. Without it, Western civilisation would collapse.


For me, the magic trick was getting up at the same time every day, even on weekends (almost no exceptions). After a few weeks in a zombie state, I got naturally tired at an appropriate time - and now I go to bed when I feel sufficiently tired.

Also, I try to stay mentally far below full tilt for the last 60-90 minutes before my expected bedtime.


Yeah, I get that this would work, but sleep deprivation can drive me crazy. After a few days of irregular sleep times, I often get heart palpitations, skipped beats with an uneasy feeling in the chest, and sometimes panic attacks.


RDX is disk-based, not tape-based (that is the D in the name).

https://buy.hpe.com/us/en/storage/disk-storage-systems/remov...


Not to forget bookkeeping... in the EU, for example, VAT handling (and the associated payments to the government).


I would prefer the one who refuses to use AI. Chances are that that developer will be less intellectually lazy.

I have played around with ChatGPT and coding (I even have the paid version), but I fail to see it being used as anything other than a brainstorming tool (at least right now). It writes code that is often wrong, and even when right it has the quality of a very new junior developer.

But again, I also don't like IDEs (and use "unix is my IDE"), so it might just be personal preference...


> I would prefer the one who refuses to use AI. Chances are that that developer will be less intellectually lazy.

In my experience it's the other way around - the people who don't use AI assistance are much more likely to be the intellectually lazy type. They have either some ideological block, or they don't know how to use it effectively, or they're barely even aware of what's possible.

The people who do use AI assistance (at least the ones I've worked with) are pragmatic types who keep up with the latest developments in the field and want to do the best possible job they can in the most efficient way - and there's a visible difference in people's work performance. It would be really weird and counterproductive to disqualify someone from a job for this.


Same here. I would prefer someone who uses any and all tools at their disposal to accomplish the objectives. I also recognize that there is a chance of someone blindly using generated code (the new version of copy/pasting from StackOverflow). The results of that will surface at some point or another.


I would say it depends on what you put into the word "refuse". If the person was blindly refusing it, they are likely not good. But I do refuse it after careful consideration (100+ hours of actual experimentation)...

As said: as a brainstorming tool I find it kinda valuable; as a code-helping tool, not really.

