Chris Lattner left Swift core team (swift.org)
559 points by rayx2022 on Feb 21, 2022 | 487 comments



> It is obvious that Swift has outgrown my influence, and some of the design premises I care about (e.g. "simple things that compose")

As someone who was heavily invested in Swift, and an active member of the community from around 2015-2019, I'm a bit sad to see the direction the language is taking.

From the time I started experimenting with Swift, I absolutely loved the philosophy of the language. It seemed to really prioritize having a set of well-factored systems, each with their own very rationally designed interfaces, which could be composed to do really powerful things. It was an incredibly expressive language which allowed for writing code with a remarkable level of clarity - when writing Swift code I always felt I was writing at the level of the problem domain, not writing syntax. At the same time it offered really nice features for ensuring safety and correctness, like ADTs and best-in-class nullability syntax.

It was a language which sometimes seemed to move at a glacial pace, but the implicit tradeoff was that when a feature landed, it was for the most part very well thought out, and would add to the language with minimal negative impact.

A few years ago that seemed to start to change. From my perspective, some of the features added to the language to support SwiftUI - specifically property wrappers and function builders - very much felt rushed and forced into the language based on external deadlines. And imo they have a largely negative impact on the language: it used to be the case that you could look at a piece of Swift code and roughly understand how it would compile to assembly pretty easily, but with these new features there's a ton of compiler magic going on behind the scenes.


Languages not ruled by a BDFL are subject to every team member wanting to get their pet feature in, motivated by ego, fame, and career.

This happens with all corporate software, as people compete to (for instance) foist their new videoconference system into your calendar flow. That's sort of tolerable for end-user software, but horrible for programming languages where feature-feature interactions grow as N^2.


In the special case of Swift, it being the language of choice for iOS development, I imagine a lot of people at Apple also want a say in the direction of development.

One notable remaining exception to this trend is Go. They are committed to the "simplicity" mantra and so far they have stuck to carefully weighing and discussing any feature added to the language. I know that some consider it too simple and the "glacial" pace of development not exciting enough, but I personally appreciate it...


Go has Rob Pike as a sort of BDFL within google, no?

Apple it seems does well w/ a Scott Forstall or a Bertrand Serlet running cover for their engineers. Maybe not so much if there's a giant committee....


The idea and most of the initial development was by Pike, Ken Thompson and Robert Griesemer, and their explanation for the simplicity of the language was that no feature was included unless all three of them could agree it was necessary. Thompson has since retired, others have joined the "core team", and Pike doesn't seem to be that active anymore in terms of actual commits - however he may still have an active role in guiding the development, I'm not following the development that closely, so I can't tell. But the way I understand it, it's more of a team effort than a BDFL situation...


"Benevolent junta," maybe.


>Apple it seems does well w/ a Scott Forstall or a Bertrand Serlet

And before that Avie Tevanian.

After that it was Craig Federighi. And I don't like the decisions and directions since then, to say the very least.

Sometimes I still think about Bertrand Serlet leaving, the year that Steve passed away or 6 months after Steve took leave of absence. I wonder if he saw what was coming and simply decided it was time to leave.


Sometimes I wonder if Craig finds it harder to say "no" and put his foot down. And his priorities, for better or for worse, seem to be on developing new features vs. fixing bugs/robustness. Granted, he's had to oversee the forking of OS X into 3 active branches now (iOS, iPadOS, macOS), a processor transition, etc etc.

And this is pure kremlinology at this point, but I feel like Apple Music, at least the client part, has to go back under Craig and not Eddy....


Rob Pike is retiring, or perhaps already retired.


I totally agree. That’s why I’m completely pro the BDFL model. Committees don’t produce good products.


The BDFL model has its own flaws, primarily with longevity and succession (the "FL" part) - there are plenty of examples where the lead got bored, retired, or went their own way. Let alone how they deal with criticism and review (constructive or otherwise).

Committees work when they have a shared vision, and a base strong enough to keep them on that path.

It’s clear from Lattner’s statement that Apple have moved that vision, have not brought the group along with that change, and the result is wasted effort and a passive-aggressive approach.

I would posit that rather than operate as a committee, instead that leadership is being exercised by a limited few and the structure is being retained as a fig-leaf :(


[flagged]


Democracy is a way to get things done while minimizing the potential damage inflicted by actors that have a lot of power on a large scale for long periods of time.

It is not the most efficient way to do things. It explicitly sacrifices some of it to help prevent leaders with uncontested power from coming up with things like concentration camps for dissidents.


Democracy also defends against things like exploiting emergency powers to enact martial law. Wait...


Democracies allow for the dispersion of unlawful protests. They also crush insurrections, that isn’t limited to authoritarian regimes.

The point of civil disobedience isn’t to avoid jail, it’s to show you believe so strongly in your cause you’ll go to jail anyway, so people will listen to your cause and maybe sympathize.

If we skip the step where civil disobedience results in consequences, then we remove its signaling power and it just leads to a race to the bottom.


https://twitter.com/justintrudeau/status/205322201187106816

Justin Trudeau: "When a government starts trying to cancel dissent or avoid dissent is when it’s rapidly losing its moral authority to govern"

AKA Protest is terrific only when it doesn't happen in my backyard. Trudeau had no problem supporting "illegal" protests which blocked roads and essential supplies in other nations.


Well, that isn't specific to Trudeau. Most people's actions aren't consistent with their words.

People say all kinds of things without really having a vision or philosophy behind it. It's just how they feel at that particular moment. Unfortunately most societies offer more reward for easy answers and saying what people want to hear than for staying consistent.


There's a lot of shitty "single creator with all the power" software out there too, obviously...


Almost no one practices direct democracy because, yes, it's not good.


No, democracy is for governing human affairs in areas of value-based judgement, which is primarily politics.

Software engineering decisions are largely not value-based; they are technical, grounded in what the system was designed for and what the code is built upon. In such matters, democracy is an incompatible framework. My C++ compiler not compiling Java code is a matter of technical design and fact; it has no political implications.


Isn't it the fate of all programming languages? They start on a clean foundation, only to evolve into a bazaar of features.


Excuse Me Sir, Do You Have a Moment to Talk About O̶u̶r̶ ̶L̶o̶r̶d̶ ̶a̶n̶d̶ ̶S̶a̶v̶i̶o̶r̶ Lua?


In its 27th year of life, Lua introduced integers (in 5.3).


An interesting fact that could legitimately be used by both sides of this debate. :)


the language so 'clean' it doesn't include array.len()


As much as I hate some aspects of C, I also love it. I love it because the committees are so slow and conservative, I'm sure I can write in C89 now, I can write in C89 in ten years' time, and I don't need to care about new features someone might add or not.

You can call it a moot point, since the amount you can do in pure C today is very limited - you have to use at least a few libraries to do anything remotely useful even on very light systems - but in today's fast-developing tech world it's one of those things that really stand out.


For what it's worth, I find it remarkable what you actually can do with C with just a few libraries. For instance it's entirely possible to write a simple video game in pure C with a few well-chosen libraries for things like audio and asset loading.

There's just so much prior art out there.


Naturally the libraries for audio and asset loading aren't pure C, unless I missed some chapters in the ISO C document.


> As much as I hate some aspects of C, I also love it. I love it because the committees are so slow and conservative

You might find this an interesting perspective:

https://news.ycombinator.com/item?id=30404636


You either stay unpopular enough that no-one demands that you compromise your vision, or you become popular enough that there is constant pressure to add features that are important to some part of your community.


"avoid success at all costs" — Haskell does it right.

Having a core language helps a lot too. E.g. I don't expect Rust to go off the rails.


I am mostly a fan of Rust, but it's made some missteps in its evolution imo.

For instance sync rust is mostly nice to work with, but async is another story entirely. I have the impression that it was somewhat rushed into the language due to community demand, but it's not really a fully solved problem.


>async was somewhat rushed into [Rust]

My understanding is this is a FUD meme that has been passed around. Async isn't finished yet, but was debated for years and considered with incredible care and input from the community.

more context in the comments here: https://news.ycombinator.com/item?id=26406989

specifically https://news.ycombinator.com/item?id=26407565


I don't know if that's FUD or not, but personally I find async turns rust into a language within a language: core rust is interoperable with everything via C FFI, so you can build on top of existing stuff. But if you let async rust lure you with its (well deserved) appeal, you end up in a parallel universe where everything is pure rust.

Surely you want to use the pure rust async aware postgres client and the pure rust grpc implementation, but with async that's not a free choice.

You now have to deal with the fact that these (actually often quite well written) reimplementations often lag quite a bit in functionality.

For example, you think you can just use some boring technology like an RDBMS, but then you discover that pgbouncer doesn't really work well with sqlx. Similar stories for gRPC and other stuff.

Don't get me wrong, I'm not saying that's because the async implementation is bad or hasn't been well thought out. Other languages make other tradeoffs (e.g. Go detects blocking syscalls and increases the size of the thread pool, which makes it a bit easier to accommodate blocking FFI with the core async model, but that comes with a price).

What I'm saying is that in practice async rust feels like another language, a language built on top of rust, that looks like rust, that can call rust, but effectively creates its own ecosystem. The problem is compounded by the fact that it's still called rust, and often you can write libraries that can do a bit of async or not based on optional features.

It's probably all unavoidable and for the better, but it does add a source of fatigue that must be acknowledged and not dismissed as FUD.


I think async creates a mini-language within _any_ language, i.e. the whole what-color-is-your-function problem. It's turtles all the way down or nothing.

In return, of course, you get a style of concurrency that tends to be much easier to reason about and much less prone to subtle bugs than traditional preemptive multitasking.

Whether that tradeoff is worth it is obviously very dependent on the particular situation.


Does async rust prevent the use of C FFI? A quick google shows me nothing there.


It doesn't. You can call non-async Rust and C from async Rust just fine. You just have to beware of calling blocking things if you're not in a context where blocking is ok (i.e. a CPU work pool thread).

I'm not sure what the GP meant, to be honest. Async "colored" code does have a tendency to "infect" more and more of your codebase, but you can still write and integrate big chunks of non-async code (e.g. a parser or a network protocol) if you're mindful about your design.


Sure, you can technically call other rust code from async rust and thus you can also technically call C code via FFI, but if that code blocks you have a problem. There are ways around it but they are hard to use and create the pressure towards just rewriting the whole thing in rust.

I could be wrong though. The reason I like to discuss these things here is the opportunity to be proven wrong by someone who knows more and offers a counter-example


If you need to call blocking code from an async code just use spawn_blocking.

https://docs.rs/tokio/latest/tokio/task/fn.spawn_blocking.ht...
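
Roughly like this - a minimal sketch, where blocking_c_call is a hypothetical stand-in for a blocking FFI call:

  // Hypothetical stand-in for a C function that blocks (e.g. an FFI call).
  fn blocking_c_call() -> u64 { 42 }

  #[tokio::main]
  async fn main() {
      // spawn_blocking moves the closure onto tokio's dedicated pool of
      // threads for blocking work, so the async executor's worker
      // threads aren't starved while it runs.
      let n = tokio::task::spawn_blocking(blocking_c_call)
          .await
          .expect("blocking task panicked");
      println!("{n}");
  }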


Yes you can. And this will create threads on demand as needed.

But you need to know when you must call this and when it's ok to not call it.

The type system won't help you with that. But if you forget to call it you can cause starvation and if you call it too often you may create too many threads.

What I see in the ecosystem is that such tricks are perceived as hacks, and that it would be better if one could just write a pure rust reimpl.

Surely there are enough reasons that drive people to reimplement stuff in rust. I think this aspect of async nudges people even further in that direction, though.


I think you're right that async Rust isn't quite done yet (improvements on this front look to be a major focus for 2022). But I think the fact they were able to get an MVP released that so far looks like it can be built upon without too much technical debt is positive rather than a negative. Rust would be far less useful without the async feature.


async is a good example! I agree it is fine for the time being, but of lower quality than the rest of the language.

Still, the fact that Rust has MIR I think very much limits the damage premature async can do. If we get something better, it should be quite possible to extract "Rust minus today's async" and recreate it.

Conversely, I wouldn't be surprised if many of the misfeatures in Swift are impossible to extricate.


The recent post[^1] about the roadmap for Async Rust shows promise for the coming years.

[^1]: https://blog.rust-lang.org/inside-rust/2022/02/03/async-in-2...


I don't expect Rust to go off the rails either, but the core developer team drama lately and Mozilla stepping away mean I won't be surprised in the event that it does.


The Mozilla/Facebook partnership makes me glad they have distanced themselves from Rust. Obviously not a lot of good judgement happening at Mozilla.


Kind of, I lost count of how many extensions exist currently, and some of them are incompatible.


Fair. What remains to be seen is whether the political will to delete outmoded extensions exists.


> E.g. I don't expect Rust to go off the rails.

Unless the rails are too rusty :-)


I still love Objective-C. And to me—I'm so amazed that Apple added "Property Syntax" and "Properties". It's pretty easy to do much of this with Macros (which could have been baked into the system) and the language stays very simple.

You would have 2 lines of code per property but one fewer concept—and I think that is a much more important factor. To me—this was not the right way to add sugar.


As someone who started learning Objective-C only after Swift became firmly established, I have to say I strangely like the language. I picked it up fast and have had no issues adding features to a legacy codebase.

When I started, I thought it would be some complicated beast, but no - it's a reasonably simple and elegant C superset with the only disadvantage of some extra verbosity (but Apple APIs suffer from verbosity as a principle).

I have to wonder: why did Apple create Swift? Objective-C is quite nice, especially for C/C++ programmers. With Objective-C++, interop with C++ libraries is also terrific. Swift offers nothing like this yet.


>I have to wonder: why did Apple create Swift?

I asked this many times over the years, even in this thread [1]. Still no concrete answer. I still think overall it is a distraction.

[1] https://news.ycombinator.com/item?id=30418197


I thought it should be obvious. Apple was competing with Google/Android for mobile developers. For Android, you programmed in Java, which everyone learned in school. For iOS, this funny language with smalltalk syntax was a barrier to entry.

So Apple needed a nice language with C-like syntax to woo developers.


Plenty of Chris Lattner interviews answer that question.

https://atp.fm/205-chris-lattner-interview-transcript


I know Chris Lattner's intentions. And I have read or listened to all the interviews he did. I think the question should be: why was it necessary for Apple to bet on Swift? Something I don't think Bertrand Serlet or Avie Tevanian would have done.


That is the answer right there: they weren't the ones calling the shots, and those setting the direction bought into Chris's intentions.


Oh well. I guess this is another thing I could blame on Craig Federighi.


Because they wanted a modern language to power their platforms. Objective-C is ancient, and it shows.


> I have to wonder: why did Apple create Swift?

They wanted a safe, performant language that interoperates with Objective-C. Safety gets them fewer vulnerabilities, interoperability means they can gradually decrease (Objective-)C usage.

Also:

- opinions on the niceness of Objective-C differ (but the only arguments I’ve heard for why it would be bad are more or less “I don’t like the syntax” and “it’s verbose”, both of which, IMO, are weak; both are acquired tastes. I don’t think anybody is born preferring terse K&R C, for example).

- as you probably know, work is being done on C++ interop (https://github.com/apple/swift/blob/main/docs/CppInteroperab...), so that may improve.


It seems you like some ancient language which can't even concatenate a string without writing this dull statement with a bunch of symbols:

  NSString *string1 = @"Some";
  NSString *string2 = @" string";
  NSString *string3 = [string1 stringByAppendingString:string2];
Not to mention it didn't have ARC back in the day.

For goodness' sake, it was built nearly 40 years ago. Not sure there's any room to question why Apple and devs wanted a new language.

I was amazed that anyone would want to use this language before Swift, but people didn't have a choice.

There were efforts like MacRuby.


To be fair, while Swift includes nice convenience methods for strings, the new API comes with a considerable cost:

https://www.mikeash.com/pyblog/friday-qa-2015-11-06-why-is-s...


That article is from 2015. A lot has changed, though some people still complain.


Doubtful that the technical underpinnings of the API have changed all that much since 2015.


They very much have.


ARC has been there for almost a decade now. Also, if you wish to do heavy string concatenation, use an NSMutableString and you can simply do: [string1 appendString:string2];

It is amazing that people forget to simply use the right tool for the right job and blame the PL instead.


Chris Lattner has mentioned in a couple of interviews that there is a limit to how much they could improve Objective-C towards being a safe systems programming language, exactly due to its C heritage.


Yeah, no macros in Swift is a bit of an odd choice. It means that every would-be macro has to go through the language evolution process and be blessed by the Swift team. You can't release macro-like functionality, such as Rust's serde, without support from Apple.


It’s certainly something that they want. I suspect it’ll be added someday in a major release.


Write your own preprocessor?


As I see it, when we do build languages that don't de-evolve into a bazaar of features, they will be dependently typed languages.

Because a dependent type is a type that represents a computation. A word that symbolises a computation, a word which can be manipulated by hand and assured by machine – and vice versa.

Software is automation. Types are signifiers of meaning. Dependent types are the automation of the automation.

Using a dependently-typed language feels like handling a machine which is the result of having closed a fundamental loop of expression vs. meaning. And that is what it is.

It’s very philosophical-sounding! It has its roots in philosophy – in intuitionism, where math dug down and met philosophers who had dug down from the other side. Kind of.

Practically, the upshot is that things actually become simpler. Kind of because you can grip the tool everywhere. Un-dependently typed languages feel kind of like they’re part tool, part void. You usually can’t express things relating to the context you’re in, or looking at.
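
To make that slightly more concrete, here's the classic toy example - a Lean 4 sketch with illustrative names - of a type that carries a computation: a list whose length lives in its type, so the compiler checks the length arithmetic for you.

  inductive Vec (α : Type) : Nat → Type where
    | nil  : Vec α 0
    | cons : α → Vec α n → Vec α (n + 1)

  -- The result length is computed in the type itself, so the compiler
  -- checks that append neither drops nor duplicates elements.
  def Vec.append : Vec α m → Vec α n → Vec α (n + m)
    | .nil,       ys => ys
    | .cons x xs, ys => .cons x (xs.append ys)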


Dependent types are a false idol. It’s better to express propositions just as propositions, like in Isabelle/HOL.


Tell me more!

(As I see it – at least these days – dependent types just are. It’s very, very nice to just have higher-order unification and the things that sort of fall naturally out of having it.)


They are a false idol in that, when people learn about them they seem like they’re the answer to all problems, and they’re not. The fact that types are equivalent to propositions doesn’t mean that making all of the propositions about your code at the type level is a good idea. For me, it’s completely unnatural, clunky, and more verbose than just writing propositions as logical statements.

I found this talk from Xavier Leroy a while back too: http://www.cs.ox.ac.uk/ralf.hinze/WG2.8/26/slides/xavier.pdf. He is the main person behind CompCert, the formally verified C compiler. They do that verification in Coq, so I was expecting him to be a believer of dependent types. But he had this to say:

“Dependent types work great to automatically propagate invariants:

- attached to data structures (standard);

- in conjunction with monads (new!).

In most other cases, plain functions + separate theorems about them are generally more convenient.”

For that reason I prefer Isabelle/HOL as a theorem prover. The core logic is simpler, but you can express whatever you want as a theorem, without worrying about phrasing it within the type system. It feels a lot more natural.

That’s not without downside either of course. Proofs in Isabelle notoriously must match the structure of the code being verified, so changes to the code require proof changes. Liam O’Connor wrote an example of where he feels dependent types are better here: http://liamoc.net/posts/2015-08-23-verified-compiler/index.h....

Even with that, I’d rather have simple code plus simple propositions with complexity at the proof level. This will likely be another eternal holy war though.


Thank you so much for taking the time to write this thoughtful and valuable reply!

It helps me see what I think I’m seeing, or rather to define it. It has also opened up perspectives I hadn’t seen.

I’m coming to dependent types from here: “simple, clear code good yes program program argh I can’t express a very distinct thought without escaping to another language layer or generating code using string concatenation”, and from here: “my data structure is simple and clear but argh it needs a handwritten parser and serializer and it needs to be maintained and argh why am I writing a parser AND a serializer it should be just one bidirectional definition? and argh why do I need to write it for each output/input format?”, and from here: “my code is simple and clean and my variables are well named and my tests are well defined and my documentation is well written but argh why can’t I just fold the mechanics that the tests define into the code? as proofs? (and… argh? why can’t my variables and documentation and method names be checked against the code?)”.

It’s metaprogramming that I’m thinking about. And most programming ought to be programming. But we definitely need metaprogramming, and it needs to be understandable and composable and simple and clear. And I don’t know if dependent types are a complete solution to that, but I do think that they are necessary for it.

That-which-is dependent types, which by definition is a computation of what it is and is a proof of what it is. Those philosophical terms finally become practically grounded and practical help in a lot of the work I find myself doing.

I’ll certainly look for the false idol too! My sincere thanks.


  > Software is automation.
software is a lot more than just automation


I don’t disagree!


what's an example of said language?


Idris, Haskell with some extensions, and Agda


C# will not rest until it has every feature of every language.


It's funny because it mirrors MS' approach to product development in general.


I sort of agree with this sentiment but I must say that the implementation of said features has been pretty sane so far, especially if you compare with C++.


Kind of, string? MyFunc (string really!!)

Looking forward to C# 11 code.


Are Java and C++ not the same? I think so. No hate for any of the three: I use all of them, and there are pluses and minuses to all of them!

Wider question: How can a language and its core library stand still? To me, standing still is death for any computer programming language ecosystem. Many languages are just getting started on the idea of "green threads" and "colours" (sync vs async). Some of this can be done purely with a core library using existing language features, but some evolutions are better done with language features.


Java is leaner than C# and C++.

> How can a language and its core library stand still

I guess by having a more abstract foundation and relying less on adding hacks like colored functions? Haskell seems to be like that.

Another thing is language extensions that you have to turn on to use. This makes deprecating stuff easier in the future, so a language can trim itself instead of growing forever.


Java looks leaner, except that anyone who really wants to master it also needs to understand JVM APIs for low-level coding (invokedynamic and friends), annotation processors, JVM agents, bytecode rewriting libraries, and - just like with C and C++ - how the many existing implementations behave....


Sure, but day-to-day Java doesn’t usually require that, any more than day-to-day development in Objective-C requires learning the details of the runtime.


C# has all of that, and more syntax. And I don't think anyone's ever taken the position that Java is as complex as C++.


I rather love C#'s pace of evolution.


Rust dropped the GC before reaching 1.0; it was the best decision back then, as the memory management story evolved. But right now, since it guarantees backward compatibility, only additive changes are allowed (although a lot of new features are coming without extra new syntax).


That's why you keep your language small, like Clojure.


That's generally the fate of all complex systems.


C seems ok no?


YMMV, but to me, C is a great example of a language that absolutely could stand for a few more features.

I would almost never write C++ (in the context of low level and performance-relevant code; I write a lot of things for a lot of stuff) if the type system was a touch more rigorous (why can't I specify, and have the compiler yell at me if I don't properly handle, "this pointer may never be null"? TypeScript can do this kind of type narrowing in its sleep!) and if error handling/lifecycle cleanup wasn't hazardous (I'm not even saying exceptions and try-catch, just make it easier to guarantee that a cleanup clause gets called when leaving scope, like Ruby's def-ensure-end).

As it is, whenever I finally hit the breaking point with C++ (which I write mostly from inertia because I know it pretty well) it's probably Rust for me.


That's exactly why Zig is such an interesting language to me. The premise of trying to write a modern C replacement - essentially what C would be if it were designed with the learnings of the past 40 years in mind - is super exciting.

I am a fan of Rust, but I am not 100% sold on it. The safety features and ADT's are nice, but I find it quite clunky in practice, and it is just such a huge language full of so many features. It almost feels more like a test language to try the concept of static memory management than a properly designed language in some ways.

I feel it's missing the quality from C/C++ that they are a very thin abstraction over assembly.


> a very thin abstraction over assembly.

I think this statement implies that it’s clear and obvious what assembly will be generated from a snippet of C/C++ code. But few people can understand everything that optimising compilers and modern processors do to normal looking code. An example of this lack of knowledge is seeing people disagree on if a snippet contains UB or not.

If C is so simple, why do people struggle to write it?


> But few people can understand everything that optimising compilers and modern processors do to normal looking code.

Well when I'm talking about an abstraction over assembly I'm not talking about optimizing compilers. That's another topic entirely. What I mean is, if you look at a block of C code, it's very easy to understand what the machine is doing.

> If C is so simple, why do people struggle to write it?

Do they? I think people struggle to write correct, bug-free code in C, but that's because it doesn't save you from yourself, and it's happy to let you do whatever the hardware will do. That's a much larger programmable space than the set of all safe correct programs.

But I don't think people in general have difficulty looking at a snippet of C code and understanding it.


> What I mean is, if you look at a block of C code, it's very easy to understand what the machine is doing.

So they simultaneously understand what the machine is doing but don’t know what assembly will be generated? Both can’t be true.

If everyone understood what the machine was doing, everyone would be able to look at a snippet and agree - “that’s UB, let’s not do that”. But they can’t agree. Because few people understand what the compiler and the processor will do.

The “thin layer of assembly” was true for the first generation of C compilers. But it hasn’t been true for a long time. It’s a complete black box now. Anyone who thinks that it’s straightforward isn’t being upfront with themselves.


> So they simultaneously understand what the machine is doing but don’t know what assembly will be generated? Both can’t be true.

I mean I can understand a reasonable mapping to what the un-optimized assembly would be. Compiler optimization is very complex, and is going to obscure the results in every language.

> If everyone understood what the machine was doing, everyone would be able to look at a snippet and agree - “that’s UB, let’s not do that”. But they can’t agree. Because few people understand what the compiler and the processor will do.

What? I think everyone can agree that UB is much harder to detect in assembly than in higher level languages than assembly, and assembly gives the most clear view of what the machine is doing. It's a very complex topic to create a system which detects and disallows UB automatically in the compiler - this requires a lot more complexity than a simple mapping of high level instructions to machine instructions.

> The “thin layer of assembly” was true for the first generation of C compilers. But it hasn’t been true for a long time. It’s a complete black box now. Anyone who thinks that it’s straightforward isn’t being upfront with themselves.

I don't know, seems pretty straightforward to me: https://godbolt.org


Let me rephrase. You say C is simple. I say Rust (for example) is simple. I can look at a Rust code base and say confidently - this code base has no UB in it, it has no memory safety issues in it.

Can you look at a non trivial C code base and make such an assertion? You can’t. Even simple looking C code could be translated into problematic assembly because such a transformation is technically valid. And it’s beyond the ability of anyone but an expert to guard against that.

C is a very useful language. Very important. Very fast. A great tool in the right hands. The world wouldn’t run without it. And it will remain useful and important and fast for decades to come, certainly. But it’s not simple and hasn’t been for a long time. Let’s acknowledge that.


I'm sorry I don't understand your argument at all. What does lack of UB and memory safety have to do with simplicity? Those are very complex features of Rust which require a very complex compiler in order to achieve. Also Rust is notorious for having a steep learning curve, and it takes time for even very experienced programmers to become accustomed to it.


> a thin abstraction over assembly

Do you agree that this statement implies 2 things

1. The compiler isn’t doing anything unusual or unexpected. It applies only basic, easily understandable transformations from C to assembly

2. An intermediate C programmer would be able to guess correctly most of the time what the generated assembly would look like. And thanks to this, such a programmer would be able to avoid most footguns.

But the compiler does unusual/unexpected things, and it’s hard to guess what assembly will be generated or what that assembly does, it’s not a “thin abstraction”. Would you agree?


I don't think this is the best way to understand this. C has been around for 50 years at this point, and there has been an enormous amount of investment and advancement in the realm of C compilers in that time, which has naturally resulted in complexity and esotericism in terms of how actual mainstream C compilers work. But that's not a metric of language complexity, it's an artifact of a half century of work on the topic.

I think a better metric is: an average CS grad with a little bit of background in compilers and assembly could reasonably be expected to be able to write a naive C compiler which covers say 80% of the footprint of the core language on their own in a matter of weeks.

What do you think is the size of the cohort of people who could write a naive Rust compiler, with borrow checking, ADT's, traits and non-lexical lifetimes? Even without some of the fancy bits like async you're already talking about grad level CS topics at the very least.


C and C++ are not a very thin abstraction over assembly language. The C virtual machine is very far from the actual hardware. And Rust is the same distance from the hardware as C and C++ are. Rust's core operations are, by and large, the same simple mapping to LLVM instructions.

I don't find the idea of a new non-memory-safe C replacement in 2022 very exciting. We should be moving away as an industry from non-memory-safe languages, for the obvious security and productivity reasons.


> C and C++ are not a very thin abstraction over assembly language

We can argue the thickness (or the thinness) of the abstraction until the cows come home, but

  while (*dst++ = *src++) ;
is a direct abstraction over

  L1:
    movb (r0)+, (r1)+
    bne L1
(assuming «src» and «dst» are both «char *» whose values are loaded into «r0» and «r1» registers, respectively; «movb» sets the condition codes from the byte it moves, so no separate test is needed) in the PDP-11 architecture that C was designed on and for. The instruction sequence is exactly 2x 16 bit words long. As well as

  *ptr &= 1;
becoming

  bic #177776, (r0)
and being a 2x 16 bit word instruction (assuming «ptr» is an «int *» loaded into «r0»; the PDP-11 has no AND instruction, so the complemented mask is cleared with «bic»). Pointer arithmetic and array design in C as we know them today were highly influenced by the addressing modes existing in the PDP-11 ISA, with many C abstractions having a direct correspondence to specific sequences of PDP-11 instructions.

C++ is much less of a hardware abstraction, specifically when it comes to the higher level language features.


Except that breaks down on non-PDP-11 CPUs, where many languages offer the same "low level" capabilities as C.

Or 8 bit CPUs like 6502 and Z80 that aren't even able to fully support C.


6502 and Z80 did not exist yet when C was conceived for the PDP-11 architecture.

Yes – historically – C has never been a perfect fit for 8-bit architectures, as it had been conceived for a 16-bit architecture. So what? There are C compilers for 68HC08 and 68HC11 MCUs as well.

Yet, C has outlived «many languages offering the same "low level" capabilities of C».


The reasons for its survival cross many axes; it wasn't just the language alone on its own.


It really was an elegant instruction set.


I don't see how you could argue that C is comparable in terms of level of abstraction to Rust and C++. For instance with Rust memory management is totally abstracted away from you as the programmer, which is a core part of what a computer is doing.

And if you index into an array in C, that's basically like saying `root memory location + stride * index`. In Rust it's calling a trait function which could be doing arbitrary work.

Rust and C++ are on similar levels of abstraction, but C is much, much simpler.


How is Rust memory management totally abstracted away from you? You have to opt in to every heap allocation, just as you have to call malloc() in C.

And it's certainly true that Rust has overloading and C doesn't, but that wasn't what I was getting at. The point is that C is defined in terms of the C virtual machine, not the underlying hardware. The C virtual machine is quite far from the actual CPU instructions.


In Rust you opt into every heap allocation with an abstraction like Rc, Arc, Box etc. With C you would have to implement each of those behaviors with primitives, because C is at a lower level of abstraction than Rust.


How is that different from C++? Since you've explicitly put C and C++ in the same basket, this argument doesn't hold any water…


It seems like you're trying to catch me with some kind of gotcha instead of dealing with the argument in good faith.


malloc is also an abstraction.


I never said C is without abstraction, only that it is a relatively thin abstraction. Are you seriously arguing that Rust is not at a higher level of abstraction than C?


> Are you seriously arguing that Rust is not at a higher level of abstraction than C?

Rust is clearly not at a higher level of abstraction than C++, yet in your own argument you've put C and C++ on the same ground…


No I haven't. I said Rust and C++ are on similar levels of abstraction.


Yet a few comments above in this thread[1]:

> I feel it's missing the quality from C/C++ that they are a very thin abstraction over assembly.

[1]: https://news.ycombinator.com/item?id=30419248


I also said this:

> Rust and C++ are on similar levels of abstraction, but C is much, much simpler.


I haven't looked at Zig much because I mostly noodle with lower-level languages with an eye towards game prototypes and the like, so I don't know much about it. Have anything you'd recommend reading about the language?


I like Zig because it feels like a refinement of C with all the knowledge of the years since C was created. Some of my favorite features are no global allocator, well-defined pointers (with optionals required for nullability), generics and partial evaluation at compile-time, and builtin features/functions for all sorts of operations that are only extensions in C/C++ like SIMD and packed structs.

Read [0] for a comparison among Zig, D, Rust, and C++, and read [1] for a deeper look at the language's goals. I personally really love Zig's type system [2], with a system for generics that is very simple and consistent.

[0]: https://ziglang.org/learn/why_zig_rust_d_cpp/

[1]: https://ziglang.org/learn/overview/

[2]: https://nathancraddock.com/blog/consistency-in-zigs-type-sys...


I'm going to give those a read. Thanks!


I've always understood that C was designed to fit a hardware budget.

We knew about many of these features back when C was created - but it wasn't really until the 2000s that computing horsepower caught up enough to use them in a universal way.


Hmm I've always thought of C in terms of "portable assembly". Not so much that features were omitted because they weren't performant enough, but because they strayed too far away from machine-primitive operations.


C itself is pretty far from the machine, too, except by accident.


Some of them would have been great even on 70s hardware. Proper array, slice and string types come to mind.


You mean the learnings of Modula-2 in 1978, but with a C like syntax?


Clang has a nullability extension and associated nullability sanitizer: https://clang.llvm.org/docs/AttributeReference.html#nullabil...


Yep! And it's great where you've got it, but the world is not Clang. :(


>why can't I specify, and have the compiler yell at me if I don't properly handle, "this pointer may never be null"

You can do so easily. A reference is a pointer that is never null.


> A reference is a pointer that is never null.

This isn't even theoretically true:

    void blah(int &x) { x++; }

    int main() {
      int *x = nullptr;
      blah(*x);  // UB: forms a reference through a null pointer
    }
You can definitely write a smart pointer that more or less provides some kind of guarantee about this (with a combo of runtime checks and typefoo) but references only provide a guarantee that they are not statically initializable to null, which is very different.


Compilers are free to assume x is non-null at the time it’s dereferenced, so isn’t it true by definition? Do you have an example that doesn’t rely on undefined behavior?


"Compilers are free to assume" is a fine thing for like, a loop, but the compiler isn't free to assume you didn't mean to dereference the reference in the code snippet I posted, it's just free to not check before it does.

So it will crash, whether that's undefined behaviour or not. The thing at issue here is that other languages have reference types that are more strictly statically guaranteed to be non-null. My point is that references are not a substitute for those, because holding them wrong is not only easy, it's extremely likely to happen in code of any reasonable complexity that mixes pointers and references (ie. almost all production C++ code).


C++ isn't C.


The GP was referring to C++

>I would almost never write C++ …

Edit: nevermind, I missed the point! GP is saying he wouldn’t write C++ if C supported these features.


C is beginning to increase its rate of breaking changes.

C23 is about to

- add keywords that were valid identifiers before

- remove K&R parameter syntax

- change semantics of the most popular function argument list declaration

- forbid representations other than 2's complement

https://news.ycombinator.com/item?id=30395016


The over-adherence to bloat in the name of backwards compatibility holds the language back - these are incredibly conservative changes that reflect practices which have already been in use for decades!


What new keywords? Everything new goes into the reserved namespace with macros to look like the C++ versions.


There’s a concept from compiler optimization called Local Reasoning that I have stolen because I think the name applies at least as well to ergonomics/DevEx concerns.

A good deal of bad code design comes down to not being able to predict what a bit of code is going to do without stopping everything, unrolling whatever built-up state you had in your mind (from the thing you were actually trying to do), and sitting down to become one with the code for a time. But at least when you’re unsure what the code is doing, you have a suspicion that that is the case. The worse sin by far is misdirecting you into thinking it does one thing when it does something else, or even the opposite.

We like to complain about extroverted business people interrupting us, but we seem to have very little guilt about doing it to each other by leaving these sorts of time bombs in our code.


> used to [..] you could look at a piece of Swift code and roughly understand

For all the critique of java, I think this is the one thing java got correct. It is (no matter how verbose or clunky) still easy to read and understand.


Not in the real world, though. And for similar reasons, too. Most real Java projects I end up working on/with are littered with annotations that do God-knows-what.


Enterprise Java code is filled with annotations from some random half-assed framework. While talking to developers I feel they think annotations automatically reduce code. E.g. a @Validation annotation somehow magically validates input data, whereas right in front of their eyes the code is still if/else conditions and string manipulations.

Of course one can't make them understand, as they are just following "best practices" enforced by employers.


Yeah, I would say prior to annotations taking over, the Java codebases I worked with were often quite understandable. Annotations have certainly clouded things though. Spring, for instance, is great conceptually but there's way too much magic and clutter when using its annotations.


Annotations are the worst at obscuring what's happening. The Java language needed to improve before the annotation/Javassist-based approach got so popular.


It's trivial to look up the docs to see what they do. It's not unlike looking up what a function call does.


In my experience, it's much more of a momentum killer and context switch to research an annotation than a function call.

First of all, in most IDEs, it's very quick to jump to a function's actual definition to investigate such questions as "What if both of the arguments are < 0?" or "Is null an acceptable argument?". With annotations, you'll often go-to-definition and be staring at an almost empty class that hopefully has a docstring. This is because the annotations, themselves, don't actually do anything- you have to go find the code that actually checks for the annotation and does stuff.

In my experience, the docs are often not good enough when I run into a problem. I was wrestling with some strange behavior with JacksonXML some time ago, and I had a very hard time figuring out what went wrong because Jackson has so many options and they don't actually all compose well, so it's not even about figuring out what one option/annotation does- it's about figuring out what happens when I set OptX=1 and OptY=4 at the same time. Unfortunately, the docs only tell me what is supposed to happen when OptX=1 and what's supposed to happen with OptY=4, but nobody has decided to document every single combination of every single setting in the library.


I can't tell if this is a joke. Have you ever looked at a production java codebase even once?


I was wondering the same. It's usually a game of "find the actual behaviour" among a sea of interfaces, frameworks and so forth. Go, I can see the argument for.


Golang is quite bad when trying to find an implementation for an interface, because it is quite easy to accidentally implement an interface even if you're not intending to. This also makes it much harder on the IDE to index everything.


I have looked at many production java codebases. I agree that some things can be complex, but that is typically a feature of a framework (java has many configuration heavy frameworks) and not the code aspects. But I haven't seen any production code that is complex simply because it is production java.

Could you give an example for your point?


I've seen an internal Java-based web application (a digitization of some business processes involving what used to be paper forms) in which nested elements in a web page corresponded with subclasses in Java. Way high up in the hierarchy, it had `HTML`, `CSS`, and `JavaScript` classes, each of which eventually descended from a `Document` class.

It was impossible to properly trace any behavior through. Every method zigged and zagged through the inheritance hierarchy multiple times as you traced deeper. And so:

> It is (no matter how verbose or clunky) still easy to read and understand.

The sentiment is taken well, but it's just not true. Credit your fellow engineers for the positive experience, not any particular language.


To illustrate the above point (a little hyperbolically) https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


But what they're referring to is that none of the code in that is hard to understand... unlike in C/C++. This is kind of like the same idea behind Go: you make the language simple and straightforward, and even though there may be a LOT of code, the code will be easy to understand.

And I would add that Go is obviously easier than Java here by design, but anyone could write a horribly over-abstracted Hello World or Fizz Buzz in any language given the motivation.


> And I would add that Go is obviously easier than Java here by design

Nothing in golang makes it easier to understand than Java by design; if anything the opposite is true - the gaps (e.g. no proper enums, no records, no pattern matching) make it more verbose and harder to get to the underlying logic.

That being said, you should see some of my employer's golang code with their web framework that they wrote and the 80+ line stack traces.


It really depends. Yes, you can more or less easily tell what each line of code is doing, but figuring out the purpose of everything in enterprise code can be a real challenge. Sometimes I have a feeling someone really wanted to make it as convoluted as possible, since the same thing could have been done in a much simpler way. I remember the time when EJBs were all the rage and I still can't understand why.


Ditto for Objective-C, which by convention handles verbosity in a more elegant way than Java imo.


Objective-C is a neat little language. Don't get me wrong - I would never go back to it, largely because of runtime errors due to duck-typing - but there were definitely things to like about it.

The best part was the perfect C/C++ interop. When I was working on iOS apps in Objective C, I found myself writing a lot of pure functional code using C and it was pretty neat to be able to integrate that with the OO stuff from Obj C so easily and with a clear dividing line.


> For all the critique of java, I think this is the one thing java got correct. It is (no matter how verbose or clunky) still easy to read and understand.

I think that this comment was written with a positive spirit, but that quote-chopping is incredibly meaning-distorting in a way that counters your point. The original poster stated that Swift "used to be the case that you could look at a piece of Swift code and roughly understand how it would compile to assembly pretty easily" -- you chopped after "understand". There are very few languages where it's /harder/ to understand how they compile to assembly than Java. The JIT compilers are both incredibly complex and incredibly varied, and almost all of them have multiple possible compilation outputs for the same code.

Java, as a language, has been relatively successful in making it easy to understand programmer intention when reading code, as your chopped quote implied -- but the same could be said of Python, Julia, and plenty of others. Swift is in the much smaller category of languages where you can understand how the code will /run/ -- along with C, etc. One could argue that Swift's greatest virtue is that it falls into both categories, but that's another discussion.


I think that "how it would compile to assembly" is a shorthand for "understanding what the code does".

Not many people these days can read and fully understand the 64-bit assembly code generated by llvm. I think it is less than 1% of Mac developers.

C++ also has a lot of magic going on. Even C can be tricky with an optimizing compiler.


I'd accept "how it would compile to assembly" as "understanding the machine-level behavior of the code," but again, I find this extremely lacking with Java.

I don't know what percentage of developers can read assembly, but the popularity of tools like Godbolt strongly suggests it's non-zero. In my own experience, all of the most skilled developers I've worked with have been comfortable digging down to the necessary level -- and doing that in Java, or most other JIT'd languages, is just not fun.

You'll notice that I very much didn't put C++ on either the "easy to understand intent" or "easy to understand behavior" lists -- while it's my preferred language and my primary language, ease of understanding is not its virtue. And while I agree that C compiled with a minimally optimizing compiler (CompCert, clang -O1, etc) is easier to understand than optimized code (for me, the sweet spot for understanding is as compiler that does good register allocation and constant folding, but only really does instruction reordering for memory ops), it's pretty rare that I look at the output of highly optimized code and am surprised or find it hard to follow. Some constructs (medium sized switches, for example) can be pain points... but most often reading assembly output from optimized C is either "yeah, that's about what I would have written" or "close, but you missed this optimization/intrinsic, I'll do it myself."


Come and join us in the Rust community! It's a bit more fiddly than Swift, but it's incredibly well designed, still has that slow and steady mentality, and has a lot of the same nice quality of life features that Swift has.


Rust has some very nice things going for it, but it really lacks the kind of design polish that a Swift programmer is talking about. To put a crude analogy in it, it’s kind of like comparing macOS with Ubuntu.


The problem is there are very few jobs currently.


> A few years ago that seemed to start to change. From my perspective, some of the features added to the language to support SwiftUI - specifically property wrappers and function builders - very much felt rushed and forced into the language based on external deadlines.

Yep. If anyone had doubts that the language was no longer the one Lattner designed, SwiftUI should've put the nail in that coffin.

Swift is an imperative, statement-oriented, language. In fact, I get kind of frustrated when writing Swift after spending some time with Rust: I just love writing `let x = if foo { 1 } else { 2 }` or `let x = match foo { ... }`, and it's ugly as hell to try to assign a variable from a switch statement in Swift, etc. BUT, that's okay- I'm almost sure that zero programming languages were written with my opinion in mind, and Swift is Swift.
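
(For the record, the workaround I usually reach for is wrapping the switch in an immediately applied closure - a toy sketch with a made-up enum:)

  enum Foo { case bar, baz }
  let foo = Foo.baz

  // The switch can't be used as an expression directly, so wrap it in
  // an immediately applied closure to produce a value.
  let x: Int = {
      switch foo {
      case .bar: return 1
      case .baz: return 2
      }
  }()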

But, SwiftUI is declarative, which just doesn't work with literally the entire rest of the language. So they added result builders and these weird, magical, annotations just so we can have a UI DSL.

Error handling is now inconsistent and annoying, too. The original approach was to use this quasi-checked-exception syntax where you mark a function as `throws`, and all callers are forced to handle the possibility of failure. The difference between this and Java's checked exceptions is that the specific error type is not part of the signature and therefore the caller only knows that they might get an Error, but not what specific type of Error.

They even have a mechanism for higher-order functions to indicate that they don't throw any of their own errors, but will re-throw errors thrown by function arguments. Clever.

Okay, fine. Pros and cons to that approach, some like it, some don't, etc, whatever. Except then they realized that this approach falls flat in some scenarios (async/promises), so we really just need to go back to returning error values. So they stabilized a Result type.

So, now we need to figure out when we're writing a function if we want to return a Result or make it throw. And, even more annoyingly, Result has a specifically-typed error variant! It's actually a Result<T, E>. Which is it, Swift team? Should immediate callers care about specific error types or not?
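
To make the tension concrete, here's a sketch of the same failable operation written both ways (ParseError and the function names are made up):

  struct ParseError: Error { let message: String }

  // Style 1: untyped `throws` - callers only know "some Error" can happen.
  func parseVersion(_ s: String) throws -> Int {
      guard let v = Int(s) else { throw ParseError(message: "not a number") }
      return v
  }

  // Style 2: Result with a typed error - ParseError is part of the signature.
  func parseVersionResult(_ s: String) -> Result<Int, ParseError> {
      guard let v = Int(s) else { return .failure(ParseError(message: "not a number")) }
      return .success(v)
  }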

Just recently, they landed the async and Actors stuff. Async is fine and great, and is consistent with the imperative syntax and semantics of the language, and it even supports `throws`, IIRC. But Actors? What the hell is that? That's totally out of left field for the rest of the language.

I used to really enjoy Swift in the 2.x to 3.y days, but it really seems like it doesn't even know what it wants to be as a language anymore, which is a real shame- it had a real shot to take the wind out of Rust's sails, IMO. (The number one "scary" part of Rust is lifetimes and the borrow checker. Swift has CoW structs and auto-ref-counted classes instead, which can be seen as more appropriate for higher-level tasks than writing system libs, etc.)


I totally agree. I've been writing varying levels of Swift code since the 1.0 days and I truly can't think of a time I've felt constrained by not having the actor keyword. The threading and DispatchQueue APIs are so rich and flexible already, I just don't really see the use case. Maybe someone who's worked more with them can chime in?


A good critique of libdispatch is here: https://tclementdev.com/posts/what_went_wrong_with_the_libdi...

I think the main issue with GCD is the potential for deadlocks in serial queues (which it doesn’t really help you with), and the related problem of thread explosion in concurrent queues.

Actors make protecting shared mutable state really easy, solving a big chunk of the reason you’d want to use semaphores and serial queues in the first place. If you stick with async/await/Task{}/actors, you’re guaranteed to only have the optimum number of threads that can saturate cores, and won’t have deadlocks. If you use Sendable properly and heed all the warnings, you’re a good way towards the kind of “fearless concurrency” that Rust gives you. It really is pretty great IMO compared to GCD.
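For anyone who hasn't tried it, a minimal sketch of the pattern (toy example):

  // An actor serializes access to its mutable state: no serial queue,
  // no semaphore, no manual locking.
  actor Counter {
      private var value = 0

      func increment() -> Int {
          value += 1
          return value
      }
  }

  let counter = Counter()
  Task {
      // Cross-actor calls are async, and data races on `value` are
      // impossible by construction.
      let n = await counter.increment()
      print(n)
  }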


I’ve been using Swift Concurrency for the last several months and while it has a lot of nice things going for it there’s still a lot of ways you can “hold it wrong” and get incredibly poor performance. If you block a thread in the thread pool you’re going to deadlock. It’s still pretty easy to create data races (possibly with mis-annotated or poorly thought out types). And logical races are absolutely still a thing, and I might even say they are more common because the messaging is that this’ll magically solve all your concurrency problems and it doesn’t do that.


> If you block a thread in the thread pool you’re going to deadlock.

I’m actually happy with this decision… it was very much done intentionally to prevent the kind of thread explosion that is all too common in GCD. The criticism of GCD I linked to was very skeptical of actors for this reason; if the implementation decides to add more threads to avoid deadlocks, you get the same performance pitfalls as in GCD, and thankfully that didn’t happen.

In practice, I’ve found that avoiding deadlocks is as simple as grepping for `DispatchSemaphore|DispatchGroup` in your codebase and eliminating them with extreme prejudice. If you stick to Tasks/TaskGroups/Task.sleep, there’s no real possibility of accidentally introducing deadlocks unless you try really hard to do so.

And yeah, logical races can still happen in actors, but it’s easy enough at a glance to tell whether you’ll run into one… if you avoid using `await` in a section of code that needs to run exclusively, you’ll be fine. (Even with `await` calls you may get lucky if there’s no actual suspend point happening, but a quick and dirty rule is just “don’t call await and your code will never be run concurrently.”)
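A sketch of the pitfall that rule guards against (toy example; the audit call stands in for any real async work):

  actor Account {
      var balance = 100

      func withdraw(_ amount: Int) async -> Bool {
          guard balance >= amount else { return false }
          // Potential suspension point: another withdraw() can interleave
          // here, so the guard above may be stale when we resume.
          await auditLog("withdrawing \(amount)")
          balance -= amount   // can now drive the balance negative
          return true
      }

      func auditLog(_ message: String) async {
          try? await Task.sleep(nanoseconds: 1_000_000)   // stand-in for real I/O
      }
  }

Drop the `await` from the critical section (log after mutating, say) and the interleaving goes away.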


> I’ve found that avoiding deadlocks is as simple as grepping for `DispatchSemaphore|DispatchGroup` in your codebase and eliminating them with extreme prejudice.

You’d think so but then you realize that framework code you call into might decide to do this and you wouldn’t know until it deadlocked. On a 6-core iPhone 13 this is probably not going to be noticeable, but that’s probably not true on a 2-core iPhone 6s…


Framework code you call into can block so long as it makes forward progress. Deadlocks happen if you lock a thread in the shared Concurrency thread pool, and expect some other code running in the same thread pool to do the unlocking.

But if you’re calling into framework code which is using GCD, the queues it dispatches into will be run on a separate GCD thread pool. If said framework code is using a semaphore or lock to block your calling (Concurrency) thread, then it stands to reason there’s another thread in the GCD pool which will eventually unlock it. (Or else it’s a deadlock no matter what you do.)

I haven’t come across any framework code which violates this, although maybe you’ve run into problems I haven’t.


I think the motivation for actors is that it's an abstraction which lets you write concurrent code with reasonably strong safety guarantees without adding as much cognitive overhead as a strict system like Rust. GCD is a really nice abstraction for async, but it doesn't protect you from shooting yourself in the foot if you're not careful about what thread is doing what.

That said, I'm really not sold on actors. I think CS as a whole hasn't really landed on the right abstraction for concurrency yet, actors seem a bit experimental, and GCD is plenty good enough in most cases.


Swift seems to me very similar to Rust in the sense that it's a language that tries to do everything, to be both low- and high-level. Rust leans toward the lower end and Swift toward the higher end. The difference is that Rust has a selling point, which is the borrow checker. Swift's selling point is... made by Apple? In that sense Swift might be closer to C#.


To be fair, I think Swift does have a reasonable selling point in that it's an easy-to-write language with ADT's, named function parameters and excellent nullability handling. It really can be quite nice to work with.

But I think it's over-sold as a systems/low-level language. Relying on ARC for all reference types creates a significant floor in terms of performance which makes it unsuitable for a lot of systems programming applications - unless you resort to unsafe Swift, in which case you're not really writing Swift.


Like Rust, Swift offers the performance of a compiled language with memory safety.


> And imo they have a largely negative impact on the language: it used to be the case that you could look at a piece of Swift code and roughly understand how it would compile to assembly pretty easily, but with these new features there's a ton of compiler magic going on behind the scenes.

What do you think the ratio is between people who want to understand what the assembly will look like and people who want a simple and powerful way to write apps? I think it's very close to 0.


I clicked on this thread because I knew exactly what I wanted to comment about, and was surprised to see that Chris mentioned exactly it in his message:

>After several discussions generating more heat than light, when my formal proposal review comments and concerns were ignored by the unilateral accepts, and the general challenges with transparency working with core team, I decided that my effort was triggering the same friction with the same people, and thus I was just wasting my time.

I stopped paying attention to Swift for this exact reason. I have actually submitted and implemented a feature in Swift and I think it was one of my most frustrating experiences with open-source projects. It seems that almost every thread in this forum devolves into people fighting over the most irrelevant details, and my proposal specifically took months to be submitted into review because a couple of users simply refused to back down from how they personally wanted it to work, ignoring the actual problem the feature aimed to solve. After a long time deflecting the comments, the feature was accepted the way I originally intended and I never entered the forum again.

Example aside, it doesn't take a proposal to see that almost every thread in the forum looks like this. Open sourcing Swift has benefits, but I think when it comes to the actual progress of the project, this particular democratic process was a mistake.


Has there been an actively developed open-source language both without a "benevolent dictator" AND without being bikeshedded to death by democracy?

(I wouldn't count C as actively developing, even though it is moving, glacially slowly; I'm thinking of things like Python, etc., or other languages that aren't "done.")


Rust


I think Rust does a _ton_ of bikeshedding. Here is one example[0] showing how much of a discussion is required for a relatively simple QoL feature.

[0]: https://github.com/rust-lang/rust/issues/62358


Looks like that one devolved a bit into a flamewar. I'm not sure it's representative of the average feature proposal.


And it looks to me like it was resolved pretty professionally.


I wouldn't consider Rust to fall in this category to be honest, as there are plenty of issues/RFCs where people were bike shedding for months or even years.


To be honest Rust has some of the most beautifully painted bikesheds I've seen.


I'd argue that even if Swift isn't more popular than Rust, it is certainly more visible. And it's probably just more popular, too.

Apple's marketing around Swift advertises the "open source" aspect, and there's a lot of money to be made making iOS apps, so there's naturally going to be a lot of noise around proposals/developments.


Perhaps it was a culture mismatch?

Mozilla has a long tradition of "open source" that Apple does not. So Mozilla were able to cultivate a community around Rust, where Apple laid down a lot of astroturf...


It would be interesting to read the thread if you could please share a link to it?



Key takeaway for me was the note that it "definitely isn't a community designed language". This is telling; I've watched a lot of neat languages over the years, and absolutely none of the ones without community involvement have grown beyond the sphere of influence of their primary companies.

I like Swift enough that I'm learning it to build my own personal tech-nerd-ideal podcast client, because I own Apple devices and want an app that works across macOS, iOS, iPadOS, tvOS, and watchOS. But I doubt I'll ever use it for anything beyond this one personally motivated project. Even if I release it as an app on the store for download or purchase, I don't know if I will ever be motivated enough to build anything else using it, because the scope is too narrow. Business work is converging on open stacks like React and Angular, and the dark horse of C#, whose latest release supports WASM web components backed by gRPC-Web and a C#-function-driven stack from front to back; even without SQL Server costs, this is a compelling ecosystem backed by PostgreSQL and other completely open-source tools.

But Swift remains Apple’s language for apple stuff and… while a profitable niche, it’s still a niche.

Edit: typo fix.


I’d much rather have a language specifically for creating apps than another generic language. It’s actually my opinion that Swift got worse once they started wanting it to be used for more things, like the server side.


The weird stuff came out of TensorFlow and SwiftUI, IMO. Distributed actors look like they’re not hurting the design.


Which sucks, because Apple had the chance not to repeat their mistake with Objective-C... but history repeats itself.

No matter how much Apple's market share in whatever market grows, developers want to be able to switch platforms and not have to think about the language.


> apple had the chance not to repeat their mistake with Objective-C .. but history repeats itself.

Objective-C has served them for nearly 40 years, building several different OSes over a multitude of CPUs and platforms.

If anything, Swift would welcome a repeat of this history.


You can build OSes in many different languages, and compile software to many different CPUs and architectures.

That alone does not make it good.


APIs designed for and built in Objective-C have been the mainstay of Apple application development ever since the Cocoa framework, stretching all the way back to NeXTSTEP in the 1980s. Even with the advent of Swift, most of today's iOS and macOS APIs carry the legacy of Objective-C. Much of development with Swift has involved attempting to use the new language for old APIs. Perhaps legacy alone does not make it good, but it certainly demonstrates that there has not been a need to rewrite all of it using newer languages, and that Objective-C has been reliable and venerable.


Objective-C is such a small and concise language. It doesn't get in the way of development in quite the same way Swift can delay building out a feature due to bikeshedding.


Vendor lock-in will always inevitably backfire, because developers and users don't want to be trapped. It might work to maximize profits in the short to medium term, but there's very strong financial incentives not to be stuck with a single platform. Vendor lock-in could mean the death of your product, or even your whole business.


Most new products are positioned to grow into a form of locked-in market that you can extract value from, drawing a box around their customers for the sole purpose of squeezing it in the future. This business strategy is a mind virus and has even overtaken the aspirations of bright-eyed entrepreneurs. The aim has fallen from the simple "get rich by selling a $1 item to 10 million people" down to "create a product where customers are trapped in a dependency relationship with the product by design, give it away below cost to push out alternatives, then flip it and squeeze them for as much as possible" (where the last part is omitted from the initial business plan but still implied, and enabled by outsourcing the dirty work to new management via buyout).

The primary goal should be to maximize value, and within that a balanced tension between maximizing delivered value vs maximizing captured value. It's reasonable to be compensated for the value you add, but it needs to be in service of maximizing value in general. If the correct hierarchy of goals is inverted and capturing value becomes the primary aim then it inevitably devolves into this antisocial, monopolistic, lock-in behavior.


I'd say at least 95% of Swift users don't mind at all that it's mostly an Apple language.


Well, and 95% of heroin users don't mind needles and constipation. It's trivially true that the people who end up using a product are mostly the people who are content with its tradeoffs. That doesn't say much at all about whether those tradeoffs are ultimately optimal.


I don't know about optimal, but I've been a Swift developer since day 1 and yet never heard anyone I know complain about Swift being too Apple-centric. It's typically armchair philosophers online who have purity concerns, not practical developers.


I don't have a problem with it being Apple controlled. I have a problem with it being presented as an open project while simultaneously Apple completely controls its trajectory and maintains private forks of everything, which are where the developer tools are actually built from.

If it was a closed process where neat stuff just dropped out of the sky each year, that would be fine. When features drop out of nowhere each year and then get laundered through ex post facto public "review", then I take issue.

If it was a closed process then we would expect that only features coming from Apple would exist. In a truly open process there would be facilitation for contributions from people outside the organization. In the Swift project as it is currently run, those contributions have withered on the vine; the core team doesn't particularly welcome or support anything that didn't originate internally.


This is a textbook example of what I meant by philosophers obsessed with purity.

And it doesn't sound like you're actually following Swift Evolution. A) Most of what happens is done in public, only rarely do they hide stuff until the last minute, like result builders for SwiftUI. B) As far as I know, they have never claimed that it's going to be completely open and 100% community controlled. The core team is mostly Apple employees, that is not a secret.


> it doesn't sound like you're actually following Swift Evolution

Nope, you're completely wrong about that.


Then why would you say something like this? It obviously isn't true.

> In the Swift project as it is currently run, those contributions have withered on the vine; the core team doesn't particularly welcome or support anything that didn't originate internally.

It's also a very niche objection to complain that it's neither fully open nor completely closed. Most people are totally fine that the development is mostly open, with some new features kept hidden for business purposes. The vast majority of Swift users see it as a tool, a tool mostly to write Apple software, and they are more or less pragmatic. Almost all additions to Swift have been very positive for people that use it in their day job.


I also have yet to meet anyone writing Swift for any reason besides "Apple made it and it works on Apple things", so we may be at an impasse here.


Hi, nice meeting you. Now you know one.

My current concern with Swift is the impossibility of using any code on Android in an officially supported way.


But is that different from using Kotlin on iOS?


Kotlin has Kotlin Multiplatform, a project to run non-UI components on both platforms. It's been there for a few years; Apple hasn't even started.


You can run non-UI Swift on Android too if you want. I don't know who made that possible, but I also can't see why Apple would sponsor Android app development, seems completely counter to their interests.


You can in theory, but the binding with the JNI world will be atrocious.

You're also pretty much on your own with the library ecosystem.

I agree that it's not in Apple's interest. But that's part of the problem: this language didn't start with the goal of being just an Apple language.


Objective-C was not originally designed by Apple (or NeXT)


I think Swift was a strategic mistake mainly due to the niche effect, even if it's a better language.

It would have been much more productive to team up with the C# team, as C# is a mature language, and a combined Apple + Microsoft would have been able to compete against Java (esp. with Oracle as owner of Java).


An Apple C# future is a neat idea, but I'm not sure how you square that with the (IMO correct) observation that allocation control really matters for quality of experience on constrained platforms.

Obviously iOS platforms are much less constrained today, but not having to run a GC is a pretty nice thing.

Maybe C# could've been extended into that universe (I know Midori had their own variant but don't know much about it) but it seems daunting to then make that compatible.


I agree with you, and I've long speculated that lack of tracing GC is the main reason why iOS devices don't need as much RAM. It's too bad that the options for implementing a GC-free GUI that can target Android are quite limited, and as a consequence, the low-end devices used by people who can't afford an iPhone are inevitably saddled with GC.


Apple tried to do automatic GC but did so very badly and with a language not designed for it, so they had a bad experience and were scared away.

The modern GC in Java is amazingly fast and, with a few tricks, likely to be good enough.


I'm fairly convinced that GC is the reason that Android phones need much more RAM and much more CPU to generally be worse than Apple phones in terms of performance.


You may both be right: Android isn't actually using the Java garbage collectors that are quite good; it has its own runtime.


I'm pretty familiar with the modern Java GCs, and they're very impressive, but at the same time having to do manifestly less work with ARC is probably good for responsiveness and battery life.


I'm not saying you're wrong about ZGC, but I will say everyone heard exactly that line also at the time of Android 1. And that was not a language "not designed for it." (And Android is still not using it.)


So why have Android developers been complaining for years about their apps running more poorly than iOS apps, especially contributing to battery drain?

Sounds like a lot of armchair generals debating in hindsight about any problem in anything that fits their dislike bias, rather than foreseeing the specifics before they happened.


You’re confusing Android with OpenJDK. The parent was talking about Java/OpenJDK’s GC implementations, not Android’s.


From the perspective of the user or app developer, they do not care. When they see an app perform slower on another platform, for whatever reason, they will complain. Maybe the problem was Java in Android (alongside its sheer fragmentation) all along. There are hardly any complaints about any of that for iOS.

No wonder they are preferring to move everyone to Dart / Flutter in their future operating system instead of continuing to use Java.

It is as if Android was designed to be a disaster.


Again, this isn’t Java’s fault. This is the Android Runtime’s fault. The Android Runtime is not Java or OpenJDK.


It's both of their faults, including the entire Android system design, and Google knows it.

It's evident that Google and many developers have had enough of the runtime, Java, the JDKs, and the whole system. Otherwise, why on earth would they be planning to move away from it all in the first place?

Once again, from the perspective of the user or the app developer, they don't care; Android still just doesn't cut it against iOS. No wonder Android is always in second place.


Java is fast but takes 5x-10x the memory of a proper AOT-compiled language app.


Swift needed perfect interoperability with all the existing Objective C code, and that probably would have been difficult to accomplish with C#.


It's arguably difficult with Objective-C today, let alone C#!


Rust would be a more realistic choice, as it's built on LLVM.


> Key takeaway for me was the note that it “definitely isn’t a community designed language”.

If you scroll to the top of the page with Chris' comment, this point is being addressed:

>In the coming weeks, we hope to introduce a new language workgroup that will focus on the core of the language evolution itself, splitting off this responsibility from the core steering of the project. The intent is to free the core team to invest more in overall project stewardship and create a larger language workgroup that can incorporate more community members in language decisions.


Another committee is sure to fix the problem with the existing committees.


> and absolutely none of the ones without community involvement have grown beyond the sphere of influence of their primary companies

C touches much more than telephony


> the root cause of my decision to leave the core team is a toxic environment in the meetings themselves. The catalyst was a specific meeting last summer: after being insulted and yelled at over WebEx (not for the first time, and not just one core team member), I decided to take a break. I was able to get leadership to eventually discuss the situation with me last Fall, but after avoiding dealing with it, they made excuses, and made it clear they weren't planning to do anything about it. As such, I decided not to return.

Oof. Yeah, that’s relatable. It’s hard to continue contributing volunteer efforts to projects you care about when you’re getting shouted at all the time.


It's very disappointing that such a talented person was pushed out, especially someone like Chris who seems to be a genuinely good person. It's all too common to see brilliant founder types be pushed out by power- and influence-grubbers who end up ruining the original spirit of the project. I read through one of the threads he linked that was producing "more heat than light" [1] and, yeah, it's easy to see why he'd move on. The person responding to Chris wasn't adding any value to the discussion, just being a pedantic troll. What's worse is that person was a moderator.

[1] https://forums.swift.org/t/if-let-shorthand/54230/188


Reading this thread for the first time, it looks to me like (1) everybody is fairly well behaved by the standards of an online forum and (2) Chris is probably using more inflammatory language than anybody else.

So maybe one of the problems is that at this point, he's no longer getting the deference from other community members that he feels he's still owed?


Swift was never about volunteering: it's an Apple product. Even if they tried to market it as open, it's Apple that manages the roadmap, with the usual secrecy, ...


Lattner was volunteering:

> For context, I left Apple over five years ago and the only constant in my life is that I'm always "very busy" :smile:. That said, Swift is important to me, so I've been happy to spend a significant amount of time to help improve and steer it. This included the ~weekly core team meetings (initially in person, then over WebEx), but also many hours reading and responding to Swift Evolution, and personally driving/writing/iterating many Evolution Proposals. As such, my decision to leave the core team last summer wasn't easy.


Fork it!

Call it Bird or Goose or something.

Pull out SwiftUI and keep it simple. Focus on the server market more.


That would actually be an interesting direction. Swift would be an amazing server side language.


IBM took a stab at it a bit ago - https://developer.ibm.com/languages/swift/

There is some neat stuff that you can do with it in docker https://hub.docker.com/_/swift


Obligate ARC is not a good fit for server software. Even obligate tracing GC, à la Go and all the JVM/CLR languages, gives you better throughput than that, though obviously doing neither is best.


I think Swift could work very well in a lambda/serverless context.

In that case, you'd be writing mostly functional Swift code with sparing use of reference types, which would mean you wouldn't hit ARC at all and you'd have static memory management similar to Rust.

That, along with Swift's expressiveness, ADT's and awesome type system in general would make it a great experience.

Kind of like the ergonomics of Python with an actual good compiler eliminating obvious mistakes like Rust.
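A sketch of the style I have in mind (toy example): structs and free functions, so there's essentially no retain/release traffic on the hot path.

  // Value types only: Points live inline in the array's buffer,
  // so no per-object reference counting happens here.
  struct Point { var x: Double; var y: Double }

  func centroid(of points: [Point]) -> Point {
      var sumX = 0.0, sumY = 0.0
      for p in points {
          sumX += p.x
          sumY += p.y
      }
      let n = Double(max(points.count, 1))
      return Point(x: sumX / n, y: sumY / n)
  }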


> in a lambda/serverless context.

Perhaps, but that's not server software, is it?


You wouldn't consider code executing in a lambda to be server-side? What is it then?


A software function, as in "functions as a service". Very different from the concerns of most server-side code.


Yeah I mean I think it's largely semantic. I think when most people talk about "swift on the server" they mean they want to write their server-side business logic in Swift, and that seems perfectly suitable in a serverless context.


Sorry, don't know enough of the theory behind this – why is that not a good fit?

Happy to read up on this if you don't have the time to type it up.


I’m not sure if it’s fair to dismiss ARC as a bad fit for server-side work in general, but the typical argument is that atomic retain/release operations become quite expensive when you share memory across threads. It’s easy to imagine, for example, how a global database connection pool accessed on every request across dozens of threads would have non-zero ARC overhead.

In practice, compiler optimizations reduce the amount of retain/release pairs emitted, but I have no idea how the resulting performance compares to GC languages.


It's mostly not the atomic op overhead that's expensive, though -- it's the sharing.

You can write shared-nothing algorithms using ARC objects that are local to a single thread or core, and while it will be slightly more expensive than RC objects, the N^2 effect of sharing between N cores won't occur.
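A sketch of what that can look like with Swift Concurrency (toy example, with a made-up accumulator class):

  // Shared-nothing: each child task owns its reference-typed state, and
  // only plain values cross task boundaries, so there's no cross-core
  // refcount contention on the accumulators.
  final class LocalAccumulator {
      var total = 0
      func add(_ x: Int) { total += x }
  }

  func parallelSum(_ chunks: [[Int]]) async -> Int {
      await withTaskGroup(of: Int.self) { group in
          for chunk in chunks {
              group.addTask {
                  let acc = LocalAccumulator()   // never escapes this task
                  for x in chunk { acc.add(x) }
                  return acc.total               // hand back a value, not the object
              }
          }
          return await group.reduce(0, +)
      }
  }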



Interesting, thanks for sharing.

Related thread on Swift forums [1] seems to suggest that the latest Swift compiler would generate code that performs a lot better than the Swift 4.2 version. I'm interested in checking that for myself.

[1]: https://forums.swift.org/t/swift-performance/28776


Ah I see very interesting, thanks!


Because it's too slow.


Great explanation, thanks...


I think he has enough on his plate.


> Call it Bird or Goose or something.

I'd cast my vote for Crested Shriketit. Or maybe Mionectine Flycatcher? Many-Coloured Rush Tyrant?

Incidentally, Wikipedia's list of birds is truly a wondrous place: https://en.wikipedia.org/wiki/List_of_birds


UnladenSwallow


But an African or European one?


I don't know that!


Prothonotary warbler? At least it's not red.


What do you mean by keep it simple? SwiftUI isn't a part of the Swift language.


A rename would be welcome. That singer girl pops up every other search about Swift, so irritating.


Since we'd be adjusting it to fit, perhaps Tailored Swift would work as a name?


Rename it to Taylor


I just searched "swiftlang" and the knowledge box was to some other language called Swift.


I was really excited about Swift when it was first introduced, and I hoped it could grow beyond the confines of Apple's specific needs and ecosystem requirements. Unfortunately, that never really seemed to happen, and rather than Swift actually get better in the areas I care about (coming from a background in very expressive, dynamic languages like Ruby), in some ways it got "worse" (aka lots of verbose syntax all over the place simply to placate compiler-level concerns). SwiftUI was a bizarre advancement in that it attempted to add a very dynamic DSL mindset on top of a language which was far more rigid. Needless to say, everyone I've heard from who's tried to build production-grade UIs using SwiftUI has ended up feeling pretty burned. It's a grab-bag of cool ideas yet remains a buggy nightmare in practice.

I don't really know what Apple can do per se to turn this ship around, but Swift "as a brand" has definitely lost its luster. I'd considered wading back into Apple app development when SwiftUI was first introduced (at the time I hadn't touched any of the dev tools since early-era Mac OS X Cocoa/Objective-C), but I have no interest at this point. (The App Store being a real s**show also doesn't help matters!)


Having built a few apps with pure SwiftUI, I think you are exaggerating a bit. Yes, there are many many problems with it. I wouldn’t blame those on Swift though, since most people’s gripes are with runtime issues.


> I was able to get leadership to eventually discuss the situation with me last Fall, but after avoiding dealing with it, they made excuses, and made it clear they weren't planning to do anything about it. As such, I decided not to return. They reassure me they "want to make sure things are better for others in the future based on what we talked about" though.

This puts a bullet in my hopes for Swift. Why on earth would Apple not support this obvious A-Team player?


I don’t have any hope for the incumbent OS players. They clearly can’t get their priorities straight, and it’s sad that even within these OS vendors the native application development platforms are rotting in favor of cloud and web investments. Think about it, .NET is being sunsetted in favor of .NET Core, but the latter still doesn’t have desktop support and shows no signs of it. SwiftUI is a total letdown from what I’ve heard. It is ripe for disruption, that’s why I’ve been studying OSDev ceaselessly of late and hope I can find some more OS-minded people to collaborate with.


[Edit: I might be the worst developer on the planet, so take it all with a grain of fine salt]

Yup, hot take: SwiftUI is hot garbage.

Declarative rendering is the worst rendering: it's code written that doesn't have to be written. The beautiful thing about UIKit, NSLayout, and Storyboards, is that when used effectively, not much UI code has to be written, and when it does, it matters.

Now everything has to be written to hell's end. And not only do I need a mental diagram of my UI in rendered state, I need it in code state as well.

If someone could enlighten me, I would gladly take the lesson.

Edit: I'm very happy I posted this to learn. I used UITextViews extensively in my only app development, and was extremely frustrated with SwiftUI's readiness on that front.

That said, I still like that my code can reflect the 'flow of thought' rather than the strictness of the UI when developing. I guess a lot of the property wrappers got to me too

And all the improvements given to declarative rendering could be embodied in updates to Storyboards and the like.


I'm a bit confused by this take so my apologies if I'm missing the point. SwiftUI leans far more in the direction of declarative layouts with very little actual code required. It defaults to bottom up sizing so that view sizes are based on their contents and automatically uses default system spacing for basic layout tasks.

The only comparable thing in UIKit is UIStackView. In fact, UIStackView was clearly added to reduce the explicit constraint layout code required for simple layouts. While you don't "write" UI code with Storyboards, it's certainly there, just encoded into a non human readable format.

SwiftUI has many problems (mainly bugginess and missing APIs) but I don't see how the old APIs required "not much UI code ... to be written". If anything, this is one of the primary benefits of SwiftUI: getting to remove most of the explicit layout code and complex type system required to do basic layout tasks with UIKit or AppKit.


Declarative rendering has a lot of advantages in terms of composability. My main issue with SwiftUI is that the tooling just doesn't seem ready at all. I tried a small project recently, and the preview feature basically doesn't work.

Also, I think the decision to make Combine a core building block of Apple UI development is a huge mistake. FRP is a plague on the industry, and everyone is going to figure that out in a couple years.


> FRP is a plague on the industry, and everyone is going to figure that out in a couple years.

Why is FRP/Combine bad? What is better?


It's bad because it obscures flow-of-control from the programmer and surrenders it to a large opaque system. In some cases it makes things a little bit easier, but when things go wrong it's a nightmare because your stack traces are no longer meaningful and flow of control is at arm's length and it's not easy to trace who's calling what.

Not that it can't be handled well or used to good effect, but the idea of wandering into an unfamiliar FRP-based codebase which has been in the hands of multiple non-expert maintainers over a couple years is really the stuff of nightmares.

edit: answering what's better

Normal flow of control is better. You want to be able to look at a function, and see which functions are called from inside that function. And you want to be able to search all instances and see every place where a function is used.

You don't want to have some black-box scheduler who's calling all your functions for you. That's how you end up with mysterious action at a distance, because you didn't know about the extra subscriber your colleague put on a publisher somewhere else in the codebase, or something strange happening because of an order-of-operations issue which is difficult to understand.


Aren't result builders just function calls with implicit do-notation?

I kinda agree that the SwiftUI runtime uses a lot of opaque magic and the documentation is still under par. However, I am very glad Apple went the React way – I tried UIKit after I had been writing React for a few years and it felt very dated and laborious. Yeah, React isn't FRP, but declarative UI with code FTW.


It’s a classic solution-looking-for-a-problem. Async/await is perfectly fine for state reaction, and the learning curve for FRP is enormous, especially for something as straightforward as exponential backoff.

https://github.com/alex-okrushko/backoff-rxjs


What is FRP?

Edit: Is it Factory Reset Protection?



I'm not familiar with this, so googled it, and the third hit was skohan giving dire prophecies about it two years ago: https://news.ycombinator.com/item?id=21011917


You may be able to tell I am not much of a fan of FRP ;)

But to be fair, the comment you linked is not a prophecy, it was a lamentation.


How many years counts as the "couple in which everyone will figure this out" though?


Many people have already figured it out.

I would imagine a few years after SwiftUI becomes the primary UI stack used by iOS developers, many of them will be cursing Combine's name, as it will be clear most of the problems making iOS developers' lives difficult will be coming from FRP.


I'm sure the App Store, code signing, and bad dev docs will find a way to retain the lead.


Those are things you have to deal with once in a while, and largely cease to be issues once you get over the learning curve. Bad code is something you have to deal with day in and day out.


I've recently attempted to use SwiftUI for a very simple Mac app, and I have to concur: SwiftUI is garbage.

The syntax is incomprehensible and inconsistent, the documentation is ridiculous, the feature set is severely lacking, and worst of all it's unbelievably slow. I have the fastest processor that Apple sells, and a proof-of-concept app that I made in a few hours that consists of three Swift files takes 30 seconds to compile. I can't imagine what building non-trivial apps with it would be like.


  > I can't imagine what building non-trivial apps with it would be like.
let's just say you will be making quite a few pots of coffee...


I found “the old way” of doing things to be a mess. Despite building one of the first apps in the App Store, NSLayout just never became “second nature.”

Instead of the native app world, I stuck to web front ends and managed that awkward transition from jQuery to Angular to React. These models for UI made increasing amounts of sense — instead of fiddling with weird lines in a sloppy GUI, I could just tell the system what I want and get it!

Then SwiftUI came along, and lo, I get all the goodness of React on iOS and even macOS! My first native app in years is coming along swimmingly, and it’s even cross-platform!

(yes, I did do an app in React Native, and found it gross for reasons that I can’t put my finger on)


Did you ever try going all the way and using graphical layouts (Nibs/Storyboards)?


Yup. Storyboards were of course a huge improvement, but I could never get the hang of constraints.


Yeah fair enough - constraints can be really powerful, but they really do take some time and practice to wrap your head around.


I'm still looking for a full web frontend framework that uses constraints -- because yes, they are the best version of 'responsive' layouts ever.


Yeah, one of the reasons I refuse to do SwiftUI is that I want to do my layout in constraints. It's also how my head thinks when I think about layouts, and also probably why I never enjoyed web dev when I got into it.


If you don't have declarative rendering, you have imperative rendering with stuff like `canvas.drawLine()` etc. (just a made up example). This code tends to be very annoying to write and maintain because you need to do a lot of building up of elements and tracking the state of them yourself.

What you are referring to with UIKit, NSLayout, and Storyboards is in fact declarative rendering, but the layout code can only be modified with a point and click editor. In most people's opinions, editing code directly is much more controllable and works better for systems like Git.
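To make the contrast concrete, a toy counter in both styles (the type names are hypothetical):

  import UIKit
  import SwiftUI

  // Imperative: create the view, then remember to mutate it on every
  // state change yourself.
  final class CounterViewController: UIViewController {
      let label = UILabel()
      var count = 0 {
          didSet { label.text = "Count: \(count)" }   // easy to miss one of these
      }
  }

  // Declarative: describe the UI as a function of state; the framework
  // re-renders when @State changes.
  struct CounterView: View {
      @State private var count = 0
      var body: some View {
          Button("Count: \(count)") { count += 1 }
      }
  }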


I think declarative rendering has definite value. When you aren’t using it, you end up having to track your own dirty flags, and often that means you will neglect to update them properly, leading to inconsistent UI. CPUs are fast now, there is no reason to not be doing a few extra tree comparisons if it eliminates bugs and speeds up the development process.

The criticisms I’ve heard of SwiftUI come from a guy who absolutely loved React. I think the problems are unrelated to the overall idea of declarative rendering being flawed.


100% agreed. I have one desktop program I wrote that approaches the same kind of complexity as a 3D modelling/animation program, and with plugins and concurrent updates coming in from the network, the state management was just a tedious AND complex mess that caused a ton of bugs.

Since I wrote it (about 10 years ago), I've spent a fair bit of time writing React (and Vue) code, and about 90% of the issues would've gone away (and no, this isn't just hindsight: I would've probably re-created many of the same issues and spent a big chunk of time on tedious code if I used the same framework again).

Sadly, the options for something that supports OpenGL and works with declarative rendering are slim on the desktop.


Couldn't agree more with your core argument. Declarative rendering is way more mental overhead than procedural rendering. Designing a system is more about the "how" than the "what".


That seems crazy to me. When designing a window, the first thing you decide is what you want on it -- a button, a text box. It's the most obvious thing about the program. How you do it, that's a completely different matter, and definitely not the most important part until and unless you are dealing with performance limitations (which should typically not pop up until later).


Declarative UI works amazingly well when done properly (see Flutter). But Apple has a tendency to half-ass development tools, so it is what it is.


> .NET is being sunsetted in favor of .NET Core, but the latter still doesn’t have desktop support and shows no signs of it.

What do you mean? You can build desktop Windows WPF or Winforms apps with the new .NET. There's also MAUI for cross-platform desktop apps.


Yep. Probably the confusing name change threw him off the scent.

What would have been ".NET Core 5" is just ".NET 5", and that's when the bulk of the desktop framework compat came online.


Yeah. Someone could make a case study out of this for why marketing should not have final say on the names of technologies.


I think marketing was not involved there. I think some manager made the decision after talking to some CTO of customers. Same for the unfortunate MAUI naming. Both cases could have been prevented by a week of proper field research.


Thanks for informing me. I hadn’t heard this so I’ll check these out.


You're welcome!


I think parent is thinking of .NET Core 3 and doesn't realize that v5 and 6 add back many missing features and .NET Core is now just called .NET again.


Reminder that Microsoft is trying to steal the name MAUI from an existing open source project.

Microsoft <3 Linux

https://github.com/dotnet/maui/issues/34


> This puts a bullet in my hopes for Swift. Why on earth would Apple not support this obvious A-Team player?

There's two sides to every story?


I would really like to hear what this yelling was about. Perspective is a hell of a thing.

There are some exceptionally capable people who simply cannot work together on the same idea/problem in my experience. This doesn't necessarily mean that one or the other is inherently bad.


Lattner’s post linked to an example of a heated discussion:

https://forums.swift.org/t/if-let-shorthand/54230/188


I'd like to take a HN poll on whether or not this is even "heated". Seems like extremely healthy debate upon my first scan through.


Healthy? The linked post reads as very snide. Ben starts by claiming that Chris isn't viewing his comments in context as opinion. Rather than move on at that point, he turns around and decides to nitpick one of Chris' statements, characterizing it as hyperbolic and extreme (the extremely hyperbolic phrase in question was "betray a lot of expectations and introduce a lot of bugs." Wow, such extreme hyperbole).

Ben then goes on to call another one of Chris' posts representative of a broader trend of "performative misunderstanding", basically calling Chris and others disingenuous.

He then dismisses Chris' concerns out of hand as "a possible but frankly implausible misinterpretation", and says that his behavior is a "real problem" that worsens the goodness of other proposals.

I'm not saying that either party here is in the right, or that the discussion as a whole is in Chris' favor, but I would say this thread is definitely not an example of healthy debate. In healthy debate, you don't call your interlocutor disingenuous or hyperbolically nitpick their posts as hyperbolic. That's not debate, it's squabbling.


If every forum conversation I had was like this, I'd soon stop being part of it.


This is not a normal forum conversation though. It influences binaries on billions of devices. Lattner wanted to push his own opinion and got (IMO) gently rejected. Fair enough.


Lattner's point was that high-level design considerations should be thoroughly discussed well before even finalizing the design, never mind the bikeshedding on syntax that takes up much of the thread. Swift's design discussions seem severely dysfunctional compared to how other languages do these things, e.g. Python's Enhancement Proposals, Java's Community processes or Rust's RFC's.


Well if the topic is a minor syntax sugar (if let), then you need to talk syntax. I don’t see any bikeshedding here.


That doesn't seem bad at all from my cursory glance at it.


I'm pretty sure that Rust didn't fall apart when its creator, Graydon Hoare, left Mozilla to work on Swift at Apple.


My understanding is Swift doesn't have the same level of openness and community involvement as Rust. Apple has a lot more control over the language.


Yeah, on the flip side Apple is putting some serious engineering resources / funding into it. With Rust it is always creating committees, teams, sub-teams and so on, and there are far fewer resources to actually do the implementation.

So overall things balance out, even if it's not a big win for either:

Swift: closed process + big funding -> fast implementation / less feedback

Rust: open process + little funding -> slow implementation / lots of feedback

Also, even with a lot of feedback, many community members in Rust are already claiming that very new features like async are deficient in parts and feel rushed.


This feels intentionally ignorant of the state of Rust considering there is more funding and more people paid to work directly on Rust than any prior point in the project's entire history.


I am also better paid today than at any prior point in my entire employment history. However, that does not mean my total compensation is anywhere near a Google engineer's, or even within the same order of magnitude.


Isn't that what normally happens when the language has important in-house use cases?

>Go has community contributions but it is not a community project. It is Google's project. This is an unarguable thing, whether you consider it to be good or bad, and it has effects that we need to accept. For example, if you want some significant thing to be accepted into Go, working to build consensus in the community is far less important than persuading the Go core team.

https://utcc.utoronto.ca/~cks/space/blog/programming/GoIsGoo...


I understand why Apple has a vested interest in maintaining control over the language. What I'm saying is that this puts Swift at a disadvantage in terms of broad adoption and the odds of the language surviving its creator leaving when compared to a language like Rust that has real community involvement and is perceived by everyone as more neutral.


If you look at the top of the page with Chris' comment, it looks like the Swift team is moving to address this:

>In the coming weeks, we hope to introduce a new language workgroup that will focus on the core of the language evolution itself, splitting off this responsibility from the core steering of the project. The intent is to free the core team to invest more in overall project stewardship and create a larger language workgroup that can incorporate more community members in language decisions.


It's less this one person's (Chris) presence or absence and more the sanctioning (implicit or not) of shitty behavior from some other person or persons.


Yep. I'm not a fan of people who think yelling is a normal part of debate.

I think this comes down to Chris being a very busy person who has better things to do with his time than argue about the direction of a project he no longer controls.


Correct me if I'm wrong, but Chris seems like a very respectful person and I've never heard this direct language from him publicly - it must've been a pretty bad situation.

It's too bad Swift is straying from his original vision, but that's the tradeoff of designing a new language within a large company. Not blaming Apple or anything; kudos to them for putting so many resources into an experimental new language, and LLVM wouldn't be where it is today without them (which I think is enabling a sea change in engineering). But at the end of the day, there's only one use case for Swift that matters.


I've listened to both episodes of Lex Fridman's podcast where he interviewed Chris and he seems like an extremely humble and intelligent guy. No question he is one of the leading experts on compilers too, and has a lot of hardcore engineering background to prove it.

Sometimes organizations just grow to be too large and too remote to sustain a productive and respectful environment. I don't think anyone is necessarily at fault here.


Chris is one of those people who gets me to stop what I am doing, shut my door, put on my head phones and just listen. Do not disturb. I am a self taught programmer and have learned my way by many a night studying/reading/debugging but all that teaches me is code. Just code! When I listen to Chris, I feel like I am listening to a technical leader, mentor and visionary. It's hard to find things to teach me how to become that, and it just so happens (maybe it doesn't the more I think about it) that I can usually also learn Swift while learning these other great skills or perspectives _by just listening to him talk_. Thanks for all you've done for the Swift community Chris and good luck in all that you do.


I was at the LLVM Meetup last October, waiting in the buffet line, when the guy behind me asked, "What are you working on?" I turned around, not quite recognizing him, looked at his name tag: it was Chris Lattner, asking me what I was working on. A very nice, extremely smart person.


> To answer your question, the root cause of my decision to leave the core team is a toxic environment in the meetings themselves. The catalyst was a specific meeting last summer: after being insulted and yelled at over WebEx (not for the first time, and not just one core team member), I decided to take a break. I was able to get leadership to eventually discuss the situation with me last Fall, but after avoiding dealing with it, they made excuses, and made it clear they weren't planning to do anything about it. As such, I decided not to return. They reassure me they "want to make sure things are better for others in the future based on what we talked about" though.

Seems incredibly disrespectful.


Just imagine: if this is how they treat someone with Chris Lattner's reputation, think about how they'll treat a new contributor writing their first new feature proposal. It's not a project I'll ever want to touch.


Indeed, astonishing really.

Seems to me to probably reflect a high degree of insecurity: if you're not secure and feel the need to protect your position I can imagine some people reacting this way if Chris is telling everyone he thinks your idea is a bad one.


I've heard of great teams at Apple and toxic teams, maybe more on software than hardware (see the writeup/postmortem on Aperture, which started off great but middle managers under time constraints ruined it/made it toxic).

Not sure if Apple silo-ing off teams lets this perpetuate, but then again, it's not like Google or FB don't have their own share of toxic groups.

You'd think something like that whole Apple University MBA program would figure out better ways of managing large numbers of people with commensurate egos, but it seems like an unsolved problem.


I don't believe the silo-ing of teams is a problem in itself. If teams are not silo-ed in a company of 150k employees, things will grind to a halt quickly.


Sucks because Apple has hired a lot of really talented developers to work on Swift. I hope they aren't being treated to this kind of toxicity.


>>> not for the first time, and not just one core team member

Playing devil's advocate here, but we should hear the same story from the other side, all the more so if this "insulting and yelling" happened more than once with different team members.


That's fair. Also presumably Apple's rules mean that we probably won't be hearing the other side.


Yes, true. But also, there are very few situations where what's being called out here is justifiable.


Yes, there are situations where it is justified, but we shouldn't draw that conclusion without knowing all sides.

I don't know any details to question the legitimacy of the issue; however, in my experience I am constantly surprised how many times it is just a communication breakdown or a gap in what each party perceives.

Also, the pandemic has made things more stressful for everyone, and outbursts do happen with more frequency; it is perhaps easier to shout at someone virtually, I suppose.

Torvalds was known for his infamous roasts on LKML; he only goes on a tirade when he feels the situation warrants it and the person should know better. That doesn't make it right, but someone had to really explain it to him before he toned it down recently. I don't think his intent was ever to be toxic or aggressive.

Effectively communicating problems can be hard.


TLDR, life is too short to work with assholes, an attitude I fully endorse. Or maybe he's the asshole, who knows? Either way it works out.


One of the strongest predictors of work satisfaction is satisfaction with one's coworkers. It beats money and autonomy.


So they wanted him gone, figured out his sensitivities and he fell into the trap. Sad but common if some higher up decides to remove you "voluntarily".


It's possible for an environment to be objectively toxic without regard to someone's "sensitivities".


[flagged]


I know what you mean, but just pointing out that you're being too much of a pedant here.

Yes, toxicity in general is relative, does X cause harm to Y, depends on both X and Y.

But if Y is the general category of human beings. And with what we know as humans and of our feelings and that of others to some approximation. Then we can definitely predict some big categories of very likely to be toxic behaviors such as: shouting, insulting, punching, silent treatment, interrupting, denigrating, avoidance, ridicule, deprivation, personal attacks, lack of consideration, not listening, etc.

These become a pretty simple framework to have an objective measure of toxic interactions.

And that then can be codified into HR policies and societal norms and expectations.


Shouting at people during collaborative meetings is objectively toxic behavior. I'm not sure how else it could be seen unless one is themselves a shouter with no self-awareness.


> Shouting at people during collaborative meetings is objectively toxic behavior.

Absolutely. I can't imagine a situation where I'd want to shout at another adult at work, and I can't imagine being shouted at at work either, in person or on a web call. Hell, I don't think I've ever had a shouting match with anyone, even outside of work. It just seems a silly way overall to try and resolve differences.


> I can't imagine a situation where I'd want to shout at another adult at work

As an impartial observer I've seen several appropriate instances of an adult raising their voice over another adult. Usually related to some kind of repeated socially inappropriate behavior like offensive comments, interrupting, things that could be construed as harassment, comments which could create legal trouble, etc.

A shouting match is something entirely different and is a really bad sign. But there are absolutely appropriate times to raise one's voice to speak over another, to correct something intolerable that demands immediate intervention.

It shouldn't happen regularly. The underlying behaviors necessitating raised voices should be addressed, likely in a private setting.

> It just seems a silly way overall to try and resolve differences.

Yes, yelling is clearly not an appropriate tool for dispute resolution. But it does have a place.


My favourite line manager and I were both extremely opinionated people, and once every six months or so we'd end up borrowing a conference room and having a straight up shouting match about a set of design decisions where we both had strongly held views on how it should be done.

By the end of the shouting match though we'd pretty much always come up with a third design that was far better than either of the two we'd had in mind going in, and we both regarded it as a matter of passionate advocacy rather than being a personal attack.

However, we were very much temperamentally suited to that dispute resolution approach, and neither of us would ever have attempted to use it with any of our other coworkers, because none of them would've found it remotely pleasant.

(to be clear, given the way Chris Lattner seems to have come away from the interaction feeling, somebody absolutely fucked up in this case, but it seemed worth noting that there do exist cases where things are different)


I'm sure you think the person you're responding to is just an apologist for toxic behavior, and you might be right, but it's true that different behaviors are insulting in different cultural contexts. What is toxic is feeling that you are being treated as less than, with less respect than other people, and that is communicated through behavior in the context of norms. "Shouting" is a word that seems safely toxic, but that's because it's a pretty elastic word that adjusts to the norms of the people using it.

I personally came from an extremely quiet household, and I am often asked to speak louder (by my wife, my therapist, by my friends when we're at a noisy bar) while my parents often used to complain in restaurants that they had to "shout" to make themselves heard. When they came to visit me, I had to choose restaurants carefully. Eating in even a moderately noisy restaurant could be unpleasant for them, because speaking loudly enough to be heard in that environment had unpleasant emotional connotations for them. Growing up in a family like mine no doubt contributed to my social anxiety as a young adult because I constantly felt that everyone around me was shouting in an alarming way, but I learned to look around and see that there was no panic, no hard feelings, nothing except people speaking at the volume that was normal for them.

Echoing and turn-taking can also vary dramatically by culture. My family were very strict turn-takers and did virtually no echoing at all. I have to go out of my way to echo back what people are saying, in a way that feels unnatural to me, so that they don't think I'm silently disagreeing and looking down on them. I also had to learn that people talking over the ends of my sentences is sometimes a calculated insult (as it would be in the household where I grew up) and sometimes just a cultural norm for them.

Business is ruthlessly doing away with all forms of difference, of course, and I will venture a guess that you, like me, belong to a group with a lot of power to drive this erasure under the banner of progress, power that we avoid acknowledging whenever we can. Shared norms can help prevent misunderstandings, but it is the sameness that helps, not the superiority of one particular norm about (e.g.) speaking volume, and the people who are helped most by shared norms are the most powerful, who have no adjustment to make because their own norms become the ones that other people conform to.

Chris Lattner is a smart and experienced person, and I have no doubt he judged his situation accurately. I just wanted to push back against your assertion that a behavior that someone might sincerely perceive as "shouting" is objectively toxic.


When I do reference checks, nowadays I specifically ask if they "shouted, angrily" or otherwise "demeaned employees verbally" to make it clear that a raised voice, per se, isn't necessarily a disqualifier.

I was turned down for a few key job transfers into a prestigious corporate research department because the gatekeeper said I had "yelled" at somebody before. The conversation in question was "heated" and I certainly pushed hard, but it certainly wasn't anything out of line with what I grew up with. However, I've reprogrammed my entire way of interacting since coming to California, as I found that a surprisingly large number of people will just shut you out otherwise.


> The conversation in question was "heated" and I certainly pushed hard, but it certainly wasn't anything out of line with what I grew up with.

I similarly used to think that having "heated" conversations with raised voices was normal and fine, and just a sign of caring deeply about something. But the more I have worked with different kinds of people, I have realized that it's just straight up unproductive. If you're getting emotional about work, you need to take a step back, and it's not fair to inflict your negative emotions on your colleagues.

I'm glad that the industry puts pressure on people to be less like this. It has made me a better teammate.


Those moments are tough to navigate, because sometimes half the room perceives the mood as "people have strong opinions about this and want to be heard" which is really great but there are also one or two people in the room who are absolutely terrified that a fight is going to break out. In those circumstances I try to find an excuse to break in at the prevailing volume and then, a little more softly, invite an opinion from one of the quieter people at the table. I think that helps the quieter people understand that the noise isn't a crisis and isn't meant to exclude them, and helps the louder people understand that they might not be getting the benefit of everyone's input when voices get raised.


Behaviors also don't exist in a vacuum; they exist as responses to things, and while some of us want more professionalism at work and some of us less (often for good reasons), there are probably also situations in which shouting loudly in response to something is way less "toxic" than not pushing back against it.

I would say that in a healthy work environment there is an awareness that both actions and intentions matter, and that we all bring our own sensitivities to the table; it matters less whether a person is "right" or not than that the feelings that come up can be addressed.

For example, I have seen people, from my perspective, take code reviews far too personally. I have also seen people, from my perspective, be far too personally insulting in code reviews. I am sure from others' perspectives those lines would have been drawn in different places. But that's less important than whether it can be resolved - maybe by getting someone to change their PR style, and maybe by helping someone understand that no one on the team thinks they are dumb; actually understanding that no one meant any comment personally is an important part of growth...

But in my experience, in both organizations and relationships, "what comes up" is less important than "how it is resolved" - because stuff always comes up. We're humans, with our big messy selves that feel lots of things, whether we want to or not, and whether the feeling even comes from this circumstance or not.


All value judgments are inherently subjective.

The problem is that we don't have an easy term for "apparently near-universal consensus" which is what people tend to mean when calling value judgments objective.


'Intersubjective' is that word (or 'intersubjective for the domain of all human beings'), if you want a technical term.


Ah, quite nice. Thanks for that, new to me.


What if he repeatedly interrupted others? I am not saying this is what happened. Just to point out that people can be shouted at for good reasons.


I disagree with the parent; there is such a thing as objectively toxic behavior. That said, I have never seen what is and is not shouting defined objectively.


Would somebody explain why I am wrong? I’m OK replacing “objective” with “intersubjective” if that’s it. But maybe it’s that people have seen shouting defined intersubjectively - if that’s it, would someone share an example definition of shouting that could be useful for intersubjectively labeling toxic behavior in a professional environment?

My curiosity is in the spirit of this nearby comment: https://news.ycombinator.com/item?id=30419969


I think that you make a fair point, tbh. It's easy to think that shouting is clearly defined, but we OPERATE IN AN ENVIRONMENT where people will think that I just shouted four words.

One person's shouting can be another person's virtually normal tone in a more tense discussion.


I don't generally pay much attention to this space. However, I've heard two interviews with Chris, both on Lex Fridman's podcast, and I don't get the sense that he is an emotionally fragile person. The space he's been operating in for ~20 years is filled with strong and often ideological opinions.

Just sounds to me like he felt like he was wasting his time.


Then why didn't he just say that he wanted to "pursue other challenges" or something like that, especially if he'd actually mean it?


You know what's even more common and isn't a conspiracy theory?

Engineers letting technical disagreements turn personal.

And Chris himself said he disagreed with the technical direction of the project.


I'd be a bit skeptical of that. Sounds more like a team under stress and lashing out when someone questions the direction they are going.

In any event, though, not good.


They act like real a### on the forum, so I can only imagine how things are in private. Chris always came off as really humble and down to earth.


link?


I haven't been there for a decent amount of time, but from my recollection John McCall was always acting really harshly. A very stark contrast to, say, how Jose Valim acts on the Elixir forum.


My guess would be Ben Cohen, though


I am very far removed from the Swift community, so I have zero insight; I just shared my recollection of interactions on the Swift forums.


Both. Chris mentioned not just one.


The post under discussion contains a link about "more heat than light".


In many parts of the world that goes against labor laws. You cannot harass or ostracize someone to make them leave. That is not just a toxic environment but way worse as there is intent to make the employee leave.


I don't think there was employment involved here.


Still incredibly toxic. Maybe not uncommon, but still toxic.


> they wanted him gone, figured out his sensitivities and he fell into the trap

It's also possible that there are people who feel that yelling is an acceptable part of debate.

Misguided people, sure, but they do exist.


Then they should be taught, quick. Once is grounds for an apology, twice is bye-bye land. Who cares about 'objective' toxicity, WTF is wrong with people defending yelling at people.


> people defending yelling at people.

The probability is unusually high that they personally yell at people in collaborative meetings.


Trying not to make assumptions about people here, but yes, probably.

Many have been taught by abusive parents or figures of authority, or whatever shit television, that it's normal. "Some 'light' yelling is to be expected.", "Different styles of management/debate", "whatever works for them, I'm sure everyone does it a bit". What next? Corporal punishment? Like in the old days, when kids really respected their elders - or else - and were taught /right/ and women stayed in their place?

Those people need to be taught that no one, ever, should have yelled or should ever yell at them for being wrong or just disagreeing. It's not OK, it never was, it never is. Parents, friends, teachers, colleagues.

You spend 8 hours a day at work, if it's not a safe place, and you can find another job, GTF away as fast as possible, or have the perpetrators get the hell away, if you have this power.

I'm sorry we even have to say that.


If you're defending people yelling at people, you probably won't be pointing out that they are misguided if they think yelling is acceptable.

However, the claim being made was that this yelling was a trap that was intentionally set to drive Chris out and not just a case of someone who thinks yelling is acceptable behavior.


I was trying to answer that (frankly paranoid rant about traps being set) by simplifying: repeatedly yelling = fired. Neither he nor anyone else should be driven away by violent behaviour. Fire the misbehaving person, or say (like an adult): hey Chris, we (your management) disagree with whatever, and are going with someone else as leadership, so we're reassigning you. If you want to go, well, here's the door.

Anything else is just accepting sociopathic mind games. 'oh he shouldn't have felt threatened by violence and unchecked unacceptable behaviour! he was so naive, toxicity is not objective, suck it up'. WTF. I'm all for hearing the other side of this story but please, everyone (not you, GeekyBear!) stop defending sociopaths (or violent people) being sociopaths, and blaming victims not being sociopaths themselves.

The late Pieter Hintjens' writings on psychopathic behaviour need a re-reading...


To answer your question, the root cause of my decision to leave the core team is a toxic environment in the meetings themselves. The catalyst was a specific meeting last summer: after being insulted and yelled at over WebEx (not for the first time, and not just one core team member), I decided to take a break. I was able to get leadership to eventually discuss the situation with me last Fall, but after avoiding dealing with it, they made excuses, and made it clear they weren't planning to do anything about it.

Apple's management culture is that they're basically running an army of orcs. Anyone who threatens the authority of the mid-level bureaucrats - they'll sic the orcs on you.


Middle management does have a lot of autonomy and leeway to operate as they see fit at Apple, which could explain the lack of oversight.


I have been lucky enough not to work in an organisation where I've been shouted at. What are you supposed to do in that situation? I'm pretty sure now I'd feel confident calling it out (though I definitely wouldn't have as a junior), but, what can you do if it doesn't stop? Just leave? Bullies just make me so angry.


I think you are seeing an example of what you do. Here's a guy that has devoted an enormous amount of time and effort into a project, and decided that being yelled at wasn't for him - enough so that he departs something that is very important to him.

And since he's not naming names, he's being professional about it, but pointing out that the bad behavior isn't for him.

It is about as good as a response that can be considered, given the circumstances.


This really doesn't seem like a "win" for Chris though? He's having to leave a toxic environment and leave a project he cares deeply about. In a company as big as Apple do they not have procedures in place to stop this behaviour? It just feels like the shouting person learns they can get away with this when they should be told this behaviour must stop or they'll be on their way out.


> In a company as big as Apple do they not have procedures in place to stop this behaviour?

In my own experience… Apple is a large company, with a huge variety of teams with different accepted standards of conduct.

But the Swift language is also an open source project, so this is a pseudo-open dev community so I’m not sure if Apple’s internal corporate culture even has a strong hold over this mailing list and those WebEx meetings that probably include volunteers, not just Apple employees.


If you want to fight, maybe it becomes a battle of attrition fighting psychos on their own level. I think it's more of a win to be the bigger person and live your own life, especially after how much he's already accomplished.


> It is about as good as a response that can be considered

Just FYI, you want:

> It is about as good a response as can be considered

(Incidentally, I couldn't agree more with your comment. It was a very magnanimous response that he gave.)


You either leave or crush the bullies under your thumb, and there's rarely any middle ground.

So in practice, yeah you leave.


You pretty much have to leave. There are some people who will respond well to a direct request to stop or a clear statement of how the shouting makes you feel.


I guess my question is: why is this tolerated? Isn't it just plain harassment, meaning management has to actually do something (even if they don't want to)?


Depends on management. I've certainly ended meetings when someone started to raise their voice and had a one on one conversation with them about professional behavior as well as expectations.


Perhaps management is complicit? Given they are a pressure cooker culture with even more secrecy than your average org, this might be normal for them.


Because the organizations who will hire someone who screams at people are either a) under enormous pressure, b) manned by psychopaths, so your boss's boss either does the same thing or is a cool cucumber who exploits those with less self-control, or c) dysfunctional to the point where they cannot see what's happening or correctly evaluate candidates. It's frustrating and an incredible waste of human potential.

That said you are right; you should also raise the issue to upper management. You have more options than just speaking to them directly.


Swift is really one of the most enjoyable languages I've used. I've worked in five languages full-time over the last 10 years, and Swift was the most elegant and productive for me.


Swift: first released in 2014. So we are coming close to 10 years since launch, and 12 years since the initial commit. And I thought Swift was getting a borrow checker?

I do wonder how many people prefer Obj-C to Swift. Philosophically speaking, Chris Lattner's ideal that Swift could replace PLs from JavaScript to C, while a noble goal, was a big no-no for me. I often think Swift is an unnecessary distraction in the overall Apple ecosystem.

And I do notice that programming language design and usage tend to get very heated in discussions. And there is a growing charm in an ugly language (so to speak) that shuts up and gets the job done.

But a disagreement is one thing; being shouted at during an online video meeting? That is some level of aggression. Although there are always two sides to the story.


Long time since I needed to do any Obj-C, but I liked/like it and often revisit it for a quick bit of mental stimulation. Oddly, I never took to Swift once I got over the launch hype; it never spoke to me on an emotional level (but that's a different topic!), so I can't comment on it as a language.

Sorry to hear about Chris Lattner's woes with Swift team. I only know of his reputation so I'm confident he will find a new, happy and appreciative, home for his talent.


I know I do, and a lot of people I know do too (prefer Swift over Obj-C).


Perhaps this was the straw that broke the camel's back? Remember that Chris was influential in getting Google to invest in Swift for TensorFlow. I am hopeful for Chris' new start up https://www.modular.ai


It’s really hard to understand what they are delivering from their home page.


"Kubernetes for AI stacks" I guess.


What happened to SiFive?


Is Swift for Tensorflow still alive?


no...I don't think so


This is really sad. I also share Chris's sentiment that the language is not moving in a direction where it is simple and composable.


> The catalyst was a specific meeting last summer: after being insulted and yelled at over WebEx (not for the first time, and not just one core team member),

It sucks that we keep enabling abusers. I don't begrudge Mr. Lattner his decision, but why should he be the one to have to leave?


Seems like this discussion about an astonishingly inconsequential piece of syntactic sugar was the catalyst:

https://forums.swift.org/t/if-let-shorthand/54230
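The proposal lets you drop the redundant right-hand side when unwrapping an optional into a same-named binding - roughly this (a sketch):

    let name: String? = "chris"

    // longhand
    if let name = name { print(name) }

    // proposed shorthand
    if let name { print(name) }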


So everybody knows Chris Lattner is an amazing genius. I hate to ask, but is Chris toxic himself, even a little bit? Please understand, you have probably known somebody very intelligent, and proportionately & inversely emotionally not-intelligent... something like that. It's like a spectrum or whatever, and what I'm asking is whether Chris is on the milder, less abrasive side of that spectrum. It seems Chris is probably easy to get along with, but I've never had the pleasure myself, so it's just a hopeful assumption. Perhaps the core team hired some brilliant people, who themselves are amazing, but have a small bit of whatever we call "toxic"?

Perhaps Chris is letting bygones be bygones, and going bye-gone?


I've worked very closely with Chris a number of times and had a wonderful experience every time. I also saw how he looked after the people in his team and how much they respected him.

(Part of sticking up for his team means pushing back against idiocy in the upper tiers of management, which isn't always popular!)


I think what's happening here is that the discourse created by online forum discussions ends up sounding hostile rather than productive. I have experienced this in remote-only companies as well. In previous generations of open source languages, people communicated by email, which at least made them think about their argument and rephrase it. Now, however (also partly influenced by modern internet culture), people take an adversarial side and just post without rephrasing in a way that sounds respectful and constructive.

I do not see an obvious solution to this other than the somewhat untenable goal of restricting discussions to email only.


This might be an unpopular thought in a Swift-focused thread, but as an outsider I have never understood how anybody thought Swift had a future. I saw that people left Apple and tried to get a Swift thing going at Google, which appears to have collapsed shortly after. I have no idea whether it collapsed and people quit Google, or people quit and it collapsed. Doesn't really matter which, or whether it really collapsed at all.

To me, as an outsider, the only interesting question remaining is, what ideas, if any, from Swift are good enough to adopt into other languages that seem to have legs?


Perhaps people thought it had a future because it's backed by one of the richest companies in history, who also happen to be producing the most popular consumer product in history, which incidentally runs on Swift?


Not really interested.

More interested in what good ideas we might be able to take from Swift.


Bounds checking by default, explicit unsafe code, type safe enumerations, generic programming with all constraints validated at compile time, standard ABI,....
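A couple of those, concretely (a minimal sketch; the type and function names are made up):

    // Type-safe enumeration: associated values carry typed payloads,
    // and switches over them must be exhaustive.
    enum Fetch {
        case cached(String)
        case failed(code: Int)
    }

    func describe(_ f: Fetch) -> String {
        switch f {                      // the compiler rejects a missing case
        case .cached(let body): return body
        case .failed(let code): return "error \(code)"
        }
    }

    let xs = [1, 2, 3]
    // xs[9] would trap at runtime (bounds checking) instead of reading garbage.

    // Unsafety is opt-in and lexically scoped:
    withUnsafeBytes(of: 42) { raw in
        print(raw.count)                // 8 on a 64-bit platform
    }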


That was never the purpose of Swift. It’s a practical programming language and in that capacity it’s a resounding success.


Again: Swift has (like any walled-garden language) exactly zero value to me other than, potentially, as a source of uniquely new, useful programming language design ideas.

Pjmlp's list does not include anything unfamiliar from other languages.


Only you know what's valuable to you, but since this is a public forum, it's a bit strange to be so categorical about your own highly subjective opinions. Why would anyone care about them?

This sounds particularly deluded:

> as an outsider I have never understood how anybody thought Swift had a future.

The objective fact is that there are a million or so working developers happily using Swift. And there are also lots of people that dislike it, but clearly it's very useful and popular.

And in what sense would Swift be "walled garden"? Very few languages were constructed for philosophical purists like you.


Again, not interested. My openly identified opinion on the usefulness and ultimate fate of the language makes no difference. The number of people who have used the language doesn't, either. Anyone not interested in my opinion is welcome to their own. Even you.

Either my actual question has meaningful answers, or it doesn't. Thus far it seems the answer is "none"; that would be the least surprising answer. And the evidence thus far suggests you would like to distract us from that fact, if fact it is.


Clearly the answer is not "none", it's "almost everyone".

The first Swift build tools beta was downloaded ELEVEN MILLION TIMES in the first month alone. I would be surprised if that is not some kind of record. And it was the most loved language on SO already in 2016.

We're talking about a language powering the favourite devices of the richest 20% of the world population, a language that is extremely actively supported by the richest company on the planet. I think any reasonable person would conclude that it "has a future".


My actual question you persist in trying to distract from was, and I quote: "what ideas, if any, from Swift are good enough to adopt into other languages...?"

Apparently the answer is, No features in Swift should be of any interest to anyone else.

And, again, how many NOSES APPLE HAS BROWNED is still of no interest whatsoever. Shame on you. Your behavior is reprehensible, but not unrepresentative; it reflects badly on Apple.


I always thought it was a weird and unsustainable situation to have such a powerful figure, not working for Apple, involved.

Not that it's wrong or that it can't possibly work, it's just not how Apple rolls.


> It is obvious that Swift has outgrown my influence, and some of the design premises I care about (e.g. "simple things that compose") don't seem in vogue any more.

If not Lattner, what figure or faction is now in control of steering the future of the language?


According to the project structure page[0], Apple, Inc. is the project lead and appoints the core team.

0: https://www.swift.org/community/#community-structure


Wouldn't it be nice to have a language where the project owners said "you know what, I think we're done, just bug fixes now"?


Elixir announced something to that effect in 2019.

> As mentioned earlier, releases was the last planned feature for Elixir. We don't have any major user-facing feature in the works nor planned. I know for certain some will consider this fact the most exciting part of this announcement! [0]

[0] https://elixir-lang.org/blog/2019/06/24/elixir-v1-9-0-releas...


Rich Hickey talked about this in his History of Clojure presentation and paper.

https://news.ycombinator.com/item?id=27782864

It's definitely an unusual approach for authors to consider software "done", and Clojure apparently gets close to it (?)

Not surprisingly, that kind of stance also generates controversy ... you can't win :)


Common Lisp is like this. There is a standard, and no changes have been made since then.

On the other hand, it also shows some of the disadvantages - for example, there is no multi-threading support of any kind in the Common Lisp standard, because this was not a common feature at the time the standard was created; neither is there Unicode support. Of course there are libraries that implement these things, even portably (and this being CL, the difference between library and core language is minimal), but it's still a point some hold against it.


Not a language in the way you mean, but TeX did this a while back.


That's Standard ML. Coming at you live straight outta '97


Mid-2000s versions of C and C++ come close, but I think the new features since then are largely worth having.


C++ was just tied up in the "C++0x" debacle for over a decade, and only delivered when they cut tons of stuff.


Elixir feels largely this way to me.

Though I'm on the fence, in that they do an end run around the pressure to evolve by just inventing new DSLs via gobs of macro magic for some of the newer stuff, whereas a language like Swift keeps leaning on the compiler to respond to evolving demands.


Yeah, for a dying language.


A language's surroundings can still be improved, the documentation can still be improved, new tools can still be made, new libraries can still be written, etc.

And in the end, if the language is inadequate for something, then simply use something else that fits better; there is no reason to force all languages to be gigantic kitchensinks.


On the other hand, it is a lot of work to build your product in one language, which you then outgrow, which forces you to switch to another language.

I've seen this happen many times, particularly with projects that started out in Python. Eventually the codebase and teams get too big, and managing a large codebase in a language with Python's type system becomes very cumbersome. Type annotations were a reaction to that, and adding them to the language has probably enabled a lot of teams to continue using Python.

Off the top of my head, other examples of good changes include NIO and better concurrency primitives/lambda syntax in Java, generics in Go, and async/await in several other languages. All of those solved pain points for developers that might have otherwise been solved by switching languages.

Why does this matter? First of all, it's a lot of work to rewrite products in different languages. Also, while it sounds attractive to pick the right language for the task every time, in practice it is difficult to run a coherent engineering organization if every product or team uses a different language -- it's hard to build good shared libraries and app frameworks for every language in use, and if someone leaves the company it can be hard to transfer their projects to other engineers. Most of the time, as an engineering org matures it eventually settles on a few "blessed" languages, invests in good frameworks for them, and requires all the teams to use them. Out of necessity, those tend to be the "kitchen sink" variety.


This is about why you may want a kitchensink language, which can be fine depending on the case, and a criterion for choosing (or skipping) it as "the right language" for your case; but it isn't really about making all languages kitchensink languages. The comment above implied that languages which try to avoid becoming kitchensink languages would be dying languages - as if these are the only outcomes.


People miss that programming languages are no different from any other kind of software product, and software products without new releases hardly survive.


COBOL awaits you!


Still being revised in standards committees: https://en.wikipedia.org/wiki/COBOL#COBOL_2014


feel free to use QBASIC


awk?


Apple has done some very cool things with Swift: the nice integration of pretrained (or train-your-own) deep learning models in Swift applications/apps, and SwiftUI really does work as a multi-Apple-platform UI library. I wrote an app that was 98% identical across macOS/iOS/iPadOS.
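For a taste of the multi-platform part, a view like this compiles unchanged on all three (a minimal sketch; the names are made up):

    import SwiftUI

    // The same view builds for macOS, iOS and iPadOS; each platform
    // supplies its own native look for the controls.
    struct GreetingView: View {
        @State private var name = ""

        var body: some View {
            VStack(spacing: 8) {
                TextField("Name", text: $name)
                Text("Hello, \(name)!")
            }
            .padding()
        }
    }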

I was deeply disappointed a few years ago when the TensorFlow in Swift project was scrapped.


Same, working with Python is such a poor UX by comparison.

That project had a ton of potential, but it seems like it lacked buy-in from the industry, and the team working on it seemed to lack leadership as little progress ever seemed to get made.


Swift has turned into a hot garbage of a language that is starting to make Enterprise Java from 2001 look good. The sheer complexity of it (especially with v5.5) has become unmanageable; it looks like a mashup of Kotlin, Go, TypeScript and Scala in one language. It has no good direction, and it is an overall confusing language even compared to what it was supposed to be replacing (Objective-C).

Here is a real method signature under Swift:

    public func index<Elements: Collection, Element>(
        of element: Element,
        in collection: Elements
    ) -> Elements.Index? where Elements.Element == Element, Element: Equatable {
        // Function body goes here
    }

It looks like you are writing an SQL statement in just a method signature, which is just insane.

The Swift team really needs to take a step back and actually start removing features and simplifying the language drastically if they want it to succeed. At this point there is a great chance that Swift is becoming a liability for Apple, and it may actually impact its app ecosystem negatively in the long term.

I hope they make a drastic change to it. Java was able to turn around and start becoming a better language; I hope Swift does too, by simplifying and removing features, not just piling stuff into it.


This is an interesting example of emotional frustration pretending to be a technical argument. I'm fascinated by the mechanics of irrational hype and subjective downward spirals of dissatisfaction on technical topics. It's very similar to how many people consider the best music of all time to be that made by their own generation, or what they grew up with through their parents. Most of us are employed by building more and better options and features, but as a community there is a streak of what I call the cranky senior developer, who is just tired of all the new features and would like to tear it all down. You can also call it "complexity fatigue".

I don't use Swift full time and I had no problem reading that method signature.

This is something that you would write if you were a library author interested in providing a generic and flexible data structure. The part after "where" expresses a non-trivial idea: "the type of the first element parameter should be the same as the inner element of the homogeneous collection (second parameter), and we need to compare elements to other instances of their type, but other than those constraints this code will work on any element and collection". It does that in about six Swift terms, and the compiler has a full understanding of what you meant and will prevent errors that violate these constraints. Notice that you never have to write this type of code if you're just using the library. Swift has decently organized progressive disclosure of language features. For example, I never had to write unsafe pointer arithmetic or interop with C, but I'm not going to complain that the feature is there for those who need it.
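For what it's worth, that signature isn't even the minimal spelling; the standalone `Element` parameter can be folded into the collection's associated type (a sketch, not necessarily what the original library wanted):

    // Same constraints, one fewer generic parameter: the element type is
    // derived from the collection instead of being declared separately.
    func index<Elements: Collection>(
        of element: Elements.Element,
        in collection: Elements
    ) -> Elements.Index? where Elements.Element: Equatable {
        collection.firstIndex(of: element)
    }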


> The Swift team really needs to take a step back and actually start removing features and simplifying the language drastically if they want it to succeed. At this point there is a great chance that Swift is becoming a liability for Apple, and it may actually impact its app ecosystem negatively in the long term.

So. This. I have been a long time Apple fan. I noticed many years ago that they had a disproportionate representation amongst creative programmer types. I was intrigued and have come to understand why. It's not perfect, and Apple pisses me off some days, but overall the ratio of "It Just Works/WTF?" has been higher for me than other platforms.

As a Smalltalk guy, Objective-C was tragically comic, but nevertheless effective. It was like talking caveman with your partner, and giggling about it, but you could still get stuff done.

When they announced Swift as "without the C", I was originally excited/intrigued. Despite some initial surprises, I soon told peers "I've always hated C++; this feels like a decent compromise that I could like."

For quite some time, I've prototyped first on iOS, refined, and then replicated in Android. Because it was easier/better in Apple's stack than it was in Android.

At this point, I've given up. It's as complicated as C++, maybe more so. The Android development story is not all that much easier (Kotlin improves some things, but it's still Android), but Apple has actively lost their edge (for me) due to the added complexity.

I still write Swift on a pretty regular basis (I maintain/evolve 3+ apps with it). But to the point, I no longer want to. I'm actively interested in an alternate method of writing app UIs.


That syntax is important. It's specifying that the collection `Elements` contains values which are the same type as `Element`. Sure, those are probably bad names for the generic types (though better than `T` and `U`). But you need some way to express that the `Element` type is the same as the contents of the `Elements` collection.

The SQL analogy is a good one - what the function signature is doing is specifying a constraint between two otherwise unrelated values. Otherwise you could call the function with an array of `String`s and ask it for the index of an `Int`.
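Concretely, reusing the signature from upthread, the constraint is exactly what lets the compiler reject that call:

    let names = ["a", "b", "c"]
    let good = index(of: "b", in: names)   // OK: Element == Elements.Element
    // let bad = index(of: 42, in: names)  // rejected at compile time:
    //                                     // Int is not [String]'s Element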


I feel like the example you gave is pretty bad, as to me that's a perfectly sensible function definition. I didn't write Swift from the beginning, but I believe a function like this would've been valid Swift from the beginning? It just seems like core features of the type system.


That method signature is not that bad, to be honest; I actually feel it's a bit better than Rust's. (Though it might have helped if you had added a newline in the middle of that signature.) Maybe you're complaining more about the complexity of the type system than the syntax.

Or maybe I'm saying this because I'm already acquainted with the evil monstrosities of C++. Here's the same function (and this is even with C++20 concepts on):

    // Yes, you need to include some headers to get even the most basic features! Typical C++.
    #include <type_traits>
    #include <optional> 
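    // Note: Collection and Equatable are assumed to be user-defined concepts.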

    template <Collection Elements, Equatable Element,
              typename = std::enable_if_t<std::is_same_v<typename Elements::Element, Element>>>
    std::optional<typename Elements::Index> index(Element element, const Elements& collection) {
        // Function body
    }


I agree with you. I get the problem it's trying to solve, but it removes the ability to parse that line of code without carefully reading it. SQL gets away with this because we are talking about querying a database, but a modern language should never be that hard to mentally parse, in my opinion.


> It looks like you are doing SQL statement in just a method signature, which is just insane.

Aside from the `where` keyword, I don't see much similarity with SQL. That said, I'm a big fan of SQL, so I might be blind to the issue.

Several other people have already said it, but I find that definition pretty easy to read, and I've never been paid to write a line of Swift. The most confusing part for me was the fact that the argument names seem to have spaces in them. The type constraints look pretty nice to me, to be honest.


Had this dude stayed on any projects long-term after LLVM?


He seems to be changing companies quite often. Apple to Tesla to Google to his present employer, in a pretty short time.


Right, I still remember Swift for ML being quite a recent initiative. The lack of commitment doesn't feel right, and it really calls into question whether people still follow up with his later projects at all.


Off topic, but I don't like the "Core Team" nomenclature that Rust and apparently Swift use.

I prefer

a Language Design Team, a Compiler (Backend/Frontend) Team, and similar, more descriptive names.


Rust does have teams for language design, and for the compiler. https://www.rust-lang.org/governance

But there has to be some mechanism for overall coordination, and with Rust that's the Core team.


C++ has a Directions committee, with exactly zero authority, but immense influence based solely on respect for its members. That is probably a good model to follow, provided you can identify enough people who command such respect.


Time to fork... It's pretty obvious now the language is being driven in a weird direction that nobody outside of Apple really cares about, far from its original goals. I had no idea Chris Lattner was being pushed away from core, but everything makes more sense now.


People often suggest forking large projects like this, as if it would solve a problem. If you actually think through what that would entail, you'll find that it would actually create more problems.

The people that actually do the work on Swift are those that control the language and its destiny. You could fork it, but without a similar amount of effort behind your fork there's no way you can convince people to switch from the mainline.


I think a fork not owned by Apple could gather a lot of developers interested in expanding the language to other platforms. It is, at heart, a great language.


Should an individual still bother learning Swift now in 2022? Is it a bad investment of time?


If you want to develop software for the Apple ecosystem, then yes; otherwise, it’s probably a waste of time.


If you're 100% in on Apple, it's probably fine. Outside that ecosystem, Kotlin, Go or Rust are all probably better investments of learning time depending on the problem you're tackling.

The bigger question is: is bothering to learn native smartphone app development a bad investment of time now that the low-hanging fruit is gone and we're due for a new platform paradigm?


As an iOS dev, IMO I’d say it’s useful if and only if you’re writing native apps for Apple hardware.

While Swift can be used for Linux desktop or as a backend language, what mostly matters for practical projects is the libraries, and I think for those cases the better libraries are in other languages.


Are you writing Mac or iOS apps? Then, no. You'll pretty much need to learn Swift. Objective-C is clearly in legacy mode. You can't even use several APIs with Obj-C.

Are you writing something else? Then, it's probably not the best investment to learn Swift.


Which APIs exactly?


Others have covered why/why not.

I propose learning Kotlin if you want something syntactically similar but with a broader set of use cases, or Rust if a modern syntax without a runtime is what appeals to you.


This is pretty sad. Swift has some good ideas in it, and thankfully those ideas are making their way into other languages.


What are those ideas, specifically? (Assume I don't know any details about Swift.)


Swift did some interesting work around async, which I know is making its way into Rust now. Random example here: https://blog.yoshuawuyts.com/futures-concurrency-3/
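The headline piece was structured concurrency: child tasks are scoped to the function that spawned them. A minimal sketch, with hypothetical stand-ins for real network calls:

    func fetchProfile() async -> String { "profile" }
    func fetchFeed() async -> String { "feed" }

    // 'async let' starts both child tasks concurrently; they are implicitly
    // awaited (or cancelled) before loadDashboard can return.
    func loadDashboard() async -> (String, String) {
        async let profile = fetchProfile()
        async let feed = fetchFeed()
        return await (profile, feed)
    }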


Most languages seem to be getting async/await features lately, including even C++. Is there anything uniquely interesting about Swift's version?


I'm curious why the person doing the yelling wasn't named. Chris gets named for leaving, but the person doing the yelling doesn't. I've seen this over and over in various professional settings and it doesn't make sense to me to protect and hide the people behaving unprofessionally. Maybe they'd learn and work on their behavior if someone told them?

UPDATE: very good points below about the internet mob that would attack in response. Sad, but true.


Because random people yelling isn't that interesting; it's more interesting that it apparently had no consequences for them. And that's on team leadership, not on the yelling person.


I think it makes a lot of sense.

This is a post to the internet, and I don't think having someone get flamed by a bunch of people online is a conducive way to settle a disagreement, whether they were being unprofessional or not.

> Maybe they'd learn and work on their behavior if someone told them?

I think the problem here is that there would be an online mob constantly telling them.


Publicly naming and shaming would be an escalation that he doesn’t want to partake in. Better to let whoever has knowledge of incident know how impactful it was and move on.


It’s a charity thing. We want to be charitable, we don’t want to assume the worst, we want to assume they can learn, be better, without the maximal social punishment that would likely ensue from naming them publicly…

At least that's the positive angle. The negative one is that they are some kind of scumbag with enough leverage to keep their ass covered, who picked a fight they thought they could win; they didn't escape consequences entirely, but the consequences weren't public.


There is in general an asymmetry between good and evil, because good will follow rules and be civil and evil will do whatever it takes to win.


Competent evil is usually civil in public and uses civility as a weapon to suppress any kind of frustrated dissent that isn't expressed perfectly calmly.


It's a variation of Postel's law - "be conservative in what you send, be liberal in what you accept". Those who were involved know without a doubt, and the rest of us really need not know details.

Especially since words once said/written can never be taken back.


see how Hacker News empowers toxicity for other languages

but when the same thing happens for Rust, it's censorship; nobody talks about Rust's internal problems

that's part of the reason I chose not to give Rust another chance

the sectarian aspect of it is disgusting, to say the least


> but when the same thing happen for rust, it's censorship, nobody talks about rust's internal problems

You clearly can see the bias in HN, and each project has its controversies. Out of any project I have seen with drama like this, the one that is more of a cult and has its problems hidden from view is most definitely Rust.

Just look at how this one went [0], or the level of discussion going on in [1] and [2]. The way the foundation was set up seems to be heading into another disaster, and even its main promoter is starting to distance himself from it after getting upset about Amazon [3].

I won't be surprised to see the next controversy in Rust hidden away from view vs the controversies in other communities.

[0] https://news.ycombinator.com/item?id=29501893

[1] https://news.ycombinator.com/item?id=29306845

[2] https://news.ycombinator.com/item?id=30163141

[3] https://news.ycombinator.com/item?id=28783997


Huh, how did all that get buried on HN?!?


What a mess Swift and Swift development have become! I'm not throwing away my ObjC books any time soon! You could see this coming; it was a slow-moving trainwreck. Like someone mentioned, we need a lightweight Swift. Also: too many cooks! This is what you get with such a big language. Chris has lost control.


As someone who got their start in "real" software development with Obj-C, I'm not sure I could go back to writing it full time after the past several years of writing Swift. Swift is far from perfect and could use a good refocusing, but all of the built-in functionality that I'd have to pull in a third-party pod for with Obj-C, and no need to maintain header files, are huge for me personally. Its strict typing has also saved me many times from idiot moments that frequently occurred when writing Obj-C, and has made sweeping refactors a lot more feasible.


I miss message passing and the dynamic aspects.


Aesthetically, one of the things I didn't like about Swift was the use of parameter names in the parameter list to ensure integration with the Obj-C API. It made some function calls too verbose - one of the things we should have abandoned when we left Obj-C. I admit it sounds silly, but Swift wasn't as clean a break as I thought it would be.


parameter names help you to see what those values are supposed to be, though

without parameter names, something like `func range(Int, Int)` can become dangerous, because you don't know whether the parameters are start and end, or start and length

`range(start: Int, end: Int)` makes it clear at the call site what is going on and gives your brain enough info not to mess up the parameters (especially when refactoring or fixing bugs in existing code)
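in real Swift syntax the contrast is something like this (toy declarations):

    // Unlabeled: the call site gives no hint which Int is which.
    func range(_ start: Int, _ end: Int) { /* ... */ }
    range(5, 10)

    // Labeled, with distinct external and internal names:
    func range(from start: Int, upTo end: Int) { /* ... */ }
    range(from: 5, upTo: 10)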


I just wish they would fix basic usability of strings and protocols in Swift.

Why can't I have a variable that has a protocol as its type?

Why can't I pass a substring to a function that wants a string?

Why is string parsing so much harder than it should be?
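To be concrete about the first two (a sketch against Swift circa 5.5):

    // Works: CustomStringConvertible has no Self or associated-type requirements.
    var value: CustomStringConvertible = 42
    value = "now a string"

    // Doesn't (as of Swift 5.5): Equatable has Self requirements.
    // let x: Equatable = 42   // error: protocol 'Equatable' can only be used
    //                         // as a generic constraint

    // Substring -> String takes an explicit, copying conversion:
    func shout(_ s: String) -> String { s.uppercased() }
    let line = "hello world"
    let firstWord = line.split(separator: " ")[0]   // Substring, not String
    print(shout(String(firstWord)))                 // "HELLO"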


> Why is string parsing so much harder than it should be?

Mike Ash answered[0] this question back in 2015.

[0] https://www.mikeash.com/pyblog/friday-qa-2015-11-06-why-is-s...
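The gist of that answer: Swift's Character is an extended grapheme cluster, so integer subscripting would hide O(n) work. For example:

    let flag = "🇨🇦"                      // one visible character...
    print(flag.count)                    // 1  (Characters = grapheme clusters)
    print(flag.unicodeScalars.count)     // 2  (two regional-indicator scalars)
    print(flag.utf8.count)               // 8  (bytes)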



