

I was maintaining a server-side app written in Swift from 2018 to 2020, and we ran into tons of issues, at least back then. I haven't looked into all the improvements they've made since, but you might want to check.

My coworker wrote down some of the major issues we were facing: https://forums.swift.org/t/issues-learned-4-years-with-serve...

Personally, I've been soured enough that I don't ever want to write any server-side code in Swift again. It just seemed that Swift pretended to be open source and cross-platform, and then Apple would unilaterally push changes because they wanted them for iOS (e.g. function builders for SwiftUI), without waiting for community feedback. It's fine to be a dictator for your PL, but not when you're claiming to have "open governance". All the while, support for Linux was routinely neglected.

I would rather recommend Kotlin. It's similar enough to Swift and it comes with the advantage of having the whole JVM ecosystem which can probably do most things you'd ever want to do on a server.


In my opinion, exceptions should be for really exceptional situations - things you can't realistically expect and/or which just signal a programming error (e.g. a precondition being violated). They aren't usually recoverable locally and should bubble up the stack, so you can handle them at some boundary (e.g. so you can serve up a 500 page to a request without crashing the entire server for everyone).

For things that are expected to go wrong sometimes, algebraic data types (including Result types) are better. They are part of the type system, so everything you can do with types, you can do with them, including making functions parametric over whether they return an "error" or not. Exceptions need extra mechanisms so that functions can be parametric over them (e.g. Swift has "rethrows", Java has... nothing). They aren't really composable with anything else.
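To make that concrete, here's a minimal Kotlin sketch (the domain and all names are invented for illustration): the expected failure lives in the return type as a sealed type, while the genuine programming error stays an exception that bubbles up to the framework boundary.

    // Expected failure: encoded in the type system, so the caller must handle it.
    sealed interface ParseResult {
        data class Ok(val value: Int) : ParseResult
        data class Invalid(val input: String) : ParseResult
    }

    fun parseAge(input: String): ParseResult {
        val n = input.toIntOrNull() ?: return ParseResult.Invalid(input)
        return if (n in 0..150) ParseResult.Ok(n) else ParseResult.Invalid(input)
    }

    fun respond(input: String): String = when (val r = parseAge(input)) {
        is ParseResult.Ok -> "age: ${r.value}"
        is ParseResult.Invalid -> "bad request: ${r.input}" // a 400, not a crash
    }

    // Programming error: a violated precondition throws and should bubble up
    // to a boundary that turns it into a 500 for this request only.
    fun withdraw(balance: Long, amount: Long): Long {
        require(amount >= 0) { "amount must be non-negative" }
        return balance - amount
    }

Note that the when over the sealed type is checked for exhaustiveness, so adding a new failure case is a compile error at every call site - exactly the kind of composability you don't get from exceptions.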


All of these have happened to me:

- The interviewer is wearing a suit.

- The interviewer asks "you've worked with [person who now works at our company] before, what do you think about them?". Bonus points if they insist after I give a diplomatic answer.

- "Why are you using microservices?" "Because everyone is doing it"

- Not following up. For weeks.

- "Can you send us your high school diploma?"

- Take home assignment with a time limit.

- No feedback on take home assignment except "we didn't like your approach".

- Developers use separate laptops for coding and for internet access.

- There are three architects on a single project.


"Take home assignment with a time limit."

What's your concern there? (Honest question!) I feel bad about asking for home assignments, but I think they're good for people like myself who are terrible at coding live. (I usually offer a choice of live or homework.) I put a "limit" as a guideline about what kind of quality to expect, usually with language about things not to worry about. I don't expect/want folks to burn a bunch of time on things.


Not OP, but in my experience, it always takes more time than the time limit, either because they've just generally misjudged it, or because I spent the allotted time, did the assignment, and then through doing all the work came to some realizations about ways I could have done it better. Then I'm left with the option to either submit something that I know has flaws or spend more time updating it. I don't know about you, but I just hate submitting something that I feel is worse than my abilities, so I end up redoing it and spending much more time.

"Soft" limits as guidelines are fine (although in my experience they are often completely unrealistic - e.g. set up a spring boot app with tests, CI config and kubernetes deployment, that implements API XY - all in 4h).

But a hard limit (you will get the assignment at X o'clock and you will have to turn it in within 3 hours) is just bonkers. It's totally unnecessary and artificial time pressure. Maybe I'm having a bad day and I make a silly mistake that takes time to fix - or the assignment is complicated and it takes me half an hour to even understand it and decide on an approach. Or I could implement the assignment in a maintainable way, but the time pressure requires me to do it in the most hacky way possible. Basically, I don't do my best work under time pressure, so I have no idea why you would impose it on me. In my professional experience, there's very rarely ever been any time pressure like that, so it's not like it measures something important.


The main issue is that you can give any guidelines you want but a ton of people will take "Don't spend more than 4 hours on this" with an implied nudge/nudge/wink/wink.

That said, as I wrote elsewhere, if you can't pull a reasonable version of a work product out of a folder, I may be asking you to create one. I'm not necessarily talking code but I absolutely am looking for some proof that you can do the work if it's not already in the wild.


I have had this problem, and my assignments really stress that it doesn't even have to be done; we can look at it in the meeting and possibly finish small parts together, or talk about the work that would be done. But still, people sometimes do far too much, and it's not the positive signal they think it is.

Do you compensate them for their time?

If not, why?

How would you feel if they insisted on you doing home assignments?


If somebody has a problem with it, they can always choose the live coding. I try to find ways to make the process more equitable.

So it's either or, and you are most likely losing the best people by insisting.

Maybe that's fine for you, but maybe not.


Yeah I mean.. I have nearly hired people before who turned out to be scammers, no coding skills at all. I need some kind of signal before turning off the recruiting pipeline, emailing a bunch of people "no thanks," and making a commitment. Sometimes strong referrals work but that might unfairly advantage some, can be gamed, and doesn't work at all for most junior devs without a work history. Shoot, even a strong history elsewhere doesn't mean they will be successful on a different product/platform/team/etc.

Both interviews (live and homework) have downsides, referrals don't work for all and hiring blindly isn't an option. I need some kind of signal, some way to choose one of hundreds. I have successfully used contract to hire before, and while that does lower the anxiety on the interview track, many can't consider that option. And that (again) unfairly advantages some.

I... don't really know. Just over here trying my best. The market is broken in many ways.


I hear that. But I've been burnt by people who "coded live" real well and then couldn't or wouldn't do anything for months at work. IMO, anyone who is going to be good will be good at coding. It's not rocket science. Haha. Been around long enough to see the language du jour change, too. Python also isn't the be-all and end-all of languages (we've moved on to Julia where I work).

Can you think? Are you curious? Those are hard to distill in an interview and harder still in formulaic interviews.


What's wrong with the interviewer wearing a suit?

For what it's worth, I'm near 40 and have never once seen anybody at any workplace wear a suit (who wasn't giving a speech at a town hall).

There is nothing wrong with wearing a suit, and it may be common in some workplaces/cultures, but in others it can be a signal of a work culture you are not compatible with.


In 2024, at a tech company, I wouldn't find it common. But, especially for people who may have customer-facing roles, I probably also wouldn't find it worthy of comment or really a second thought if someone in the office was wearing one.

When I last interviewed 15 years or so ago, I wore a jacket and tie (though not a suit) when I interviewed in person. I certainly didn't think it was necessary but I'd have been shocked if anyone had considered it to be a ding.


There's nothing morally wrong with it, it's just not an environment where I'm likely to be comfortable.

My guess is it just doesn't fit their style. Some people want more formal clothes in the workplace and others would rather not have a dress code. Even then, just the interviewer wearing a suit doesn't necessarily mean it's a strict dress code culture, but probably a strong signal.

If they're wearing a suit, why aren't you?

> - The interviewer is wearing a suit.

Came here to say this, specifically.

> - "Can you send us your high school diploma?"

Related: asks questions about that McDonald's job you had in high school, even though you have a master's degree and years of experience in the actual posted job requirements.

> - There are three architects on a single project.

Related: there are more than three layers of management between you and the Big Boss. Two is better. One is best. Too many layers means that it's probably going to take months to get approval for anything, meanwhile management is giving you the stink-eye for "not producing".


What is "FINITE"?

Well, that is a good question. It’s a concept I’ve always just taken for granted. How do you define it? I don’t know.

So the problem with logic, and in particular model theory, is that you always have to assume some sort of logic for the world your models live in (a meta-logic) - otherwise you can't talk about anything.

If you take e.g. ZFC for granted in the world of your models, then you can talk about "the" natural numbers, where every number is finite, as a model of PA. And you can also define non-standard models of PA which include infinite numbers. If you use some other foundation, you can probably also define these models in some way. Now whether these constructions mean anything to you, or capture your intuition about "finiteness" is another thing entirely...

The problem is that from within the theory you cannot distinguish between these different models. There's no way of defining a predicate "finite" in first-order PA that agrees with the notion of "finite" that you've established in your meta-logic.

And in general, it's worse than that. As long as your first-order theory allows for any infinite model, there exist models of that theory of any infinite cardinality (this is the Löwenheim-Skolem theorem). So, in general, first-order logic cannot pin down structures uniquely. (Again, this is assuming ZFC in the world of our models.)

This btw also leads to the "Skolem paradox" which states that there are countable models of (first-order) ZFC, even though ZFC says that there exist uncountable sets. But this is only a "paradox" until you realise that "countable" and "uncountable" don't mean the same thing within the model as they do outside of the model.


> There's no way of defining a predicate "finite" in first-order PA that agrees with the notion of "finite" that you've established in your meta-logic.

Why doesn't this work?

Define the predecessor function P such that P(S(n)) = n for all n (and maybe S(P(n)) = n for all n != 0).

FINITE(n) := (n=0) OR FINITE(P(n))

i.e. a number is finite if it is zero or if its predecessor is finite.

Why doesn't that work?

BTW, as an aside, since you seem to know your stuff I thought I'd ask: I noticed as part of a separate discussion over on /r/math that AFAICT there is nothing in PA that distinguishes successor and predecessor until you get to the definition of multiplication. Is that right? Is it noteworthy?


> I noticed as part of a separate discussion over on /r/math that AFAICT there is nothing in PA that distinguishes successor and predecessor until you get to the definition of multiplication. Is that right? Is it noteworthy?

I'm not entirely sure what you mean by this. "PA without multiplication" is basically Presburger Arithmetic (https://en.m.wikipedia.org/wiki/Presburger_arithmetic). One of its axioms (as in PA) is that 0 has no predecessor - but clearly every number, including 0, has a successor. So that's a way of distinguishing predecessor and successor functions.

I believe that when you look at nonstandard models (which basically look like N + any number of copies of Z), the order isn't fixed in those copies of Z (unlike N), so you could swap predecessor and successor within those copies, but I haven't really looked into that all that much.


Yeah, that's not quite what I meant. What I was trying to get at was more analogous to the ambiguity of i and -i. Suppose I wanted to create a system of math whose intended model was the negative naturals rather than the positive naturals. Let's call this new system NPA. NPA has the exact same axioms as PA except with predecessor instead of successor as the core axiomatic construct. So zero has a predecessor, but no successor. Everything works as before, except with the letter P instead of the letter S. Addition still works, you're just adding negative numbers and getting negative numbers as the result, which is what you would expect. It doesn't break down until you get to multiplication, where, if you follow the same definition as standard PA you get the "wrong" result because you want the product of two negative numbers to be a positive number, and there are no positive numbers in NPA. So AFAICT multiplication is required to distinguish PA from NPA. Which raises an interesting question: is there a non-trivial operation you can define to distinguish i from -i the way that multiplication allows you to distinguish 1 from -1?

In every model of a theory, you can just consistently relabel elements in order to get another model. But that doesn't really tell you anything because that model is just isomorphic to the old one (that term has a precise meaning, but I think the "relabeling" intuition captures it).

So your NPA is just a consistent relabeling of elements n to elements -n - it's fully isomorphic to PA. There's nothing particularly interesting about it. Notice that "the product of two negative numbers is positive" doesn't help because in your alternative model, "positive" (or rather, "nonnegative") just means something different (or more precisely, it doesn't even make sense to talk about "nonnegative numbers" because that's all of them).

It only becomes interesting when you have different non-isomorphic models of a theory - that then means that there are (distinguishable) properties of the model that the theory can't talk about.

In the case of the complex numbers, if you just exchange i and -i, I believe that you get a field which is fully isomorphic to C.


> In the case of the complex numbers, if you just exchange i and -i, I believe that you get a field which is fully isomorphic to C.

To add to this and expand a little, there are only two isomorphisms between C and itself which keep all the real numbers fixed: the identity and the conjugate map (which sends a+ib to a-ib, and in particular i to -i).

If you don't fix the real numbers there are other such isomorphisms but I believe you can only construct them via the axiom of choice (so you can't really "write them down").

In the case of the real numbers there's really only one such isomorphism, the identity.
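To sketch why the claim about C holds (a standard argument; in LaTeX): any such isomorphism is pinned down by where it sends i, and i can only go to a square root of -1.

    If $\varphi\colon\mathbb{C}\to\mathbb{C}$ is a field isomorphism with
    $\varphi(r)=r$ for all $r\in\mathbb{R}$, then
    \[
      \varphi(i)^2 = \varphi(i^2) = \varphi(-1) = -1,
    \]
    so $\varphi(i)\in\{i,-i\}$, and since $\varphi(a+bi) = a + b\,\varphi(i)$,
    $\varphi$ is either the identity or conjugation.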


Right. So in those terms, NPA is isomorphic to PA -- until you introduce multiplication, at which point the isomorphism goes away because S(0)*S(0) is S(0) but P(0)*P(0) is not P(0). That seems like an interesting fact to me, and feels like it might be a clue pointing to some deeper truth.

You can't define a model "NPA" in which all numbers are nonpositive but then define multiplication such that it can yield positive numbers - that just wouldn't be a function.

It's pretty easy to prove that S(0)*S(0) = S(0) with the axioms of PA. That has to be true no matter what interpretation you give "S" in your model, so in your proposed model "NPA", in which the domain is the nonpositive integers and S is the predecessor function, of course it has to be true that the "product" of -1 and -1 is -1. It just means that in such a model, multiplication isn't what you think it is.
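For what it's worth, here's that little proof written out (in LaTeX), using only PA's defining equations for + and *, no induction needed:

    \begin{align*}
      S(0)\cdot S(0) &= S(0)\cdot 0 + S(0) && x\cdot S(y) = x\cdot y + x\\
                     &= 0 + S(0)           && x\cdot 0 = 0\\
                     &= S(0 + 0)           && x + S(y) = S(x + y)\\
                     &= S(0)               && x + 0 = x
    \end{align*}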

One should really be careful not to mix up logical systems and their models. The way you're using notation, it's really easy to get confused.


If you start with NPA, it is identical to PA. You can apply all the same logic and prove all the same early theorems. The problem with multiplication is not that the proofs fall apart, it is that multiplication does not mean what you want it to mean.

When you go to the complex numbers, you have a similar situation. The only thing that distinguishes i from -i is what you want the principal branch of sqrt to be. Otherwise nothing would change if you flip them.


> The problem with multiplication is not that the proofs fall apart, it is that multiplication does not mean what you want it to mean.

Well, yeah, that's kind of the point: the positiveness of the naturals is not just a convention, it is actually a logical consequence of something in the axioms. Specifically, it is a logical consequence of the definition of multiplication. I find that interesting.


That ties to one of the standard axioms of the real numbers in classical mathematics.

There exists a set P which is closed under addition and multiplication, such that for every real number x, exactly one of these three things is true: x is in P, -x is in P, x = 0.

P is, of course, the positive numbers. This axiom makes the reals into an ordered field. Add in completeness, and the reals are unique up to isomorphism (given your choice of set theory).
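Spelled out formally (one common way to write it; textbooks vary in the details):

    \[
      \exists P \subseteq \mathbb{R}\ \big[\, P + P \subseteq P \ \wedge\ P \cdot P \subseteq P \ \wedge\
      \forall x\, (\text{exactly one of } x \in P,\ {-x} \in P,\ x = 0) \,\big]
    \]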


You can have an infinite chain of predecessors of non-standard numbers, each claiming to be FINITE(). That satisfies your definition. This can easily be demonstrated in Robinson arithmetic, which has simple, understandable non-standard models.

Robinson arithmetic is an excellent example here.

Using ZFC, Robinson can take a standard model of the reals and construct a nonstandard model, and a map from the standard model into the nonstandard model. Every statement of first order logic that is true of the standard model is true in the nonstandard model. Likewise, every statement of first order logic made in the nonstandard model that only involves things coming from the standard model is also true of the standard model. This is his "transfer principle". However, the nonstandard model comes with things like infinitesimals and infinite numbers. So in the nonstandard world we can really do things like define a derivative as dy/dx, or an integral as an infinite sum. Then recover the classical definitions through the transfer principle.
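For instance, with st the standard-part function and $\varepsilon$ a nonzero infinitesimal, the nonstandard definition of the derivative reads:

    \[
      f'(x) \;=\; \operatorname{st}\!\left(\frac{f(x+\varepsilon)-f(x)}{\varepsilon}\right)
    \]

and one can show via transfer that this agrees with the classical limit definition.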

See https://terrytao.wordpress.com/2007/06/25/ultrafilters-nonst... for an explanation of how this actually works in some detail. But if you get your head around it, you'll see exactly why first order logic cannot possibly capture the idea of being a standard model. Because, in first order logic, the nonstandard model really is indistinguishable. And so finite cannot be expressed in first order logic.


Note that I meant https://en.wikipedia.org/wiki/Robinson_arithmetic , while you wrote about results from a different Robinson: https://en.wikipedia.org/wiki/Nonstandard_analysis .

Oops, thank you for the correction! I learned something today.

Though I'm sure you can see how I thought that the other Robinson fit into the conversation.


> You can have an infinite chain of predecessors of non-standard numbers, each claiming being FINITE().

Ah. That makes sense. Thanks.

So surely someone has come up with a system that captures the intuitive spirit of mathematical induction, i.e. that the chain has to actually be rooted at 0 rather than have an infinite chain of predecessors or be a loop? How hard could that be?


> How hard could that be?

It seems like it should be straightforward. But it's mathematically impossible.

Similarly, it seemed to David Hilbert like it should be possible to have a system of formal logic that all could accept, which would prove itself to be consistent. But that also proved to be impossible. And the logic which led to that being proved impossible is deeply connected to the first line of reasoning that I gave you for why this is also impossible.

And in the end what they all come down to is some kind of self-reference. In this case, to have a finite statement define what it means to be finite, we must already know what finite means. Second order logic sort of assumes that we can simply know what finite means. First order logic doesn't allow you to wave your hands like that. You can't cheat in any way. And so you can't find a way to do it.


> Similarly, it seemed to David Hilbert like it should be possible to have a system of formal logic that all could accept, which would prove itself to be consistent.

Minor nitpick, but I don't think this fully captures the essence of Hilbert's program. If we had a system that could prove itself consistent, that would mean nothing, because the system could be inconsistent, therefore unsound and so its consistency "proof" wouldn't mean anything.

Rather, the goal was to start from an arguably "uncontroversial" base of mathematics (something like primitive recursive arithmetic, which intuitionists might also accept) and use that to bootstrap "infinitary mathematics" of the kind the formalists such as Hilbert were arguing for. The hope was that one could prove the consistency of such more abstract mathematics (PA or ZFC) from the more concrete foundations.

Unfortunately, Gödel's theorems completely demolish that idea. It's not just that a (sufficiently strong) theory can't prove itself consistent, it also can't be proven consistent by a strictly weaker theory such as PRA (because if such a proof existed in the weaker system, it would of course also exist in the original system).
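Schematically, the parenthetical argument is just monotonicity of provability (in LaTeX):

    \[
      \mathrm{PRA} \vdash \mathrm{Con}(\mathrm{PA})
      \;\Longrightarrow\;
      \mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA})
      \qquad (\text{since PA proves everything PRA does}),
    \]

and the right-hand side is exactly what the second incompleteness theorem rules out for a consistent PA.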

Interestingly, there are systems which are neither stronger nor weaker than PA which can prove PA consistent: https://en.m.wikipedia.org/wiki/Gentzen%27s_consistency_proo...


That goal is what I meant by the phrase "that all could accept". You're right that I could have expanded on it though.

That said, Gentzen's result is new to me. Though I have seen other results that are somewhat similar.


> Second order logic sort of assumes that we can simply know what finite means. First order logic doesn't allow you to wave your hands like that.

OK, this I do not understand. How does 2ndOL "sort of assume that we can simply know what finite means"? That seems like an extraordinarily imprecise characterization of 2ndOL. 2ndOL is a formal system just like 1stOL. Neither one lets you "wave your hands" about anything AFAICT.


Read https://plato.stanford.edu/entries/logic-higher-order/ if you wish.

I view the "for all properties" bit as handwaving. See particularly section 6 of that article, which at one point gives:

The semantics of second-order logic depending on these hard questions of set theory puts second-order logic into the same “basket” with set theory. Criticism of set theory becomes criticism of second-order logic and vice versa.

My Constructivist tendencies are due to my discomfort with accepting the validity of set-theoretic constructions with decisions based on unverifiable truths. Accepting second order logic requires accepting quantifying over similar collections of unverifiable claims. I'm going to be no more sympathetic.


Thanks.

In any model of PA, all natural numbers will be finite by this definition. Even if we, from outside of the model, know that many of them aren't.

You've been presented with two different ways to prove this, and a standard textbook. Perhaps read the textbook?

On your aside, in PA there is one element that is not the successor of any other. And it likewise has no predecessor. (It is a matter of convention whether you call that special number 0 or 1.) That special number is always the base case of all induction proofs.


You don't define it.

You list things that you want to be true of it. The Peano Axioms are examples. But no matter how long your list gets, it can always get longer. For example we can ask whether a particular polynomial has integer roots. See https://en.wikipedia.org/wiki/List_of_statements_independent.... (It doesn't, but you can't prove that in ZFC.)

Even so, there is always an "...and so on." You never finish, because finite does not actually have a finite definition.

Second order logic gets around it because you can say something that simultaneously implies all the things that you want to say about "finite" in one go. Such as, "All possible functions from itself to itself that are onto are also one-to-one." A first order model need not capture the full intent of this statement, because it does not contain all possible functions. As we can see when we look at a model from within other models.

But when I look at what the second order idea of "all possible functions" actually means, I think we're just begging the question of what it means to be finite.

The way that I think about it is that some things are finite. Some things are infinite. And some things on the boundary are not yet determined. But then again, I do have Constructivist sympathies. And this is not a point of view that most mathematicians are very sympathetic towards.


Proofs are natural numbers via Gödel's encoding, or any other encoding scheme that you'd like (e.g. ASCII). The sentence "PA is inconsistent" can be stated in the language of PA and it means "there is a number that encodes a valid proof from the axioms of PA to a contradiction (e.g. 0 ≠ 0)". This sentence is either true in all models of PA (in which case PA would be inconsistent, which we don't think it is), or it is true in some models and false in others - it can't be false in all models because otherwise, by completeness, there'd be a proof of PA's consistency in PA which is what the second incompleteness theorem rules out.

The reason why this is not a category error is because of the self-referentiality of strong enough systems of logic.
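Schematically, with $\mathrm{Prf}_{\mathrm{PA}}$ the (arithmetically definable) proof predicate and $\ulcorner\cdot\urcorner$ the Gödel numbering:

    \[
      \neg\mathrm{Con}(\mathrm{PA}) \;:\equiv\; \exists p\;\mathrm{Prf}_{\mathrm{PA}}\big(p,\ \ulcorner 0 \neq 0 \urcorner\big)
    \]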


> That seems implausible. All natural numbers are finite. I can easily express that in F.O.L.

Just for the sake of clear terminology, I'll take "finite natural number" to mean that it's either 0 or formed by applying the successor function to zero finitely many times.

Here's an easy proof for why you can't prove that every number is finite in PA:

Take the theory PA' = PA + the infinitely many sentences "c > 0", "c > S 0", "c > S (S 0)", etc. (with a new constant c)

We can probably agree that the standard natural numbers aren't a model of PA' because there is no natural number greater than every other natural number.

However, the natural numbers are clearly a model of any finite subset of that theory. And here's where we use the compactness theorem (which is easily proven from completeness): https://en.m.wikipedia.org/wiki/Compactness_theorem

By compactness, PA' has a model. But that model can't be the standard model of arithmetic. Yet it is also a model of PA. Thus we have a model of PA in which infinite numbers exist - so we can't prove that all natural numbers are finite from PA.
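In symbols (writing $S^n(0)$ for $n$ applications of $S$ to $0$), the theory above is

    \[
      \mathrm{PA}' \;=\; \mathrm{PA} \,\cup\, \{\, c > S^n(0) \;:\; n \in \mathbb{N} \,\},
    \]

and any finite subset is satisfied by the standard model by interpreting $c$ as a large enough number - which is exactly what compactness needs.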

To rule out such models - and therefore prove that every natural number is finite - you need the second-order induction axiom.


> To rule out such models - and therefore prove that every natural number is finite - you need the second-order induction axiom.

Sorry, I don't see why second-order induction is required. AFAICT all you need is a first-order induction axiom instantiated on the predicate FINITE.


I don't know why I wrote an entire proof outlining why first-order PA cannot rule out infinite numbers, if you just ignore it and insist on your misconception. The set of finite numbers is undefinable in the language of first-order PA, which is why first-order induction isn't enough.

Read a textbook, these are well-known facts. Here's a free one that is quite good, chapter 3 is relevant: https://milneopentextbooks.org/a-friendly-introduction-to-ma...


Discussing Gödel in terms of "truth" immediately brings about deep philosophical discussions about what that actually means in the context of mathematics. If the "true" nature of the (standard model of) arithmetic is unknowable in full, does it even exist?

I like to be pragmatically classical in my mathematical outlook (I don't worry about LEM), but when we come to foundations, I find that we need to be a little bit more careful.

Gödel's original proof (or rather, the slightly modified Gödel-Rosser proof) avoids notions of absolute truth - it says that for any consistent, effectively axiomatized theory of arithmetic, there must exist undecidable sentences. These are ultimately purely syntactical considerations.

(I'm not claiming that there is no relation to "truth" at all in this statement, just that things become less precise and more philosophical once that's how we're going to frame things)


These deep philosophical discussions are about absolute "truth" in general, but in logic, the truth of a formula in a specific mathematical structure is well-defined through structural induction over the formula, albeit with non-finite, non-computable steps. Yes, from a strict-finitistic point of view, even the concept of truth in the standard model of arithmetic could be problematic.

I'm personally not a constructivist or even finitist (mostly for pragmatic reasons), but discussions about Gödel get sidetracked by musings about either of these philosophies so often that I feel it's important to point out that there is an understanding of Gödel's theorems that can be understood and discussed regardless of one's commitment to a particular philosophy of mathematics (well, at least assuming some "reasonable" base).

If you are a finitist, you could interpret Gödel's theorem to mean that infinite systems such as the natural numbers don't actually exist, because they can never be understood in full (I'm not a finitist, so maybe I'm wrong in this conclusion). If you're a classical platonist, they would mean that the "natural numbers" do exist in some platonic realm, but will never fully be captured by human mathematics. If you're a formalist like me, maybe you'd say that it's very useful to pretend that the natural numbers exist, but that strictly speaking we don't really fully understand what they are or should be (but it turns out not to matter all that much).

Either way, a complete theory of the natural numbers doesn't exist.


> It's just that Godel's proof cannot be rendered in PA

This is incorrect; the proof can be carried out in very weak subsystems of PA.


PA can model Turing Machines and reason about them. I feel like you missed the important link between logic and computability theory.
