ebingdom's comments

As someone with years of experience writing machine-checked proofs in Coq, I cannot imagine the term "Software Engineer" shifting to that meaning, even though it may make sense when compared to other engineering disciplines. The programmers who have no experience with this technology outnumber us many thousands to one. They'd never sign off on it.

That said, I'm all for anything that results in more appreciation for mathematical rigor in programming. It's like pulling teeth trying to get colleagues to use tools/languages that help with reasoning about code (even just a good type system), and for a lot of programmers math seems to be an unapproachable alien language.

One thing I'd like to change about the software industry is the perception that formal verification is too hard to do in practice because you can't even write down a complete specification for the program. The misconception there is that all of the program's behavior needs to be specified in order for formal verification to be useful. Why can't gradual formal verification be a thing?


Our approach for cryptographic systems is pretty much gradual verification. We target high risk or worrisome systems and verify piece by piece. It works! Formal verification is a relatively expensive technique so it's necessary (in my opinion) to target the places where you can achieve the best bang for buck.

For system reliability at scale, I think that stronger type systems and systematic testing techniques are probably the best choice. Anyway, that puts the system in a much better state if you want to apply formal verification later.
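To make "systematic testing techniques" concrete, here is a hand-rolled sketch of property-based testing in TypeScript. The helper names (`forAllIntegers`, `encode`, `decode`) are illustrative, not from any particular library; real tools like QuickCheck-style frameworks add shrinking and smarter generators on top of this idea.

```typescript
// A property-based test checks a law over many random inputs,
// rather than a handful of hand-picked examples.
function forAllIntegers(property: (n: number) => boolean, trials = 1000): boolean {
  for (let i = 0; i < trials; i++) {
    const n = Math.floor(Math.random() * 2001) - 1000; // random int in [-1000, 1000]
    if (!property(n)) {
      console.error(`property failed for input ${n}`);
      return false;
    }
  }
  return true;
}

// Example property: decoding an encoded value is the identity.
const encode = (n: number) => n.toString(16);
const decode = (s: string) => parseInt(s, 16);
console.log(forAllIntegers(n => decode(encode(n)) === n)); // true
```

A passing run is not a proof, but a failing run is a concrete counterexample, which is why this style of testing pairs well with formal verification applied later.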


> and for a lot of programmers math seems to be an unapproachable alien language

> is the perception that formal verification is too hard to do in practice because you can't even write down a complete specification for the program

I think the main problem is that it's too formal. Turning something that is really about simple logical deduction into thick abstract maths is sure to dissuade the majority of the people who might find it useful. The elitist gatekeeping attitude that cryptographers tend to have doesn't help either.


A problem here is that it's not a simple logical deduction being converted to "thick abstract math". The math is the proof. A formal proof is necessary to verify the deduction.


I wholly agree it’s important to contextualize formal methods — and particularly in terms of levels of rigor. Typing your program is already basic formal methods, and we do ourselves a disservice by using language which places proofs outside the realm of typing, testing, etc.

I think the “middle ground” between things like typing and things like fully specified programs are things like CDK apps which hook IAM policy or network reachability tools to analyze your infrastructure (and, eg, exclude open ports on the backend).

Which is slowly happening, eg AWS or Prime Video.

https://aws.amazon.com/security/provable-security/


I love math and a good type system. I know little about formal verification of code. Are there tools (ideally) or languages (for educational, if not production, use) you recommend I look at?


I went through Benjamin Pierce et al.'s Logical Foundations [0], which uses Coq. It was great fun, and I am someone who has a pattern of shying away from mathematics a little too often. That said, I'm not the right person to tell you why and when Coq > all other proof systems. Pierce is famous though, and so are his books.

[0] https://softwarefoundations.cis.upenn.edu/


Can you elaborate a bit more on the kind of projects you have used Coq (and other tools) to prove correct? I am very much interested in moving in that direction career wise.


This is a really uninformed opinion. I'm disappointed that it's currently the top comment.

You really think functional programmers don't build composite types?


It's like listening to a flat earth advocate.


I agree about the math terminology, but I think it would be more confusing if we created a completely different set of vocabulary for the same concepts. So I don't really know what to do: refer to something by its proper name, or create a new, less precise name to make it sound less scary? Why do we find certain identifiers scary in the first place?


I don't. FP terminology is so bad that clearer names (Mappable, Chainable, etc.) would really help adoption.

You might say "but it will make it more confusing for people who already know the FP terms", but those people are a tiny minority of programmers so it doesn't make sense to cater to them. At least if you want your language to be popular among anyone except academics.
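As a sketch of what "friendlier" names might look like, here is a TypeScript rendering where `Mappable` plays the role of Functor and `Chainable` the role of Monad. These names and the `Maybe` container are hypothetical illustrations, not an existing library's API.

```typescript
// Hypothetical "friendlier" FP names: a Mappable lets you transform the
// contents; a Chainable lets you sequence operations that themselves
// return a wrapped value.
interface Mappable<T> {
  map<U>(f: (value: T) => U): Mappable<U>;
}

interface Chainable<T> extends Mappable<T> {
  chain<U>(f: (value: T) => Chainable<U>): Chainable<U>;
}

// A minimal Option-like container implementing both:
class Maybe<T> implements Chainable<T> {
  private constructor(private readonly value: T | null) {}
  static just<T>(value: T): Maybe<T> { return new Maybe(value); }
  static nothing<T>(): Maybe<T> { return new Maybe<T>(null); }
  map<U>(f: (value: T) => U): Maybe<U> {
    return this.value === null ? Maybe.nothing<U>() : Maybe.just(f(this.value));
  }
  chain<U>(f: (value: T) => Maybe<U>): Maybe<U> {
    return this.value === null ? Maybe.nothing<U>() : f(this.value);
  }
  getOrElse(fallback: T): T { return this.value ?? fallback; }
}
```

Usage: `Maybe.just(2).map(x => x * 3).chain(x => Maybe.just(x + 1)).getOrElse(0)` evaluates to 7, while starting from `Maybe.nothing()` short-circuits to the fallback.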


> You might say "but it will make it more confusing for people who already know the FP terms"

People who already know FP would be fine. Newcomers would be the ones to suffer. We already have a large number of tutorials, blogs, books, etc. using the existing terminology. For a programmer new to FP to tap into that they would need to learn both the "friendlier" (whatever that means) terms as well as how they map to the math terms.

We need to stop treating math like some unlearnable alien language—you'd still need to learn it anyway, regardless of what you call the abstractions. Also in many (perhaps most) cases there is no clear choice for the "friendly" term. Words like "chainable" would be highly ambiguous, as many different types of abstractions support some notion of "chaining". This would likely result in bikeshedding and more divergence in terminology and less precision in meaning.

If you think the math terms are bad, try asking a mathematician to rename the terms in their field. They'll tell you the same thing.


> The lambda calculus is an entirely arbitrary way to organize things in math. It’s not based on nature or truth at all.

Lambda calculus, category theory, and logic are essentially 3 sides of the same coin (the Curry-Howard-Lambek correspondence). The rules of lambda calculus match those of natural deduction. It runs quite a bit deeper than you're suggesting here. It's not just some arbitrary formalism.
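The correspondence can be made concrete even in an everyday language. A rough TypeScript sketch (the names `modusPonens` and `andComm` are my own labels for illustration): under propositions-as-types, function application is implication elimination, and a pair type is a conjunction.

```typescript
// Curry-Howard in miniature: types are propositions, programs are proofs.
// A term of type ((a: A) => B, A) => B is a proof of modus ponens:
// from "A implies B" and "A", conclude "B".
const modusPonens = <A, B>(implication: (a: A) => B, premise: A): B =>
  implication(premise);

// Conjunction corresponds to a pair; swapping proves (A ∧ B) → (B ∧ A):
const andComm = <A, B>(pair: [A, B]): [B, A] => [pair[1], pair[0]];
```

That a well-typed program exists at all is the proof; the type checker is the proof checker.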


Glad to see these comments because the parent's "it's math therefore from a deity, and much more correct than your grubby computer stuff" is so often trotted out. It turns out that there's no directionality to the relationship between mathematics and computers -- they're the same thing and one can be transformed into the other. e.g. Lambda Calculus is just a kind of VM someone came up with that can be used to model certain structures. Same for Category Theory, and same for Logic. So you might as well say that Lambda Calculus is derived from IBM 360 assembler language than the other way around.


I wonder why they don't just use the highest precision possible given whatever representation of numbers they're using? I know these extra digits would be unlikely to ever matter in practice, but why even bother truncating more than necessary by the hardware? (Or do they not use hardware to do arithmetic calculations?)


They do. This is precisely the number of accurate digits you get when you use a double (i.e. 64 bit floating point). https://float.exposed/0x400921fb54442d18
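This is easy to verify in any language whose numbers are doubles, e.g. JavaScript/TypeScript:

```typescript
// A double (IEEE 754 binary64) carries roughly 15-17 significant decimal
// digits. Math.PI is the closest double to pi; printing it shows exactly
// the digits that survive the 53-bit mantissa.
console.log(Math.PI.toString()); // "3.141592653589793"

// Writing out more digits of pi makes no difference: the literal rounds
// to the very same double.
console.log(Math.PI === 3.14159265358979323846); // true
```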


This article is completely incongruous and reveals fundamental misunderstandings of category theory. That's actually being generous; most of the content isn't even coherent enough to be wrong.

> Category Theory (at least applied to computing) studies how instructions are assembled into running programs.

Um, no? As a software engineer who has studied category theory, I use it mostly for denotational reasoning about programs, and to inform type-level decisions. It has nothing to do with "instructions". Perhaps a generous interpretation of this statement would be that monads, one very specific concept in category theory, can be used to model imperative programs—but that would be highly reductionist.

> That’s not to say that FP is useless. FP is obviously quite useful in many situations - but perhaps not be enough to displace the central role of wrapper functions / lambda calculus.

What? Functional programming _is_ lambda calculus. The idea of it displacing lambda calculus makes no sense.

> Why? Mathematically speaking, Category Theory is basically the same as Relational Theory (at least applied to Sets).

Oof. Rel is one category; Set is another.

> There is this fanciful notion of a “computational trinity”- the idea is that devs can somehow write to a single source of “truth” and let artificial intelligence / automation decide how it gets translated to hardware.

No, the trinity refers to the connections between category theory, logic, and programming. It has nothing to do with AI.

Was this article written by an AI?


> Was this article written by an AI?

This is my first thought too. The author sounds like they know FP by swallowing all kinds of info about it from articles around the net regardless of them being right or wrong.


FP > lambda because it avoids wrapper functions by using chains

However functions still have single entry/exit that limit reusability of the contents

Cat crafting > FP because it can avoid functions entirely


> However functions still have single entry/exit that limit reusability of the contents

I argue that is due to granularity of function implementation. After all, if I have:

f(x) = 2x + 3

That is really

g(x) = 2x

h(x) = x + 3

and f(x) = h(g(x)). So by breaking the function into irreducible pieces I have achieved maximum reusability.
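The decomposition above, written out in TypeScript with a generic `compose` helper:

```typescript
// g(x) = 2x and h(x) = x + 3 are the irreducible pieces.
const g = (x: number): number => 2 * x;
const h = (x: number): number => x + 3;

// Generic composition: compose(outer, inner)(x) = outer(inner(x)).
const compose = <A, B, C>(outer: (b: B) => C, inner: (a: A) => B) =>
  (a: A): C => outer(inner(a));

// f = h ∘ g, i.e. f(x) = 2x + 3.
const f = compose(h, g);
console.log(f(5)); // 13
```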

It seems like the core argument here is that if we (effectively) break down all functions into their smallest granularities and have some sort of ability to (SQL) "SELECT" / project / join those functions together we'll enable some sort of higher-level ability to construct complex software.

There are three reasons I disagree with this:

1) The universe of irreducible functions is infinite.

2) The universe has state and evolves. This means that the (SQL) "SELECT" / project / join of functions described above must itself necessarily change its behavior with the evolution of the universe, and I suspect this (in itself) is an intractable problem.

3) From an API perspective, 99% of the APIs today do not even form a category. That is, I cannot directly connect one object (API) to another object (API) because no morphism currently exists to enable me to do so. Worse, what that morphism is depends on the product of the domain and range of the APIs being connected. Automatic code traversal via some sort of SELECT is doomed to failure unless the set of morphisms is complete. The alternative is to try to automatically generate the correct morphism based on the SELECT...

... and that is exactly what software developers do. So while Multix has some interesting features built into it, based on the above it appears that the goal of the research is to essentially build a god-like AI that writes the software for us. I think that's really why there is abundant skepticism here.


Yes, there is a long-standing debate in categories about this "reduction" problem but that is mostly theoretical

In practice, you only need to break down enough to interface w existing APIs

The morphisms are controlled by the crafting system, which is really a higher order type system

No you don't need to achieve god-like powers any more than GitHub is the source of all code on the planet (it doesn't have to be all or nothing).

Even the crafting rules themselves contain FP chains. In my real world work, I use a mix of crafting, FP chains and old school functions.


you may be right but i think you're doing it wrong


I would invite everyone to look at the Multix categorical machine. The FP style chains and crafting work wonderfully even over a network https://portal.multix.catatonic.ai/


Why?


You as the developer are the one who has to decide which tech is worthy of your attention. And Cats are trying to compete with FP for your attention

Reducing or eliminating "functions" means reducing or eliminating the refactoring problem


You are behaving defensively and a bit irrationally


Fair enough, I needed to improve the answer

The main claim is that crafting > functions because crafting does not have the single entry/exit drawback that functions have


Categories are chains of morphisms

If you don't get that, maybe you missed something in your training

And yes it has everything to do with AI (not categories per se but the article)


Categories are objects and morphisms with identities and composition, subject to some coherence laws characterizing identities and stating the associativity of composition. Category theory studies categories and related constructions, such as functors, natural transformations, adjunctions, (co)limits, universal properties, etc.
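The simplest instance is the category whose objects are types and whose morphisms are functions; the coherence laws can then be spot-checked pointwise. A TypeScript sketch (the sample functions are arbitrary):

```typescript
// Objects: types. Morphisms: functions. Identity: the identity function.
// Composition: ordinary function composition. The coherence laws:
//   id ∘ f = f = f ∘ id          (identity)
//   (h ∘ g) ∘ f = h ∘ (g ∘ f)    (associativity)
const id = <A>(a: A): A => a;
const compose = <A, B, C>(g: (b: B) => C, f: (a: A) => B) =>
  (a: A): C => g(f(a));

// Spot-check the laws on a sample input (a check, not a proof):
const f = (n: number) => n + 1;
const g = (n: number) => n * 2;
const h = (n: number) => n - 3;

console.log(compose(id, f)(10) === f(10));
console.log(compose(compose(h, g), f)(10) === compose(h, compose(g, f))(10));
```

Proving the laws for all inputs, rather than sampling them, is exactly what a proof assistant buys you.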

> And yes it has everything to do with AI

No, category theory is not about AI. My training was certainly sufficient for me to refute that connection, and the way you keep referring to my "training" rubs me the wrong way.


Categories are about composition. Crafting is also about composition - using inference rules. https://en.wikipedia.org/wiki/Category_theory

The classic schematic representation of X Y Z shown on the above wiki page is basically a crafting rule to create Z. That is, if the AI knows about a "path" (think back to your homotopy training) from X and Y to Z then it can craft a Z from X,Y if it needs to


> it has everything to do with AI

Could you elaborate on this? What does it have to do with AI?


Do you have any Prolog background? Inference rules? Armstrong's axioms etc.?


For the laymen in the audience, such as myself, can you answer the question?


(repost from similar question above)

Categories are about composition. Crafting is also about composition - using inference rules. https://en.wikipedia.org/wiki/Category_theory

The classic schematic representation of X Y Z shown on the above wiki page is basically a crafting rule to create Z. That is, if the AI knows about a "path" (think back to your homotopy training) from X and Y to Z then it can craft a Z from X,Y if it needs to

Yes this is exactly reverse to how we normally think about the caller/callee relationship


I usually just upvote. But I'd like to take the time to thank you for responding. Have a good one.


HN skepticism is warranted and deserves decent answers


> but does that mean it's a good idea?

Yes, to me it does. I love when variables have really tight scopes so I don't need to worry about if/how they are used throughout the rest of the program. Locality should be the default, not something that needs to be justified.

Creating a function is unnecessary boilerplate if the code is not going to be reused. Functions are just named blocks with arguments; if you don't need to name the block or reuse it with varying arguments, that boilerplate buys you nothing.

Functional programmers are familiar with a concept called "let", which essentially gives you a way to introduce a variable and introduce a new scope just for that variable. This article is essentially just showing how to do that in JavaScript.
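Two common ways to get a `let`-style local scope in JavaScript/TypeScript. This is a sketch of the general idea; the article's own examples may differ.

```typescript
// 1. A bare block: `temp` is invisible outside the braces.
let result: number;
{
  const temp = 2 + 3;
  result = temp * temp;
}
// `temp` is out of scope here; only `result` survives.

// 2. An immediately-invoked arrow function, closer to FP's `let x = e in body`:
const result2 = ((temp: number) => temp * temp)(2 + 3);

console.log(result, result2); // 25 25
```

The second form makes the scope of `temp` exactly the function body, mirroring how `let` binds a variable over one expression.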


> Cold start (time to first response, ms) O(100) O(1000) O(10000)

Ugh, that's not how big O notation works.


On the other hand, it is how it is often used informally in speech. I would probably have written eg ~100 instead, but I easily understood what they meant.


Yes it is.

Big-O means that an arbitrary function is bounded from above by some other function, i.e. that other function (up to a constant factor, eventually) is at least as large as our arbitrary function.

f(n) \in O(n^2) means n^2 (ignoring constant factors) eventually dominates f(n). If there are no growing terms in your O(g), then you only state the constant factor, as in O(1).

So saying Cold_start(service) \in O(100 ms) is exactly the same as saying the cold start will always be below 100 ms. It makes sense to not say they are all O(1), although strictly they are, as the interesting bit is the difference in magnitude of the constant.


> Welcome to Move, a next generation language for secure, sandboxed, and formally verified programming.

I didn't see anything about formal verification in the rest of the documentation. Does it have dependent types? Does it have a model checker? Does it have anything that would allow me to verify mathematical properties of my code?


Until wasm provides a way to release memory pages, nah.

