
Probably Brzozowski's derivative of regular expressions; see e.g. https://matt.might.net/papers/might2011derivatives.pdf
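A minimal Python sketch of the idea (my own toy version, not code from the paper): the derivative of a regex with respect to a character c is a regex for the remainders of its strings that start with c, and matching is just repeated derivation followed by a nullability check.

    # Toy Brzozowski-derivative matcher (illustrative sketch only).
    class Empty: pass                      # matches nothing
    class Eps: pass                        # matches only the empty string
    class Char:
        def __init__(self, c): self.c = c
    class Alt:
        def __init__(self, r, s): self.r, self.s = r, s
    class Seq:
        def __init__(self, r, s): self.r, self.s = r, s
    class Star:
        def __init__(self, r): self.r = r

    def nullable(r):                       # does r accept the empty string?
        if isinstance(r, (Eps, Star)): return True
        if isinstance(r, Alt): return nullable(r.r) or nullable(r.s)
        if isinstance(r, Seq): return nullable(r.r) and nullable(r.s)
        return False                       # Empty, Char

    def deriv(r, c):                       # derivative of r w.r.t. character c
        if isinstance(r, (Empty, Eps)): return Empty()
        if isinstance(r, Char): return Eps() if r.c == c else Empty()
        if isinstance(r, Alt): return Alt(deriv(r.r, c), deriv(r.s, c))
        if isinstance(r, Star): return Seq(deriv(r.r, c), r)
        head = Seq(deriv(r.r, c), r.s)     # r is a Seq
        return Alt(head, deriv(r.s, c)) if nullable(r.r) else head

    def match(r, s):
        for c in s:
            r = deriv(r, c)
        return nullable(r)

    r = Star(Seq(Char('a'), Char('b')))          # (ab)*
    print(match(r, "abab"), match(r, "aba"))     # True False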


How nice of you to copy/paste my reddit comment!



You type 'proc' and let the editor autocomplete it for you: this is a long-solved issue.


I like the parens for the structured editing they allow. Also, you don't need parens to be homoiconic.


The -O3 switch removes a recursive call, among other things (https://godbolt.org/z/oS3Cju).


Interesting. (Awesome compiler explorer website, btw).

The Julia version (whose runtime is halfway between the C versions with vs without -O3) contains both recursive calls, FWIW.


I disagree: it's not that hard to gradually get to a solution via the REPL.


I've played around with APL a bit, and I agree that it's not that hard to gradually get to a solution in the REPL, but...

The problem I have is that when I gradually get to a solution in Lisp/Ruby/Python, I end up with something which is also close to lots of other problems I want to solve (or that my manager or users ask for). In solving the problem, I've naturally built up tools for that entire problem space.

In APL, I end up building a solution which is not at all close to other problems I might want to solve. I'll take an identity, flip this, zip it with that other array, reduce it with this function, and then poof, the final step slides all the puzzle pieces together and it's perfect (and only 17 characters!), but only for that problem exactly as stated. When I get a request for something slightly different, I almost have to start from scratch.

I assume this is what the "beautiful diamond" / "ball of mud" dichotomy referred to.


> When I get a request for something slightly different, I almost have to start from scratch.

You don't really start from scratch though (at least I don't): most of the time you can reuse idioms with minimal adaptation.


> When I get a request for something slightly different, I almost have to start from scratch.

Do you start over from scratch in terms of your mental model or just in terms of the actual code?

I’ve found using APL helps me to generalize a problem but the underlying code will often look very different.

I have found I waste a lot of time in other languages trying to re-use components that aren’t necessarily a good fit just because I have already written them.


Your last point is something I fight my hardest to avoid. It’s hard, but leaning on small, generic functional primitives and building solutions through those has helped a lot (see: Ramda, fp-ts, and others).


This. The REPL allows you to incrementally discover the way to do it, and all the while you are exercising the language. You learn the other primitives quickly from the mistakes you make, so it's all productive. Soon you know how to do certain things for future problems, and you carry these up to the next level.


It's also not a characteristic specific to APL; it's specific to REPL languages.


I know, especially as an old Lisper. However, J's succinctness allows you to fit a lot into a one-line REPL idea: no blocks, curly braces, or indents like when using other languages' REPLs. A simple backspace and you've deleted a noun; one character or two and you're creating a lot more than full words do in other PLs. Tools for Thought, by Ken Iverson: like using math symbols, but in a REPL.


> And yes, many times it’s hit-n-miss, and someone walking by will tell you how it’s trivial with technique X

Substitute X/REPL.


Actually there is: be an EU resident. It's even explicitly stated there.


The latest release is from June 28, 2018. The SBCL REPL kinda sucks by itself, but nobody uses it that way.


I hope you realize the Idris code actually proves that the list will be correctly sorted, unlike your Python code.


I am skeptical; prove that it proves such a thing.

Might it not actually be easier to prove that the Python code sorts than that the Idris code proves sorting?


The idea is that, if you can describe the problem (here, sorting) with a few lines, you don't have to prove that the Idris code is correct, because the machine does it for you. You can throw that sort function, without even looking at the code, into any critical space mission you want and, if the premises are true, it will always sort correctly.

You can imagine three kinds of code:

- code that describes the problem (both human-readable and machine-readable)

- code implementing the solution (here, your Python script)

- code proving that the implementation correctly answers the problem (which would be math if you had to prove your Python code on a whiteboard)

The idea is more about reusing and sharing trusted code, re-running the proof on your own system to make sure it's legit, and moving on. You don't need to look at the proof to use the sort function; you would only look at the code that describes the problem and make sure it's the right one for your problem at hand.
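To make the first layer concrete, here's a rough Python sketch (names are mine): a machine-checkable description of what a correct sort is. The second layer would be the Python sort itself; the third layer, the machine-checked proof, has no real Python counterpart.

    from collections import Counter

    # Layer 1: describe the problem. A result is a correct sort of the
    # input if it is ordered and is a permutation of the input.
    def is_sorted(xs):
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def is_valid_sort(original, result):
        return is_sorted(result) and Counter(original) == Counter(result)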


> The idea is that, if you can describe the problem (here, sorting) with a few lines, you don't have to prove that the Idris code is correct, because the machine does it for you

The machine can't show that your specification is correct, only that it's self-consistent. What you're doing is writing a proof that the compiler will use to verify your code. However, if you make a mistake in your proof, then the compiler will happily verify the wrong thing.

The argument is that it's much harder to spot a mistake in a 300-line Idris proof than it is in a 5-line Python implementation.


> However, if you make a mistake in your proof, then the compiler will happily verify the wrong thing.

I don't use theorem provers in my day-to-day tasks, but I had the 101 course at university. Back then I wrote basic proofs with the tools, and if your problem is correctly stated and entered in the system, you simply CAN'T arrive at an invalid proof. The only way to fool the compiler is to use logical shortcuts, which are clearly defined in terms of language keywords, so you know exactly where the weaknesses in the proof are and can look for them.

Edit: I don't know every theorem prover out there, so to give a bit more context about my experience: it was with the Coq theorem prover.


> if your problem is correctly stated and entered in the system, you simply CAN'T arrive at an invalid proof.

How is that different from: if your problem is correctly stated, and correctly coded into the system using x86 machine language, then you can't arrive at a bug!


At least in Coq, there is no "bug" when you write a proof: it uses mathematical notation and logical rules to conclude a fact. Because of that, you cannot state all the problems you would be able to state with x86. What I meant is that such theorem provers don't give you false positives: if it tells you your proof is correct, then there can't be a "bug" in your reasoning.


In Coq your proof is your program, and if you end up proving the wrong thing that's a bug in the program. There's no magic here. Only a human can tell whether the code does what was intended. To do that you have to understand the code.


If you are proving the wrong statement, then you simply didn't state the initial problem correctly; it has nothing to do with a bug in your proof. It's like saying there is a bug in your implementation of a sort function when we initially asked you to implement a max function. Even if your sort function is correctly implemented, it's still a "bug" to the person who asked you to implement a max function.


That's my whole point. You're really just pushing the problem a step back instead of solving it. Instead of worrying about bugs in your implementation, you're now worrying about bugs in your specification.


The problem is that these proofs are not the specification. They are very detailed. The specification says "write me a sort function", and the proof is some gobbledygook that deals with irrelevant minutiae of the sorting implementation. Where is the proof which proves that that proof matches the specification?


A theorem prover lets you prove that your code is bug-free in some cases. Why not use a theorem prover for your specifications too?


Because only a human can decide whether the specification matches the intent. This is not a hard concept. The machine can only tell you that your specification is self-consistent, not that it's specifying what you wanted.


The machine could go a level above and help the human reflect on their intent. That would basically be a machine convincing the human that he/she is already happy right now, doesn't need to earn more money, and therefore doesn't need that "new e-commerce website". An artificial-psychologist conversational agent would help you figure out what you really want in life :)


:)


I think you missed the point I was making which is that you still have to know that you're proving the right thing.

It's entirely possible to have a bug in the proof itself, at which point your program is going to be incorrect.

Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.


> I think you missed the point I was making which is that you still have to know that you're proving the right thing.

Right now, it's a missing part in many theorem prover systems (but I didn't do exhaustive research, so it's more my point of view): code to succinctly describe, or state, the problem you are trying to solve. For example, for a sorting problem, you would state the problem in English: after sorting, for any element E, the next element in the sequence should be lower.

> It's entirely possible to have a bug in the proof itself, at which point your program is going to be incorrect

The premise of theorem provers is that if the problem is correctly stated, then a proof of a solution that passes the prover's compiler and a few human reviews is even less likely to have a bug.

> Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.

I would be curious to see a Python proof of the Python sort function. I mean an actual logical and mathematical proof, not a unit test or fuzz test. It would imply creating a library with a DSL around math and logic.


“Saying it’s correct in the semantic sense” = proving the code.

They found a bug in Java’s binary search after 10 years of it going unnoticed, so I think you’re overestimating your own ability to prove code correct in your head.

Generative testing/spec is absolutely useful. It doesn't subsume static typing, much less theorem proving, though, and you can QuickCheck your pre- and postconditions in static languages as well.
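For illustration, a minimal sketch of generative testing of postconditions, in Python with the Hypothesis library standing in for QuickCheck; my_sort is a placeholder for whatever function is under test.

    from hypothesis import given, strategies as st

    def my_sort(xs):                 # placeholder implementation under test
        return sorted(xs)

    @given(st.lists(st.integers()))
    def test_sort_postconditions(xs):
        out = my_sort(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))   # output is ordered
        assert sorted(out) == sorted(xs)                   # same elements as input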


I don't find that to be a convincing argument in the slightest. The type system in Java also failed to catch the bug, and there's no guarantee that you wouldn't end up with an error in a specification using a more advanced type system that could actually encode that constraint.

The thing to realize is that theorem proving is not a business goal in most cases. What you care about is being able to deliver software that works well in a reasonable amount of time. Sure, you might end up with a binary search bug in your code that goes unnoticed for 10 years, but clearly the world didn't stop because of that, and people have successfully delivered many projects in Java despite that bug that work perfectly fine.


> The type system in Java also failed to catch the bug

Java is not a theorem prover, and your comment was about theorem provers.

> you wouldn't end up with an error in a specification

Again, completely orthogonal to your argument. If you have an error in the specification, tests won't help you because you'll test the wrong thing.

> The thing to realize is that theorem proving is not a business goal in most cases. What you care about is being able to deliver software that works

Again, your comment was about "saying things are correct in the semantic sense". You're saying now "I don't care about correctness in the semantic sense, I care about delivering software that "works" (whatever this means; clearly it doesn't mean software without bugs) in a reasonable amount of time" and that's fine. I do too, but again, nothing to do with your original argument.


>Java is not a theorem prover, and your comment was about theorem provers.

No, my comment was about formal methods in general. Every type system is a form of machine-assisted proof in practice. It's just that most type systems don't allow you to prove interesting things about your code.

The main problem is that of basic cost/benefit analysis. If it takes significantly more effort to write a full formal proof, and you end up sometimes catching minor errors, the effort is not justified in most scenarios.

>Again, completely orthogonal to your argument. If you have an error in the specification, tests won't help you because you'll test the wrong thing.

This is completely central to my argument. I'm saying that encoding a meaningful specification using a type system is a lot more difficult than doing that using runtime contracts, or even simply reasoning about the code unassisted.

I can read through the 5 lines of Python code implementing insertion sort, and be reasonably sure that it's correct. I would have a much harder time verifying that 300 lines of Idris are specifying what was intended.
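For concreteness, roughly the kind of short Python insertion sort I have in mind (my own sketch, not code from elsewhere in the thread):

    def insertion_sort(a):
        for i in range(1, len(a)):
            j = i
            while j > 0 and a[j - 1] > a[j]:      # shift the new element left
                a[j - 1], a[j] = a[j], a[j - 1]   # until it is in place
                j -= 1
        return a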

> You're saying now "I don't care about correctness in the semantic sense, I care about delivering software that "works" (whatever this means; clearly it doesn't mean software without bugs) in a reasonable amount of time" and that's fine. I do too, but again, nothing to do with your original argument.

I'm saying that you have diminishing returns. What you want to show is semantic correctness, and type systems are a poor tool for doing that. So, while you can use a type system to do that in principle, the effort is not justified the vast majority of the time.


> I can read through the 5 lines of Python code implementing insertion sort, and be reasonably sure that it's correct.

Binary search is possibly one of the simplest and most basic CS algorithms, and yet, it took people who were "reasonably sure" 10 years to find that bug.

> I would have a much harder time verifying that 300 lines of Idris are specifying what was intended.

Then don't use Idris?

> I'm saying that you have diminishing returns.

I agree. However, that was not your argument. Your argument was that tests subsume proofs, and that's obviously wrong.

Also, types vs tests is a false dichotomy. Personally, I find a strongly typed language + QuickCheck to be the most practical way of developing complex software. YMMV, and that's fine.


>Binary search is possibly one of the simplest and most basic CS algorithms, and yet, it took people who were "reasonably sure" 10 years to find that bug.

Again, saying there is a bug is not interesting. The question is how much this bug costs you, and how much time you're willing to invest in order to guarantee that you won't make that type of bug.

>Then don't use Idris?

I think you entirely missed the point I was making here.

>I agree. However, that was not your argument. Your argument was that tests subsume proofs, and that's obviously wrong.

My point is that tests and runtime contracts provide sufficient guarantees in practice. Nowhere have I argued that they provide the same guarantees as proofs. The argument is that the cost of the proofs is much higher, and the proofs themselves can be harder to reason about, making it harder to tell whether they're proving the right thing.

Consider the case of Fermat's last theorem as an example. It's pretty easy to state: there are no positive integers a, b, c with a^n + b^n = c^n for n > 2, and it's easy to test that this is the case for a given set of inputs that you might care about. However, proving it for all possible inputs is a monumental task, and there are only a handful of people in the world who would even be able to follow the proof.
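For instance, a quick and obviously non-exhaustive finite check takes only a few lines of Python (my own throwaway sketch), while the actual proof is out of reach for almost everyone:

    # Check that a**n + b**n == c**n has no solutions in a small range.
    LIMIT = 25
    for n in range(3, 6):
        powers = {c ** n for c in range(1, 2 * LIMIT)}   # every candidate c**n
        for a in range(1, LIMIT):
            for b in range(a, LIMIT):
                assert a ** n + b ** n not in powers
    print("no counterexamples with a, b <", LIMIT, "and 2 < n < 6")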

>Also, types vs tests is a false dichotomy. Personally, I find a strongly typed language + QuickCheck to be the most practical way of developing complex software. YMMV, and that's fine.

Again, that is not a dichotomy I was suggesting. My point was that I think runtime contracts are a more effective way to provide a semantic specification than a static type system. I also said that static typing restricts how you're able to express yourself, leading to code that's optimized for the benefit of the type checker as opposed to that of the human reader. I'm not sure how you got types vs tests from that.


> Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.

That was your original, nonsensical, point (emphasis mine).

> My point is that tests and runtime contracts provide sufficient guarantees in practice.

That's your revised point (which I don't care much about discussing), after moving the goalposts sufficiently.


> The thing to realize is that theorem proving is not a business goal in most cases. What you care about is being able to deliver software that works well in a reasonable amount of time. Sure, you might end up with a binary search bug in your code that goes unnoticed for 10 years, but clearly the world didn't stop because of that, and people have successfully delivered many projects in Java despite that bug that work perfectly fine.

I agree with you on that part. However, if the whole stack of computer technologies were more reliable, we would maybe think about new business use cases that we subconsciously dismissed because of a lack of trust in current technology.


It's basic cost/benefit analysis. At some point you end up with diminishing returns on your effort that are simply not worth the investment.

Also worth noting that nobody has been able to show that static typing leads to more reliable code in practice: https://danluu.com/empirical-pl/

Claiming that to be the case is putting the cart before the horse. A scientific way to approach this would be to start by studying real-world open source projects written in different languages. If we see empirical evidence that projects written in certain types of languages consistently perform better in a particular area, such as a reduction in defects, we can then make a hypothesis as to why that is.

For example, if there was statistical evidence to indicate that using Haskell reduces defects, a hypothesis could be made that the Haskell type system plays a role here. That hypothesis could then be further tested, and that would tell us whether it's correct or not.

This is pretty much the opposite of what happens in discussions about static typing, however. People state that static typing has benefits and then try to make the evidence fit that claim.


I agree with you, there is not much data about the effectiveness of more rigorous software development tools. It's clearly a research topic.

My intuition is that, with advances in robotics and AI, we may see the need for more robust logical systems. At some point, mathematicians and algorithmicians may use those tools to prove new concepts, which they will be able to share and validate more quickly, and which will then percolate into software engineering more quickly.

Beyond software engineering, the experiments with theorem provers may lead to new ways of exchanging information, such as news, mathematics and legal documents. There is inspiration to take from the formalizations created in theorem provers for applying automated reasoning in more domains than just programming (imho).

Edit: Intuitively, it has to do with building trust in the algorithms and information we share at light speed. With more validated building blocks, we may explore more complex systems. Accelerating trust between different entities can only add value, I guess. That's all very abstract, though.

Edit 2: To put it in even fewer words, theorem provers may be more about collaboration than just pure technical engineering.


To be honest, I think that once we have advances in AI there will come a point where you won't really be doing the kind of programming that we do today by hand. You'll have an AI assistant whom you'll give queries that are close to natural language, and it will figure out how to implement them for you. I can see that coming within a few decades for many kinds of applications such as your typical CRUD apps.


In such a hypothetical world, the "typical CRUD application" may just not even exist anymore.

I was talking about advances in consumer drones, autonomous cars, and personal robots, such as SpotMini from Boston Dynamics. More autonomous embedded systems moving around us means different needs in terms of safety in software development.

AIs will have to explain the reasoning behind their decisions (prove they did the right thing) in natural language. The humans who do that in our world are scientists, politicians, lawyers and mathematicians. Those people use a specific kind of natural language, with domain-specific words, to communicate. Theorem provers in software engineering are a step in that direction, imho.


Sure, you wouldn't really interact with a computer the way we do now once you have AIs that can understand natural language. It would be more like having a personal secretary.

I don't think theorem provers are actually the way to get there though. AI systems are probabilistic in nature, and neural nets are self-organizing. One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging. It's something that needs to be trained, and doesn't come to most people naturally. Our whole cognition is rooted in heuristics.


> One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging

So far, "neural networks" in AI is a fancy name for what is nothing more than a giant equation system with many parameters. It's not even close to a biological, actual self-organizing, neural networks. It's closer to a weather model prediction.

The human brain is not based on formalisms, so let's create an AI that compensates for the human brain's weaknesses. Maybe we shouldn't try to replicate the human brain's capacities, but rather create a new "form of life" complementing our biological skills.

So far, theorem provers, along with expert systems, are the only work I'm aware of on systematically explaining reasoning and decisions.


Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.

I do think theorem provers can be useful in certain contexts, and I can see AI using these tools to solve problems.


> Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.

This is not correct in the current state of tech. Neural networks are parametrized equation systems. You train the parameters on a dataset in a training phase, then freeze the result, then distribute the model to devices. Once distributed, the "neural network" can't be modified, and stops "learning" new cases.

Edit: I mean, you are not completely wrong; you described the training phase of the neural network. That's only half of the story, though.
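To make the train-then-freeze point concrete, a toy sketch in plain Python: a one-parameter "model" y = w * x fitted by gradient descent, then used read-only for inference.

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (x, y) pairs, y is roughly 2x

    # Training phase: adjust the parameter w to minimize the squared error.
    w = 0.0
    for _ in range(200):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.05 * grad

    # "Frozen" phase: w is fixed; the distributed model only does inference.
    def predict(x):
        return w * x

    print(round(w, 2), round(predict(10.0), 1))   # roughly 2.04 and 20.4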


> Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.

But you don't have to write a proof to sort a list in Idris. So you're not being honest with your comparison.


Sure, but then you're not getting the formal guarantees. The whole argument here is regarding whether adding more formalism improves code quality. If you agree that less formalism is better in some cases, then it's just a matter of degrees.


Come on. A lack of tests or documentation isn't specific to Lisp. TBH, I'd rather use a well-designed DSL than a shitty API.


No it's not; it's way worse when it's about new syntax rather than functions.


Alright, do you have an actual example to show?


I don't even need to go far; the last exchange I had with a Lisper on HN ended with:

https://news.ycombinator.com/item?id=17128977

I don't know what's worse: that you can't help it, or that you won't ever recognize you have a problem.

Unfortunately there is no Lisp Anonymous, so people leave the language. A shame, since it's a fantastic tech.


Yeah, you managed to link a comment with some code (although I agree it looks useless)... I expected a GitHub link to some lib relying on undocumented macros.

> I don't know what's worse: that you can't help it, or that you won't ever recognize you have a problem.

Anyone can write bad code. Lisp certainly makes it easier, I won't deny it. And no, I don't have a problem, thank you.


> you won't ever recognize you have a problem

> I don't have a problem

If only people were homoiconic like LISP, they might be better at reflection.


If only people could argue properly...


