> A language with dependent types and borrow checking
I'm definitely interested in this! There are significant challenges here though, particularly with respect to combining dependent types with effects and coeffects! This is open research, but significant progress is being made, and I'm hopeful. I'm currently keeping an eye on https://github.com/granule-project/granule, which seems like it might have some interesting things to say about this.
Currently I'm trying to learn how to implement dependent types in my programming language, Pikelet: https://github.com/pikelet-lang/pikelet (currently doing a rebuild of the front-end in https://github.com/brendanzab/rust-nbe-for-mltt). I'm hoping that once I've done this I might eventually be able to start looking at implementing something to do with borrow checking in it, but who knows! The more I learn the less I seem to know! Always interested in chatting to people about this at our Gitter channel: https://gitter.im/pikelet-lang/Lobby
> Idris is perhaps the most famous dependently typed language.
I'd probably put Coq and Agda ahead of Idris in terms of being well known and established, but Idris is certainly pretty cool in how it tries to target practical programming.
A good foundation for such a language would be one based on Quantitative Type Theory [0]. It’s a dependent type theory that records usage information in every typing judgement.
The Idris successor, Blodwen [1], is being based on it.
Yeah, I'm aware of Quantitative Type Theory, it's very cool! Another neat iteration is described in "Resourceful Dependent Types" [0]. I'm interested in what the Granule people are doing though - they can track usage information at the type level too. It's still non-dependent for now, but I hear they're interested in extending it to dependent types as well. They use an SMT solver to track usage information, which is really neat - apparently it lets them track more interesting usage patterns than Blodwen can.
Sadly this doesn't do everything I want though - AFAIK, you can still have multiple outstanding references in linear typing [1]. I'd really like some story for uniqueness too, in order to support in-place updates while avoiding a GC.
While I know of Coq and Agda, I don't associate dependent types with them at all. Idris, though, screams dependent types to me. So it might be more famous among lay people.
It is, but they are not full-spectrum dependent types, as far as I know. ATS doesn't have a Rust-style region system either, as far as I'm aware. It's very cool though, and quite inspiring, if a little unfriendly UX-wise! :)
* a language with a mix of contracts and refinement types: encoding preconditions and postconditions in the types of input and output values. This way one can match the postconditions of one function with the preconditions of the next. But the programmer wouldn't have to prove everything: the checks could be done at run time, disabled altogether, or enabled just for testing, which could replace unit tests - asserting all conditions for example inputs would be enough,
* a language with the possibility of restricting the computational model per subroutine. Not everything requires Turing completeness. Orthogonally, most functions can be pure. The goal would be to stick to simpler models where you can and only use more complex ones in a small part of the codebase,
* a language with the possibility of user-defined linear types (/ affine types / relevant types), e.g. making it possible to require that every exception be logged as an error unless explicitly ignored,
* a language designed to be translated into other languages, with the translation being readable code (in contrast to CLJS, Scala.js, etc.), as idiomatic as possible; e.g. translating map / filter chains to loops when targeting an imperative language.
These might as well be one language, but since complexity rises exponentially, it might be better to start with a few smaller ones instead.
Love bullet #2. Being able to start with more constrained features and reach for more powerful techniques only when necessary would be a huge aid in developing good architectures.
In that case I highly recommend the "Noether: Symmetry in Programming Language Design" presentation by Daira Hopwood: https://www.infoq.com/presentations/noether. It explores the idea by describing a possible stack of sublanguages, ranked from simple and easy to reason about to complex and robust.
I think it's a compiler hint for optimizations, but the onus of correctness (i.e., is this really pure?) remains on the programmer / whatever generates the C code.
You could have a mix of assertions and proofs, obtaining something you could call gradual verification - you wouldn't have to prove everything from axioms; you could build on assertions.
The difference from asserts and unit testing is that you could match the post-contract of one function with the pre-contract of another, as in:
fun sort(l: List[Int]): SortedList[Int]
fun minusReverse(l: SortedList[Int]): SortedList[Int] = l.map(el => -el).reverse
fun f(l: List[Int]): SortedList[Int] = minusReverse(sort(l))
When you have the sort and minusReverse functions unit-tested, you could use a proof system to prove that f is correct (assuming that sort and minusReverse are).
While looking for similar concepts in existing programming languages, I found that SPARK 2014 [1] (a subset of Ada 2012 with static verification) follows this philosophy: you can prove some parts of the code and unit test the rest.
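The runtime half of that idea can be sketched with plain Python decorators (this is just an illustration, not SPARK - the `contract` decorator, `CHECKS_ENABLED` flag, and function names are all made up for this sketch):

```python
# Toggleable runtime contracts: checks can run during testing and be
# disabled wholesale for production builds.
CHECKS_ENABLED = True  # flip to False to strip all contract checks

def contract(pre=None, post=None):
    """Wrap a function with optional pre/postcondition checks."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            if CHECKS_ENABLED and pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} failed"
            result = fn(*args, **kwargs)
            if CHECKS_ENABLED and post is not None:
                assert post(result), f"postcondition of {fn.__name__} failed"
            return result
        return wrapper
    return decorate

def is_sorted(l):
    return all(a <= b for a, b in zip(l, l[1:]))

@contract(post=is_sorted)
def sort(l):
    return sorted(l)

@contract(pre=is_sorted, post=is_sorted)
def minus_reverse(l):
    # negate a sorted list, then reverse so the result is sorted again
    return [-x for x in reversed(l)]

def f(l):
    # sort's postcondition matches minus_reverse's precondition
    return minus_reverse(sort(l))
```

The point of the chain is that `sort`'s postcondition is exactly `minus_reverse`'s precondition, which is the hand-off a prover (or, here, a runtime check) can exploit.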
> A language where the program is constructed by editing a running process in-place with a debugger/IDE, then the process is "saved" as an executable.
Others mentioned Smalltalk, but this is exactly how Common Lisp, and many (most) other Lisps work.
In Common Lisp, you start with a process that has a compiler (or interpreter, depending on implementation) and a REPL, and everything you do from there is modifying that process, called "an image". After you're done, you can dump the image into a standalone executable (but unless you're distributing executables or care about startup time, it's not something you'll likely do). Source code in Lisp is really a set of operations you do on that image.
Sorry, I should have given more detail. Both Smalltalk and Lisp use reflection for editing in place. The IDE, or REPL, is part of the process being edited.
I'd like to use an external debugger, based on ptrace or the like, to accomplish the same development style, but without (necessarily) having any reflection in the program being developed, and without tooling code commingled with program code.
> A language where the program is constructed by editing a running process in-place with a debugger/IDE, then the process is "saved" as an executable.
It's been a few years, but this reminds me of learning Smalltalk back in the day, as well as how some Lisps (CL?) treat the VM as the program being produced
I knew a couple of Smalltalkers who would write their programs in the debugger. It's pretty good for rapidly figuring out what you need to get the job done; then they would go back with the refactoring browser to clean it up and bring it in line with the rest of the system.
> editing a running process in-place with a debugger/IDE, then the process is "saved" as an executable
I am currently working on a variant of the Smalltalk image concept that works this way, based on Objective-Smalltalk [1]. You have documents in a live coding environment, and those documents can be saved as executables.
One inspiration is DropScript [2], an app that converted Unix scripts into (simple) Mac OS X apps. Another is obviously Smalltalk.
> An iPad app that lets people draw dataflow programs with a pencil
Objective-Smalltalk also has pretty good dataflow support. Drawing would be a reasonable addition.
I'm thinking of something that would be a lot like Smalltalk, but with strict separation between program and metaprogram. (In the spirit of Alan Kay's notion of fences between metaboundaries). The IDE would run in a separate process from the program, and attach to the program via operating system debugging facilities.
I think most of the problems with Smalltalk can be traced back to its origin as an operating system and the mingling of the program and the tools for writing the program in the same image. E.g., shrink scripts for deployment.
Since it's HTTP, it also works on the device. You can obviously also access those facilities from within the process, at least for a macOS app.
Right now, I am focusing on a more image-like user experience, but with individual (small-ish) documents instead of one huge monolithic image. The difference when trying out a UI-oriented idea, compared to having to create a separate Xcode project each time, is huge - and of course just doing it live without restarting.
My guess is that these documents will evolve to be a lot like .frameworks, so code + resources. Budding an app is then no more than just copying the app shell and filling it with the right frameworks. (See Xcode). Once budded, you might interact via the HTTP route. Or maybe via debugger?
> A language where the program is constructed by editing a running process in-place with a debugger/IDE, then the process is "saved" as an executable.
Why do people want this? I honestly don't understand. The point of code as opposed to just doing stuff by hand is that you can edit the code independently of its execution. You can fully inspect it without running it, and without any weird crevices for information to hide in. The workflow where you sort of edit and save processes, like smalltalk images, seems like it would throw away those advantages.
It allows rapid productization by combining experimentation and development into a single process. The dream is to do something just once manually, and then have it automated with no further effort. This is why Excel is one of the most used programming environments.
Since I've mostly lost interest in computer science, my goals are much more modest.
I think I'd like to pound out a language that succinctly describes sheet music. I've seen at least one, but I think I could do a better job. The toolkit would probably include a realtime display showing the results of the interpreter in order to avoid a PITA save_file/convert_file/look_at_output cycle.
This would be especially useful on smaller projects like leadsheets.
A problem I see with music scoring software is that it is way too complicated. You end up with all the possibilities of that world even when you just need a few features in your own corner.
What I'd like would be a way to really efficiently do a jazz leadsheet like you'd see in a fakebook. That cuts the notation down quite a lot. The idea is that you could type in something from an existing sheet really quickly in order to do things like edit it or change the key...or you could bang out something from transcribing.
Spend some time getting it to have a good handwritten look on output.
After that, add just enough stuff to do big band (or small group) charts. But no more.
I've always liked the idea of a language that combined syntax level support for functional reactive programming and symbolic programming. So you define z as x + y. Then you change the value of x and that change cascades across your program, like a spreadsheet almost.
Not exactly the same thing: for one, these aren't function calls, and for another, the view is memoized. You can see the difference with some print statements.
Well, you can do that in any language with first-class functions. But there's obviously a syntactical difference between a function invocation and a variable read.
Isn't that Haskell? Or any other declarative, lazily-evaluated language.
Granted, you need to use something like the ST monad to be able to "change the value of x," but once you do, all other values fall into place like dominoes.
I understand that is how object oriented databases work. For example, you can define the equation for a financial derivative, then change the price of one of the components and derivative price is updated automatically.
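A toy version of that cascade can be sketched in a few lines of Python (the `Cell` class and its API are invented for illustration; real FRP systems also deal with glitches, laziness, and cycles, which this ignores):

```python
# Spreadsheet-style reactive values: z is *defined as* x + y,
# so changing x pushes the new value through to z.
class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs = formula, inputs
        self._dependents = []
        for c in inputs:
            c._dependents.append(self)
        # a formula cell computes its initial value from its inputs
        self._value = formula(*(c._value for c in inputs)) if formula else value

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        self._recompute_dependents()

    def _recompute_dependents(self):
        # propagate the change downstream, like a spreadsheet recalculation
        for d in self._dependents:
            d._value = d.formula(*(c._value for c in d.inputs))
            d._recompute_dependents()

x = Cell(1)
y = Cell(2)
z = Cell(formula=lambda a, b: a + b, inputs=(x, y))
x.set(10)  # the change cascades: z.get() is now 12
```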
“Python-esque with ADTs” is exactly what I loved about F# when I first found it.
Unfortunately, the tooling is just as much of a nightmare (though with the dotnet CLI it is slowly showing signs of improvement), and cross compatibility is still iffy for a lot of things, so I’m not sure it satisfies the rest of the author’s specs.
I would kill to see a dialect of F# completely freed of the .net nightmare and allowed to fly free on its own.
OCaml is OK, but coming from F# it feels just clunky enough to be noticeable, as if you're using some early pre-release version of F# - which in a way you are, since F# started out as just a port of it. Adopting a light-ish syntax similar to F#'s helped, but there's still a sense it isn't as polished for public consumption, and the tooling is no less painful while also being almost hostile to Windows users. :(
Bucklescript-TEA was kind of nice though, getting to play with the Elm architecture but with a more flexible type system and better inference. But I found the size of the bundles and tools and the slowness in the environment I was using it in made me just go back to Elm instead. :(
As for Reason ... well ... Facebook has taken over webtech with a lot of halfassed misunderstandings of better technologies, but this one might take the cake.
I honestly find that whole project insulting, and so should you. It’s as if they’ve declared JavaScript developers to be so stupid that they physically can’t use a language with any other syntax, so they have to fuck up one to get them to use it.
I can second OCaml. I'm using it more lately and really enjoying it. Of course, the compiler error messages leave a lot of room for improvement. And it does take a lot of time to get used to all of the peculiar syntax.
But you get ADTs, the best modules on earth (even if the syntax is a bit old...), GADTs if you want them, row polymorphism for objects, polymorphic variants, streams, etc.
http://coconut-lang.org/ is actually a Python superset with algebraic data types and other functional programming features. It even compiles to Python that is compatible with both Python 2 and 3.
I don't know much about Pony but glancing over the doc it seems very Scala-ish, which can already be written in very pythonic way and has ADTs. Am I missing something here?
It is not. Its syntax is simpler and more regular than Java's or C#'s, and on par with Kotlin's. I haven't heard many complaints about those languages having complex syntax. Ironically, Kotlin is promoted as simple, when its syntax is almost the same as Scala's.
What are you talking about? Scala doesn’t even have a consistent syntax, on purpose. By design there are multiple ways to write many things in the language, often with unpredictable and inconsistent results, method syntax being the most obvious example.
I think you have confused Scala for something else. “Regular” it is most assuredly and deliberately not.
Basically Swift and Kotlin based their syntax exactly on Scala.
> It is a kitchen sink with every feature under the sun.
You obviously don't know Scala and you're just repeating BS found on the Internet.
Scala is a much more powerful and much more abstract language than Kotlin, so comparing their syntax complexity is unfair from the start.
But if you restrict the problem space to what can be covered by both Scala and Kotlin, then Scala in fact has fewer features than Kotlin. It just happens to have a few more generic and very powerful unique features, which might look unfamiliar to Java programmers.
Examples:
* Kotlin has special syntax for null-checking. Scala has no special syntax and implements the same feature in the standard library.
* Kotlin has special syntax for coroutines. Scala achieves the same with a library.
* Kotlin has special syntax for extension methods. Scala doesn't provide special syntax for extension methods - they can be created with implicit classes, which are a much more generic feature and have other important use cases not possible in Kotlin or Java.
* Java has special syntax for array access. Scala does not.
* Java has special syntax for interfaces with method implementations. Scala does not.
* Java has static members which are completely alien to OOP. Scala does not.
* Kotlin and Java have operators, and Kotlin has operator overloading (just like C++!), making operators second-class citizens. Scala just has method names, so "operators" and regular methods are the same thing.
> Scala is much closer to C++ than Java
Only in terms of abstraction power (IMHO it is much above C++ in abstraction power; actually much closer to OCaml than C++), but not in terms of complexity. Scala language specification is ~200 pages long. Java is ~750 pages, C++ is ~1300 pages.
What I would love to see is a declarative object oriented language. It would marry the idea of FauxO from Gary Bernhardt's Boundaries talk [1] with the syntax of Elm.
Elm already allows for escaping into other languages. I would love to see that mechanism expanded and a bunch of "core" DSLs implemented using ideas from the "STEPS Toward the Reinvention of Programming" [2].
FORTH and other concatenative languages are probably the closest to this. There's very little grammar to them or even need to define any custom identifiers, other than newly-defined "words"; you could probably speak out FORTH code over a noisy radio connection, and the person at the other end would accurately input your code.
(It's interesting, though, that the author talks about the need for an "effect system for FORTH" to "understand operations on the FORTH stack", when typed lambda calculi basically give you exactly this. You can even use postfix syntax for function application and (optionally) numerical indexes instead of named variables to refer to active λ-abstractions (both as proposed by De Bruijn), and you end up with a quite minimal, FORTH-like syntax: postfix application (as in `( expr )func`) essentially means "push expr on the stack", while lambda abstraction `[var] expr` is understood as "pop var from the stack and plug it into expr".
IOW, "application" and "abstraction" can be paired off immediately with no need for parsing as such, thus preserving a basic property of "concatenative" languages à la FORTH.)
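To see how little grammar a concatenative language needs, here's a minimal FORTH-like evaluator sketched in Python (the word set here is tiny and invented; a real FORTH has far more, plus the ability to define new words):

```python
# A toy FORTH-like interpreter: the program is just whitespace-separated
# "words", so it could literally be dictated word by word over a radio.
def run(source, stack=None):
    stack = stack if stack is not None else []
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),                 # duplicate top
        "swap": lambda s: s.extend([s.pop(), s.pop()]),    # swap top two
    }
    for word in source.split():
        if word in words:
            words[word](stack)   # known word: apply its stack effect
        else:
            stack.append(int(word))  # anything else is a number literal
    return stack

run("2 3 + dup *")  # → [25], i.e. (2 + 3) squared
```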
Urbit defines pronunciations for all of its sigils with the goal of enabling verbal communication of its forms; however, it makes no attempt to be a sensible language - it's more just random English syllables that compose without too much harshness, justified by analogy to the use of Greek letters in mathematics (part of their value is that they carry little or no context, so you can freely build your own).
I look forward to seeing what we learn from urbit ten years from now.
0=true? Crazy sigils, three ways?
I just went back to their website to remember the special name they used for distributed computation, and they have introduced /even more new words and layers!/ Godspeed, you.
I've thought about this - along with a few other changes like using '.' as a statement terminator.
As others have said, Python's significant whitespace is a mess because there are syntactically important characters that not only can't be pronounced but sometimes can't even be visually distinguished. That could have been addressed by either banning tabs or mandating them as Makefiles do, but allowing mixed tabs and spaces when they're semantically different is madness. What does Python do with the various invisible Unicode space-like characters, by the way?
Why not python? On a broader topic seems like speaking punctuation would work fine if the environment were smart about where to put it in context.
Seems like there would be a ton of value in an environment whose speech recognition used what makes sense in terms of the programming language. If a word is slightly more likely to be foo than bar, but bar is an identifier, maybe it's more likely to be bar?
Again this really seems like a case for a smart environment or even just a smart addon for an existing environment rather than a new language.
I talked to three different blind programmers in the last few years. They all dismissed Python and other indentation-based languages because - surprise, surprise - screen readers don't read out significant whitespace.
The only whitespace that Python cares about (that other languages do not) is whether a line is indented more or less than the previous line (off-side rule). So the screen-reader could read out “INDENT” and “DEDENT” when the indentation changes (matching the Python lexer: http://web.archive.org/web/20180217005004/http://www.secneti... / https://docs.python.org/2.0/ref/indentation.html), or read “start block” and “end block”. This would be about as frequent as reading out (in brace-using languages) “open brace” and “close brace”.
An effect system is like a type system, but keeps track of the 'effects' (think 'side effects', but we're doing them on purpose) that may occur when evaluating an expression. For example `print` might have a `log` effect. Unlike types, which get 'consumed' (e.g. by calling a function with an argument), effects tend to accumulate.
I've mostly seen effect systems in the form of 'algebraic effect handling', which allow effects to be "handled" in a similar way to try/catch for (resumable) exceptions. For example, if we have an expression with the `log` effect, our tests might wrap it in a handler which just accumulates a string, so we can test what was logged; our production system might use a handler that plugs into an external logging system (which would require an effect like 'IO' or 'socket' or something).
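Python has no effect system, but the shape of the idea - the same `log` effect handled one way in tests and another in production - can be approximated by passing the handler in explicitly (all names below are made up for illustration; real algebraic effects also support resuming, which this doesn't model):

```python
# The `log` effect is left abstract in the program and resolved by
# whichever handler runs the computation.
def program(log):
    log("starting")
    log("done")
    return 42

def with_collected_logs(computation):
    """Test handler: accumulate log messages instead of emitting them."""
    messages = []
    result = computation(messages.append)
    return result, messages

def with_stdout_logs(computation):
    """'Production' handler: forward the log effect to stdout."""
    return computation(print)

result, logs = with_collected_logs(program)
# result == 42, logs == ["starting", "done"]
```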
Stack languages like Forth and Kitten have "stack effects", which are the effects that evaluating an expression has on the stack, e.g. popping some values.
Effects are sort of the inverse of capabilities: an expression with effect E tells us "evaluating this expression might result in E happening", whilst an expression with capability C says "to evaluate this expression, you must allow me to C".
The language I want to write would have built-in support for the flow of control within a single function to flow seamlessly across multiple processes/machines. I don't mean fancy RPC. That still requires the programmer to break up the parts that run in different places into separate functions and callbacks, define the arguments that get passed between them, etc. I mean the compiler knows about the notion of a "place" and which data exists in each place. When it hits an "on (place)" construct it can figure out what needs to be passed, handling it for you between one line and the next. Fuller explanation here:
This doesn't completely free the programmer from worrying about distributed-programming concerns such as node failure and broken connections. Nothing will. However, the fact that the flow of control and data looks like that of a local sequential program would make such code easier to write, test, and formally verify. I see a lot of bugs in the distributed systems I work on that occur because the low-level mechanisms are so intrusive that programmers can't (or just don't) test or verify the higher-level behaviors.
One with no errors. There's no concept of a "syntax error" or a "compiler error" or a function error. One of the most common things we joke about is people not reading error messages.
Whatever the design is, it would never spit a big red blob in your face that makes you wince and feel bad, nor would it fill your terminal - the place where you work - with screen after screen of tracebacks. It is unthinkable that you could write "x-1 = x+1" on a piece of paper and have the equation scribble over your workspace so you can't use it anymore.
There would be no adversarial relationship, you wouldn't be "fighting" the compiler or the borrow checker, no red squiggly underlines in an IDE. The whole approach would be reframed into the computer being a mechanical assistant helping you solve problems and guiding you towards success instead of slapping you.
A tuning fork plays a tone you can use to tune a piano string. It doesn't slap you if you tweak the string the wrong way. An archery target lets you see when you miss the centre; it doesn't also shout at you. Code not working is feedback all by itself. The tools shouldn't add negative feedback on top of that; they should guide you towards what works.
This seems to have a weird personification of a compiler/linter's error messages as a telling off, an insult, or a put-down.
It's just a message to highlight what is wrong. Red doesn't equate to bad, red equates to "this is a compilation blocker", just as yellow is typically used for "this will work but it isn't a good idea". The compiler _already is_ guiding the user to success.
If somebody is taking compiler errors personally, they are going to struggle in a lot more areas in life than writing code, and the issue is their frame of mind.
And this is a dismissal of the idea that words have effects on people.
Do you think a manager's choice of words has no effect on their subordinates? Do you think a book's choice of words has no effect on the reader? Do you think ALL CAPS IS NOT SHOUTING AND IS PERFECTLY NORMAL?
Are all software tools the same experience for you - you never find one frustrating and another pleasant, one neutral and another you curse at?
It's just a message to highlight what is wrong.
Nothing is wrong. That's the point. Unfinished code doesn't work, that's normal, not erroneous. (Almost tautologically so - of course it doesn't work, it's unfinished).
> And this is a dismissal of the idea that words have effects on people.
No. Context is important. A manager - a human being with authority - is very different from a standard, unchanging message baked into a program. Your claim and comparison (as I read it right now; correct me if I'm wrong) seems to be that IDE error messages have just as much effect on a programmer's feelings as the manager coming up, looking at your code, and pointing a finger while bluntly saying "this is incorrect". I don't believe this at all.
> Are all software tools the same experience for you - you never find one frustrating and another pleasant, one neutral and another you curse at?
They are not. Ironically it is the ones that don't produce direct error messages (especially css) that anger me the most.
> Nothing is wrong. That's the point. Unfinished code doesn't work, that's normal, not erroneous. (Almost tautologically so - of course it doesn't work, it's unfinished).
There's more than one state code can be in while incomplete. It can be incomplete because it doesn't accomplish the task it is being written for yet, but all the same it will still compile. If it is syntactically incorrect, it doesn't even matter if the logic attempted to be expressed is correct, the program is broken. But these errors are easy for a computer to expose - so they should, so problems are fixed as early as possible.
Ironically it is the ones that don't produce direct error messages (especially css) that anger me the most. [..] But these errors are easy for a computer to expose - so they should, so problems are fixed as early as possible.
Yes, they should. As I said elsewhere, I'm not suggesting silencing errors or having no feedback at all. That would not be the computer assisting the user in writing correct code, that would be another way of hindering it.
Your claim and comparison (to me right now, correct me if I'm wrong) seems to be that IDE error messages are just as effective on the programmers feelings as the manager coming up to look at your code and pointing a finger while bluntly saying "this is incorrect". I don't believe this at all.
Not "just as effective", but effective nonetheless. Has an effect on. A compiler is not there to reject "broken" code, it's there to run a state machine over some text and report where it got to. Reporting that state with words that humans use to describe unpleasant situations and problems turns that neutral comparison into an unpleasant state and a problem. It becomes a problem because that's how we describe it. It need not be that.
I don't know why you're being downvoted, since the core of your idea is valid from a usability perspective.
For example, think about the way Google Maps (or any other map app, for that matter) "corrects" you versus other programs/apps that throw up an error message. When you make a wrong turn, the map software re-routes you based on that choice instead of telling you you're not following its instructions. It assumes your choice is not a mistake and goes with it, while still trying to get you to your destination.
Now imagine your compiler (and by extension, IDE) doing that, and you get what I think jodrelblank is getting at.
Two problems, however:
1. Our interaction with language interpreters/compilers is less conversational and more episodic: we submit what we think is "cooked" code, and it tells us where it isn't. If it were a conversation instead, we'd see more of an "I think you meant this, not that" type of error message. Go and other newer languages are trying to do this with more descriptive error messages that propose alternatives, as are features like incremental compilation.
2. We really cannot specify the goal with a general program. How do you translate "I want an app that plays sudoku" into something a compiler can understand and guide you with? Or even something seemingly simple like "Convert celsius to fahrenheit"? Since the problem space is unbounded, language tools use the bounds setup by their syntax.
I don't know why you're being downvoted, since the core of your idea is valid from a usability perspective.
Maybe the same people who dislike the recent movements to mandate niceness see this as the same (it isn't) or maybe it's a dumb and unworkable idea.
When you make a wrong turn, the map software re-routes you based on that choice instead of telling you you're not following its instructions.
This sounds nicer, but I wasn't thinking of DWIM (Do What I Mean) systems. I was thinking more that .. when you drive into a car park, and the car park is full, you don't think of that as an error, you don't need an alarm about it, you just drive somewhere else and park. You knew that might happen, you plan for it - the car park being full is a normal state of affairs and dealing with that normal situation shouldn't be a completely separate concept handled in a separate way.
You don't try to park, experience an error, bounce back to a control thread, try to park on the street, experience an error, bounce control back to somewhere else, try to park round the corner, get a space.
You have a preference of places to park where you go to the car park, the street, round the corner, or you go home. And none of them are "error states".
It would be annoying if you employed a delivery driver and told them to park in the car park, and they rang you up to say the car park is full, and waited for instruction, then you told them to park on the street and they rang back and said the street was full.
Instead you want a delivery driver who will try reasonable things to park, and if they cannot, they bring the package back to the depot and it goes for delivery tomorrow. If it's urgent, you include that in the original instructions and they try reasonable and unreasonable things to park. The "what to do" is built in.
Instead of having 4 states - 1 good and 3 errors - there are 4 possibilities: 1 completes quickly, 3 require more input or more processing, and none are errors.
Have you ever used Python's dict.get(key, fallback) ? You pass in a fallback value which is returned if the key is not in the dictionary. Control moves forwards with the possible states handled in advance. Compare that to C# and a dictionary lookup which either returns or throws an exception - and there is no alternative. You either check yourself whether the key is there first, or you handle an exception. There is no way to inform it what to do if the key is not found, no way to tell the dictionary that's fine, no way to tell it to create a new key, no way to give it any information to work with. All you can do is tiptoe around it like a minefield (check before you step), and if it explodes, pick up the pieces.
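For concreteness, the `dict.get` pattern described above looks like this (the `config` example is made up):

```python
# The caller supplies, up front, what "key not found" should mean,
# so a missing key never becomes an error state.
config = {"host": "localhost"}

port = config.get("port", 8080)        # key missing: fallback, no exception
host = config.get("host", "0.0.0.0")   # key present: fallback ignored

# setdefault goes one step further: "if it's missing, create it"
timeout = config.setdefault("timeout", 30)
```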
Simply having a polite compiler error message or IDE sugar papering over the Key Not Found Exception doesn't make that design any nicer to use.
Do you have a concrete example of how this would work? If I write some garbage and ask the computer to run it, what should happen?
I think term rewriting languages are close to this idea, in that if you write something that is not a redex, it just stays the way it is. I've never heard anyone describe this as particularly ergonomic or friendly however, as it can be hard to figure out why nothing is happening. It also severely constrains the language design.
> If I write some garbage and ask the computer to run it, what should happen?
You will be punished. :)
Seriously though, when I need to write error-checking code with a user-visible message, I usually try to include a suggestion for how the user can resolve the problem. It's good UX.
Do you have a concrete example of how this would work?
Nope, it's only a recent idea. But say in a Python REPL you do len(0); what happens is this: https://i.imgur.com/wzbKaJ6.png It says, in red:
> len(0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'int' has no len()
When you try to use a crosshead screwdriver in a slotted screw, it simply doesn't fit. It doesn't also say Traceback: TypeError: object of type 'slot' has no orthogonal drive slot, in red, while pushing your work away from you.
I'm not suggesting hide the error, pretend it's fine, or paper over it with "error but I know you can do better :) :) :)". I'm suggesting a thorough but subtle shift in the approach so it's not you trying to build code and the computer bluntly rejecting it for being imperfect, and then you fight that. Instead it's you and the computer cooperating on the task of making code which works and does what you want.
> len(0)
Line 1: len() received something it cannot process (an 'int').
It is intended for lists and strings, use help(len) for more details.
1) Doesn't emphasise that it's an error. You already know it's an error because it didn't give you a length. The function could return a result with details of what happened, instead of throwing an exception. HTTP has a description of one part of it: "The status-code element is a 3-digit integer code describing the result of the server's attempt to understand and satisfy the client's corresponding request.". That's neutral, which is good. It then goes on to say that 4xx codes are client errors, and 400 is "Bad Request". That's not good. Imagine you went into an ice cream shop and asked for blueberry ice cream. Would they say "bad customer!" or "Error: we don't have any", or would they just say "we don't have any"?
2) Tries to guide you towards what it is intended for and how to make it work, instead of bluntly telling you it's wrong and leaving the rest up to you. "expected end of statement" is better than "parse error".
3) Isn't red. If everything's important, nothing is. Asking for the length of a number isn't a massive critical emergency, worthy of a star ship Red Alert, it's just a slight adjustment of the kind that is required all the time. Probably a typo, chill out.
4) Better if the experience allowed you to dismiss the error message and have it vanish. Visual Studio Code's ability to insert extra lines next to the code to show you messages, and then remove them again, is good here, as are Visual Studio's lightbulbs to the left of lines of code and its hovering tooltip message boxes. PythonWin lets you edit and delete command output.
In Visual Studio as an example, trying to assign a float literal to an int variable: two red cross icons, the text "1 Error", an error code, a red error underline under the code, a red marker on the scrollbar at that line. OK, okay, I get it. The message is neutral, helpful, "Cannot implicitly convert type 'double' to 'int'. An explicit conversion exists (are you missing a cast?)". Have that message without the confrontational iconography. Tell me "the result of the parser's attempt to understand the code", draw my attention to it, draw an arrow pointing to it, highlight it, do useful things. Don't give me the constant low grade stress, don't give me the same kind of feedback when I mistype a comma as a dot as when something really serious is going wrong.
If you write x-1 = x + 1 on a piece of paper, it just sits there silently being wrong. But code runs: it shows information on the screen, it launches missiles or intercepts them, it helps your phone play a tune or make a phone call.
All of this is complex work not merely because we are making it harder but because we are asking more and more.
Short of AI, it's infinitely more useful to have an exact answer to what went wrong instead of the program just silently not doing the correct thing, because the latter is ultimately much harder to reason about.
I'm not sure how your kinder, gentler world is even supposed to function, or how you imagine you are supposed to work without better, not worse, tools giving you negative feedback when things inevitably go wrong.
Why would you even think my suggestion for "tools which point you towards success" maps to "worse tools which don't help you"??
An IDE which highlights matching pairs of parentheses as you move the cursor around is precise and helpful. It's not a "worse tool" because it lacks a modal red popup with a large exclamation mark and a DING! sound effect and the text ERROR 401 MISMATCHED PARENTHESES.
"kinder and gentler" are human traits. I'm thinking of it more like sanded smooth wood rather than splintery rough wood.
If it sounds that way, then I did a bad job of turning an idea into words. Having a language with a concept of errors, and hiding them behind an IDE interface, is not the same thing as having a language without the concept of errors.
The best way to avoid mismatched parens is to have the closing paren inserted automatically and have operations operate on either just contents or contents and both parens.
See smartparens or paredit. Never having an error is indeed the nicest way to deal with errors but I don't know how you deal with all possible errors in the same structured way as smartparens deals with parens. Further efforts in the same vein might have value but even if we can correctly discern intent and correctness what do we do with it?
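The structural-editing idea can be sketched in a few lines (a hypothetical helper, not smartparens' or paredit's actual API): every edit inserts or removes delimiters only in matched pairs, so no single operation can ever leave the buffer unbalanced.

```python
def insert_pair(text, cursor, open_ch="(", close_ch=")"):
    # Insert the open and close delimiters together and put the
    # cursor between them, so the buffer is never unbalanced.
    new_text = text[:cursor] + open_ch + close_ch + text[cursor:]
    return new_text, cursor + 1

buf, cur = insert_pair("f", 1)
print(buf)  # f()
```

Mismatched-paren errors are avoided not by detecting them but by making them unrepresentable in the editor's operations, which is the hard part to generalise to other classes of error.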
In theory you could not allow someone to type foo.len() if foo doesn't have a len method or force the programmer to click a dropdown of available methods but that seems cumbersome.
What if it will have such a method when you are done and the error is merely a function of the intermediate state at present?
Ultimately, flagging that section of the text may be the best way to allow code to be entered that will eventually be in a correct state but isn't yet. Using the word error and the red squiggly line is probably, if anything, good UI, because it correctly expresses the current state of the code and does a good job of drawing attention to the source of the problem.
Other people argued that block languages, such as Scratch, don't have syntax errors. If you can drop the block, then it will run.
I'd argue that it still is a syntax error when it prevents you from placing the block at an illegal position, but the feedback loop is much faster and less disruptive than traditional syntax error messages.
I'd argue that it's not a syntax error simply because the whole point of Scratch is to make programming accessible to an audience (small children) for whom the kinesthetic and iconic mentalities are dominant. Syntax belongs to the symbolic mentality, which they haven't developed yet.
Even in the case of other visual programming languages that are targeted towards adult professionals, the point of such tools is to suppress the symbolic mentality because the iconic and kinesthetic might be better for getting the job done.
I don't know if it's exactly what you want, but this team is working on making a language and compiler capable of "filling in the gaps" on syntax errors. I.e. it can provide completion suggestions and allow you to explore the consequences of those suggestions, by propagating them through the whole program.
The link below has pretty extensive coverage of this idea, with a paper and youtube video.
He wants a Python with algebraic data types? I wonder if he has looked at Crystal. It is neither, but still perhaps close enough: Ruby expressiveness and a type system that, while probably not powerful enough to be classified as algebraic, is still powerful enough to have things like sum types.
I always find "English like" languages difficult, as if there's the effort of translating my thoughts twice. Somewhat like how one might learn a language & try translating it to their native tongue, but to be fluent one has to directly translate the new language to the abstract symbols of the mind, which is _not_ words in the native tongue
What I'm suggesting here is that programming languages are closer to mapping to our mind's abstract representation of programs as graphs than prose is. Prose is optimized for communicating things in the real world, not specifying programs, which tend to be elaborate filing systems
That's the issue one has programming in English without bringing up the difficulty of actually writing a good parser for English & having multiple people agree on its interpretation. If I'm programming in English, let it be my English, not yours
That looks promising on initial inspection of e.g https://github.com/CTSRD-CHERI/beri/blob/master/cheri/trunk/... , although possibly a bit more imperative than I had in mind - still has for(). If I was working on HDL commercially I'd definitely give it a deeper look.
* a cli tool to compute resource awareness based on args provided, e.g. `ra --mem On, --time O1 file.ex file2.ex file3.ex` that all implement a single interface.
I still think a decent Agent Oriented Language is possible. Something that takes into account proper agent communication and is not just a Smalltalk variant or a Java framework.
I thought Agent Tcl was about half way there. It just felt like the infrastructure didn't quite get over that hump that would have allowed it to be a success. I do admire the effort.
A sane shell language would be nice. Powershell is closer than most, but it still has some pretty WTF aspects, like `-lt` instead of `<` and how lists collapse.
Noob question: doesn't dependent typing require the language to be dynamic? How do you check dependencies in types otherwise (e.g. the size of an array)?
If I understand your question, generally the compiler would use a theorem prover / SAT solver to prove that the constraints of the dynamic types were always upheld. If it can't, in some languages this is a compiler error. In others it will introduce runtime checks (but this still doesn't require a fully dynamic language, as such, but it is in a sense partially dynamic).
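The runtime-check flavour can be mimicked by hand even in a dynamic language (a sketch with made-up names, not any particular compiler's output): the "dependent" constraint, here that two vectors have the same length, is checked when the function runs rather than proved statically.

```python
def dot(xs, ys):
    # A dependently typed language would prove len(xs) == len(ys)
    # at compile time; the runtime-check fallback is written out
    # here by hand.
    if len(xs) != len(ys):
        raise ValueError("length mismatch: %d vs %d" % (len(xs), len(ys)))
    return sum(x * y for x, y in zip(xs, ys))

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```

A compiler inserting such checks automatically gives you the safety of the dependent type without full static proof, at the cost of possible runtime failure.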
I would like to make a lightweight markup language for text and data, with some concept of possible future revisions. So when you try to commit a new revision of a file, it could automatically be verified as a valid edit of a specific previous revision.
As net4all pointed out, your C++ example isn't really dependent types. It's achievable with parametric polymorphism a la System F, AKA the ML-style or Java-style generics.
C++ does have value arguments for templates, however. Perhaps a better example would have been std::array, which takes a size as an argument.
However, C++ still doesn't really have dependent types the same way Agda or Idris has them. One big difference is that C++ templates are only checked upon instantiation. So, this code:
#include <array>

template <typename T, T a>
std::array<int, a + 1> add()
{
    return {1, 2};
}
compiles, but when I try to do
add<int, 0>()
the compiler complains. Because the dependent function type means "forall," it's important that the compiler make sure that it truly does work for all inputs.
In this piece of code, the return type explicitly depends on the type of the argument (via a nested type alias on the template parameter):
#include <iostream>
#include <string>
struct A {using ret = std::string;};
struct B {using ret = int;};
template <class T>
auto f(T t) -> typename T::ret
{
return {};
}
int main(int argc, char *argv[])
{
auto s = f(A{});
std::cout << s.size() << "\n";
auto i = f(B{});
std::cout << i+1 << "\n";
return 0;
}
There is also stuff like this, where the return type is defined as the return type of builder.makeObject(), builder being the argument:
template <typename Builder>
auto
makeAndProcessObject (const Builder& builder) -> decltype(builder.makeObject() )
{
auto val = builder.makeObject();
// do stuff with val
return val;
}
The return type has to be dependent on the value of the input, not its type. Like it'll return an int[3] if you provide 3 as an input and int[4] if you provide 4. Note that both 3 and 4 have the same type, but int[3] and int[4] are different types.