> it has unit conversion built in, so 3km ÷ 26m will be evaluated to 6.92km/h, and also you can ask for m/s
I’d expect 3km ÷ 26m to evaluate to roughly 115.4 (no unit). I hope it wouldn’t try to be too clever about whether “m” means metres or minutes, because such cleverness invariably goes wrong. Change it to “26min” or “26 minutes” or some such thing, please?
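Spelling out both readings, for the record:

    3 km / 26 m   = 3000 / 26          ≈ 115.4 (dimensionless)
    3 km / 26 min = (3/26) · 60 km/h   ≈ 6.92 km/h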
Mmm, true. I started using units(1) quite recently, and “h” being the Planck constant rather than hours keeps on tripping me up. The built-in abbreviation “kph” for km/hr isn’t comfortable either. (mile/hr may be commonly abbreviated as “mph”, but km/hr is conventionally “km/h”). Ah hah! There, just created ~/.units containing `h hr`, now this will bother me no more (since I’m unlikely to ever want to use the Planck constant). Pity it doesn’t use XDG paths (which would probably land the file at ~/.config/units), though you can define the environment variable MYUNITSFILE.
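For anyone following along, the whole fix is one definition line (point MYUNITSFILE elsewhere if you dislike the dotfile location), and with it in place GNU units handles the thread's example – output formatting as my GNU units prints it, at least:

    # ~/.units
    h       hr          # hours, not the Planck constant

    $ units '3 km / (26 min)' 'km/h'
            * 6.9230769
            / 0.14444444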
Yes!! Can we all just agree to make that illegal? If a developer writes an app that automatically defaults to having configs right in the home dir or if they don't separate their cache from their persistent data, they go to jail for a week to think about what they did.
All joking aside, it really is a shame so many things don't stick to the standard. The ability to just copy .config to another machine and have it be pretty much a carbon copy of the original would be really useful. I guess I should just finally switch to Nix, huh?
> If a developer writes an app that automatically defaults to having configs right in the home dir
To be fair, I think `units` and `~/.units` predate XDG paths by a couple of decades at least (`units` was from Bell Labs, XDG paths are from somewhere around 2005-2010 as best I can find.)
Oh yeah, definitely, and so do most other programs that don't respect the standard (VS Code being the biggest exception). Still, wouldn't hurt to update things a little, although in the grander "backwards compatibility vs modernisation" debate, this is still a small and insignificant issue.
> same with h which can either mean hour or the Planck constant.
To be fair, I think the probability of a normal person using 'h' to mean the Planck constant is low enough that "2h" meaning "2 hours" is going to be right ~100% of the time.
(Although, actually, having just learnt that `units` comes from Bell Labs, they might well have been using 'h' for Planck constant more than for hours.)
Some of us ...of a certain age... may remember an ATG Night at an Apple WWDC in the 1990s.
This kid comes out on stage, holding a tablet, and starts writing math formulas, which are recognized and displayed on the screen. Then he starts balancing the equation; dragging variables from one side of the equals to the other, and the program automatically does stuff like invert the operations.
Then, he graphs it.
A whole conference hall of geeks gave him a standing ovation.
That became the first Apple Graphing Calculator app.
I don't know if it ever worked exactly like he demoed (I have never really used the graphing calculator. It wasn't useful to me. I have always used PCalc), but that was exactly what his demo did. That was why we stood up and hooted and hollered.
A lot of ATG demos never really saw the light of day (except in MS products, which was one reason they stopped doing them).
Nowadays, you can do that with a Web browser, but back then, we hadn't seen anything like it.
In the final example the handwriting is adapted to the application, and differs significantly from the original handwriting used in earlier examples. Then the post goes on to say that the app has everything you need, while you still need to learn to write in monospace for it to work.
Being someone with poor handwriting, this would slow down writing for me. A possible answer would be to update the expected place for each character by taking the boundaries of previously written ones.
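A rough sketch of that idea (all names here are invented): derive the next expected character box from the user's own previous characters instead of a fixed grid:

    from dataclasses import dataclass

    @dataclass
    class Box:
        x: float  # left edge
        y: float  # top edge
        w: float  # width
        h: float  # height

    def next_expected_box(written, gap_ratio=0.2):
        # assumes at least one character has been written;
        # average the user's own character sizes so the "grid" adapts
        avg_w = sum(b.w for b in written) / len(written)
        avg_h = sum(b.h for b in written) / len(written)
        last = written[-1]
        return Box(last.x + last.w + gap_ratio * avg_w, last.y, avg_w, avg_h)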
This is also what bothers me. The example before the one you mentioned looked the best to me. And it seemed to reliably detect numbers and operators as well, so I don't see why this approach should not be able to resolve the equation like the last example did.
Writing in monospace is not only unnatural, but the approach shown makes it one step worse: you need to write in the cells. You completely give up the flexibility of handwriting, and you need to know your equation before you write it down, otherwise you risk running out of "cells" on the left or right.
Yes, the grid paper is a stepping stone toward the line paper. However, it is a tradeoff. With grid cells you can tell what symbol is in each one. With symbols on a line, you have to guess a lot. An illustrative example: there is a Unicode symbol for "colon equal" – "≔" – and if you write it in a cell you can be sure you got it right. If you write it on a line, you could mean two distinct symbols – ":=".
The problem you mentioned – that you have to plan ahead – is not true for grid cells. There is a part in the GRAIL system demo where it is shown that you can go quite far with this approach. [0]
But as you mentioned, the flexibility of handwriting is in its non-modal interactions. I am currently working on line-based editing, but it's a bit tricky.
IIRC, Graffiti itself used to be patent-encumbered in the 1990s, but any such patents should be fully expired by now. I wonder what's blocking a FLOSS reimplementation of something similar for the modern mobile platforms. If the glyph shapes are problematic (though they shouldn't be) there's also https://en.wikipedia.org/wiki/Moon_type which is over 100 years old.
Historically, graffiti-like systems have also been used for "night writing", so they might have some accessibility benefits as well.
> RAND Tablet had handwriting recognition that was based on *clever heuristics*, i.e. no machine learning, and it required the person to write slightly differently than they normally would with a pen and paper.
I am familiar with Chalktalk, however I think Ken is trying to achieve something different. He is trying to explain stuff to other people with his tool, summoning visualizations that are suitable in the moment. My aim is a bit different: to enrich the individual's ability to think unthinkable thoughts in the moment. Let me explain. I like to think with paper and pencil, but I am often limited by my "computational" mind. A computer is just faster than my mind, and I would like to leverage that speed in the moment with pen and paper.
I find the current state of the art on stylus text entry so disappointing. I made a hacky prototype 7 years ago (!) and this kind of direct interaction remains a dream: https://youtu.be/So4x3Cu4WI0
I had a glimmer of hope when Apple started selling styluses, that they might succeed where Microsoft failed. But instead, they're putting mouse cursors on iPads, and the stylus is still a niche tool for artists.
The reMarkable 2 is a total breakthrough product. I rarely use my iPad any more. The pen was key: the friction of a tip that wears down makes it feel like paper.
I use Paperlike with my iPad Pro for the same reason and must say that it's really hard to switch back!
I'd use a reMarkable if it were cheaper and supported a full colour display/Procreate. But obviously that's not what the device was designed for, and that's ok.
And I'm still not sure what the point is; it's still easier for me to type using the Magic Keyboard. Maybe for some people it's easier to write with the Pencil than to use the touch keyboard.
It's easier to type pure text, but I'd bet taking notes in a math or physics or biology class would be way faster with a pencil than a keyboard (even more so when you're working through the math) - you often need to jot down diagrams and complex multi-line equations. The 2D nature of real paper is completely lost with the 1D abstractions of the keyboard.
It’s an easier mode for me when standing and holding the iPad with one hand and the stylus in the other. The keyboard is nice when sitting and using both hands for sure. Same was true when using my old thinkpad tablet in the same way.
I like my regular old organic paper just fine, thanks. Sure, writing with ancient mechanical tools lacks modern computational technology, but that's kind of the point.
Unconnected from the rest of the world, my writing exists only as a conduit from me to the page.
If I want to refine that and digitize it, I can then hand-type it into a digital medium, which acts to reinforce in my brain what I wrote and allow me a bonus editing pass.
> I like my regular old organic paper just fine, thanks.
I'm picturing the scene in 'Saving Private Ryan' where the translator is trying to shoehorn his typewriter into his travel supplies and Tom Hanks slowly raises a pen instead.
I agree with you. I have enough paper and pens to last a good while of first drafts. My typing them into digital medium works for me like it does for you.
Reminds me of how OneNote worked in... 2013, I wanna say?
Funny how different people still act equally blown away by technology that's been around for decades, but I guess that's retrofuturism's raison d'être.
The OneNote feature you mentioned [0] seems nice, but the point is taking this kind of thing to the next level by providing a more advanced notation coupled with dead-simple user interaction.
I recently looked at all the math lectures I'd written up in university (before throwing it all out), and I'm honestly not sure I want this now, or would have wanted it then.
Maybe my MO of writing things down is not the standard way, but there were endless examples of a quick 2-column layout here and there, special underlines of parts of a line or of single characters, arrows pointing somewhere.
Also, I usually wrote very packed (my high school math teacher once wanted to gift me some extra paper for Christmas so I'd use more space) and I'm simply not sure it would translate well. It's not really an artsy representation, but it's definitely my style, and I highly prefer it over how math textbooks look.
The same is not true for normal text, but even there I very often insert quick drawings or diagrams, and I'm also not sure the software would let me keep them in THAT place where I put them. I think it's a hard problem to solve for people who don't just write paragraph after paragraph.
I wonder when we can get a pen-and-paper computer system with 1ms latency [1]. The Apple Pencil is quoted at a 9ms response time, which is already close to the lowest latency its 120Hz screen can support (8.3ms per frame).
And I may be in the minority, but I much prefer an iPad without a camera bump so it can lie flat on the table.
I got the iPad Pro because I wanted the lowest latency possible. But later I realised I can't tell the difference between the iPad Pro (9ms) and the iPad Air (18ms). Other people might be able to tell the difference between 9ms and 5ms, but I doubt it's enough to pull them out of their workflows.
Technically it is possible to update parts of some screens instead of the whole screen, so the screen refresh doesn't need to be the limit unless it's scrolling large areas.
Hey, that is quite cool. :) It looks like Plankalkül [0] or early blackboard APL [1]. The 2D notation is probably a problem, but it is always cool to see new takes on this problem.
Thanks! Why do you prefer 1-D notations for paper programming? I think of the second dimension as being one of the primary advantages of using a pencil rather than a keyboard: on the keyboard it is awkward to move around but it is easy to spew out thousands of characters, while for the pencil the situation is precisely reversed.
Thank you for linking me to Grzegorz's post, which is a much better explanation of Plankalkül than I'd seen previously! I just celebrated Zuse's birthday on Tuesday. I jotted down his TPK example in paperalgo and found that it was about 64 symbols (characters and non-character lines), about a quarter the size of the Plankalkül version he gives†.
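For anyone who hasn't seen it: TPK is the little example algorithm from Knuth and Trabb Pardo's "The Early Development of Programming Languages" – read eleven values, then for i from 10 down to 0 report f(a_i) = sqrt(|a_i|) + 5·a_i³, flagging any result over 400. A rough Python rendering for reference (not the paperalgo notation, to be clear):

    from math import sqrt

    def f(t):
        return sqrt(abs(t)) + 5 * t ** 3

    def tpk(a):  # a: the eleven input values a[0] .. a[10]
        for i in reversed(range(11)):
            y = f(a[i])
            if y > 400:
                print(i, "TOO LARGE")
            else:
                print(i, y)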
But there are definitely some real parallels: the horizontal bracket for the loop and what I take to be the tabular condition-consequent-condition-consequent representation of conditional logic, with each consequent to the right of its condition, and of course the use of subscripts for array indexing. It is a peculiar combination of the comfortable and the rebarbative—why A9 or W2, for example?
______
† If we count by non-whitespace characters, it's a bit over a third the size of his Python, too.
While I understand why 2D is objectively the better choice for expressing algorithms, I believe 1D is better suited to our minds. 2D is just hard to parse. You don't know where to look or how to proceed. With 1D you start in the upper left corner, and there is only one direction to follow. Less is more.
I read what you wrote on paperalgo and I like it a lot. It's terse and well thought out. I think pattern matching makes more sense than conditionals and looping, but conditionals are easier to transliterate from other languages. ↑ from Smalltalk just feels right.
Interesting, I usually think of code as being easier to parse when it's laid out in 2D like this:
    enum { size = 8192 };
    int main()
    {
        char *n = "tmp.mmapcrash";
        int f = open(n, O_RDWR | O_CREAT, 0666);
        if (f < 0) err(errno, "open: %s", n);
        for (size_t i = 0; i < size; i++) {
            if (write(f, "x", 1) < 0) err(errno, "write");
        }
        ....
rather than in 1D like this:
    enum { size = 8192 }; int main() { char *n = "tmp.mmapcrash"; int f =
    open(n, O_RDWR | O_CREAT, 0666); if (f < 0) err(errno, "open: %s", n);
    for (size_t i = 0; i < size; i++) { if (write(f, "x", 1) < 0)
    err(errno, "write"); } ...
Are you saying that in your experience it's the other way around?
Pattern-matching makes some code a lot easier to express and understand, but it's true that there are some languages where you end up having to kind of hand-translate it into a nest of conditionals and whatnot. It's kind of mindless work, though, so I don't really feel like I'm missing anything important in the pattern-matching representation of conditionals in paperalgo.
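For concreteness, here is the kind of mindless translation I mean, sketched in Python 3.10+ (the shape encoding is invented for the example):

    # with pattern matching:
    def area(shape):
        match shape:
            case ("circle", r):
                return 3.14159 * r * r
            case ("rect", w, h):
                return w * h
            case _:
                raise ValueError(shape)

    # the same logic hand-translated into the nest of conditionals
    # you end up writing in languages without pattern matching:
    def area_nested(shape):
        if len(shape) == 2 and shape[0] == "circle":
            return 3.14159 * shape[1] * shape[1]
        if len(shape) == 3 and shape[0] == "rect":
            return shape[1] * shape[2]
        raise ValueError(shape)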
I remember the Apple Newton; mine took 4 AA batteries and did very adequate handwriting recognition.
From Wikipedia:
The original handwriting recognition engine was called Calligrapher, and was licensed from a Russian company called Paragraph International. Calligrapher's design was quite sophisticated; it attempted to learn the user's natural handwriting, using a database of known words to make guesses as to what the user was writing, and could interpret writing anywhere on the screen, whether hand-printed, in cursive, or a mix of the two. By contrast, Palm Pilot's Graffiti had a less sophisticated design than Calligrapher, but was sometimes found to be more accurate and precise due to its reliance on a fixed, predefined stroke alphabet. The stroke alphabet used letter shapes which resembled standard handwriting, but which were modified to be both simple and very easy to differentiate.[..]
For editing text, Newton had a very intuitive system for handwritten editing, such as scratching out words to be deleted, circling text to be selected, or using written carets to mark inserts.[5]
Later releases of the Newton operating system retained the original recognizer for compatibility, but added a hand-printed-text-only (not cursive) recognizer, called "Rosetta", which was developed by Apple, included in version 2.0 of the Newton operating system, and refined in Newton 2.1. Rosetta is generally considered a significant improvement and many reviewers, testers, and most users consider the Newton 2.1 handwriting recognition software better than any of the alternatives even 10 years after it was introduced.[6] Recognition and computation of handwritten horizontal and vertical formulas such as "1 + 2 =" was also under development but never released.
I've been disappointed at how limited the pen input is on the iPad. Few apps support it, and its built-in understanding of text relies on a very simplistic model.
I do -- the Mac has had that feature since forever (if you plugged in a tablet like a Wacom). A shame Apple doesn't seem to respect the Mac any more -- it's a great example of a use case that went from the Mac to the iPad.
I prefer to take notes with a pen. The built-in Notes app seems to think you write in a notebook the way you type into one: that each line contains one topic. But in fact, when writing notes I tend to make columns of stuff on the right (mostly stray thoughts or TODOs that come up, or lists of action items, or an outline, or meeting attendees, or who knows what). I add missing points or clarifications elsewhere and use an arrow to show where they go. Notes agglomerates them all together based on vertical distance from the top edge.
Notability and Goodnotes have some editing that is better than Notes and some that is clumsier. Notability and Soulver and most other apps don't recognize the pencil at all.
What they all need to do is some actual user research: explore how people actually use paper, and use some machine learning to do a better job of inferring what the user is up to. Some of them (including Apple) do that when you draw a triangle or circle, but at the end of the day those are just party tricks.
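Even something dumb would help – e.g. clustering ink by horizontal gaps before assembling reading order, so a right-hand column of TODOs doesn't get interleaved with the main notes. A sketch (not what any shipping app actually does):

    def split_columns(strokes, min_gap=80):
        # `strokes` is any list of objects with an .x (left edge) attribute;
        # group them into columns wherever a large horizontal gap appears,
        # instead of sorting everything by vertical position alone
        columns = []
        for s in sorted(strokes, key=lambda s: s.x):
            if columns and s.x - columns[-1][-1].x < min_gap:
                columns[-1].append(s)
            else:
                columns.append([s])
        return columns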
I assume more effort has gone into the drawing/"painting" capabilities. That's not my use case, so I can't tell. But I doubt those applications require the same kind of semantic interpretation that freeform text does.
I write in the notes app all the time and have none of the problems you describe. I assume that’s because I don’t use scribble, and I just write in my own handwriting.
Scribble has been less useful than it appears. Like you I simply write notes as I always have. It’s when I want to grab a bunch of writing and paste it as text into a message or other document that the limitations appear.
> Notability and Soulver and most other apps don't recognize the pencil at all.
I may have misunderstood your comment, but Notability uses pen-pressure to regulate stroke width, so it must know the pen is there. Conversion from handwritten strokes to text is very far from perfect (unusable for code snippets and math formulas), but it is good enough for writing "human speak", like say forum comments or a CV intro.
The parsed version flashing every time a change was made was distracting… you don’t want to keep having your eyes pulled away from where you are writing.
Also, having all of the input terms be integers, yet the answer returned as a floating point, seems a basic type error.
In general, though, it seems to miss the mark. If you’re going with a hybrid electronic/pen approach, why not let the computer handle the annoying bits, like having to wrap things in parentheses that you manually need to keep balanced? Why not draw a circle around some terms, and have that get used for grouping? Why not have the entry itself be interactive, like in the works of Bret Victor?
> The parsed version flashing every time a change was made was distracting… you don’t want to keep having your eyes pulled away from where you are writing.
Totally.
> Also, having all of the input terms be integers, yet the answer returned as a floating point, seems a basic type error.
Spot on.
> In general, though, it seems to miss the mark. If you’re going with a hybrid electronic/pen approach, why not let the computer handle the annoying bits, like having to wrap things in parentheses that you manually need to keep balanced? Why not draw a circle around some terms, and have that get used for grouping? Why not have the entry itself be interactive, like in the works of Bret Victor?
I am familiar with Bret Victor's work, but I don't know which interaction you have in mind. I think he would despise any such notation akin to APL.
> I am familiar with Bret Victor's work, but I don't know which interaction you have in mind.
Just the idea that since you’re working on a computer, you should be able to take advantage of it. So once the numbers are in there, you should be able to tweak them around without having to re-write everything. What if this value was higher? What if it was lower? How would that affect the results? Let the calculation be explorable.
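To make "explorable" concrete with a toy sketch (the function and values are invented, echoing the 3 km / 26 min example upthread):

    def speed_km_h(distance_km, minutes):
        return distance_km / (minutes / 60)

    # "drag" the input and watch the output update, instead of rewriting:
    for minutes in (20, 26, 30):
        print(minutes, "min ->", round(speed_km_h(3, minutes), 2), "km/h")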
Note to the author: it would be neat if you could include multiple sources for the video tag, e.g. a VP9/WebM version. My stock Linux install does not have codecs for MP4. Browsers will automatically pick a codec they support if multiple are listed.
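For reference, that's just multiple <source> children; something like this (paths are placeholders), and the browser plays the first source it can decode:

    <video controls>
      <source src="demo.webm" type="video/webm">
      <source src="demo.mp4" type="video/mp4">
    </video>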
Well, I would, but hosting those videos myself was a bad idea – I'm currently at 250GB used of my 100GB of free bandwidth. I'll reupload them to the ad-infested internet TV, YouTube.
Am I the only person here with handwriting so bad that the automated recognition just completely and utterly fails?
I got the latest iPad Pro with the Pencil 2 some time ago - and as much as my friends rave about the "very good and totally great" handwriting recognition in text fields, it's just totally bust for me. I can't write a single word and have it recognized correctly on the first try. It takes me 10x as long to scribble as it does to just poke at keys on a virtual keyboard with the pencil.
And yes, I can read my own handwriting. Just the computer apparently can't.
I have the worst handwriting I know of. It's so bad that I can't do whiteboard interviews; I specifically request a laptop with a notepad.
I tried getting into the iPad Pro + pencil workflow but it didn’t work for me initially. Even I couldn’t decipher what I’d written because the smooth surface made my handwriting worse. I didn’t think that was possible.
What changed it for me was adding a textured screen protector and Apple releasing Scribble in iPadOS 14. My handwriting can't be deciphered by humans, but apparently ML can manage it! The workflow of annotating PDFs and Kindle books with the pencil is very enjoyable after these two changes. It subjectively feels more productive because I'm not interrupting my thoughts by bringing up the keyboard and searching for keys with one finger.
I don't know if there's any way to use Palm Pilot ('Graffiti') style alphabets on modern devices. I guess every year handwriting recognition gets better, fewer and fewer people are outliers, and it doesn't matter, but it has always appealed to me as a way to increase accuracy, finding an accommodation between man and machine.
On a side note, I've made my handwriting more readable over the past year, and it was a fairly small effort. Changing a few letter forms, but mostly just slowing down and being conscious about it. Little things like slanting the crosses on 'f' and 't' so they're distinct rather than running together.
My handwriting is so bad. I’m 35 years old. And I try so hard. I slow down and hold the pencil/pen/stylus differently. I try making characters in different ways. And no matter what I do my handwriting goes from terrible to childlike and back. It’s embarrassing honestly.
This is where something like Scribble would be useful: if it could let you write and draw in your own handwriting, and then fix things up along the way to look highly professional. E.g. if you draw a square quickly and title it Square, you'd end up with straight lines and the word Square neatly drawn on top of it. Even better if it could be stylized with different drawing flavors.
While Scribble can do a little of it by correcting shapes if you are careful enough as you draw, it isn't able to "autocorrect" hand-drawn titles by replacing them with rendered text.
Try the Nebo app [0]; it is so far the best note-taking app for the iPad. Also, it is the second-to-last demo in the article, and that is the target I am aiming for.
The RAND GRAIL system was very cool to see on video. The fact that it's 'assembly' did not seem that bad to me. What was painful was watching the demo of writing the equation and all of those parentheses being drawn around everything. Maybe RPN or some other prefix notation would work better.
There's something oddly satisfying about writing individual characters inside boxes.
When I watched the video in the post where it converted the written text to characters, my immediate thought was that I wished those characters lined up with what you wrote, because it's jarring to see them rearranged far from where you wrote them. I was happy to see the later example with the boxes.
A little unnatural for English, but this would work fine for Chinese script. Each character and punctuation mark is a monospace box. In fact, students practice handwriting by drawing characters in boxes. You can buy little elementary school booklets that have grids of squares for this.
If I ever get the time to work on it, I've bits and pieces put together to do something like this with a reMarkable. My goal is to make programming work on it and my angle is to use ideas from interactive theorem provers and language protocols. I just want to liberate computing from glowing screens and keyboards and be able to walk around and enjoy my garden or throw my work up on a board. Make computing more flexible for folks with different abilities. And maybe less intrusive in our lives. A computer that is there but you don't have to see it.
The Kobo Elipsa has some rudimentary version of this (it can recognise equations and calculate solutions). [1]
What people often forget is the importance of writing feel. I have a reMarkable and it's like night and day compared to writing on a regular tablet. Fortunately, with the reMarkable, the Kobo Elipsa and the Onyx, we are now seeing more and more devices aimed at note-taking.
I tried to find more information about it, but it looks like it died as a demo. Some interactions were nice, but the majority of them would probably be better as a dialogue with a Siri-like assistant.
Videos from the past were amazing, considering they came just a couple of decades after the invention of the transistor. (I think that is because before the invention of the transistor, we already had the mathematical foundations of computation almost ready.) In the current era, do we have something that still needs to be invented? (i.e. one part of science can predict it, but the part that would actually build it is lagging behind?)
Most Augmented Reality concepts are at that stage IMHO - we don't really have lightweight, efficient hardware that can execute that vision yet. 20 years from now we will, and we will probably look back at early experiments and concept videos from our time as "ahead" of their time.
For someone looking for something like this, I'd recommend taking a look at the Microsoft math solver built into Edge [1].
That's the closest I've found to something that actually works and recognizes my (disaster of a) handwriting on the first try.
Are you sure about "nobody"? As a data point, I've only ever used × to indicate the cross product of two vectors. For scalar multiplication, ⋅ is the standard symbol in my country (and many others).
I guess I'm being called out for not handwriting a lot myself anymore either. Yes, I'd use the centered dot too. Anyway, the point is that the asterisk is not what we use in handwriting.
Depends on the country. In Poland it's usually nothing for multiplication:
(x+3)(10-y)
then dot, then asterisk. Nobody writes "x" for scalar multiplication because it's easy to mistake for the variable x. What would 2xy mean: 2 * y (with x as a multiplication sign) or 2 * x * y?
Another cultural difference is how 7 and 1 are written - in the USA you write 7 with 2 strokes and 1 with 1 stroke. In my country 7 is always crossed (so it always has 3 strokes) and 1 has 2 strokes.
And the variable z is usually crossed in equations to distinguish it from 2 (but that varies regionally).
That was how I learned to notate multiplication in high school or earlier. It’s how it’s done in all the mathematical literature or textbooks I’ve ever seen. And people only started using * when they started programming computers.
I write the asterisk like that. :) But the middle dot "·" works as well. The problem with "×" is that it is basically the letter "x". This ambiguity applies to more characters; as a simple example, a lot of Americans write "1", "l", "I", and "|" as one symbol. This is fine when you have context, e.g. a phone number, but a programming language is a mixed bag where context isn't much help.
You don't write the letter x that way in handwriting. You always write a cursive/italic x, which cleanly resolves the ambiguity with the × sign. Similarly, "l" gets its handwritten (cursive) form and "|" is "broken" in the center.
Even though I am not German, I am aiming for Grundschrift [0] for letters and numbers, since it is a close match for how I write. The broken pipe makes sense, though.
In my handwriting, I always write x as the typed letter. The cursive variant feels very awkward to me. In my country, after primary school, we always use a dot for multiplication (or, in maths with single-letter identifiers, no symbol at all - a multiplied by b is 'ab').
Actually, using asterisks for multiplication is something you do on computers, not when writing on paper - or at least this is how I was taught at school and university.
Faulty handwriting recognition causes data loss. Instead of relying on machine learning as a crutch, we should design systems and UIs that can do recognition, but don’t discard the original data.
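As a sketch of what I mean (not any real app's data model): treat recognized text as a disposable annotation over the permanent ink, never as a replacement for it:

    from dataclasses import dataclass

    @dataclass
    class Stroke:
        points: list  # raw (x, y, t) pen samples - the ground truth

    @dataclass
    class InkSpan:
        strokes: list            # the original ink, never modified
        recognized: str = ""     # best current guess, always re-derivable
        confidence: float = 0.0

        def reinterpret(self, text, confidence):
            # recognition can be redone or corrected later;
            # the strokes themselves are never thrown away
            self.recognized = text
            self.confidence = confidence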
Bad handwriting has probably caused data loss for 5000 years. Whether it's an AI or a human reading it, handwriting is sometimes impossible to decipher. To avoid data loss, don't use handwriting, or write very neatly.
Telecommunications enable conversations over space.
Recordings enable conversations over time.
There's a broader symmetry between records and signals generally that I've realised, though I can't find it noted anywhere else:
Signals transmit encoded symbolic messages from a transmitter across space through a channel by variations in energy in time to a receiver, enabling creation of a new record.
Records transmit encoded symbolic messages from a writer across time through a substrate by variations in matter in space to a receiver, enabling creation of a new signal.
When I tell you something in person, we do an instant thought transfer; however, we are bound by time and space. When I write you an email, you can read it on Mars in the year 2037, and we achieve almost the same thought transfer even though I might be dead.