This is the "modern" one with the egregiously bloated UI (including a loading screen!) and telemetry... but funny that it's using the same calculation code. Of course they still managed to screw it up somehow: https://news.ycombinator.com/item?id=17670644
In one of the old Win2k source leaks the code can be found for the far more efficient "classic" one.
The loading screen technically isn't part of the calculator's UI - Windows automatically shows a loading screen for all new (UWP) apps unless they explicitly opt out (and even then the OS will still show the loading screen if the app takes more than a second to load). The purpose of this is to keep the OS feeling responsive, since it allows it to bring up the window immediately (for UWP apps the OS shell is in charge of creating and managing apps' window frames, unlike Win32 where it's the apps' responsibility), rather than leaving the user staring at a screen with nothing happening for several seconds (and maybe thinking it's the OS's fault for being unresponsive).
Regardless of whether it is or isn't part of the calculator's UI, the presence of a loading screen for a calculator is a symptom of a deeply-rooted problem.
> The purpose of this is to keep the OS feeling responsive, since it allows it to bring up the window immediately, rather than leaving the user staring at a screen with nothing happening for several seconds
Wait, you only get one of those? As of about a month ago on my computer, launching calc.exe takes 5-6 seconds for anything to happen, and I'm then greeted by 1-2 seconds of a loading screen after the window appears.
I've taken to just leaving an IDLE shell open at this point.
Seems there are 4 pull requests open to fix various math issues, all of them opened in 2019. Most PR merges seem to be about minor changes like translations and version bumps. Not sure we can call it a success just yet.
It's very cool it's open source in the first place though, don't get me wrong. And MS doesn't have to merge fixes if they don't want to of course.
But what about growth and engagement? I wouldn't be surprised if some idiot's performance metrics are tied to how many people "engage" with the calculator.
At least it’s not a subscription model for £6.99/month like almost all of the apps you find on the app stores that used to be either free or a one-time £X.
You get the addition and subtraction for free, but you have to upgrade via in app purchase for multiplication or division. It makes sense because those are advanced features the average user wouldn’t need anyway.
Explorer UI has been capable of showing a loading animation for folder contents since it was first publicly released. It makes sense not only in the context of potentially slow devices, but also because a "Folder" is not a "Directory" but a somewhat arbitrary COM object, and thus showing the contents of a folder can lead to a lot of potentially slow operations.
IIRC somewhere along the line of the Win32 Windows versions the loading animation got somewhat broken and the Explorer window was more likely to simply become unresponsive instead of displaying the loading animation, but that is another story.
Folders in Windows can be network locations or internet locations without any guarantees on their access speed, makes sense to show to the user that it's still doing something.
A big part of that loading bar, outside of network ops and slow drives (and remember, today you don't have floppy drive noise to tell you whether it's working or hung), is for things like sorting and getting file icons and other attributes right. This can take a non-trivial amount of time.
They still have the old calc.exe from Windows 7 in the source tree. You get it if you use the LTSC version of Windows. I wish there was a way to get it on regular Windows.
They seem to be going the other direction; notepad.exe is now updated from the App Store in the Dev channel.
This code is MIT licensed, it should be possible for someone to re-create the old UI and still use this calculator's modern calculation engine (with all the bugfixes)?
I don't think they meant "this released source tree" but as in "microsoft's source tree" since it is still actively maintained for the LTSC release channel.
Thanks for that link. The forum thread is perhaps the most perfect example of how terrible technology forums tend to be that I've ever seen: if you report a problem, some fanboy with over 10,000(!) posts will immediately show up to tell you it's your fault and not really a problem:
> Do you really think it is going to make much difference to the final answer?
> Even the algorithms used are probably only accurate to so many significant figures.
Beautiful. (Keep in mind the issue referred to here only requires five decimal digits to show up.)
Eh, if you launch python in powershell there is still a loading delay, since powershell for some crazy reason takes like 30s to launch on a default win10 installation.
You must have something very wrong; neither Windows Powershell nor Powershell 7 takes 30s to launch.
But anyway, for a quick calc, is there anything faster than Spotlight (for Mac)/GNOME Shell (for Linux)/PowerToys Run (for Windows)? Instead of the name of the app you're going to launch, just punch in your expression.
Not the person you responded to but I have the same experience with Powershell as them. I just checked again to be sure and it takes 20s to start and become usable on my laptop. CMD.EXE, in comparison, starts instantly.
It got bad enough I wrote a (now unused) script in batch instead of Powershell. Also having to explicitly enable scripting for each machine was a pain, though understandable.
I shudder to think what you're running Windows 10 on because it only takes 2 seconds to come up on this Celeron 2957U I'm using to browse the internet.
I'm running Windows 10 on a Ryzen 5 with 16GB of RAM and an NVMe drive. Powershell takes something between 0.5 to 1 second to start. The first ~20-30 seconds after the start it feels sluggish when doing simple tasks like listing directory contents. It makes it very unpleasant to use.
Sluggish behavior during and just after Windows startup is sometimes caused by mapped network drives that no longer resolve. Worth checking if you haven't already.
If it's a corporate system like my (presently only) Windows machine then I wouldn't be surprised. They manage to configure these things so they're as fast as a Pentium 133MHz processor from 25 years ago, even though you've got an 8-core i7 with 32GB of RAM.
I wouldn't either, our devs were given the standard issue Lenovo T480 which would have been mostly fine were it not for the fact that we had Raytheon ForcePoint DLP, LANDesk, Trend Micro Apex, WebSense, Cisco Amp, Cisco Umbrella, and who knows what else watching everything we did.
It took upgrading to i9s to get something that could run Teams and Outlook at the same time without audio stuttering during a voice call.
I've started using the browser console window more and more as a calculator. The only downsides are the longer syntax (e.g. you can't do 3^2) and that it's a tab in the browser. Variables and other such things are very useful.
Why the console window? For basic math things that I sometimes need, I just enter stuff in the search/address bar, and it pops DDG or Google or whatever with the answer; and even your own example of "3^2" works there as-is.
I just have a hotkey that opens a Node CLI. I know some people who do a similar thing with Python, but I think JS is better at composing quick one-liners.
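For anyone curious what the Python version of that workflow looks like, here's a minimal sketch of a quick REPL calculation session (the variable names are just for illustration). Note that exponentiation is ** rather than ^, since ^ is bitwise XOR:

    >>> 3 ** 2              # exponentiation is **; 3 ^ 2 would be bitwise XOR
    9
    >>> from math import sqrt, pi
    >>> sqrt(2)
    1.4142135623730951
    >>> r = 2.5             # variables persist across lines, handy for multi-step work
    >>> round(pi * r ** 2, 3)
    19.635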
Too bad it's the new one. I still recall the Win3.1 (or possibly 3.11 WFW) calc.exe returning 0 when calculating 2.11 - 2.1 .
I've always been curious to know what could have triggered that error since it was too big to remain unnoticed for so much time.
Here's another test I do with calculator apps (in radians):
Pi - 4 * ATAN(1)
These apps really should produce 0 for this and avoid floating point issues. Most of us understand why it wouldn't. To me it's an experience/quality issue.
For what it's worth, the default Android calc app produces 0 (as well as nice RPN apps like Droid48).
The macOS calculator, on the other hand, has a totally ridiculous bug: it displays "Not a number" for any negative number that you try to take e^x of. 2^x and 10^x work fine, as does using the x^y button with base e. Typing "e^(-whatever)" on the Spotlight search works, as does e^x of -whatever in the iOS calculator app.
I just use WolframAlpha for everything these days. It uses symbolic computation so there's no possibility for problems like this, and it's a lot more powerful than any built-in calculator app. Almost anything can be typed in one line with natural English directives like "300mi/14mpg * $2.7/gal to Euros". Especially convenient with a browser keyword.
I usually use Google for quick calculations but get annoyed when it only returns large results in scientific notation. WolframAlpha gives results in just about every conceivable way, e.g. "7 trillion" or "7 000 000 000 000" instead of 7e+12. Much nicer for someone like me who isn't used to scientific notation.
But Spotlight is just cmd-space away at all times. My only complaint is that I can't get results for date-based queries like "days since 17 Jan 2021" or "50 days from today" and I have to go to Google for those although I can at least just hit return after entering the query to have the result pop up in Safari.
My biggest complaint after switching to Windows is how poorly the start menu does math. Half the time it tells me "results are unavailable" for simple arithmetic... I don't know if it makes a web query or something to calculate but it's frustrating.
Another comment suggested they switched from floating point to rational number representations after that version. Since 2.11 and 2.1 can't be represented exactly in floating point, it's possible that the issue was the difference was too small (smaller than just .01) and was being rounded off (or was below the limit for number of digits to display).
Try 2.11 - 2.1 here: http://weitz.de/ieee/ . I don't think they used 32-bit floating point, otherwise there should clearly be a large enough number to display.
In that era, you might or might not have had FPU hardware; where it didn't exist or was disabled, a variety of libraries with their own quirks got used.
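For what it's worth, here's a minimal Python sketch of why the rational approach mentioned upthread sidesteps this class of problem. The fractions module here is only an analogue of the idea, not the calculator's actual rational engine:

    from fractions import Fraction

    # Binary doubles can't hold 2.11 or 2.1 exactly, so the subtraction
    # yields something close to, but not exactly, 0.01.
    print(2.11 - 2.1 == 0.01)                    # False

    # Rational arithmetic keeps exact numerators and denominators,
    # so the result is exactly 1/100.
    print(Fraction("2.11") - Fraction("2.1"))    # 1/100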
> I'm right now running pacman on Arch Linux via WSL while playing 'Call of Duty' in another window. Let's see if your OS can beat that.
Yeah?
I'm running Far Cry $SOME_SEQUEL_NUMBER on Wine, while running Android Builds in a background terminal, using another terminal to hack into the Israeli Defence Force network to deliver my Android exploit that will cause their centrifuges to spin slightly too fast in the incorrect direction, slowing down the spin of the earth ever so slightly ... I'M STOPPING TIME! ...
My OpenBSD server came with the ancestor of The Greatest Game You Will Ever Play™®ʲᵏ¹ preinstalled. Been using GNU ever since I replaced Windows ME and 3D Pinball for Windows – Space Cadet with Slacko Puppy Linux and Xsoldier, and fully quit Windows when it started shipping with Candy Crush Saga ads and no Minesweeper. Also I typed all but two characters of this message, thanks to the X11 Compose key.
Probably because everyone you’re trying to convince stopped listening out of fatigue. Nothing revs me up to hear an opinion more than “you’re running the wrong operating system/editor/browser, mine is clearly superior,” when we all have limited lifespans in which to argue about exceedingly pointless trivia such as that — especially discarding any implicit requirements for choosing the tools one has chosen which are almost certainly unknown to the person making such an arrogant observation. People have different needs and goals for using computers and telephones.
I’ve been cutting people out of my life for repeatedly talking like GP (and you, to a lesser extent) lately; that’s how frustrating it is to be on the other end after 20 years of hearing it from elitist peers. I finally snapped when I set my iPhone on the table at a bar and was treated to the entire conversation turning to iOS vs. Android for the next two miserably painful hours, while ignoring that we all went there to drink to forget about shit like that.
You might consider my approach harsh. That’s fine because I know my approach is clearly better. (See?) It amazes me that it’s been decades now and advocates of non-mainstream operating systems and tools (read: FLOSS) haven’t figured out looking down at the people you’re trying to convince is repulsive to anything you’d say - and sometimes you as a person. Persuasion from a negative is almost always a net loss. Ask anyone in sales.
Tell me why it’s better for my specific needs. Not that my choices are inferior. Until then, I’m distantly happy for you that you’ve found a tool that works for you, and I’d prefer that you tell someone else (in general, not you specifically). I’m especially thinking of the randoms who don’t know me and approach at conferences or social events to reflect on my choices. It’s never Windows or macOS or Edge or VS Code advocacy, either, weirdly enough. Emacs in particular seems to catch these attitudes like a magnet.
Stuff like this gives me a really bad feeling about the quality of software. When it's so close, but not perfect, it seems like someone tried to get it right but just hacked it together and checked by eye instead of solving the underlying problem. I prefer to see software where they haven't even tried.
Perhaps what's surprising to non-programmers is that older code tends to be better/faster/more optimized than new code, which is unintuitive because new hardware is orders of magnitude better.
It's an inverse relationship. As the "just release a patch" method of development takes over, the idea of a 'gold master' disappears. When software used to ship on floppies and no one had modems, software in general had to ship working 'well enough' because it was not easy to fix.
Also, older software tended to be documented better because there was no 'fire off an email' or 'ask Bob on Slack' to find out what this code means. The barriers to quick communication were higher, hence reading the code comments would have been the path of least resistance.
I completely agree. I wonder if there would be a niche for high-performance languages that are maybe a little harder to write than Python but still easy, and closer to the performance of C.
"Better" in the sense of often applying rather absurd hacks for the sake of performance, yes, hacks that very often also compromised the end result.
In particular, some old video games feature absolutely wonderful hacks to achieve what they wanted with the available hardware, but those hacks also often forced engines into particular compromises in environment design that many players never noticed.
Many older game engines were actually incapable of stacking walkable planes on top of each other: though the levels were three-dimensional and one could ascend and descend, there was never any walkable surface directly under another, a restriction that enabled some optimizations.
Indeed. For anyone who grew up in the industry after the Internet was widely available, it might be unnatural to visualize how much effort was spent on making sure the code is correct before shipping.
Shipping meant producing that gold master build and sending it off to a floppy replication service, after which it got distributed to store shelves. That 1.0 version was frozen until you shipped a new version a year or more later. There was no second chance for that release; the code had to be as close to perfect as possible, first time out.
> Is it arrogance to say that medicine is far better than it was in the past?
Irrelevant. Medicine and Software Development are not correlated just because we want them to be. That's the point. Software Development results seem to have devolved (quality metrics such as binary size for equivalent reliability, functionality, and notably time to release), despite improvements in almost every aspect of development tooling.
Also interesting to note, these quality metrics follow along expected curves (accounting for development tooling) when DUPLICATING (sometimes with small improvements) working software. This tells us a great deal about what is a large problem with modern software.
People expect many more features from modern software than before, and accept worse reliability because of the faster bug fix cycle.
Sometimes even playing music doesn't work on my mobile phone, and I have to restart it. I never had that problem with my cassette player 30 years ago. But that doesn't mean that I go back to using a cassette player instead of my phone for listening to music.
Interesting that "avoids causing unnecessary costs" is not a quality metric to you. It's fun to see how it can suddenly become a really really important metric on platforms with size limitations, even if they do not care about "cost to users", which is the more common effect of binary size.
That cost is frequently swamped by development costs, runtime costs, support costs. Platforms with super restrictive size limitations are a small minority, these days you can plop a 5-10MB binary on your toaster.
To the vast majority of software developers and software projects out there, except for very constrained embedded environments.
Server back ends don't care about it, front ends don't care about it despite the lip service paid, games sure as hell don't, etc
A small minority of web devs seem to care, together with creators of already gigantic mobile apps and as I was saying, embedded devs working in very constrained environments.
The market, as a whole, maybe has this as their 100th priority. Which means it's not.
This is often the excuse we hear for why modern software is bad: "well, normal people don't care about that", a strawman argument that implies anyone who does care is abnormal and therefore worthless.
> anyone who does care is abnormal and therefore worthless.
No, they're not worthless, they're worth less :-)
Don't tell me, tell the business people financing everything we build. By and large, they don't care, it's reflected in incentives and it's reflected in what we build.
> Don't tell me, tell the business people financing everything we build. By and large, they don't care, it's reflected in incentives and it's reflected in what we build.
Another common scapegoat for developers who write bad software. I just wish we had a little more integrity as an industry, especially since so many of us insist on calling themselves "engineers".
To use a more extreme example, regular engineers build tanks and jet fighters and drones and machines that make mustard gas, etc. The software world is just a reflection of the outside world, at this point.
I don't think that's quite the point they're making. They're saying that small binary size doesn't always translate to user value. Technical excellence can be viewed as a goal in its own right, but it's not the same thing as user value.
Yeah, for 99% of software produced, smaller binary sizes do not offer increased user value.
Software designed to spec and user requirements does (where those requirements could be fast development time, high performance for whatever the performance criteria are, longevity, easy and cheap extensibility, etc.).
Binary size, as I was saying before, is like 100 on the list of priorities for most categories of software. Mobile apps sometimes have it as a priority and embedded apps frequently have it, too. But for mobile apps those limits are loosening, so soon they'll stop caring, too, and embedded apps are a minuscule percentage of all apps developed out there (most of them are web apps, especially Line of Business - LOB - apps).
Computer science has evolved a lot; we have self-driving cars running on the road. Of course computing power improved a lot, but deep learning algorithms are also improving faster than Moore's law.
It's just that improving classical computer science has fewer practical benefits than machine learning nowadays, so the focus of research has changed.
PSA: Here's another, much more feature-rich calculator called Expression Calculator (or Global Calculator sold in Germany in 1997), written in Delphi - https://github.com/dblock/excalc, full source code and a working executable that still runs on any version of Windows.
The original intention of computers was to perform calculations. The inclusion of facilities that allow for calculations to be performed on a computer is ironic how?
That seems like thinking it ironic that lawn mowers have a place to put blades for cutting grass.
The purpose of computers is to perform calculations. Then we built a massive pile of abstractions on top of that souped-up calculator to make it into a general purpose machine. Then we built a small calculator app on top of those abstractions. A lot of work to get back to square one.
(This obviously misses the point that we made computers usable by mostly anybody along the way, but it’s still funny)
As someone who sat alongside when my granddad soldered an 8080 clone to the board, entered commands via switches, and saw binary results on an LED display (not a modern LED display, but literally a row of 8 red Light Emitting Diodes, you know), I tell you with great confidence that clacking 1+2x3 on a keyboard and seeing an answer on an SVGA+ screen is much quicker and more convenient than:
- planning a program
- implementing missing mul and/or shl instructions in your mind
- managing register pressure for more complex expressions
- having to start from scratch on a mistake
- debugging with a multimeter
Even if you think of an MBR-style calculator, there are at least a dozen KB of BIOS and a dozen KB of VGA BIOS just to get a blank screen that can do a cursor and digits. There is nothing cheap or straightforward down there, and the first thing people did was abstract the bare metal away ASAFP.
One way of thinking of it is that a physical calculator has two parts - the circuitry that performs the calculations and the UI - in the form of physical buttons, displays, and the connections between them and the calculation circuitry - that allows people to effectively make use of it for their calculating needs.
A modern CPU has got the calculation part covered by itself, but it still needs the UI. Of course there are other kinds of UI for people to do calculations - for example Excel, or programming language REPLs - but desk calculators had pretty good UI for some use cases, so it makes sense to have an option based on them.
That only seems to explain why it could seem odd to buy an entire desktop PC to only ever use it as a basic calculator. In that case, it might make more sense to just buy a calculator. But there's certainly nothing odd about buying a general purpose computer and then use it for lots of different specific purposes, one of which is numerical calculation.
I agree with the parent because creating code which runs arbitrary calculations is quite weird. You would assume you could just send instructions to the CPU and print the output.
The idea that there’s a huge wrapper around the CPUs core functionality is indeed weird because you would expect that functionality to be available without any program at all.
> That seems like thinking it ironic that lawn mowers have a place to put blades for cutting grass.
A more apt comparison would be having a 40ft tall Mech that transforms into a jet and flies from neighborhood to neighborhood then crouches down and uses a tiny pair of scissors to cut the grass.
Given computers are nothing but a mind-boggling amount of very simple operations (add, mov, whatever on fixed size values), I'd suggest an equally apt comparison is that a computer is billions of nanobots cutting grass one blade at a time, and the calculator app is the Mech.
I suppose what he meant is it's ironic due to the insane overhead. When the end-user double clicks calc.exe, types in 1+1, and sees the result on the screen, the CPU has executed on the order of tens of millions of instructions (filesystem code, OS code, UI code, display driver code, etc) to show a simple result that could have been done in a single x86-64 ADD instruction.
The irony is more so the unnecessary skeuomorphism: that, inside a machine that is a strict superset in capabilities of another, it simulates the limited interface of the latter machine to make it "look and feel" as though it were an actual calculator.
Obviously, a prompt that accepts arbitrarily complicated mathematical expressions and returns a value is a far superior interface than clicking on buttons with a mouse, but skeuomorphisms in design are very commonplace, in spite of their lesser efficiency.
This might be the Alanis version of ironic, which includes anything a bit funny, or weird, or that they can't think of a term for. Like rain on your wedding day.
Why is that ironic? The obvious use case for a computer is to, you know, compute things, and the most obvious way to do that is to use a program designed for that very purpose.
Because a computer is obviously more powerful than a desk calculator (unless you happen to have a programmable calculator, which would itself be a computer.)
Transposing the metaphor of a desk calculator verbatim is inefficient and unproductive, a purely graphical gimmick. Unix bc, which predates Windows Calculator by a couple of decades, is Turing-complete and in fact close to a full-featured modern dynamic language.
Today's equivalent would be a {python|ruby|node|perl} REPL.
The UX of a REPL sucks if you're not an expert. A calculator UI gives you a lot more guidance about what's available.
Accessibility is everything. I would guess the skeuomorphic calculator apps have 5 orders of magnitude more users than bc, and 4 more than the REPLs (not that the REPLs were even built for this purpose).
I think they will evolve over time away from skeuomorphism, but not into anything resembling a plain REPL.
Excel has the same accessibility as a calculator UI. I'll have that, in a smaller window :)
Also, the UX of a REPL sucks because it lives in a white-on-black terminal and prints scary version messages about a thing called "clang". Do the same thing with nice colors and friendly messages and I doubt the usability would be any worse.
This is precisely what I'm trying to illustrate. No, Excel does not have the same accessibility as a calculator UI. Excel is somewhere in between a calculator UI and bc in terms of how discoverable and usable it is.
Engineers fall into this trap a lot - projecting our own preferences onto what we think other people should find accessible. Most people require training before they can use Excel. The training required to use a calculator is so minimal, they teach it in elementary school. And a REPL is gibberish to the vast majority of people, even if you dress it up with nice colors and friendly messages (which to be fair help a lot).
To explain the irony for OP: a computer is, or has been, basically a fancy calculator, so putting a program called Calculator on it seems redundant. It would make more sense to call it e.g. Algebra and have other programs called Geometry and so on.
I think the issue is more that calculator apps are one of the last strongholds of skeuomorphism. Why does almost every calculator app have a keypad, even those for non-touch OSes?
For a refreshingly modern take on calculator apps, take a look at something like Soulver/Numi/Calca which use a notebook-style interface, or SpeedCrunch which uses a REPL-style interface (and which does have a keypad, but you can turn it off).
If the idea of a calculator app just seems ugly to your brain, PowerShell does floating-point math and is scriptable to boot. I've used it as a calculator in the past, though it means having to put up with some wonky syntax.
> Why does almost every calculator app have a keypad, even those for non-touch OSes?
How would a non-technical person do a square root or exponent? It's not obvious to type sqrt(2) or 3^4 into a text field, and people shouldn't have to read a manual to use a calculator. The default calculator is meant for the average person, and advanced tools are available for the rest.
It's been done with Mathematica, which includes a "basic" toolbar with square roots, integrals and the like. There's plenty of room for middle ground between a REPL and emulating a physical calculator.
The thing that makes me giggle is how it also happens to be the most primal function of a computer.
I'm sure all tools you suggest are great, but as someone who can program, I can just use the REPL of whatever programming language I'm using at the moment. I have no need for a dedicated tool in the first place.
Floating point implementations vary among programming languages. On the same machine and OS (Win10), the expression 0.3 - 0.1 yields varying numbers depending on the calculator:
Powershell says 0.2,
calc.exe app and libreoffice calc agree with 0.2,
BC running in CYGWin also 0.2,
Python 2 and 3 answer 0.19999999999999998,
JS in Vivaldi and Firefox also answers 0.19999999999999998.
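A minimal Python sketch reproducing the Python result above, with the standard decimal module standing in for any non-binary-float implementation (it is not what calc.exe or LibreOffice actually use):

    from decimal import Decimal

    print(0.3 - 0.1)                         # 0.19999999999999998 (binary doubles)
    print(Decimal("0.3") - Decimal("0.1"))   # 0.2 (decimal arithmetic, exact here)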
> if you get 0.2, the code is using rational numbers or a custom implementation
IIRC, there are languages that use IEEE double precision but the way they handle default display presents this as 0.2 (basically, they use a display algorithm that displays the shortest decimal expression that has the same double precision representation.)
0.2 could just be base 10 / decimal floating point rather than the base 2 single/double i think, It’s all ieee754 it’s just the base10 stuff came later
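Display precision alone can also explain some of the "0.2" results above. A quick sketch in Python, which uses shortest round-trip repr by default; a fixed 15-significant-digit format (similar to what some other environments default to) hides the error:

    x = 0.3 - 0.1
    print(repr(x))        # 0.19999999999999998  (shortest string that round-trips)
    print(f"{x:.15g}")    # 0.2                  (15 significant digits, looks exact)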
As far as I know, BC doesn't do floating point arithmetic. It does fixed point arithmetic, built on top of arbitrary-precision integer arithmetic. Which is why it won't give you one of those funny floating point answers. It's what a human would do on a piece of paper.
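A tiny sketch of that idea in Python - exact integers scaled by a fixed power of ten, not bc's actual code, and positive values only for brevity:

    SCALE = 100  # two decimal places, stored as exact integers (like bc's "scale")

    def to_fixed(s: str) -> int:
        """Parse a decimal string like '2.11' into hundredths (an exact int)."""
        whole, _, frac = s.partition(".")
        return int(whole) * SCALE + int((frac + "00")[:2])

    def to_str(n: int) -> str:
        return f"{n // SCALE}.{n % SCALE:02d}"

    print(to_str(to_fixed("0.3") - to_fixed("0.1")))   # 0.20 -- exact, no binary float involved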
Floating point implementations can vary not only between languages, but also between different CPUs, if the language relies on the hardware implementation (which is the smart thing to do in most cases).
As a side note: Python's implementation of FP isn't standard-compliant. E.g. when you divide 5.0 by 0.0, the standard says you should get +inf. In Python you get an exception.
Actually, according to the standard, you should get either +inf or an exception.
Both behaviors are valid.
Nevertheless, according to the standard, the user should be able to choose between getting +inf and getting an exception.
If in Python there is no way to mask the divide by zero exception, then that is not standard-compliant.
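To illustrate the difference, here's a short sketch using NumPy merely as an example of an implementation that follows the IEEE default of returning +inf (plain Python floats raise instead):

    import numpy as np

    try:
        5.0 / 0.0                       # plain Python floats: raises
    except ZeroDivisionError as e:
        print("python float:", e)       # "float division by zero"

    with np.errstate(divide="ignore"):  # NumPy follows the IEEE default behavior
        print("numpy float64:", np.float64(5.0) / np.float64(0.0))   # inf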
Wait, isn't this the calculator that doesn't know order of operations? When I used to teach math, I would tell my students to type in 1+2*3 in their calculator and if it didn't give 7 as the answer, they should throw it away. The Windows calculator was one of the ones that failed, if I remember correctly.
Windows 10 calculator has a simple mode where operations are evaluated immediately which would give 9 for those keystrokes, but does at least show what it's actually calculating: https://i.imgur.com/au7dvfg.png
Rather than not knowing order of operations, it doesn't parse algebraic expressions, it just takes binary inputs for the common operations. It doesn't operate on "1+2*3" as an input; it operates on "1", "+", "2" (= 3), "*", "3" (= 9).
It works like you say in standard mode by design because it's trying to match the behavior of older desk calculators that worked that way. In scientific mode it respects order of operations.
It seems absurd to make the default behavior incorrect because older cheap calculators did the wrong thing. This isn't like making an incompatible change to a programming language to fix an incorrect behavior, this is more like if the designer of a new programming language decided to make 1+2*3 give nine because his cheap desktop calculator did the same.
I've found that a surprising number of people (outside of fields like math, engineering, programming) are used to the idea of entering an operation and having it applied to the current value. E.g. pressing the [√] button immediately square-roots the current number, pressing [X][2] doubles the current number, etc.
It's a different paradigm to using the buttons to enter a full expression then pressing [=] to evaluate it, but I don't think it's accurate to call it "the wrong thing" - just that the button presses aren't transcribing to the equation you expect (the equation actually displayed by Windows 10 calculator is now accurate).
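A minimal sketch of the two paradigms (a hypothetical helper, not the calculator's actual code): immediate execution applies each operator to the running total as soon as the next operand arrives, while expression evaluation respects precedence:

    def immediate(tokens):
        """Immediate-execution model: apply each operator as soon as the
        next operand arrives, like a basic desk calculator."""
        ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
        total = tokens[0]
        for op, operand in zip(tokens[1::2], tokens[2::2]):
            total = ops[op](total, operand)
        return total

    print(immediate([1, "+", 2, "*", 3]))  # 9  (standard-mode behavior)
    print(1 + 2 * 3)                       # 7  (expression evaluation, PEMDAS)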
Back in the stone age, while using Windows 3.11 (I think it was called Windows for Workgroups or so...), there was a bug in calc: 0.02-0.01 resulted in a flat zero. Perhaps it's time to open an issue.
Somewhat related, HP calculators used floating point BCD internally since the beginning, to avoid this sort of thing. (Custom mostly-4-bit CPU chips with specific hardware for the purpose, etc.)
> the very earliest iterations of Windows Calculator, starting in 1989, didn't use the rational arithmetic library, instead using floating point arithmetic and the much greater loss of precision this implies.
As I recall, there was quite a brouhaha about Windows Calculator solving 2 x 2 as 3.9999999945 or something like that. Pretty sure this rational number library was added in response to bad press.
Notepad.exe is just a wrapper around the edit control [1] that Windows ships as part of its core system. It doesn't even properly support undo as you can only undo the last action, which becomes the new last action, allowing you to only toggle between two states.
It's a window that mostly uses an edit control, but there is code in Notepad itself, so it's not just a wrapper around the edit control. Could you write Notepad in a day or two? Perhaps. But it's misleading to say that it's just a wrapper around the edit control.
At least the oldest icons still exist in modern Win 10, yes, even after MS decided to update some icons to fit the new "Fluent Design": https://imgur.com/a/6T31ziE (from the latest Insider build)
I probably wouldn't have time to do this for the next few years, but I wish they'd release regular Notepad as open source. There's a part of me that sort of wants to do some kind of masochistic feature-by-feature port of Notepad into a functional language, just to pay homage to the first editor I ever used.
I know that I could obviously just create my own text editor, but I think it would be kind of fun to specifically recreate notepad of all things.
One man's spaghetti is another man's elegant architecture. Though sometimes it is spaghetti.
Having read the code and lots of other MS code, it is in pretty typical MS style, though a bit undercommented for the random few modules I looked at. I am sure there are probably a few bits in there that are hairy, but I think I will skip that code review today.
Do not mistake someone else's "style" for "ugly". It is an easy trap to fall into.
This is the standard way calculators have operated since forever. They don't evaluate a whole expression, they simply have an accumulator and you operate on it one step at a time.
It's probably a concession to skeuomorphism. Most physical calculators I've seen that look like the standard mode ignore PEMDAS as well, whereas pretty much all scientific calculators will respect PEMDAS.
If you took one of the people who still use old fashioned "standard" calculators and put them on a standard mode that followed PEMDAS, they'd probably be confused, at least initially. I've seen that a lot at cash registers in restaurants, dry cleaners, and other small businesses.
While it's basic math, you would be surprised at how many people have forgotten about PEMDAS. I would argue that the additive way comes more naturally to people. For people wanting to do more complex equations, the scientific mode does follow PEMDAS.
Personally I've always found relying on operator precedence uncomfortable and usually always add parentheses even when they are not strictly needed; this was the case even back in high school when we were doing calculations with then-fancy graphing calculators. I just find the extra degree of confidence worth the slight extra effort and noise. Maybe I'm just a belt-and-suspenders type of guy.
These days for actual physical calculators, and on phone, I've transitioned to RPN which neatly transcends such petty ambiguity problems.
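For the curious, RPN avoids the ambiguity because operator order is explicit in the input. A tiny stack-based sketch (an illustrative evaluator, not any particular calculator's implementation):

    def rpn(tokens):
        """Evaluate a list of numbers and operators in postfix (RPN) order."""
        stack = []
        ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "*": lambda a, b: a * b, "/": lambda a, b: a / b}
        for t in tokens:
            if t in ops:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[t](a, b))
            else:
                stack.append(t)
        return stack.pop()

    # 1 + 2 * 3 in postfix: push 2 and 3, multiply, then add 1; no precedence rules needed.
    print(rpn([1, 2, 3, "*", "+"]))   # 7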
I think it's just a matter of people adapting to the tools that are easily available. Almost 100% of the time I've seen someone use an actual basic tabletop calculator, they've been tallying prices and slapping on a sales tax. At that point it's just a learned ritual and they're not thinking about order of operations. If they needed parentheses, they'd just memorize that the parentheses go around the prices.
Once the computations get complex enough that you have to write down the formulas, then it makes a lot of sense to treat the computations as algebraic expressions, meaning PEMDAS.
Also, title needs (2019). There are some other HN articles from around that time discussing it: https://news.ycombinator.com/item?id=19321217