As a fun fact, I have a not-that-old math textbook (from a famous number theorist) that says that it is most likely that algorithms for adding/multiplying continued fractions do not exist. Then in 1972 Bill Gosper came along and proved that (in his own words) "Continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.", see https://perl.plover.com/yak/cftalk/INFO/gosper.txt.
I have been working on a Python library called reals (https://github.com/rubenvannieuwpoort/reals). The idea is that you should be able to use it as a drop-in replacement for the Decimal or Fraction type, and it should "just work" (it's very much a work-in-progress, though). It works by using the techniques described by Bill Gosper to manipulate continued fractions. I ran into the problems described on this page, and a lot more. Fun times.
No, all finite continued fractions express a rational number (for... obvious reasons), which is honestly kind of a disappointment, since arbitrary sequences of integers can, as a matter of principle, represent arbitrary computable numbers if you want them to. They're more powerful than finite positional representations, but fundamentally equivalent to simple fractions.
They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
Any irrational number x has an infinite continued fraction representation (rationals have finite ones). By efficient I mean that the information of the continued fraction coefficients is an efficient way to compute rational upper and lower bounds that approximate x well (they are the best rational approximations to x).
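To make "efficient" concrete, here's a rough Python sketch (my own illustration, not code from the library mentioned above): peel off continued fraction coefficients and fold them into convergents with the standard recurrence; the convergents are exactly those best rational approximations.

    from fractions import Fraction
    import math

    def convergents(x, n):
        # standard recurrence: p_i = a_i*p_{i-1} + p_{i-2}, same for q
        p0, p1, q0, q1 = 0, 1, 1, 0
        for _ in range(n):
            a = math.floor(x)
            p0, p1 = p1, a * p1 + p0
            q0, q1 = q1, a * q1 + q0
            yield Fraction(p1, q1)
            if x == a:
                break          # rational input: finite expansion
            x = 1 / (x - a)

    print(list(convergents(math.pi, 4)))   # 3, 22/7, 333/106, 355/113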
> They are occasionally convenient for certain problem structures but, as I'm sure you've already discovered, somewhat less convenient for a wide range of common problems.
I'm curious what you mean exactly. I've found them to be very convenient for evaluating arithmetic expressions (involving both rational and irrational numbers) to fairly high accuracy. They are not the most efficient solution for this, but their simplicity and not having to do error analysis is far better than any other purely numerical system.
> fundamentally equivalent to simple fractions.
This feels like it is a bit too reductionist. I can come up with a lot of examples, but it's quite hard to find the best rational approximations of a number with just fractions, while it's trivial with continued fractions. Likewise, a number like the golden ratio, e, or any algebraic number has a simple description in terms of continued fractions, while this is certainly not the case for normal fractions.
That continued fractions can be easily converted to normal fractions and vice versa, is a strength of continued fractions, not a weakness.
That's the issue, no? If you go infinite you can then express any real number. You can then actually represent all those whose sequence is equivalent to a computable function.
It is possible for pretty much all the numbers you could care about. I'm not claiming it is possible for all real numbers though (notice my wording with "express" and "represent"). In fact since this creates an equivalence between real numbers and functions on natural numbers, and not all functions are computable, it follows that some real numbers are not representable because they correspond to non-computable functions. Those that are representable are instead called computable numbers.
Naturally the person pursuing a PhD in number theory (whom I recruited to our team for specifically this reason) was unable to solve the problem and we finished in third place.
(It's not a good article when it comes to the attack details, unfortunately.)
- An eye-rolling, critical emotion - where they used up a valuable spot on the team to retain a person who ostensibly promises to specialize in exactly this type of problem, but instead they proved to be useless even in the one area they were supposed to deliver value.
- An emotion similar to that invoked by "c'est la vie". Sometimes this is resigned, sometimes this is playful, sometimes this is simply neutrally accepting reality.
Follow-up comments from the person who wrote it indicate they meant it in a playful sense of "c'est la vie", and indicated that the team found camaraderie and joy in teasing each other about it.
Sorry if this sounds a little bit like ChatGPT - I wrote it myself but at the point when one is explaining this kind of thing, it's difficult to not write like an alien or a robot.
To highlight a key point: “naturally” is slightly humorous because it implies that while the outcome was ironic, it should almost be expected that an ironic bad thing happens. In addition, it signals my opinion on such situations more generally, whereas “ironically” is a more straightforward description of what happened that would add less humor and signal less of my personality.
The older and longer paper of Defining Real Numbers as Oracles contains some exploration of these ideas in terms of continued fractions. In section 6, I explore the use of mediants to compute continued fractions, as inspired by the old paper Continued Fractions without Tears ( https://www.jstor.org/stable/2689627 ). I also explore a bit of Bill Gosper's arithmetic in Section 7.9.2. In there, I square the square root of 2 and the procedure, as far as I can tell, never settles down to give a result as you seem to indicate in another comment.
For fun, I am hoping to implement a version of some of these ideas in Julia at some point. I am glad to see a version in Python and I will no doubt draw inspiration from it and look forward to using it as a check on my work.
Similarly with decimals and Cauchy sequences, what is lurking around to make those useful is an interval. If I tell you the sequence consists of a trillion approximations to pi, to within 10^-20 precision, but I do not tell you anything about the tail of the sequence, then one has no information. The next term could easily be -10000. It is having that criterion about all the rest of the terms being within epsilon that matters and that, fundamentally, is an interval notion.
A good way to think about the framework, is that for any expression you can compute a rational lower and upper bound for the "true" real solution. With enough computation you can get them arbitrarily close, but when an intermediate result is not rational, you will never be able to compute the true solution (even if it happens to be rational; a good example is that for sqrt(2) * sqrt(2) you will only be able to get a solution of the form 2 ± ϵ for some arbitrarily small ϵ).
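To make that concrete, here's a rough sketch (not the library's actual machinery, just bisection with Fractions) of how rational bounds for sqrt(2)*sqrt(2) narrow toward 2 without ever collapsing to exactly 2:

    from fractions import Fraction

    lo, hi = Fraction(1), Fraction(2)      # rational bracket for sqrt(2)
    for _ in range(50):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    # lower/upper bounds for sqrt(2)*sqrt(2): a shrinking interval around 2
    print(float(lo * lo), float(hi * hi))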
The problem with that from a UX perspective is that you won't even get to write out the first digit of the solution because you can never decide whether it should be 1.999...999something (which truncates to 1.99) or 2.000...000something (which truncates to 2.00). This is a well-known peculiarity of "exact" real computation and is basically one especially relevant case of the 'Table-maker's dilemma' https://en.wikipedia.org/wiki/Rounding#Table-maker%27s_dilem...
The sales lady gave us a hard sell on their "complete package" which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I tried to ask if I could skip all that and just skip to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements trying to say I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
In the end, though, I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. And sure enough, the curriculum was mostly focused on learning basic Microsoft Office products, and the programming sections barely scraped the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
> However I wasn't sure how to improve further without help, so my mom found me an IT school.
This sounds interesting. What is an "IT school"? (What country? They didn't have these in mine.)

This literally brings rage to the fore. Downplaying a kid's accomplishments is the worst thing an educator could do, and marks her as evil.
I've often looked for examples of time travel, hints it is happening. I've looked at pictures of movie stars, to see if anyone today has traveled back in time to try to woo them. I've looked at markets, to see if someone is manipulating them in weird, unconventional ways.
I wonder how many cases of "random person punched another person in the head" and then "couldn't be found" are someone traveling back in time to slap this lady in the head.
So yeah, a kid well-versed in Office. My birthday invites were bad-ass, though. I remember I had one row in Excel per invited person with their data, placeholders in the Word document, and when printing it would make a unique page per row in Excel, so everyone got customized invites with their names. Probably spent longer setting it up than it would've taken to edit their names + print 10 times separately, but felt cool.
Luckily a teacher understood what I really wanted, and sent me home with a floppy disk with some template web-page with some small code I could edit in Notepad and see come to life.
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
I don't think anyone who worked on IEEE 754 (and certainly nobody who currently works on it) contemplated calculators as an application, because a calculator is solving a fundamentally different problem. In a calculator, you can spend 10-100 ms doing one operation and people won't mind. In the applications for which IEEE 754 is made, you are expecting to do billions or trillions of operations per second.
https://cray-history.net/2021/08/26/cray-floating-point-numb...
What? Pretty sure there's more precision in [0-1] than there is in really big numbers.
In the single precision floats for example there is no 0.000000000000000000000000000000000000000000002 it goes straight from 0.000000000000000000000000000000000000000000001 to 0.000000000000000000000000000000000000000000003
So that's not even one whole digit of precision.
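You can see that spacing directly by reinterpreting the three smallest float32 bit patterns (a quick Python check; assumes little-endian struct packing):

    import struct

    def f32(bits):
        # reinterpret a 32-bit pattern as an IEEE 754 single
        return struct.unpack("<f", struct.pack("<I", bits))[0]

    # the three smallest positive denormals, spaced 2**-149 ~ 1.4e-45 apart
    print(f32(1), f32(2), f32(3))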
Maybe the hardware focus can be blamed for the large exponents and small mantissas.
The only reasonable non-IEEE things that come to mind for me are:
- bfloat16 which just works with the most significant half of a float32.
- log8 which is almost all exponent.
I guess in both cases they are about getting more out of available memory bandwidth and the main operation is f32 + x * y -> f32 (ie multiply and accumulate into f32 result).
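For what it's worth, the "most significant half" description is literal. A rough Python sketch of the truncating conversion (real hardware usually adds round-to-nearest; this is the simplest form):

    import struct

    def f32_to_bf16_to_f32(x):
        # keep only the top 16 bits of the float32 pattern
        # (sign, 8 exponent bits, 7 mantissa bits), zeroing the rest
        (bits,) = struct.unpack("<I", struct.pack("<f", x))
        return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

    print(f32_to_bf16_to_f32(3.14159265))  # 3.140625: same range, less precision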
Maybe they will be (or already are) incorporated into IEEE standards though
This implies a strange way of defining what "beautiful" means in this context.
In real life, no instrument is going to give you a measurement with the 52 bits of precision a double can offer, and you are probably never going to get quantities in the 10^1000 range. No actuator is precise enough either. Even single precision is usually above what physical devices can work with. When drawing a pixel on screen, you don't need to know its position down to the subatomic level.
For these real life situations, improving on the usual IEEE 754 arithmetic would probably be better served with interval arithmetic. It would fail at maths, but in exchange you get support for measurement errors.
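A toy sketch of that idea (ignoring the outward rounding a real interval library must also do to stay sound):

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)
        def __mul__(self, o):
            ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
            return Interval(min(ps), max(ps))
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    r = Interval(0.999, 1.001)   # a length measured as 1 m, give or take 1 mm
    print(r * r)                 # the measurement error survives the squaring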
Of course, in a calculator, precision is important because you don't know if the user is working with real life quantities or is doing abstract maths.
Partially. It can be fine for pretty much any real-life use case. But many naive implementations of formulae involve some gnarly intermediates despite having fairly mundane inputs and outputs.
The issue isn't so much that a single calculation is slightly off, it's that many calculations together will be off by a lot at the end.
Is this stupid or..?
I can't see what would be worse. The entire raison d'etre for computers is to give accurate results. Introducing a math system which is inherently inaccurate to computers cuts against the whole reason they exist! Literally any other math solution seems like it would be better, so long as it produces accurate results.
You’re going to have a hard time doing better than floats with those constraints.
That's doing a lot of work. IEEE 754 does very well in terms of error vs representation size.
What system has accurate results? I don't know any number system at all in usage that 1) represents numbers with a fixed size 2) Can represent 1/n accurately for reasonable integers 3) can do exponents accurately
You can only have a result that's exact enough in your desired precision
You mean what power of ten to divide by?
I can see why you wouldn't necessarily just want to use it, but I thought the RIM pager had a JVM with floating point?
I mostly just used mine for email.
I am using, on Android, an emulator for the TI-89 calculator.
Because no Android app has half the features, and works as well.
But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps. Even the devs of the apps that are supposed to be snappy, like speedcrunch, seem to completely misunderstand the niche of a calculator, are they not using it themselves? Calculator is neither a CAS nor a REPL.
For Android in particular, I've only found two non-emulated calculators worth using for that, HiPER Calc and 10BA by Segitiga.Pro. And I'm not sure I can trust the correctness.
Plus of course not having to do even more arithmetic when one site gives me kilograms and another gives me ounces.
Random example off the top of my head to show off some features: say it takes 5 minutes to get to space, and I heard you come around every 90 minutes, but there's differing definitions on whether space is 80 or 100 km above the surface, then if you're curious about the G forces during launch:
(2*pi*(6350+ 90±10)km / 90 minutes) / 5 minutes to gees
((2 × pi × ((6350 + 90±10) kilometers)) / (90 minutes)) / (5 minutes) ≈
2.5470±0.0040 gees
(the output has color coding for units, constants, numbers, and operators)

It understands unicode plusminus for uncertainty tracking, units, function calls like log(n,base), find factors, it will do currencies too if you let it download a table from the internet... I love this software package. (No affiliation, just a happy user who discovered this way too late in life)
It's not as clever as WolframAlpha, no natural language parsing or Pokédex functions (sometimes I do wish that it knew things like Earth radii), but it also runs anywhere and never tells you the computation took too long and so was cancelled
Edit: I just learned there's now also an Android app! https://github.com/jherkenhoff/qalculate-android | https://f-droid.org/packages/com.jherkenhoff.qalculate/ I've checked before and there wasn't back then, so this is cool. This version says it supports graph plotting which the command-line version doesn't do
My only gripe with it is that it doesn't solve compounding return equations, but for that one can use an emulated HP-12c.
I still have my actual HP, but it seems to chew batteries now.
Performing arithmetic on arbitrarily complex mathematical functions is an interesting area of research but not useful to 99% of calculator users. People who want that functionality with use Wolfram Alpha/Mathematica, Matlab, some software library, or similar.
Most people using calculators are probably using them for budgeting, tax returns, DIY projects ("how much paint do I need?", etc), homework, calorie tracking, etc.
If I was building a calculator app -- especially if I had the resources of Google -- I would start with trying to get inside the mind of the average calculator user and figuring out their actual problems. E.g., perhaps most people just use standard 'napkin math', but struggle a bit with multi-step calculations.
> But for some reason the authors of calculator apps never optimize them for the number of keypresses, unlike Casio/TI/HP. It's a lost art. Even a simple operator repetition is a completely alien concept for new apps.
Yes, there's probably a lot of low-hanging fruit here.
The Android calculator story sounded like many products that came out of Google -- brilliant technical work, but some sort of weird disconnect with the needs of actual users.
(It's not like the researchers ignored users -- they did discuss UI needs in the paper. But everything was distant and theoretical -- at no point did I see any mention of the actual workflow of calculator users, the problems they solve, or the particular UI snags they struggle with.)
[1] - https://play.google.com/store/apps/details?id=com.algeo.alge...
Classic algebraic calculators are able to do things like:
57 x = (displays the result of 57x57)
3 = (repeats the multiplication and displays 57x3)
[+/-] MS (inverts the result and stores it in the memory without resetting the previous operation)
7 = (repeats the multiplication for 7 and displays 57x7)
7 [1/x] = (repeats the multiplication for 1/7, displays 57/7)
It doesn't have to be basic arithmetic, this way you can do complex numbers, trigonometry, stats, financial calculations, integrals, ODEs etc. Just have a way to juggle operands and inverse operators, and some quick registers/variables one keypress away (see the classic MS/MR mechanism or the stack in RPN). RPN calculators can often be more efficient, although at the cost of some entry barrier.

That's what you do with the classic calculators. Often, you are not even directly calculating things, you're augmenting your intuition and offloading a part of the problem to quickly explore its space in a few keypresses (what if?..), give a guesstimate, and do some sanity checks on whether you're in the right ballpark, all at the same time. Graphing, dimensional analysis in physics, error propagation help a lot in detecting bullshit in your estimates as quickly as possible. If you're also familiar with numerical methods, you can do miracles at the speed of thought. Slide rules were a lot like that as well.
People who do this might not be your target audience, though.
pkg install mplayer
cd /sdcard/Music
find . -type f | shuf | head -1 | xargs -d '\n' mplayer
(Or whatever command-line player you already have installed. I just tested with espeak that audio in Termux works for me out of the box and saw someone else mentioning mplayer as working for them in Termux: https://android.stackexchange.com/a/258228)

- It generates a list of all files in the current directory, one per line
- Shuffles the list
- Takes the top entry
- Gives it to mplayer as an argument/parameter
Repeat the last command to play another random song. For infinite play:
while true; do !!; done
(Where !! substitutes the last command, so run this after the find...mplayer line)

You can also stick these lines in a shell script, and I seem to remember you can have scripts as icons on your homescreen but I'm not super deep into Termux; it just seemed like a trivial problem to me, as in, small enough that piping like 3 commands does what you want for any size library with no specialised software needed
Marvis on iOS is pretty good at this. I use it to shuffle music with some rules ("low skip %, not added recently, not listened to recently")[0] and it always does a good job.
[0] Because "create playlist" is still broken in iOS Shortcuts, incredibly.
I have many thousands of mp3s on my phone in nested folders. PowerAmp has a "shuffle all" mode that handles them just fine, as well as other shuffle modes. I've never noticed it repeating a track before I do something to interrupt the shuffle.
Earlier versions (>~ 5 years ago) seemed to have trouble indexing over a few thousand tracks across the phone as a whole, but AFAIK that's been fixed for awhile now.
My personal favorite feature that I got addicted to back when I was using Amarok in KDE 3 was the ability to have a playlist and a queue that resumes to the playlist when exhausted. Then I can listen to an album in order, and then go back to shuffling my driving music playlist when that's done.
I’m with you in that I think shuffle should be a single list of all songs, played in a random order. But that requires maintaining state, detecting additions and updating the list, etc.
Years ago, a friend was adamant that shuffle should mean picking a random song from the list each time, without state, and if that means the same song plays five times in a row, well, that’s what random means.
You should be able to accomplish this with trivial amounts of state (as in, somewhere around 4 ints).
As an example, I'm envisioning something based on Fermat's little theorem -- determine some prime `p` strictly larger than the number of songs you have (N), then to determine the next song, use n := a*n mod p for a fixed choice of 1 < a < p, repeating as necessary as long as n > N. This should give you a deterministic permutation of the songs. When you get back to the first song you've played, you can choose to pick a new `a` for a new shuffle, or you can just keep that permutation.
If the list of songs changes, pick new a, p, and update n to be the new position of your current song (and update your notion of "first song of this permutation").
(Regarding why this works: you want {a} to be a generator for the multiplicative group formed by Z/pZ.)
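A toy Python sketch of this scheme (names and constants are mine, not any real player's code); the whole shuffle state is just (a, p, n):

    import random

    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

    def mult_order(a, p):
        k, x = 1, a % p
        while x != 1:
            x, k = (x * a) % p, k + 1
        return k

    N = 20                                          # number of songs
    p = next(m for m in range(N + 1, 2 * N + 10) if is_prime(m))
    a = next(g for g in range(2, p) if mult_order(g, p) == p - 1)  # a generator mod p
    n = random.randrange(1, p)                      # current position in the cycle

    def next_song():
        global n
        while True:
            n = (a * n) % p
            if n <= N:                              # skip values with no song
                return n

    print(sorted(next_song() for _ in range(N)))    # each of 1..20 exactly once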
Linear congruential generators have terrible properties if you care about the quality of your randomness, but if all you're doing is shuffling what order your songs play in, they're fine.
Say I have a 20 song list, and after listening to 15 I add five more. How does this approach only play the remaining 10 songs (5 that were remaining plus 5 new)?
It doesn't. If you add 5 more songs, then the algorithm as presented will just treat it as if you're starting a new shuffle.
If you genuinely need to keep track of all the songs you've already played and/or the songs that you have yet to play, then I'm not sure you can do much better than keeping a list of the desired play order, randomized via Fisher-Yates shuffle each time you want a new shuffled ordering -- new songs can be appended to said list and shuffled in with the as-yet-unplayed songs.
This has some obvious downsides (e.g. an empty slot that was skipped when played and filled by a later insert won't be played), but it handles both insertion and deletions without replaying songs and you only need to store a single integer.
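For the 20-songs-then-add-5 scenario upthread, a sketch of the list approach (random.shuffle is a Fisher-Yates implementation under the hood):

    import random

    order = list(range(20))
    random.shuffle(order)        # Fisher-Yates
    played = 15                  # the single integer of state: songs played so far

    def add_songs(new_ids):
        global order
        tail = order[played:] + list(new_ids)
        random.shuffle(tail)     # mix new songs in with the unplayed remainder
        order = order[:played] + tail

    add_songs([20, 21, 22, 23, 24])
    print(order[played:])        # the 5 remaining + 5 new, none repeated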
You probably shouldn't have quoted "detecting additions and updating the list, etc." then.
Many approaches that guarantee that property have pathological behavior if, say, you add a new song to your library after each song that you've played.
To me “shuffle” is a good metaphor because a shuffled deck of cards works a specific way (you’d be very surprised to draw the same card twice in a row!)
But these things are implemented by programmers who sometimes start with implementation (“random”) and work back to user experience. And, for a specific type of technical person, “with replacement” is exactly what they’d expect.
On the whole programmers given a source of random bytes and told to pick any of 227 songs at random using this data will take one byte, compute byte % 227 and then be astonished that now 29 of the songs are twice as likely as the others to be chosen†.
In a class of fifty my guess is you're lucky if one person asks whether the random bytes are cheap (and so they should just throw away any that aren't < 227) showing they know what "random" means and all the rest will at least attempt that naive solution even if some of them try it out and realise it's not good enough.
† As a bonus in some languages expect some solutions to never pick the first song, or never pick the last song.
> you're lucky if one person asks whether the random bytes are cheap (and so they should just throw away any that aren't < 227)
If you can't deal with the 10% overhead from rejection sampling (assuming your random bytes are uniform), I guess you could try mushing that entropy back into the rest of your bytestream, but yuck.
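For reference, the rejection-sampling fix is only a few lines (os.urandom here standing in for whatever entropy source you actually have):

    import os

    def unbiased_index(n):
        # n <= 256: draw a byte, discard anything >= n, so all n outcomes
        # stay equally likely (for n = 227, ~89% of bytes are accepted)
        while True:
            b = os.urandom(1)[0]
            if b < n:
                return b

    song = unbiased_index(227)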
In Rust this abuse would either "work" or panic telling you that er, that's not a coherent ordering so you need to stop doing that. Not certain whether the panic can only arise in debug builds (or whether it would detect this particular abuse, it's not specified whether you will panic only that you might if you don't provide a coherent ordering).
In C++ this is Undefined Behaviour and there's a fair chance you just introduced an RCE vulnerability into your codebase.
https://stackoverflow.com/questions/962802/is-it-correct-to-...
An example of this out in the wild: https://www.robweir.com/blog/2010/02/microsoft-random-browse...
Any info on how I can achieve this?
E.g. you have 4 items. You shuffle them to get a random permutation:
4 2 1 3
Note: these are not indices, but identifiers. Let's say you go through the first two items:
4 2 <you're here> 1 3
And two new items arrive. You insert each item into a random position among the remaining items. E.g:
4 2 <you're here> 5 1 6 3
If items are to be deleted, there are two cases: either they have already been visited, in which case there's nothing to do, or they're in the remaining list, in which case you have to delete them from there.
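In code, the scheme above is just an insert bounded below by the play position (a sketch; these are identifiers, not indices):

    import random

    playlist = [4, 2, 1, 3]
    pos = 2                      # everything before pos has been played

    def add_item(item):
        # any slot from the current position to the end, inclusive
        playlist.insert(random.randint(pos, len(playlist)), item)

    def delete_item(item):
        if item in playlist[pos:]:    # already-played items need no action
            playlist.remove(item)

    add_item(5); add_item(6)
    print(playlist)              # e.g. [4, 2, 5, 1, 6, 3]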
> You insert each item into a random position among the remaining items
Thinking about shuffle + adding, I would have thought "even if it's added to a past position", e.g.
`5 4 6 21 3` as valid.
What do folks expect out of shuffle when it reaches the end? A new shuffle, or repeat with the same permutation?
I don’t think that provides a totally clear answer to “what happens at the end”, but for me it’d lean me towards “a new shuffle”, because for me most of the time a shuffled deck of cards draws its last card, the deck will be shuffled again before drawing new cards.
I tried to make a joystick controller for a particular use case on one platform (Linux) and I gave up.
VLC solves a hard problem. Supporting lots of different libs, versions, platforms, hardware and on top of that licensing issues.
https://github.com/vanilla-music/vanilla >Note: As of 23. Jun 2024, Vanilla Music is no longer available in the Google Play store: I simply don't have time to comply with random policy changes and verification requests. Any release you see there is probably an ad-infested fork uploaded by someone else.
To create and add to one: long-press on a file/folder/track/album for the context menu or use the ... menu while in the now playing screen.
Edit: I checked I can also shuffle a folder without adding it to the library.
I've been working on it for what will be a decade later this year. It tries to take all the features you had on these physical calculators, but present them in a modern way. It works on macOS, iOS, and iPad OS
With regards to the article, I wasn't quite as sophisticated as that. I do track rationals, exponents, square roots, and multiples of pi; then fall back to decimal when needed. This part is open source, though!
Marketing page - https://jacobdoescode.com/technicalc
AppStore Link - https://apps.apple.com/gb/app/technicalc-calculator/id150496...
Open source components - https://github.com/jacobp100/technicalc-core
Where I am standing, that never happened, but that would require that a simply staggering number of people be classified as unreasonable.
https://play.google.com/store/apps/details?id=uk.co.nickfine...
e^100 + 1 - e^100
Built-in Android calculator does.
They are incomparable. TI-89 has tons of features, but can't take a square root to high accuracy.
But, I agree I almost never want the full power of Mathematica/sage initially but quickly become annoyed with calc apps. The 89 and HP Prime/50 have just enough to solve anything where I wouldn't rather just use a full programming language.
Thanks for the heads up, I will be testing it for a few months, to see if it can replace the TI-89 emulator as my main calculator.
Edit: that calculator gives a result of 0 on this test
https://en.wikipedia.org/wiki/Derive_(computer_algebra_syste...
Edit: and Maxima as well on the mac (to back up another user's comment)
It is a bit stronger than that. Almost all numbers cannot be practically expressed, and it may even be that the probability of a random number being theoretically indescribable is about 100%, depending on what counts as a number.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
The best (and most educational) expression of that angst that I know: https://mathwithbaddrawings.com/2016/12/28/why-the-number-li....
EDIT: let me guess - there is a proof, and it's probably a flavor of the diagonal argument, right?
For individual real numbers— There are of course provably uncomputable ones. Chaitin’s constant is the poster child of these, but you could just take a map of (number of Turing machine in any numbering of all of them) to (terminates or not) and call that a binary fraction. (This is actually not far away from Chaitin’s constant, but the actual one is reweighted a bit to make it more meaningful.) Are there unprovably uncomputable ones? At a guess I’d say so, but I’m not good enough to give a construction offhand.
[1] A countable union of (finite or) countable sets is countable. Rahzrengr gur havba nf sbyybjf: svefg vgrz bs svefg frg; frpbaq vgrz bs svefg frg, svefg vgrz bs frpbaq frg; guveq vgrz bs svefg frg, frpbaq vgrz bs frpbaq frg, svefg vgrz bs guveq frg; rgp. Vg’f snveyl boivbhf gung guvf jbexf, ohg vs lbh jnag gb jevgr gur vairefr znccvat rkcyvpvgyl lbh pna qenj guvf nf n yvar guebhtu gur vagrtre cbvagf bs n dhnqenag.
You lost me here.
Typically, since pre-WWW UseNet days it's been used as a standard "no-spoiler" technique so that those who don't want to see a movie twist, puzzle answer, etc don't accidently eyeball scan the give away.
BTW, you're welcome, glad I could help.
If hypercomputation is possible, then there might be a way to express some of those uncomputable numbers. They just won't be possible with an ordinary Turing machine.
(If description is all you need, then it's already possible to describe some uncomputable numbers like Chaitin's constant. But you can't reliably list its digits on an ordinary computer.)
As for the other interpretation, "have we conclusively proven we can't reach them with an ordinary computer", IIRC, the proof that there are infinitely many uncomputable numbers is as follows: Consider a finitely large program that, when run, outputs the number in question. This program can be encoded as an integer - just read its (binary or source) bytes as a very large base-256 number. Since the set of possible programs is no larger than the set of integers, it's (at most) countably infinite. However, the real numbers are uncountably infinite. Thus a real number is almost never computable.
Turing machine 1
Turing machine 2
Turing machine 3
...
Now construct a new Turing machine that produces a new number in which the first digit is the first digit of Turing machine 1, the second is the second digit of Turing machine 2, etc. Now add 1 (with wrap-around) to each digit.
This will generate a new number that cannot be generated by any of the existing Turing machines.
The bug with this argument (as ChatGPT pointed out) is that because of the halting problem, we cannot guarantee that any specific Turing machine will halt, so the constructed machine may wait forever for a digit, and thus cannot actually compute a number.
That's certainly true, but all numbers that can be entered on a calculator can be expressed (for example, by the button sequence entered in the calculator). The calculator app can't help with the numbers that can't be practically expressed, it just needs to accurately approximate the ones that can.
You're correct that the use of the calculator means we're talking about computable numbers, so that's nice - almost all Reals are non-computable but we ruled those out because we're using a calculator. However just because our results are Computable doesn't let us off the hook. There's a difference between knowing the answer is exactly 40 and knowing only that you've computed a few hundred decimal places after 40 and so far they're all zero, maybe the next one won't be.
I would guess that if you pulled in a random sample of 2000 users of pocket calculators and surveyed their use cases you would find a grand total of 0 of them in which the cost function evaluated on a hundredth-decimal place error is at all meaningful.
In other words, no, that difference is not meaningful to a user of a pocket calculator.
Otherwise I can't figure out what you mean.
e ** -10 is about 0.000045399929 and presumably you agree that's not zero
e ** -100 is about 3.72 times 10 ** -44, is that still not zero? An IEEE single precision floating point number has a non-zero representation for this, although it is a denormal meaning there's not very much precision left...
e ** -1000 is about 5.075 times 10 ** -435 and so it won't fit in the IEEE single or double precision types. So they both call this zero. Is it zero?
If you take the naive approach you've described, the answer apparently is yes, non-zero numbers are zero. Huh.
[Edited, fixed asterisks]
And for the record, since we're talking about hundred digit numbers, as an IEEE float that would mean 23 exponent bits and you'd have to go below 10e-4000000 before it rounds to zero. Or 32 exponent bits if you follow some previous software implementations.
Um, no. Have you confused your not-at-all hypothetical self? Are you mistaking the significand, aka the mantissa for an exponent? The significand in a 32-bit "single precision" IEEE float is 23 bits (with an implied leading 1 bit for normal values)
When I wrote that example I of course tried this, since it so happens that I have been testing some conversions recently so...
>> (e -1000)
f32: 0
f64: 0
real ~5.07595889754945676529180947957434e-435
That's the first thing I said in this conversation. I did not ever suggest that single precision was enough. A hundred digits is beyond octuple precision. Octuple has 19 exponent bits, and in general every step up adds 4 more.
And going further up the comment chain, the original version was your mention of computing 40 followed by hundreds of digits of precision.
I don't think it matters on a practical level--it's not like the cure for cancer is embedded in an inexpressible number (because the cure to cancer has to be a computable number, otherwise, we couldn't actually cure cancer).
But does it matter from a theoretical/math perspective? Are there some theorems or proofs that we cannot access because of inexpressible numbers?
[Forgive my ignorance--I'm just a dumb programmer.]
Personally, I do wonder sometimes if real-world physical processes can involve uncomputable numbers. Can an object be placed X units away from some point, where X is an uncomputable number? The implications would be really interesting, no matter whether the answer is yes or no.
Non-discrete real-number-based Fractals are a beautiful visual version of this.
A common rebuke is that the construction of the 'real numbers' is so overwrought that most of them have no real claim to 'existing' at all.
For instance, 1/(4*atan(1/5)-atan(1/239)-pi/4) outputs "Can't calculate".
Well alright, this is a division by zero (the denominator vanishes by Machin's formula). But then you can try 1/(4*atan(1/5)-atan(1/239)-pi/4+10^(-100000)), and the output is still "Can't calculate" even though it should really be 10^100000.
It does solve some real problems that I'd love to have available in a library. The discussion on the previous article links to some libraries, but my recollection is that the calculator code is more accessible to an innumerate person like myself.
Edit: the previous article under discussion doesn't seem to be available, but it's on archive.org[2].
[1] https://news.ycombinator.com/item?id=24700705
[2] https://web.archive.org/web/20250126130328/https://blog.acol...
For the curious, here is the source of ExactCalculator from the commit before all files were deleted: https://android.googlesource.com/platform/packages/apps/Exac..., and here is the dependency CR.java https://android.googlesource.com/platform/external/crcalc/+/...
Conal gave a beautiful case for how comp sci should be about pursuing truth like that, and not just learning the latest commercial tool. I see the same dogged pursuit of true, accurate representation in this beautiful story.
- https://www.typetheoryforall.com/episodes/the-lost-elegance-...
- https://www.typetheoryforall.com/episodes/denotational-desig...
I think the general idea of converting things from discrete and implementation-motivated representations to higher-level abstract descriptions (bitmaps to vectors, in your example) is great. It's actually something I'm very interested in, since the higher-level representations are usually much easier to do interesting transformations to. (Another example is going from meshes to SDFs for 3D models.)
You might get a kick out of the "responsive pixel art" HN post from 2015 which implements this idea in a unique way: https://news.ycombinator.com/item?id=11253649
I hated reading this buzzfeedy style (or apparently LinkedIn-style?) moron-vomit.
I shouldn't complain, just ask my nearest LLM to rewrite this article^W scribbling to a less obnoxious form of writing..
The article mentions that a CAS is an order of magnitude (or more!) more complex than the bifurcated rational + RRA approach, as well as slower, but: the complexity would be solved by adapting an open source solution, and the computation speed wouldn't seem to matter on a device like an Android smartphone. My HP Prime in CAS mode runs at 400MHz and solves every problem the Android calculator solves with no perceptible delay.
Is it a matter of NIH? A legal issue with the 3-clause BSD license I don't understand? Reducing binary size? The available CAS weren't up to snuff for one reason or another? Some other technical issue? Or, if not that, why not use binary-coded decimal?
These are just questions, not criticisms. I have very very little experience in the problem domain and am curious about the answers :)
I have yet to see a good to-do list tool.
I'm not kidding. I tried TickTick, Notion, Workflowy ... everything I tried so far feels cumbersome compared to how I would like to handle my To-Do list. The way you create, edit, browse, drag+drop items is not as all as fluid as I imagine it.
So if anyone knows a good To-Do list software (must be web based, so I can use it anywhere without installing something) - let me know!
They are extremely personal and any unwanted features end up as friction.
You'll never find a perfect Todo app because it will have an audience of 1 so wouldn't be made.
Other examples of Todo apps:
Things, 2Do, Todoist, OmniFocus, Due, Reminders (Apple), Clear, GoodTask, Notes, Google Keep
The list is literally neverending.
If I don't find such a software, I will write it myself. I actually already started:
https://x.com/marekgibney/status/1844077244903571549
I am developing it on the side, while I try to get by with existing solutions.
So your "settings" asking the user to design their own app!
That's the developer's job!
That's a "feature" that makes it more annoying for your first time user, which probably puts off a decent proportion of them.
The out-of-the box experience is what most people will use - they will not dive into endless settings and config
(ignoring the insane dev cost of supporting every possible feature combination)
Like others have said, the perfect to-do list is impossible because each person wants wildly different functionality.
My dream to-do list has minimal interaction, with the details handled like I have my own personal secretary. All I'd do is verbally say something like "remind me to do laundry later" and it would do the rest: Categorizing, organizing, prioritizing, scheduling and adding sub-tasks as needed.
I love the idea of automatic sub-tasks created at a level which helps with your particular procrastination level. For example "do laundry" would add in "gather clothes, bring to laundry room, separate colors, add to washer, set timer, add to dryer, set timer, get clothes, fold clothes, put away, reschedule in a week (but hide until then)". Maybe it even adds in Pomodoro timers to help.
LLMs with reasoning might get us there soon - we've been waiting for Knowledge Navigator like assistants for years.
What I would like is a very minimal layout. Basically with nothing on the screen. And I want to be able to organize my world by dragging, dropping, swiping recursive items.
The hard part is altering the routine.
Similar to my thoughts about Trello:
When I create an item "Supermarket" and then an item "Bread", I cannot drag and drop the item "Bread" into "Supermarket". But that is how I think. I have a lot of "items" and each item can contain other "items". I don't want any other type of object.
Another problem is that I cannot customize the layout. I can't remove every icon from the items in the list. I only want to see the item names, no other info like the icon that shows that there is a description or anything. But Trello seems to not support that.
https://chachatelier.fr/chalk/chalk-home.php
I tried to explain what was going on https://chachatelier.fr/chalk/article/chalk.html, but it's not a very popular topic :-)
It often pays off to revisit what the actual “why” is behind the work that you’re doing, and this story is a delightful example.
I wrote an arbitrary precision arithmetic C++ library back in the 90’s. We used it to compute key pairs for our then new elliptic-curve based software authentication/authorization system. I think the full cracks of the software were available in less than two weeks, but it was definitely a fun aside and waaaay too strong of a solution to a specific problem. I was young and stupid… now I’m old and stupid, so I’d just find an existing tool chain to solve the problem.
I played around with the macOS calculator and discovered that the dividing line seems to be at 1e33. I.e. 1e33+1-1e33 gives the correct answer of 1 but 1e34+1-1e34 gives 0. Not sure what to make of that.
That’s not too bad. They are probably using hand-rolled FP128 format for their numbers. If they were using hardware-provided FP64 arithmetic, the threshold would have been 2^53 ≈ 9E+15: https://en.wikipedia.org/wiki/Double-precision_floating-poin...
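Easy to check where plain hardware doubles give out (Python floats are IEEE 754 binary64):

    # integers are exact in binary64 only up to 2**53 ~ 9.007e15
    print(1e15 + 1 - 1e15)   # 1.0
    print(1e16 + 1 - 1e16)   # 0.0  (1e16 + 1 rounds back to 1e16)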
Now it seems to be revived as there were some updates to it, but those also removed one of my favourite features -> tapping equals button no longer repeats the last operation.
The real fun begins when you do geometry.
Find a representation in finite memory for points, which allows exact addition, multiplication and rotation between them (with all the nice standard math properties like associativity and commutativity).
For example your representation should be able to take a 2d point A, aka two coordinates, and rotate it around the origin by an angle theta to obtain the point B. Take the original point and rotate it by pi + theta, then reflect it around the origin to obtain the point C. Now answer the question whether B is coincident with C.
ar = Sqrt[ax^2 + ay^2]; atheta = ArcTan[ax, ay];
br = ar; btheta = atheta + theta;
cr = ar; ctheta = atheta + Pi + theta + Pi;
bx = br Sin[btheta]; by = br Cos[btheta];
cx = cr Sin[ctheta]; cy = cr Cos[ctheta];
In[2]:= bx == cx
Out[2]= True
In[3]:= by == cy
Out[3]= True
This seems so elementary that I think open source computer algebra systems can do it.

Typically one would like to be able to calculate things without making errors, which accumulate.
The symbolic representation you suggest uses a growing memory to represent the point by all the operations which have been applied to it since the origin.
What we would rather do is define a set of operations that are closed for a specific set of points, which allows us to accumulate information by doing the computation rather than deferring it.
One could for example think of using fixed point numbers to represent the coordinates, and define an extra point at infinity to handle overflow. And then you have some properties that you like and some that you like less. For example minimum distances, which can define a point uniquely in continuous R^2, are no longer unique when you constrain yourself to integer grids by using fixed points.
Or you could use some rational numbers to store the coordinates like in CGAL (which allows you to know on which sides of the planes you are without z-fighting), but they still require growing memory. You can maybe add some rule to handle the underflow and overflows.
Or you can merge close points, but maybe you lose some information.
Or you can define the operations on lattices, finite automaton, or do some error correcting codes, dynamic recombining graphs (aka the ruliad).
It's an open problem, see https://en.wikipedia.org/wiki/Robust_geometric_computation for more.
Anyway, I wrote a program where you could enter an equation and it would draw an ASCII graph of the curve. I didn't know how to parse expressions and even if I had I knew it would be slow. The machine had a cassette tape under computer control for storing and loading programs. What I did was to take the expression typed by the user and convert each one into its tokenized form and write it out to tape. The program would then load that just created overlay which contained something like "1000 DEF FNY(X)=X^2-5" and a FOR loop would sweep X over the designated range, and have "LET Y=FNY(X)" to evaluate the expression for me.
As a result, after entering the equation, it would take about five seconds to write out the overlay, rewind a couple blocks, and load the overlay before it would start to plot. But once it started it went pretty fast.
I wouldn't expect, or use, a calculator for any calculation requiring more accuracy than the number of digits it can display. I'm OK with with iPhone's 10^100 + 1 = 1e100.
If I really needed something better, I'd try Wolfram Alpha.
As a developer, "infinite scroll to get more digits" sounds really cool. It sounds conceptually similar to lazily-evaluated sequences in languages like Clojure and Haskell (where you can have a 'virtually-infinite' list or array -- basically a function -- and can access arbitrarily large indices).
As a user, it sounds like an annoying interface. On the rare case I want to compute e^(-10000), I do not want to scroll for 3 minutes through screens filled with 0s to find the significant digits.
Furthermore, it's not very usable. A key question in this scenario would be: how many zeroes were there?
It's basically impossible to tell with this UI. A better approach is simply to switch to scientific notation for very large or very small numbers, and leave decimal expansion as an extra option for users who need it. (Roughly similar to what Wolfram Alpha gives you for certain expressions.)
One of the first ideas I had for an app was a calculator that represented digits like shown in the article but allowed you to write them with variables and toggle between symbolic and actual responses.
A use case would be: in a spreadsheet like interface you could verify if the operations produced the final equation you were modeling in order to help validate if the number was correct or not. I had a TI-89 that could do something close and even in 2006 that was not exactly brand new tech. I figured surely some open source library available on the desktop must get me close. I was wildly wrong. I stuck with programming but abandoned the calculator idea. Even nearly 20 years later, such a task doesn’t seem that much easier to me.
Per the article, it's completely possible. Frankly I'd say they found the obvious solution, the one that any decent programmer would find for that problem.
That statement seems to belittle the amount of effort and thought described in the article. And wildly contradicts my experience.
> 1 is not equal to 1 - e^(-e^1000). But for Richardson and Fitch's algorithm to detect that, it would require more steps than there are atoms in the universe.
> They needed something faster.
I'm disappointed after this paragraph I expected a better algorithm and instead they decided to give up. Fredrik Johansson in his paper "Calcium: computing in exact real and complex fields" gives a partial algorithm for the problem and writes "Algorithm 2 is inspired by Richardson’s algorithm, but incomplete: it will find logarithmic and exponential relations, but only if the extension tower is flattened (in other words, we must avoid extensions such as e^log(z) or √z^2), and it does not handle all algebraic functions. Much like the Risch algorithm, Richardson’s algorithm has apparently never been implemented fully. We presume that Mathematica and Maple use similar heuristics to ours, but the details are not documented [6], and we do not know to what extent True/False answers are backed up by a rigorous certification in those system".
1. I don't have problems like the iOS problem documented here. This requires me to know the difference between an int and a float, but Python's ints have unbounded precision (except if you overflow your entire memory), so that kind of precision loss isn't a big deal.

2. History is a lot better. Being able to scroll back seems like a thing calculators ought to offer you, but they don't.

3. In the 1-in-a-hundred times I need to repeat operations on the calculator, hey, we've already got loops, this is Python.

4. Every math feature in the Windows default calculator is available in the math library.

5. As bad as Python's performance reputation is, it's not at all going to be noticeable for simple math.
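For instance, the headline example from the article is a non-issue in a Python session, as long as you stay in ints:

    >>> 10**100 + 1 - 10**100      # unbounded-precision ints
    1
    >>> 1e100 + 1 - 1e100          # floats: the usual rounding loss
    0.0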
Sorry, why is this obvious? A basic int type can store the value of 1, let alone the more complicated Rational (BigNum/BigNum) type they have. I can absolutely see why you want symbolic representations for pi, e, i, trig functions, etc., but why one?!
In Polish, rational numbers are called something more like "measurable" numbers, and in my opinion that's the last kind of number that is expressed in reality in any way. Those should be called "real", and the reals should be called something like "abstract" or "limiting", because they first pop up as limits of some process working on rational numbers for an infinite number of steps.
Don't get me wrong, the content is good and informative. But I just hate the format.
That reminds me when SideFX started putting memes into their official tutorial youtube channel. At least this is just a webpage and we can scroll through them...
Try the page down key.
Safari
> Typically scroll jacking is when you hook on scroll to forcefully scroll the page to something, but that's not happening here.
That's literally what's happening here. Open the web inspector, and set a breakpoint on the scroll event.
> Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.
The last sentence is: "(Also I decided to try writing this thread in the style of a linkedin influencer lol, sorry about that.)"
And I don't mind at all. Without this article, I probably will never know what's in the paper and how they iterated. I'll likely give up after reading the abstract -- "oh, they solved a problem". But this article actually makes much more motivating to read the original paper, which I plan to do now.
I've taken multiple numerical analysis courses, including at the graduate level.
The only thing I've learnt was: be afraid, very afraid.
HP scientific calculators go back to the 60's and can presumably add 0.6 to 3 without adding small values to the 20th significant digit.
But
π−π = 0
I think I understand why, from the article, but wouldn't it be "easy" (probably not, but curious about why) to simplify the first expression to (1-1)π + 1 then 0π + 1 and finally just 1 before calculating a result?
That’s pretty much useless in modern world because the whole x87 FPU is deprecated. Modern compilers are generating SSE1 and SSE2 instructions for floating-point arithmetic, instead of x87.
Because it's such a difficult problem to solve that it required elite coders and Masters/PhD level knowledge to even make an attempt?
[Apple Finally Plans To Release a Calculator App for iPad Later This Year](https://www.macrumors.com/2024/04/23/calculator-app-for-ipad...)
… and this is why interval arithmetic and arbitrary-precision methods exist: they give guaranteed bounds on error instead of just hoping fp rounding doesn't mess things up too badly. But obviously those come with their own overhead: interval methods can be overly conservative, which leads to unnecessary precision loss, and arbitrary precision is computationally expensive, scaling non-linearly with operand size.

Wonder if hybrid approaches could be the move, like symbolic preprocessing to maintain exact forms where possible, then constrained numerical evaluation only when necessary. Could optimize tradeoffs dynamically, so we'd keep things efficient while minimizing precision loss in critical operations. Especially useful in contexts where precision requirements shift in real time. Might even be interesting to explore adaptive precision techniques (where computations start at lower precision but refine iteratively based on error estimates).
Seems like Apple got lazy with their calculator; they didn't even realize it had so many flaws... Math Notes is pretty cool though.
The link in the paper to their Java implementation is now broken: does anyone have a current link?
Now it does the running ticker tape thing, which means you can't use the AC button to quickly start over, because there is no AC button anymore!
I know it's supposed to be easier/better for the user, but they didn't even give me a way to go back to the old behavior.
while (Math.abs(term) > tolerance) {
term = ...;
sum += term;
}
return sum * 4;
Wouldn’t that return a value where the error of the result is 4x the requested tolerance?

https://www.pacifict.com/story/
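Right: for an alternating series like Leibniz's, the truncation error is bounded by the first omitted term, and the final multiply by 4 scales it by 4, so you'd want to test against tolerance/4. A Python sketch of the corrected loop (my reconstruction, assuming the elided term is the Leibniz series for pi/4):

    import math

    def leibniz_pi(tol):
        total, k, term = 0.0, 0, 1.0
        while abs(term) > tol / 4:        # so the *final* error stays under tol
            term = (-1) ** k / (2 * k + 1)
            total += term
            k += 1
        return 4 * total

    print(abs(leibniz_pi(1e-4) - math.pi) < 1e-4)   # True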
This is different from what the post (and linked paper) discuss, where the result will degrade to recursive real arithmetic, which is correct but only to a bounded level of precision. A CAS will always give a fully-exact (although sometimes very unwieldy) answer.
[1] See page 87 here: https://sites.science.oregonstate.edu/math/home/programs/und...
I.e. it's more likely that I've made a few mm mistake when measuring the radius of my table than that I'm not using a precise enough version of pi. The area of the table will have more error because one is squaring the radius, obviously.
It would be interesting to have a calculator that let you add in your estimated measurement error (or made a few reasonable guesses about it for you) and told you the error in your result e.g. the standard deviation.
I sometimes want to buy stuff at a hardware shop and I think : "how much paint do I need to buy?" I haven't planned so I'm thinking "it's about 4m by 5m...I think?" I try to do a couple of calculations with worst case numbers so I at least get enough paint and save another trip to the shop but not comically too much so that I have a tin of it for the next 5 years.
I remember having to estimate error in results that were calculated from measured values for physics 101 and it was a pain.
Why would we not expect it to work when we know how to build ones that do, and people use it as a replacement for math on paper?
E.g. I measure an angle and I am not sure whether it's 45 degrees or 46; then an answer like this is pointless: 0.7071067811865476
cos of 46 (if I've converted properly to radians) is 0.6946583704589973
so my error is about 0.01 and those long lists of digits imply a precision I don't have.
I think it would be more useful for most people to tell them how much error there is in their results after guessing or letting them assign the estimated error in their inputs.
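That kind of report is cheap to produce. For the 45-vs-46-degree example, evaluating at the interval endpoints is enough (valid here because cos is monotonic on that range); a sketch:

    import math

    lo, hi = math.radians(45), math.radians(46)
    a, b = math.cos(hi), math.cos(lo)       # cos is decreasing here, so a < b
    mid, err = (a + b) / 2, (b - a) / 2
    print(f"{mid:.3f} ± {err:.3f}")         # 0.701 ± 0.006, not 16 digits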
Examples include finance and resource accounting. Mathematical proof (yes sometimes they involve numbers), etc.
Even in engineering and carpentry it’s not true. The design process is idealization, without real world measurements. It’s conceptually useful for precise numbers to sum properly on paper. For example it’s common to divide lengths into fractional amounts which are expected to sum to a whole.
> tell them how much error there is
But once again, we know how to build calculators that do most calculations with 0 error. So why are we planning for an estimation problem we don’t have?
Read the article. Yes, if you want to output sqrt(2) in decimal form, it will be an approximation. But you can present it as sqrt(2).
We accepted a lack of perfection from calculators long ago. I cannot think of a use case which needs this from anyone I know. Perhaps some limited number of people out there really need a calculator that can do these things, but I suspect that even if they do, there's a good chance they don't know it can handle that sort of issue.
I have more trouble with the positions of the buttons in the UI than with sums that don't work out as expected. The effort to get buttons right seems far less to me.
I can think of useful things I'd like when I'm doing real-world work which this feature doesn't address at all, and I wonder why such an emphasis was put on something which isn't really that transformational.
I understand that if you use American units you might be calculating things in fractions of an inch, but since I've never had to use those units, it's never been necessary to do that sort of calculation. I suppose if that helps someone then yay, but I can only sympathise to an extent.
Where I have problems is with things that aren't precise - where the bit of wood that I cut turns out a millimetre too short and ends up being useless.
But I view math as more of a string manipulation function with position-dependent mapping behavior per character and dependency graphs, combined with several special functions that form the universal constants.
Just because data is stored digitally as 1s and 0s, don't forget it's really more like charged and not charged. Computers are not numeric systems, they are binary systems. Not the same thing.
Unlike software engineers, who have already studied IEEE 754 numbers, you can't expect a middle school student to know concepts like catastrophic cancellation. But a middle school student does want to poke around with trigonometric functions and pi to study their properties, and a true computer algebra system might not be available to them. They might not understand that a random calculator app misbehaves because it's not using the same kind of numbers discussed in their math class.
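For anyone who hasn't met catastrophic cancellation, a two-line demo (Python doubles, but any IEEE 754 core behaves the same):

    import math

    x = 1e-8
    naive = 1.0 - math.cos(x)            # the leading digits cancel: prints 0.0
    stable = 2.0 * math.sin(x / 2) ** 2  # same value via 1 - cos(x) = 2*sin^2(x/2)
    print(naive, stable)                 # 0.0 5e-17 (approximately)

Exactly the kind of thing a student poking at trig functions will hit, with no way to tell which answer to trust.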
Or you can do what the Windows 11 calculator does and not even get 1+2*3 right.
But the scientific mode does it correctly, where it just appends the new expression onto the buffer instead of applying each operation immediately.
I'm also reading this on an iPhone, but I don't remember seeing anything that looked like, well... what you said.
crag 'say (10**100) + 1 - (10**100)' # 1
Raku uses Rats by default (Rational numbers) unless you ask for floating point.
If we have a "Big Integer" type which can represent arbitrarily huge integers, such 10 to the power 5000, we can use two of these to make a Big Rational, and so that's what bc has.
But the rationals aren't enough for all the features on your calculator. What's the square root of ten? How about the square root of 40? Now multiply those together. The correct answer is 20. Not 20.00000000000000001, but exactly 20.
>>> sqrt(10) * sqrt(10)
9.99999999999999999994
Android calculator, on the other hand, gets this one right.
Yes, GP is entirely correct. I want to do something like the article, but the bc standard (POSIX) requires a decimal BigInteger representation.
I am glad you like my bc!
Amusingly, one of the things I liked in bc was that I could write stuff like sqrt(10) * sqrt(40) and it works -- but even the more-bc-like command-line toy I wrote for my own use doesn't do this. It turns out a few months of writing the guts of a computable reals implementation makes (* (sqrt 10) (sqrt 40)) seem like a completely reasonable way to write what I meant, and so "Make it work like bc" faded from "Important" to "Eh, whatever, I'll get to it later".
If you'd asked me a year ago whether "fix edge case bugs in converting realistic::Real to f64" would happen before "have natural expressions like 1 + 2 * 3 do what is expected", I'd have said not a chance, but it shows how much I knew.
> They realized that it's not the end of the world if they show "0.000000..." in a case where the answer is exactly 0
so... the devs made a requirement for themselves, got into trouble (complexity), removed the requirement, and the trouble didn't go anywhere
just keep saying "it's a win" and you'll be winning, I guess
No parens or anything like that, nothing nearly so fancy. Classic desk calculator where you set the infix operation to apply to the previous value, followed by the second value of the operation.
It was frankly an unexpected challenge. There's a lot more to it than meets the eye.
I only got as far as rational numbers, though. Pi accurate to the 8-digit display was good enough for me.
Honestly though, I think it was a great exercise for students, showing how seemingly simple tasks can actually be more complex than they seem. I'm still here thinking about it some twenty years later.
> We no longer receive bug reports about inaccurate results, as we occasionally did for the 2014 floating-point-based calculator
(with a footnote: This excludes reports from one or two bugs that have now been fixed for many months. Unfortunately, we continue to receive complaints about incorrect results, mostly for two reasons. Users often do not understand the difference between degrees and radians. Second, there is no standard way to parse calculator expressions. 1 + 10% is 0.11. 10% is 0.1. What’s 10% + 10%?)
When you have 3 billion users, I can imagine that getting rid of bugs that only affect 0.001% of your userbase is still worthwhile and probably pays for itself in reduced support costs.
I expected 1.1 (which is what my iOS calculator reported, when I got curious).
I do understand the question of parsing. I just struggle to understand why the first one is confidently stated to correctly result in a particular answer. It feels like a perfect example itself of a problem with unclear parsing.
I know adding % has multiple conventions, but this one seems odd. I'd interpret 1 + 10% as "one plus 10 percent of one", which is 1.1, or as 1 + 10/100, which happens to also be 1.1 here.
The only interpretation that'd make it 0.11 is if it represents 1% + 10%, but then the question of 10% + 10% is answered: 0.2 or 20%. Or maybe there's a typo and it was supposed to say "0.1 + 10%"
(1+10)%
Which is 11% or 0.11
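Spelled out, the two parses being argued about (hypothetical helper names, just to show the conventions):

    # Convention A: "a + b%" means a plus b percent of a.
    def plus_percent_of(a, b):
        return a * (1 + b / 100)

    # Convention B: "a + b%" parses as (a + b)%, i.e. the percent applies last.
    def percent_of_sum(a, b):
        return (a + b) / 100

    print(plus_percent_of(1, 10))  # 1.1
    print(percent_of_sum(1, 10))   # 0.11

Which is presumably the footnote's point: there is no one agreed answer to 10% + 10%.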
It's like: Hey little Bobby, now that you can count, here are the ints and multiplication/division. For the rest of your life there will be things to learn about them and their algebra.
Tomorrow we'll learn how to put a ".25" behind it. Nothing serious. It just adds multiple different types of infinities with profound impact on exactness and computability, which you have yet to learn about. But it lets you write 1/4 without a fraction, which means it's simple!
1/4 = 0.25 exact
1/3 = 0.333... infinitely repeating approximation
1/1 a
1/2 ah
1/3 aH!
1/4 ahh
1/5 ah
1/6 ahH!
1/7 aHHHHHH!
1/8 ahhh
1/9 aH!
1/10 ah
1/11 aHH!
1/12 ahhH!
1/13 aHHHHHH!
1/14 ahHHHHHH!
1/15 ahH!
1/16 ahhhh
1/17 aHHHHHHHHHHHHHHHH!
1/18 ahH!
1/19 aHHHHHHHHHHHHHHHHHH!
1/20 ahh
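For the curious, the scream lengths are computable: the terminating part of 1/n comes from the factors of 2 and 5 in n, and the repeating part has length equal to the multiplicative order of 10 modulo what's left. A sketch that reproduces the table:

    def decimal_shape(n):
        # 1/n in base 10: 'h' per terminating digit, 'H' per repeating digit.
        m, twos, fives = n, 0, 0
        while m % 2 == 0:
            m //= 2
            twos += 1
        while m % 5 == 0:
            m //= 5
            fives += 1
        pre = max(twos, fives)     # length of the non-repeating part
        if m == 1:
            return pre, 0          # terminating decimal
        period, r = 1, 10 % m      # multiplicative order of 10 mod m
        while r != 1:
            r = (r * 10) % m
            period += 1
        return pre, period

    for n in range(1, 21):
        pre, per = decimal_shape(n)
        print(f"1/{n} a" + "h" * pre + ("H" * per + "!" if per else ""))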
https://en.wikipedia.org/wiki/Repeating_decimal#Table_of_val...
That's just not true for the vast majority of people.
If you really understand the existing math curriculum this should be high school level.
$ cat check_math.c
#include <stdio.h>
int main() {
    // Define the values as float (32-bit floating point)
    float one_third = 1.0f / 3.0f;
    float five = 5.0f;
    // Compute the equation
    float result = one_third + five - one_third;
    // Check for exact equality
    if (result == five) {
        printf("The equation evaluates EXACTLY to 5.0 (True)\n");
    } else {
        // Print the actual result and the difference
        printf("The equation does NOT evaluate exactly to 5.0 (False)\n");
        printf("Computed result: %.10f\n", result);
        printf("Difference: %.10f\n", result - five);
    }
    return 0;
}
$ gcc -O0 check_math.c -o check_math; ./check_math
The equation evaluates EXACTLY to 5.0 (True)
It's easy enough to find an example where your typical FP operations don't work out.
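For instance, the classic one (shown in Python, but it's the doubles, not the language):

    >>> 0.1 + 0.2 == 0.3
    False
    >>> 0.1 + 0.2 - 0.3
    5.551115123125783e-17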
I don't care if it gives me "Underflow" for bs like e^-1000; just give me a text field that will be calculated into a result that's represented in the way I want (sci notation, hex, binary, ASCII, etc., whatever).
All standard calculator apps are imitations of a physical desk calculator. It's insane that we're still dragging this UI onto the desktop. Why don't we use a rotary dial on mobile phones, then?
It's great that at least OS X has Cmd+Space, where I can type an expression and get a quick result.
And yes, I did develop my own calculator, and happily used it for many years.
TL;DR: the real problem with calculators is their UI, not the arithmetic core.
On another note: since a calculator is apparently so complex to get right, is there any open-source, cross-platform library that makes implementing one easier?
Reusing an existing one? Maybe not.
Yes, it would likely be slower, but is a 1 ms vs. 10 ms response time in a calculator app really such a big deal? Entering a correct calculation/formula on the smartphone likely takes much longer.
Won't fix: https://github.com/microsoft/calculator/issues/148
I don't see how one can expect them to take a report worded this way seriously. Perhaps if they had actually reported the crash without the tantrum, the team would have fixed it.
Does it mean that there are some "dangerous" numbers that can be used to flag someone?
You may well find yourself in the field of computing having to compute something!
The premise of the article is itself somewhat bogus, but I suppose there are programmers today who never had to work with a graphing calculator.
While RRA is an interesting approach, ultimately it wasn't sufficient.
Re-using an off-the-shelf CAS would have been the more practical solution, avoiding all the extra R&D on a novel number representation that wasn't quite sufficient to do the job.
Maybe I'll get back to the project and finish it this year.
That's actually a great error; I have made the mistake of expecting "-2 ** 2" to output 4 instead of -4 before.
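(Python shown below; many languages bind ** tighter than unary minus, which is what makes this one a trap:)

    >>> -2 ** 2    # parsed as -(2 ** 2)
    -4
    >>> (-2) ** 2
    4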
i) The answer is 1 if you cancel the two 10^100 terms against each other.
ii) The answer is 0 if you first compute 10^100 in limited precision, because then adding 1 changes nothing.
How do you even cater for these scenarios? This needs more than plain arithmetic.
But try it on the iOS calculator: the answer is 0.
The reason is that when computing with large numbers, e.g. 100000........n + 1 - 100000........n, the addition of 1 is pretty insignificant.
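You can watch the 1 get absorbed in any double-based system, and survive in an exact one:

    >>> 1e100 + 1 - 1e100      # 64-bit float: 1 is far below one ulp of 1e100
    0.0
    >>> 10**100 + 1 - 10**100  # Python big integers: exact
    1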
No, because only in our imaginations, and nowhere in the universe, can we ignore the significance of measurements. If we are sending a spaceship to an interstellar object 1 light year away from Earth, and the spaceship is currently 25 miles from Earth (on the way), you are insisting that you know more about the distance from Earth to the object than you actually do, if you think that the distance from the spaceship to the object is 587862819274.1 miles.
for the reasons in my comment, and in, according to you, nobody else's.
Also, the comment I was replying to said "1 no matter what", and I was pointing out where it would matter what.
In the year he did this, he easily could have just done some minor interface tweaks to a Ruby REPL with the BigDecimal library. In fact, I bet feeding this post to an AI could produce such a numerically accurate calculator app, maybe as a single-file Sinatra Ruby web app designed to format to phone resolutions natively.