At least for me, the readings it shows are not accurate (when verified with an old-style fingerprick glucose meter). These CGMs are good for knowing the general variability of your blood glucose based on your diet, exercise, etc.; I don't trust the absolute numbers from a CGM like the FreeStyle Libre - haven't used any other one yet, though.
Apart from the latency of diffusion from the bloodstream to the interstitial fluid (and the lower levels in the interstitial fluid), the FDA requires that consumer devices be within 20% of the venipuncture level.
That means a lancet poke can be quite different from a meter like the FreeStyle, and both can be quite different from the level in your veins that a lab would get. So if your level is 200, one device can read 240 and the other 160, and both can be considered "correct".
I found that the FreeStyle Libre 2 and Libre light are characteristically low while the FS 3 is characteristically high. So I use them for the shape of the curve, and that is useful.
It would be interesting to see whether a group of 20-100 people could manually calibrate their devices by fitting their CGM readings to their fingerprick glucose readings. I wonder what the accuracy would be after a very basic personal curve fit.
I do this with a lot of consumer measurement devices: thermometers, scales (food, human, and cheap 0.1mg scales), thermostats like the kitchen oven's, and my multimeters. I validate my volumetric measuring cups/spoons by weighing water in them, but I don't correct those; I just return them if they're way off.
It’s okay if the reading is off as long as I can correct it the same way every time and get a pretty accurate result.
The entire system is too complicated, and the CGM too variable in accuracy, for such calibration to work in the way I think you are suggesting.
Each time the CGM is applied, the situation is different because of the exact position and various other factors. And the CGM is not 100% consistent.
You do/can calibrate the CGM as needed. For example, when the CGM first activates, standard practice is to check with a fingerprick to see how accurate the CGM is this time and (sometimes) calibrate. (As noted in other comments, the CGM and fingerprick are not detecting exactly the same thing.)
And the next time you apply the CGM (we use a Dexcom G6, which is changed every 10 days), any previous calibration is irrelevant. There's a lot of variability and many factors that can affect results (exact location, scar tissue from previous CGM application, recent exercise, a recent hot shower, etc.).
(I didn't explain that well, but hopefully you get the idea.)
I basically use an Excel sheet: make a scatter plot with the "true" values on one axis and the "measured (slightly wrong)" values on the other, do a best fit to y = mx + b, and then in the future adjust readings with that equation using my phone calculator.
Some classically trained engineers may tell you the "true" value should always be plotted on the x-axis, since it is usually considered the more "independent" variable... but this is highly debatable, and you can skip some simple algebra later if you put the measured value on the x-axis instead. Then look at the shape of the scatter plot. Ideally it will be linear, so you ask Excel for a linear curve fit (y = m*x + b). Write this on the scale, and now whenever you take a measurement on the scale, whip out your phone and compute "measured_value * m + b" - that's your true value.

If the fit isn't linear (quadratic, log, etc.)... that's interesting, and often it means something is likely "wrong", but also "it is what it is". Classically trained engineers will say you have to do a linear fit if that's what the theory says is appropriate, but for one-off home device calibration, do whatever works for you - just as long as you don't overfit with some stupid 4, 5, 6-term equation. Any reasonably simple equation with 2-3 terms is fine IMHO.
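If you'd rather script that fit than keep it in Excel, here's a minimal sketch in Python of the same measured-on-x, true-on-y linear calibration. The paired readings and the test value of 150 are made-up numbers for illustration, not real data:

    import numpy as np

    # Hypothetical paired readings for one device: reference ("true") vs. device ("measured")
    reference = np.array([72.0, 95.0, 110.0, 140.0, 180.0, 220.0])
    measured  = np.array([65.0, 88.0, 101.0, 128.0, 166.0, 204.0])

    # Fit true = m * measured + b (measured on the x-axis, as described above,
    # so no algebra is needed when correcting a future reading)
    m, b = np.polyfit(measured, reference, deg=1)
    print(f"correction: true ~= {m:.3f} * measured + {b:.2f}")

    def correct(raw):
        """Apply the personal calibration to a new raw reading."""
        return m * raw + b

    print(correct(150.0))  # corrected estimate for a raw reading of 150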
I use a set of heavy objects whose mass I know fairly precisely. They're not perfectly 10.000lbs, 20.000lbs, etc ... they're just "around 10lbs, around 20lbs" and I've used a good actually-calibrated scale (at work, some commercial business with calibrated scales that you can access, whatever) to weigh them and wrote their weights in sharpie on a piece of tape stuck to the objects. Ideally you'd go for around 10% increments. If the scale can weigh 400lbs, that would be every 40 lbs or so. But it really doesn't matter as long as you have enough good points around the range you truly intend to measure, and then a few outside of that target range at semi-regular intervals.
For my 0.1mg-resolution mass balance I have some actual calibration weights, but they're a relatively affordable OIML "M1" class, and did not come with expensive calibration certificates. The OIML tolerance ratings go E1, E2, F1, F2, M1, M2, M3 (from best to worst). For a 100g test weight, M1 precision gets you +/- 0.005g, guaranteed, for $50 ($135 if you want a calibration certificate). E1 gets you +/- 0.00005g at 100g test weight, for $500 ($1200 with cal cert). For smaller calibration weights like 10mg you'll generally want to go a step up from M1 (+/- 0.25mg) to F2 (+/- 0.08mg) for about $27.
For temperature, it's a bit trickier because the only "true" temperatures you can create are -6°F/-21°C and 228°F/109°C. If these temperatures are helpful to you, you can create them by pouring shitloads of salt in water and stirring+heating it until no more salt will dissolve and you just have a pile of salt in the bottom of the container. You can try to go for "0°C/100°C" using distilled water and it would probably be close enough but you can't know it exactly unless you use super pure de-ionized water and use extremely absurd lab technique (usually involving washing your glassware and tools with de-ionized water over and over for several days straight to get rid of trace contaminants).
So instead, to get "true" temperature in the range I care about, I use some thermocouples attached to a high-quality multimeter or oscilloscope. Then I calibrate these thermocouples using the method above, and average their readings for the oven temperature. This works and extrapolates well enough outside the range of calibration because the error of a thermocouple is basically guaranteed to be a very linear error.
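As a sketch of that two-point idea (with made-up raw probe readings, and assuming the sensor error really is linear as described), the correction is just a slope and offset solved from the two fixed points:

    # Saturated-salt-water fixed points from above, in Celsius (the "true" values)
    true_low, true_high = -21.0, 109.0
    # What the uncalibrated probe read at those points - illustrative numbers only
    raw_low, raw_high = -19.2, 112.5

    # Solve true = m * raw + b from the two points
    m = (true_high - true_low) / (raw_high - raw_low)
    b = true_low - m * raw_low

    def corrected(raw_reading):
        """Map a raw probe reading onto the corrected temperature scale."""
        return m * raw_reading + b

    # Because the error is assumed linear, this extrapolates to oven temperatures
    print(corrected(181.0))  # e.g. correct a probe reading near 180 C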
In this link[0] topics 1-6 ("weeks") get into the fine details of all this and provide some worksheets/excel sheets already made up for this type of thing. If you're really getting into the weeds with this, understanding propagation of error[1] really helps but is super unnecessary for 99% of people unless they're doing actual engineering.
This is a highly personal thing - it is apparently very inaccurate for some people. I've never been below or above dangerous levels without it giving a warning. What has happened once or twice over the decade I've used it is that it gets stuck on a bad reading, so you do not see the variations. It has always gotten unstuck when I've gone below 3.5 mmol/liter or so.
There is generally a latency of a few minutes between blood and interstitial fluid (the CGM) readings - up to 15 minutes. You may find that if you account for the latency, the consistency between the two increases.
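One way to check that in your own data: shift the CGM series by a few candidate lags against your fingerprick values and keep the shift that correlates best. A minimal sketch, with made-up readings and an assumed 5-minute sampling interval:

    import numpy as np

    SAMPLE_MINUTES = 5  # assumed interval between paired readings
    cgm = np.array([100, 104, 112, 125, 140, 152, 158, 155, 148, 138], dtype=float)
    fingerprick = np.array([106, 118, 133, 148, 158, 160, 153, 144, 133, 122], dtype=float)

    best_lag, best_corr = 0, -np.inf
    for lag in range(0, 4):  # try 0, 5, 10, 15 minutes of CGM delay
        c = cgm[lag:] if lag else cgm
        f = fingerprick[:-lag] if lag else fingerprick
        r = np.corrcoef(c, f)[0, 1]
        if r > best_corr:
            best_lag, best_corr = lag, r

    print(f"best alignment: CGM lags by ~{best_lag * SAMPLE_MINUTES} min (r = {best_corr:.3f})")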