itpragmatik's comments

Why is this a surprise, or even news? Obviously every company wants to leverage the data it has to train whatever LLM it may be building.

They scraped user data. It's worse than that time political campaigns targeted voters.

For any web apps or API services, we monitor:
- Uptime
- Error rate
- Latency

Prometheus/Grafana
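
If it helps, here is a rough sketch (not from the parent comment) of how one might record latency and error counts with Micrometer so Prometheus can scrape them; the class name and metric names are made up for illustration:

    // Rough sketch: recording request latency and errors with Micrometer.
    // Metric names are illustrative; wire the registry to Prometheus as you prefer.
    import io.micrometer.core.instrument.Counter;
    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.Timer;
    import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

    public class RequestMetrics {
        private final Timer latency;
        private final Counter errors;

        RequestMetrics(MeterRegistry registry) {
            latency = Timer.builder("api.request.latency").register(registry);
            errors = Counter.builder("api.request.errors").register(registry);
        }

        void handle(Runnable request) {
            try {
                latency.record(request);   // records wall-clock time of the request
            } catch (RuntimeException e) {
                errors.increment();        // feeds the error-rate panel
                throw e;
            }
        }

        public static void main(String[] args) {
            RequestMetrics metrics = new RequestMetrics(new SimpleMeterRegistry());
            metrics.handle(() -> { /* serve a request */ });
        }
    }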


+1


At least for me, the readings it shows are not accurate (when verified with an old-style finger-prick glucose meter). These CGMs are good for knowing the general variability of your blood glucose based on diet, exercise, etc.; I don't trust the absolute numbers from a CGM like the FreeStyle Libre, though I haven't used any other one yet.


Apart from the latency of diffusion from the bloodstream to the interstitial fluid (and the lower levels in the interstitial fluid), the FDA requires that consumer devices be within 20% of the venipuncture level.

That means a lancet poke can be quite different from a meter like the FreeStyle, and both can be quite different from the level in your veins that a lab would get. So if your level is 200, one device can read 240 and the other 160, and both can be considered "correct".

I found that the FreeStyle Libre 2 and Libre Light read characteristically low, while the FS 3 reads characteristically high. So I use them for the shape of the curve, and that is useful.


It would be interesting to see whether a group of 20-100 people could manually calibrate their readings by fitting their CGM readings to their fingerprick glucose readings. I wonder what the accuracy would be after a very basic personal curve fit.

I do this with a lot of consumer measurement devices: thermometers, scales (food, human, and cheap 0.1 mg scales), thermostats like the kitchen oven, and my multimeters. I validate my volumetric measuring cups/spoons by weighing water in them, but I don't correct them; I just return them if they're way off.

It’s okay if the reading is off as long as I can correct it the same way every time and get a pretty accurate result.


The entire system is too complicated, and the CGM too variable in accuracy, for such calibration to work in the way I think you are suggesting.

Each time the CGM is applied, the situation is different because of the exact position and various other factors. And the CGM is not 100% consistent.

You do/can calibrate the CGM as needed. For example, when the CGM first activates, standard practice is to check with a fingerprick to see how accurate the CGM is this time and (sometimes) calibrate. (As noted in other comments, the CGM and fingerprick are not detecting exactly the same thing.)

And the next time you apply the CGM (we use a Dexcom G6, which is changed every 10 days), any previous calibration is irrelevant. There's a lot of variability and many factors that can affect results (exact location, scar tissue from previous CGM application, recent exercise, a recent hot shower, etc.)

(I didn't explain that well, but hopefully you get the idea.)


Calibrating my scales and thermometers would be nice. What procedure do you use for it? Is it documented online anywhere?


I basically use an Excel sheet. Make a scatter plot of the "true" values on one axis and the "measured (slightly wrong)" values on the other. Then do a best fit to y = mx + b, and in the future manually correct readings according to that equation using my phone calculator.

Some classically trained engineers may tell you the "true" value should always be plotted on the x-axis, since it is often considered the more "independent" variable... but this is highly debatable, and you can skip some simple algebra later if you put the measured value on the x-axis.

Then look at the shape of the scatter plot. Ideally it will be linear, so you ask Excel to do a linear curve fit (y = m*x + b). Write this on the scale, and now whenever you take a measurement on the scale, whip out your phone and do "measured_value * m + b". That's your true value.

If it's not a linear fit (quadratic, log, etc.), that's interesting, and often it's likely "wrong", but also "it is what it is". Classically trained engineers will say you have to do a linear fit if that's what the theory says is appropriate, but for one-off home device calibration, do whatever works for you. Just don't overfit with some stupid 4-, 5-, or 6-term equation; any reasonably simple equation with 2-3 terms is fine IMHO.
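For anyone who would rather script it than keep an Excel sheet, here is a minimal sketch of the same least-squares fit; it assumes you have already collected (measured, reference) pairs, and all the numbers are made up for illustration:

    // Sketch of the linear calibration from the comment above:
    // fit reference = m * measured + b, then correct future readings.
    public class ScaleCalibration {

        // Ordinary least squares for y = m*x + b.
        static double[] fit(double[] x, double[] y) {
            int n = x.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i];
                sy += y[i];
                sxx += x[i] * x[i];
                sxy += x[i] * y[i];
            }
            double m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double b = (sy - m * sx) / n;
            return new double[] { m, b };
        }

        public static void main(String[] args) {
            double[] measured = { 9.6, 19.4, 39.1, 78.7 };   // what the cheap scale says
            double[] reference = { 10.1, 20.3, 40.6, 81.2 }; // Sharpie values from the good scale
            double[] mb = fit(measured, reference);
            double reading = 50.0;
            System.out.printf("true ~ %.3f * measured + %.3f -> %.2f%n",
                    mb[0], mb[1], mb[0] * reading + mb[1]);
        }
    }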

I use a set of heavy objects whose mass I know fairly precisely. They're not perfectly 10.000 lbs, 20.000 lbs, etc.; they're just "around 10 lbs, around 20 lbs", and I've used a good, actually-calibrated scale (at work, some commercial business with calibrated scales that you can access, whatever) to weigh them and written their weights in Sharpie on a piece of tape stuck to the objects. Ideally you'd go for around 10% increments; if the scale can weigh 400 lbs, that would be every 40 lbs or so. But it really doesn't matter as long as you have enough good points around the range you truly intend to measure, and then a few outside that target range at semi-regular intervals.

For my 0.1mg-resolution mass balance I have some actual calibration weights, but they're a relatively affordable OIML "M1" class, and did not come with expensive calibration certificates. The OIML tolerance ratings go E1, E2, F1, F2, M1, M2, M3 (from best to worst). For a 100g test weight, M1 precision gets you +/- 0.005g, guaranteed, for $50 ($135 if you want a calibration certificate). E1 gets you +/- 0.00005g at 100g test weight, for $500 ($1200 with cal cert). For smaller calibration weights like 10mg you'll generally want to go a step up from M1 (+/- 0.25mg) to F2 (+/- 0.08mg) for about $27.

For temperature, it's a bit trickier because the only "true" temperatures you can create are -6°F/-21°C and 228°F/109°C. If these temperatures are helpful to you, you can create them by pouring shitloads of salt in water and stirring+heating it until no more salt will dissolve and you just have a pile of salt in the bottom of the container. You can try to go for "0°C/100°C" using distilled water and it would probably be close enough but you can't know it exactly unless you use super pure de-ionized water and use extremely absurd lab technique (usually involving washing your glassware and tools with de-ionized water over and over for several days straight to get rid of trace contaminants).

So instead, to get a "true" temperature in the range I care about, I use some thermocouples attached to a high-quality multimeter or oscilloscope. Then I calibrate these thermocouples using the method above and average their readings for the oven temperature. This works and extrapolates well enough outside the calibration range because a thermocouple's error is basically guaranteed to be very linear.
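A tiny sketch of that last step, assuming you already have a slope/offset per thermocouple from the salt-water points; all the numbers here are invented:

    // Apply each thermocouple's own linear correction, then average the
    // corrected readings to estimate the oven temperature.
    public class OvenTemp {
        static double corrected(double reading, double m, double b) {
            return m * reading + b;
        }

        public static void main(String[] args) {
            double[] readings = { 351.2, 347.8, 353.5 };                   // raw readings, degrees F
            double[][] cal = { {1.01, -2.0}, {0.99, 1.5}, {1.02, -3.1} };  // per-sensor {m, b}
            double sum = 0;
            for (int i = 0; i < readings.length; i++) {
                sum += corrected(readings[i], cal[i][0], cal[i][1]);
            }
            System.out.printf("oven ~ %.1f F%n", sum / readings.length);
        }
    }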

In this link[0], topics 1-6 ("weeks") get into the fine details of all this and provide some worksheets/Excel sheets already made up for this type of thing. If you're really getting into the weeds, understanding propagation of error[1] helps, but it's unnecessary for 99% of people unless they're doing actual engineering.

0: https://pages.mtu.edu/~fmorriso/cm3215/laboratory_exercise_s...

1: https://pages.mtu.edu/~fmorriso/Pintar_Error_Analysis_or_UO_...


This is a highly personal thing; it is apparently very inaccurate for some people, but I've never been below or above dangerous levels without it giving a warning. What has happened once or twice over the decade I've used it is that it gets stuck on a bad reading, so you don't see the variations. It has always come unstuck when I've gone below 3.5 mmol/L or so.


It may actually be the other way around, at least for newer CGMs.

Try doing a few fingerpricks in a row. The variability will surprise you!


There is generally a latency of a few minutes, up to 15, between blood and interstitial fluid (i.e., CGM) readings. You may find that if you account for the latency, the consistency between the two increases.


Another reason they are sometimes different is that there is a lag in the CGM data, estimated at about 12 minutes.


One of my 2023 resolutions was to learn how to build an iOS app. I am happy to announce that a very not-so-fancy app that I built is finally available on the iOS App Store. And when I say I built it, it is truly by me: everything, the design (however ugly or beautiful) and the code, was implemented by me and not outsourced to anyone!

It lets you keep track of your vehicles and associated details like services. It will require you to create an account to access all functionality.

The name of the app is: Motor Vehicle Log

It's an MVP and it's currently free! It is only available in the US, on iPhone with iOS 17+.

If you have time, please download and install this app and give me any feedback you may have! Send your feedback to support@motorvehiclelog.com

https://www.motorvehiclelog.com/

Here is a direct link to the app on the App Store:

https://apps.apple.com/us/app/motor-vehicle-log/id6475635799

Not sure if I will ever release a new version of this app, but the feedback I receive may motivate me to continue working on it through 2024 too!


Not sure what the fascination with Go is; one can write a fully scalable, functional, readable, maintainable, upgradable REST API service with Java 17 and above.


I struggle with the type system in both, but today I was going through obscure Go code and wishing interfaces were explicitly implemented. The lack of sum types is making me sad.


Merry Christmas! Happy to have HN as one of my trusted resources for technology, new technology trends, and insightful information on varied topics beyond technology!


Java 17, Hibernate, JPA, Spring Boot: all the way! Excellent combo. Reliable, high developer productivity, excellent community support, stable; I have been using it for 8+ years now and wouldn't think of using any other framework if I were going to build something in Java. Very easy to design, build, deploy, troubleshoot, and maintain REST API services! You won't go wrong at all if you choose Spring Boot.
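For anyone who hasn't tried it, here is a minimal sketch of a Spring Boot REST endpoint; the class name, route, and parameter are purely illustrative, not from any real project:

    // Minimal Spring Boot REST service sketch.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class DemoApplication {

        public static void main(String[] args) {
            SpringApplication.run(DemoApplication.class, args);
        }

        // GET /hello?name=HN -> "Hello, HN"
        @GetMapping("/hello")
        public String hello(@RequestParam(defaultValue = "world") String name) {
            return "Hello, " + name;
        }
    }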


When I read the heading, I thought these were out-of-the-box audit tables provided by Postgres. But the article then explains these are custom tables created by the author, so this could very well be applicable to MySQL or any other database.


I started doing Java (and SQL/Oracle) development in the late 1990s, and even today I am deploying REST APIs written in Java with MySQL to AWS. So, yeah, it has been 23+ years so far!

