readmodifywrite's comments | Hacker News

Also, they're optional. Basic pub/sub usage doesn't need any of them, though some are pretty nice to have.


Yup, it's the Hakko Omnivise. They might seem a little pricey, but they are worth every penny.


It really seems like it has to be something like that. The problem is that there's no detail in the docs and no status bits in the chip, so there's no way to know when the auto-cal runs.

One of the several things I did to eliminate the problem was to disable the auto-cal during a UART reception (the STM32 is the bus master, so it knows when it will be receiving) and re-enable it when the reception finishes. That absolutely confirmed the auto-cal as the source of the glitch, but I don't think I'll ever get a true "why" unless an ST engineer wants to chime in!
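In code, the workaround amounts to gating the auto-cal enable bit around each transfer. A minimal sketch, assuming an STM32L4-class part with CMSIS device headers (where the auto-cal is the MSIPLLEN bit in RCC->CR); uart_receive() is a hypothetical stand-in for the real receive routine:

    #include <stddef.h>
    #include <stdint.h>
    #include "stm32l4xx.h"  /* assumed device header; adjust for your part */

    /* Hypothetical blocking receive routine - substitute your own. */
    extern void uart_receive(uint8_t *buf, size_t len);

    /* Suspend the MSI auto-calibration (MSI PLL mode) for the duration of
     * a UART reception so the hardware can't retrim the oscillator mid-byte. */
    void uart_receive_without_autocal(uint8_t *buf, size_t len)
    {
        CLEAR_BIT(RCC->CR, RCC_CR_MSIPLLEN);  /* pause auto-cal  */
        uart_receive(buf, len);               /* do the transfer */
        SET_BIT(RCC->CR, RCC_CR_MSIPLLEN);    /* resume auto-cal */
    }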


This is a good resource; however, it didn't apply in my situation because it describes the manual calibration process, not the auto-cal (which the F0 probably doesn't even have).

I still haven't come across anything that explains in detail how the auto-cal works and precautions one needs to take when it is running. The reference manual section is something like one paragraph and can be summarized as: "You can turn this on and it will calibrate your clock. You can also turn it off."

If I had to guess, it probably does something similar to the manual process, but just in the MCU logic. It's the lack of detail that got me: I basically ran out of things to try on the UART itself and started looking around at other parts of the chip to see what could at least be indirectly related.


My guess is that the receiver clock glitches in some way when the MSI auto calibration runs, but it never showed up on the transmitter (and the device on the other side of the connection has never had a reception issue).

I ended up disabling the auto cal feature during a UART reception and then turning it back on when the reception is done.

SPI is definitely better as far as clocking, but MCU support as a SPI receiver is sometimes a lot less convenient to deal with.

A lot of UARTs have a synchronous mode which adds a dedicated clock signal - I've used that before out to a couple MHz.
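For reference, ST's HAL exposes that synchronous mode through the USART (as opposed to UART) driver; enabling it looks roughly like this. A sketch only: pin and clock setup are omitted, and the family header is an assumption:

    #include "stm32l4xx_hal.h"  /* assumed family; adjust to your part */

    USART_HandleTypeDef husart1;

    /* Configure USART1 as a synchronous master: it drives a clock line
     * alongside TX/RX, so reception no longer depends on baud-rate matching. */
    void usart1_sync_init(void)
    {
        husart1.Instance         = USART1;
        husart1.Init.BaudRate    = 1000000;              /* 1 MHz, as above    */
        husart1.Init.WordLength  = USART_WORDLENGTH_8B;
        husart1.Init.StopBits    = USART_STOPBITS_1;
        husart1.Init.Parity      = USART_PARITY_NONE;
        husart1.Init.Mode        = USART_MODE_TX_RX;
        husart1.Init.CLKPolarity = USART_POLARITY_LOW;   /* clock idles low    */
        husart1.Init.CLKPhase    = USART_PHASE_1EDGE;    /* sample on 1st edge */
        husart1.Init.CLKLastBit  = USART_LASTBIT_ENABLE; /* clock out last bit */
        HAL_USART_Init(&husart1);
    }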

In this application though, I'm only running 1 MHz so I really didn't think I should need a separate clock (and, it turns out, still don't).


According to the documentation there is no calibration as such; the MSI clock simply runs in a phase-locked loop (PLL) configuration against the LSE (32.768 kHz). For example, in 1 MHz mode the MSI is set up to run at approximately 1 MHz; that clock then goes through a divider that divides by 31 down to approximately 32 kHz, and the result is compared against the LSE clock to generate feedback for the MSI. When locked, the MSI runs at 1015.8 kHz (32.768 kHz x 31), i.e. about 1.58% high.
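A quick sanity check of those numbers:

    #include <stdio.h>

    /* Worked numbers for the MSI PLL lock described above (assumed /31 divider). */
    int main(void)
    {
        const double f_lse = 32768.0;    /* LSE crystal, Hz       */
        const int    n     = 31;         /* feedback divide ratio */
        const double f_msi = f_lse * n;  /* locked MSI frequency  */
        printf("f_MSI = %.0f Hz (%+.2f%% vs 1 MHz)\n",
               f_msi, (f_msi / 1e6 - 1.0) * 100.0);  /* 1015808 Hz, +1.58% */
        return 0;
    }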

It's also possible that the design hasn't been thoroughly tested and the PLL doesn't lock under certain conditions, which could leave you with an unstable clock.


The lack of status bits on the auto-cal is really unfortunate.

Turning it off during a UART transaction definitely "fixes" it.

I'm somewhat tempted to do the manual calibration on the HSI instead.


Yeah, a PLL without a status flag to indicate that it's locked isn't good. I think there are also issues with stabilisation when using it with stop modes: https://community.st.com/t5/stm32-mcus-products/msi-pll-mode...

If you really need the accuracy, then regularly time the LSE clock using a timer clocked from the MSI and apply the best trim values, as described in ST app note AN4736 ("How to calibrate STM32L4 Series microcontrollers internal RC oscillator").
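The shape of that loop, with the measurement and trim-write helpers left as hypothetical stubs (illustrative names, not a real ST API; on the L4 the trim itself is the MSITRIM field in RCC->ICSCR):

    #include <stdint.h>

    /* Hypothetical helpers, illustrative only:
     *   measure_msi_ticks_per_lse_window() counts MSI-clocked timer ticks
     *       across a fixed number of LSE periods (timer input capture);
     *   msi_write_trim() writes the MSITRIM field. */
    extern uint32_t measure_msi_ticks_per_lse_window(void);
    extern void     msi_write_trim(uint8_t trim);

    /* Sweep the trim field and keep the value whose measurement lands
     * closest to the ideal tick count - roughly the AN4736 procedure. */
    void msi_manual_calibrate(uint32_t ideal_ticks)
    {
        uint8_t  best_trim = 0;
        uint32_t best_err  = UINT32_MAX;

        for (uint16_t trim = 0; trim <= 0xFF; trim++) {
            msi_write_trim((uint8_t)trim);
            uint32_t ticks = measure_msi_ticks_per_lse_window();
            uint32_t err   = (ticks > ideal_ticks) ? ticks - ideal_ticks
                                                   : ideal_ticks - ticks;
            if (err < best_err) {
                best_err  = err;
                best_trim = (uint8_t)trim;
            }
        }
        msi_write_trim(best_trim);
    }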


The GIL is not there to prevent data corruption on shared objects - it only protects the interpreter's internal state. The fact that you can sometimes get away with it is an accident of the GIL's implementation, not a feature anyone should rely on. It also means you cannot count on that behavior staying the same across successive versions of CPython.

The only safe way to share state between threads in CPython is locks/mutexes/message passing/etc. Even something as simple as adding to an integer is absolutely not made thread-safe by the GIL.
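To illustrate: counter += 1 is a separate read, add, and write under the hood, and a thread switch can land between them, so the update has to be guarded. A minimal sketch using the stdlib threading module:

    import threading

    counter = 0
    lock = threading.Lock()

    def worker(n):
        global counter
        for _ in range(n):
            # counter += 1 is read-modify-write; without the lock, a thread
            # switch between the read and the write can lose an increment.
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # always 400000 with the lock; not guaranteed without it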


No, but at least you won't have two cores incrementing the same integer at the same time.


There are TNG episodes that specifically deal with that and the problems it causes. I think ENT had a few as well, including instances that led to the creation of the Prime Directive.


I will have to check those out since I've not seen them. I'm open-minded to it, but also pretty confident that the weight of the logic will eventually fall on the side of being good and helpful to others if we ever make it to the stars.


That's kind of ironic, considering they don't provide a scheduled reboot feature for their own gear (which often needs it for IoT usage).


The CPU implementation is done by Qualcomm, Apple, and the like. The CPU design is done by ARM, predominantly in the UK. The fact that Softbank (Japan) owns ARM has no bearing on where the actual technical work is done, or by whom.


That's incorrect: Apple designs its own microarchitecture that implements the ARM ISA. Qualcomm, HiSilicon, and Samsung use ARM's reference microarchitectures, which are mostly designed in Austin, TX these days, although some have come out of ARM's Sophia, France office. Samsung also made custom uarchs at its Austin office for its Exynos chips, but recently shut that operation down.


Not true - just look at the latest chip from Apple.

https://en.wikipedia.org/wiki/Apple_A13

Yes, it uses the ARM instruction set, but it was completely designed by Apple. This is analogous to Intel using the same 64-bit instruction set as AMD... you wouldn't say AMD designs Intel's chips, though.


Apple is largely an exception in the crowd of ARM SoC manufacturers, though. A lot of companies just implement ARM cores, with their additional magic in other bits of the SoC. The fact that Apple does its own design doesn't mean that ARM doesn't do any.


I love that the example uses the exact same indentation format that Python already requires.

