TF's docs don't seem very good. I just tried to figure out how to learn a linear mapping with TF and went through this (the thing I was actually after is sketched after the list):
1. googled "linear layer in tensorflow" and got to the page about linear.
2. spent 5 minutes trying to understand why monotonicity would be a central tenet of the documentation
3. realized that's not the right "linear", but couldn't think of what the appropriate name would be
4. knew MLPs have them, so googled "tensorflow mlp example"
5. clicked the Apr '24 page: https://www.tensorflow.org/guide/core/mlp_core
6. read through 10[!] code blocks that are basically just boilerplate setup of data and visualizations, entirely unrelated to MLPs
7. realized they call it "dense" in TensorFlow world
8. saw that "dense" needs to be implemented manually
9. thought that was strange, googled "tensorflow dense layer"
10. found a Keras API (https://www.tensorflow.org/api_docs/python/tf/keras/layers/D...)
11. noticed that there's a Unicode rendering error ("'" for apostrophe) on the kernel_initializer and bias_initializer default arguments in the documentation, and wondered why on earth, for such a high-level API, one would want to expose lora_rank as a first-class construct. Also, 3 out of the 5 links under "Used in the guide" point to TF1-to-TF2 migration articles; TF2 was released 5 years ago.
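For reference, a minimal sketch of where that search eventually lands, assuming tf.keras (the shapes and data below are made up): a linear mapping y = xW + b is just a Dense layer with no activation.

    import numpy as np
    import tensorflow as tf

    # A learned linear map: Dense with no activation computes y = xW + b.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units=2, activation=None, input_shape=(3,)),
    ])
    model.compile(optimizer="sgd", loss="mse")

    # Made-up data: recover a known 3x2 matrix from random inputs.
    W_true = np.array([[1., 2.], [3., 4.], [5., 6.]], dtype="float32")
    x = np.random.rand(256, 3).astype("float32")
    y = x @ W_true
    model.fit(x, y, epochs=100, verbose=0)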
To add onto this: I feel like one of the hard things about TF is that there are at least 3 ways to do everything, because they have supported multiple APIs over the years and migrated to eager execution. So if you find an example or an open-source project, it might not be for the flavor of TensorFlow that your codebase is in (see the sketch below).
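To make the "3 ways" concrete, here is a minimal sketch (standard tf.keras APIs, made-up layer sizes) of the same one-Dense-layer model written three different ways; examples found in the wild use whichever style was current when they were written:

    import tensorflow as tf

    # 1. Sequential API
    seq_model = tf.keras.Sequential([
        tf.keras.layers.Dense(1, input_shape=(3,)),
    ])

    # 2. Functional API
    inputs = tf.keras.Input(shape=(3,))
    outputs = tf.keras.layers.Dense(1)(inputs)
    fn_model = tf.keras.Model(inputs, outputs)

    # 3. Model subclassing
    class LinearModel(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense = tf.keras.layers.Dense(1)

        def call(self, x):
            return self.dense(x)

    sub_model = LinearModel()

And that is only counting tf.keras; TF1-era graph code and the old tf.layers module are yet more flavors.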
I feel like that with every single Google API doc: if there's a variable called x, the documentation will be "variable to store x". And you need to create/supply 5 different resources before you can create an x, but each of these will require 5 further things to be figured out before you can create one of them.
Re 6: the TF/Keras team motivates random people to write long tutorials by featuring them on the official site and including their tutorials in the official guides. I have seen a lot of subpar devs/AI people write subpar tutorials and then brag on Twitter about how their tutorials are included on the official Keras site.
Honestly, this example holds true for roughly half of the Python ecosystem; and you can square the level of frustration if it's also anything coming from Google.
(This pattern is relatively easy to understand: smart people creating something get their gratification from the creation process, not from writing tedious documentation; and this is systemically embedded for people at Google, who are probably directly incentivised in a similar way.)