joeconway's comments

> % cloud cover by hour


You’re right, this one is missing. Although I’m skeptical of the utility (and accuracy) of something as precise as “percentage of cloud cover,” compared to what is available in the app, namely “clear, cloudy, mostly sunny, etc.”

Is there a meaningful difference between 25% cloud cover and 35%? Or is it better to just give names to the “conditions” at buckets of 25%, 50%, 75%, etc?


In my experience it's more like: 0–5% sunny; 5–60% partly cloudy; 60–100% cloudy.

I've not seen other descriptors, and to view it changing over time it's just shown on the timeline as sun, sun+cloud, and cloud emojis.

It's not useful other than as a binary "is it cloudy," though in Bay Area weather there is a meaningful difference between 30% and 60% coverage.


The best cloud cover that I've found is yr.no

For example (and it always takes a bit for me to find it) https://www.yr.no/en/details/table/2-4887398/United%20States... (and https://www.yr.no/en/details/graph/2-4887398/United%20States... )

And here's their API - https://developer.yr.no/featured-products/forecast/

It breaks it down by overall, low, medium, and high clouds.


At the fringes, there is a meaningful difference. The center 50%, not really.

A 95% coverage day is a bit different from a 100% coverage day, particularly when it comes to rain and wind expectations.


How is that difference meaningful? What actions are you going to take differently at 95% cloud coverage vs 100%? I can't think of anything I'd do differently if I was expecting 100% cloud coverage tomorrow to wake up and find that it was actually at 95%.


Yeah, and I’d argue that if the difference is meaningful to you, then you probably want something more accurate than what a consumer-grade weather service can provide. So it would be borderline irresponsible of Apple to even give you the false confidence of some precise measurement of cloud cover.


I would expect certain segments of aviation to find the additional granularity critical.


Presumably they shouldn’t be using a consumer weather service and mobile app.


This guy's PR team has obviously been working overtime in the last few months to make him the new Musk. I'm taking note of all the organizations being paid to exhaustingly fawn over the cool new kid of death technology.


Ocean carriers already optimize, and it's odd that the article suggests they don't. The company Sofar Ocean (https://www.sofarocean.com/products/wayfinder) does it in a particularly interesting way and actually has people using it, rather than offering a theoretical analysis of a small number of voyages.


> Ocean carriers already optimize

Of course. But slow sailing is a co-ordination problem. You need an enforcer and a penalty system.

> Sofar Ocean .. does it in a particularly interesting way

What is the "particularly interesting" way? It seems like a "basic" optimizer over known data


They make and deploy their own weather-monitoring buoys for weather prediction.


This isn’t neovim specific but this was one of the more delightful and impactful discoveries https://shapeshed.com/vim-netrw/

Configuring netrw right and having my vimrc load a side pane with it on load, combined with configuring my tab key to cycle between windows... Perfect.


I immediately found two books I'd not come across before, that I am going to start today. This is a great site, thank you


That is so great to hear! :)

I am working on new bookshelf pages to expand that section. I want to add a more visual format and break it apart a bit more.

Check out the early mockup here: https://www.dropbox.com/scl/fi/mrvkev7pft5luvj10eqfc/Screens...


This is one of the better landing pages for a product I’ve seen, it’s so succinct. Nicely done. I’m going to actually evaluate your product today as a result


I actually gasped. Wow


It is still used exactly as much as it was before, and sadly won't change as a result of the IMO rules for quite some time.


If it's still used just as much, then why are atmospheric SOx levels falling, especially over the oceans? [1]

I'm not saying there's 100% adherence, but you're saying this isn't related at all?

[1] https://twitter.com/LeonSimons8/status/1669667629844267008


For this to be true, and for it also to be legal to have all your windows tinted, is sad and absurd.

I assume it's legal, as in the East Bay I see fully blacked-out windows and windscreens every day.


This is really interesting, thank you.

What would be the downside to padding all inputs to have consistent input token size?


Conceptually, to the best of my understanding, nothing too serious; perhaps just the inefficiency of processing a larger input than necessary.

Practically, a few things:

If you want to have your cake and eat it too, they recommend Enumerated Shapes[1] in their coremltools docs, where CoreML precompiles up to 128 (!) variants of input shapes. But again this is fairly limiting (1 token, 2 tokens, 3 tokens… up to 128-token prompts; maybe you enforce a minimum, say 80 tokens to account for a system prompt, so up to 200 tokens, but that's still pretty short). And this is only compatible with CPU inference, which reduces its appeal.

It seems like its current state was designed for text embedding models, where you normalize input length by chunking (often 128 or 256 tokens) and operate on the chunks — and indeed, that's the only text-based CoreML model that Apple ships today: a BERT embedding model tuned for Q&A[2], not an LLM.

You could use a fixed input length that's fairly large; I haven't experimented with it since I grasped the memory requirements, but from what I gather from HuggingFace's announcement blog post[3], that is what they do with swift-transformers and their CoreML conversions, handling the details for you[4][5]. I haven't carefully investigated the implementation, but I'm curious to learn more!
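For illustration, the fixed-length approach boils down to something like this — a pure-Python sketch, not the actual swift-transformers code; the names `pad_to_length` and `PAD_ID` are mine:

```python
PAD_ID = 0  # hypothetical pad token id; real tokenizers define their own


def pad_to_length(token_ids, max_len, pad_id=PAD_ID):
    """Right-pad a token sequence to a fixed length, returning the padded
    ids plus an attention mask so padded positions can be ignored."""
    if len(token_ids) > max_len:
        raise ValueError("prompt longer than the fixed input length")
    n_pad = max_len - len(token_ids)
    padded = token_ids + [pad_id] * n_pad
    mask = [1] * len(token_ids) + [0] * n_pad
    return padded, mask


ids, mask = pad_to_length([101, 2054, 2003], max_len=8)
# ids  -> [101, 2054, 2003, 0, 0, 0, 0, 0]
# mask -> [1, 1, 1, 0, 0, 0, 0, 0]
```

Every input now has the same shape (so a single precompiled CoreML variant suffices), at the cost of wasted compute on the pad positions — which is exactly the trade-off being discussed.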

You can be sure that no one is more aware of all this than Apple — they published "Deploying Transformers on the Apple Neural Engine" in June 2022[6]. I look forward to seeing what they cook up for developers at WWDC this year!

---

[1] "Use `EnumeratedShapes` for best performance. During compilation the model can be optimized on the device for the finite set of input shapes. You can provide up to 128 different shapes." https://apple.github.io/coremltools/docs-guides/source/flexi...

[2] BertSQUAD.mlmodel (fp16) https://developer.apple.com/machine-learning/models/#text

[3] https://huggingface.co/blog/swift-coreml-llm#optimization

[4] `use_fixed_shapes` "Retrieve the max sequence length from the model configuration, or use a hardcoded value (currently 128). This can be subclassed to support custom lengths." https://github.com/huggingface/exporters/pull/37/files#diff-...

[5] `use_flexible_shapes` "When True, inputs are allowed to use sequence lengths of `1` up to `maxSequenceLength`. Unfortunately, this currently prevents the model from running on GPU or the Neural Engine. We default to `False`, but this can be overridden in custom configurations." https://github.com/huggingface/exporters/pull/37/files#diff-...

[6] https://machinelearning.apple.com/research/neural-engine-tra...

