rudedogg's comments

Nice app, I might give it a try for my video collection.

Would you be willing to share the OS breakdown? I'm primarily interested in macOS and am wondering roughly what % of your sales are for that platform?


I haven't kept statistics, especially since when people purchase they get access to all the OS versions. I'm guessing it's over 50% Windows, but Mac is no less than 20% of the purchases. I don't sell my app through the Apple App Store - it's just whoever stumbles across my website (I've shared it on Reddit a bunch of times over the years, but not aggressively). I think I mostly got lucky with naming - "video hub" is close to "porn hub" and people find it unintentionally :)


These type inference landmines are all over the place with SwiftUI too; I run into them with View/Shape/Color frequently.
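
A minimal sketch of the kind of landmine I mean (illustrative only; the exact diagnostics vary by Xcode/SDK version):

    import SwiftUI

    struct BadgeView: View {
        var isActive: Bool

        var body: some View {
            // This fails to type-check if the ternary mixes Color and
            // LinearGradient directly, because both branches must resolve to
            // one concrete ShapeStyle - and the error often points somewhere
            // unhelpful:
            //
            //   .background(isActive ? Color.green : LinearGradient(...))
            //
            // Workaround: erase both branches to AnyShapeStyle explicitly
            // (requires iOS 15 / macOS 12).
            Text("Status")
                .background(isActive
                    ? AnyShapeStyle(Color.green)
                    : AnyShapeStyle(LinearGradient(colors: [.gray, .black],
                                                   startPoint: .top,
                                                   endPoint: .bottom)))
        }
    }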


Most police bikes I see in the US are Harleys, I don't think they're buying anything out of practicality. If they were, you'd see a lot more BMW police motorcycles.


Michigan State Police somewhat controversially switched to BMW motorcycles in 2012, and actually disbanded the motorcycle unit for a while because of safety issues. Apparently it's returning this year, still using the BMWs.

https://www.freep.com/story/news/local/michigan/2024/02/15/m...


BMW models with direct shaft drive and horizontally opposed twin pots are better for safety in several ways.

Lower centre of gravity, some "extra" leg protection in sideswipes or converging hits on roundabouts (from the pots and pot crash bar protectors), and zero chance of chain wear, with better direct power pickup and engine braking from the shaft.

These are all grounds for some heated debate wherever two wheels gather, but I'll swear by them.


US police buy almost exclusively domestic vehicles.


Not when it comes to motorcycles. BMW and Honda are both extremely common police bikes. LAPD’s motorcycle fleet used to be almost entirely Kawasakis.


That’s awesome. What distro did you go with? Do you have to help maintain them, or have they ever broken?


Windows for the majority of machines (we already own desktop licenses), but I keep 3-4 old laptops with Ubuntu we nicknamed "crap outs" in case someone's PC dies, someone needs a laptop ASAP at a jobsite, or the IT gang needs to fix someone's main PC. Our shop's big label maker and 3D printer machine is currently 100% Ubuntu and is a favorite for a lot of the guys on the floor.


What are your thoughts on using Intel GPUs for deep learning?


Yeah, with a 2-3 year wait to know whether you succeeded or not, I doubt you'd put all your eggs in one basket.


Wow I had no idea FL Studio was written in Delphi.

Anyway, I went pretty deep on SwiftUI from the announcement, and started programming with Delphi 7 / Visual Basic 6. While I enjoy SwiftUI, it has some rough edges still, and it's been 3 or 4 years. Hopefully Apple can put things together this year, especially for macOS apps.

I agree that Delphi is leaps and bounds better than the web technology in popular use today, and every reason I've heard for why (increased DPI, variable screen sizes, etc.) is just a poor excuse.

The web is where the money was, development approaches forked, and now the worse approach happens to be more popular.

I can't decide whether it's sad or funny what Microsoft is trying to do with Blazor. They solved their Desktop UI problems by offloading them to you with a browser (now you have the problems).


I’m a Mac user at home, and I don’t get their AI story/path now that they’ve dropped AMD/Nvidia GPU support with the Apple Silicon transition.

Maybe they’ll manage to get LLMs running well locally with the new low-bit developments? Not my area. But for training/learning it seems like Apple is DOA. They have the same problem as AMD, no one is doing research with their hardware or software.

Intentionally shipping low RAM/unified memory quantities seems short-sighted too. Maybe with a 16GB baseline they could do something special with local LLMs.


I think you are looking at a very narrow use case and deciding that, because they do not make a system you'd be happy with for your niche use, they are DOA. Someone selling just under 6.5 million units of anything seems like the opposite of dead to me. Are there vendors selling more? Of course, but there are also vendors selling less. Not every Mac user cares about AI and training or fine-tuning a local LLM.


Very true, my needs are niche for sure. But I’m thinking more about the near future. AI/LLMs are going to have some general applications that users are going to want, and will become the norm, and I think it’s clear that will shake out soon. Apple is at risk of being left behind because the only people working on that stuff for Apple work at Apple. Hobbyists and researchers are on Linux/Windows for the most part. Software development doesn’t have such a large platform difference; lots of developers use macOS. But ML is different and I think they should care.


> But ML is different and I think they should care.

It’s totally happening this time, I promise, just, like, one more ~~lane~~ model.

I’m sure they do care. I wouldn’t be surprised if they land significant support for in-app processing of models; they’ve already got the chip, dropping in local models is a sensible next step, and it’s close to zero effort for them.

> LLMs are going to have some general applications that users are going to want, and will become the norm

I have yet to see anyone, in my personal or professional circles, use any LLM:

- for more than a week

- for anything more than cutesy trivial things.

I’m sure there are people around stapling models into their toaster, but this is so far from the norm.


Part of Apple's problem is that they're expected to vendor support for third-party stuff. Who accelerates PyTorch or ONNX for Apple silicon, if not Apple?

They've done an okay job of that so far, but their flagship library is diverging pretty far from industry demand. At best, CoreML is a slightly funkier TensorFlow - at worst, it's a single-platform model cemetery. No matter what road they take, they have to keep investing in upstream support if they want Nvidia to feel the heat. Otherwise, it's CUDA vs CoreML, which is an unwinnable fight when you're selling to datacenter customers.
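
For context, the CoreML path from the app side looks roughly like this - a sketch only, assuming a compiled "Classifier.mlmodelc" (a made-up name) is already bundled with the app:

    import CoreML

    func describeBundledModel() throws {
        // Hypothetical model name; Core ML only loads compiled .mlmodelc bundles.
        guard let url = Bundle.main.url(forResource: "Classifier",
                                        withExtension: "mlmodelc") else {
            fatalError("Model not bundled")
        }

        let config = MLModelConfiguration()
        config.computeUnits = .all  // let Core ML pick CPU, GPU, or Neural Engine

        let model = try MLModel(contentsOf: url, configuration: config)

        // Inputs/outputs come from the model's own description, and predictions
        // go through MLFeatureProvider rather than raw tensors - part of the
        // "funkier" feel compared to PyTorch/TensorFlow.
        print(model.modelDescription.inputDescriptionsByName)
        print(model.modelDescription.outputDescriptionsByName)
    }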

I think it's possible for Apple to make everyone happy here by reducing hostilities and dedicating good work where it matters. Generally though, it feels like they're wasting resources trying to compete with Nvidia and retread the Open Source work of 3 different companies.


> an unwinnable fight when you're selling to datacenter customers.

Didn't Apple pretty much throw in the towel in this market simply by choice of form factor for their computers? The sheer desperation of their users wanting a device in this space is shown in the "creative" ways to mount their offerings in a rack.

All of the user-friendly things they've done - shrinking the footprint, making them silent, etc. - are things a data center does not care about. Make it loud with fans to keep things cool so they can run at full load 24/7 without fear of melting down.

So from that alone, we can make the next assumption: Apple doesn't care about competing with CUDA. As long as they can show a chart in an overproduced hype video for a new hardware announcement that has "arrows go up" as a theme, that is ALL they care about.


I mostly agree, which is why I question their strategy of even "competing" at all. The existence of CoreML feels strictly obligatory, as if the existence of PyTorch and TensorFlow spurred FOMO in the C-suite. It's not terrible, but it's also pretty pointless when the competing libraries do more things, faster.

Users, developers, and probably Apple too would benefit from just using the prior art. I'd go as far as to argue Apple can't thread the AI needle without embracing community contributions. The field simply moves too fast to ship "AI Siri" and call it a day.

> The sheer desperation of their users wanting a device in this space is shown in the "creative" ways to mount their offerings in a rack.

Well, you and I both know that nobody is doing that to beat AWS on hosting costs. It's a novelty, and the utility beyond that is restricted to the few processes that require macOS in some arbitrary way. If we're being honest with ourselves, any muppet with a power drill and enough 1U rails can rackmount a Mac Mini.


>I mostly agree, which is why I question their strategy of even "competing" at all.

If it makes their camera "smarter", it's a win. If they can make Siri do something more than "start a timer", then it's a win. If they can have images translate text more accurately, it's a win. There are a lot of ways an on-device AI could help users without the device having to do all of the power-hungry model creation or fine-tuning. They can do that in the mothership, and just push models to the device.
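
The text-recognition half of "have images translate text", for example, is already a few lines on-device with the public Vision API - a rough sketch, not whatever Apple uses internally:

    import Foundation
    import Vision

    // Recognize text in an image entirely on-device; the model ships with the OS,
    // so there's no network round-trip.
    func recognizeText(in imageURL: URL) throws -> [String] {
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate   // slower, higher quality
        request.usesLanguageCorrection = true

        let handler = VNImageRequestHandler(url: imageURL, options: [:])
        try handler.perform([request])

        return (request.results ?? []).compactMap { observation in
            observation.topCandidates(1).first?.string
        }
    }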

Not everyone needs to do AI the way you are trying to do it.


I think that's a mistaken way of viewing it. Apple's failure in the gaming space is entirely a matter of policy; you look over at the Steam Deck and Valve is running Microsoft IP without paying for their runtime. Some people really do get their cake and eat it too.

Any of the aforementioned libraries could make their camera smarter or improve Siri/OCR marginally. The fact that Apple wasted their time reinventing the wheel is what bothers me; they're making a mistake by assuming that their internal library will inherently appeal to developers and compete with the SOTA.

The reason why I criticize them is because I legitimately believe Apple is one of the few companies capable of shipping hardware that competes with Nvidia. Apple is their only competitor at TSMC, it's entirely a battle of engineering wits between the two of them right now. Apple is going nowhere fast with CoreML and Accelerate framework, but they could absolutely cut Nvidia off at the pass by investing in the unified infrastructure that Intel and AMD refuse to. It also wouldn't harm the customer experience, leverages third-party contributions to advance their progress, and frees up resources to work on more important things. Just sayin'.


I think knowing C makes you stand out, and demonstrates you understand how a computer fundamentally works.

https://pll.harvard.edu/course/cs50-introduction-computer-sc... is mostly C too.

I don't know C for what it's worth, but learned a different low-level language and found the experience to be enlightening, even after programming most of my life.


I think the OP in the linked thread is a complete hoax, and all the commenters are experiencing the random phantom/ghost input issues, googling it, and hitting that thread. If you read carefully, they all sound like their watches are receiving random inputs.


So not a hoax then, but scared users who need help understanding their device and its honestly very bad bugs?

Not sure it matters at this point; these people paid $800 and now feel their device is heavily insecure. They need help and an explanation from an official source that they can understand.

