
I think it's so that the fingers holding the device don't obstruct the view or get counted as touch events.

I’d be surprised if thumb rejection/palm rejection isn’t close to perfect by iPadOS 18.

I'm forever triggering the camera app while locking the device on my iPhone 15 Pro Max. Every day, regularly.

I click on ads by mistake on my iPhone 15 Pro Max. Even the iPhone SE was better, with its bottom bezel. Thumb detection on iOS is very bad.

You're holding it wrong.

Yeah, every hour on the YouTube app. Probably a Google feature though. Ha

I'm always managing to tap the top of the phone, resulting in either

- opening some app that recently used location services

Or worse

- making whatever app I'm in jump to the top of its page with no way to get back to where I was short of doing a load of scrolling


How is that possible? The camera icon is in the bottom left corner of the screen and you use the physical lock button on the side to lock the iPhone.

Don’t shame the man with the extra digit projecting from his thenar eminence.

If you reach for the top left corner your thumb will naturally come into contact with the bottom right corner of the screen, assuming you are holding the device one-handed (in your right hand).

I'm trying to picture this, but I can't. If I hold and lock my iPhone with my right hand, I press the lock button with my thumb. If I try to reach the top left corner, I either do that with my left thumb (mostly) or index finger (sometimes), or with my right thumb (a very awkward movement on a Pro Max). In none of these cases does my right thumb come into contact with the screen, or even close to it. Maybe that's because I use the back of my pinky finger to hold the phone in place.

I think he meant your palm hits the screen, which does happen to me sometimes.

On the big right-hand stretch with the thumb into the top-left area, the pad below your thumb on your palm can make contact with the screen.


You can also trigger the camera by swiping right-to-left on the lock screen or from the notification pull-down.

How small are your hands?

Consider the possibility that the person here is not an adult Western man.

On top of that, just consider that there may be someone with a different body shape.

There is a really weird vibe that some folks put out trying to body shame over the internet.


I could have phrased it better but I was trying to be succinct.

People with smaller hands are likely to struggle to control the Pro Max one-handed.


Honestly I'm not sure you could have phrased it worse!

> People with smaller hands are likely to struggle to control the Pro Max one-handed.

Couldn't agree more.


"The phone is fine, it's your genetics that are the problem here"

I guess without /s people don't get the sarcasm these days.

Funny responses, but I’d say my hands are big.

Same.

Doesn't work well for me. My thumb gets too close to the bottom, and then scrolling becomes zooming. All the time.

When I'm on boardgamearena in bed with my iPad Pro on my belly, I often trigger touch events at the bottom of the screen from the folds of my shirt.

Well, it's not that they couldn't do it, but the iPad has always had bezels, and apps expect bezels rather than thumb rejection. They could be built with an on-screen safe area, but they're not.

They extract value from an open source project, use the resources/bandwidth of plugin repositories, position(ed) themselves as WordPress affiliated (the branding can easily be understood as WPengine being core WordPress), and contribute nothing back. It is the “socialize the losses, privatize the profits” of open source.


> use the resources/bandwidth of plugin repositories

As does every single other host that offers WordPress, and every user.

> position(ed) themselves as WordPress affiliated (the branding can easily be understood as WPengine being core WordPress)

One: "WP" was explicitly allowed by WordPress for referring to WordPress. Matt yoinked this after all of this started, in the last two weeks or so. He also tried to make it retroactive.

Two: nominative fair use says that if you factually offer WordPress hosting (or MySQL hosting, or whatever), you are allowed to say so. It doesn't mean you are maliciously "positioning yourself" as "affiliated" in any way, shape or form.

> and contribute nothing back

Not true at all, despite Matt's venom. They contribute to and maintain several of the most popular plugins, they contribute to the codebase (just not to Matt's liking), and they sponsor conferences and community events. In fact, this all started around the time they sponsored a WordPress conference to the tune of $75,000 and were then banned from attending it. That's odd, because supposedly WordPress (the open source project) and the Foundation are independent (per all of their own filings with regulatory bodies and the IRS), yet they were banned because they were in a dispute with Automattic (CEO: Matt), so the WordPress Foundation (President: Matt) decided so. To add insult to injury, they were banned, but the Foundation kept the sponsorship money.


This looks really good! What is your process for getting such high-quality LoRAs?


Thank you!

A reasonable number of training images (50 or so), and then I train for 2,000-ish steps for a new style.

Many of them work well with Flux, particularly if they're illustration-based. Some don't seem to work at all, so I didn't upload those!


How long does this take, and on what equipment? It's amazing to me that you can do this from just 50 images, I would have thought tens of thousands.


It's very impressive. I aim for around 50 images if I'm training a style, but only 10 to 20 if training a concept (like an object or a face).

I have a MacBook Air so I train using the various API providers.

For training a style, I use Replicate: https://replicate.com/ostris/flux-dev-lora-trainer/train

For training a concept/person, I use fal: https://fal.ai/models/fal-ai/flux-lora-fast-training

With fal, you can train a concept in around 2 minutes and only pay $2. Incredibly cheap. (You could also use it for training a style if you wanted to. I just found I seem to get slightly better results using Replicate's trainer for a style.)
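If you'd rather script it than use the web UI, here's a minimal sketch with Replicate's Python client. The version hash is a placeholder, and the input field names are my reading of the trainer's API tab, so double-check them there:

    # pip install replicate; assumes REPLICATE_API_TOKEN is set
    import replicate

    training = replicate.trainings.create(
        # placeholder: copy the current version hash from the trainer's API tab
        version="ostris/flux-dev-lora-trainer:<version-hash>",
        input={
            "input_images": "https://example.com/style-images.zip",  # ~50 images, zipped
            "steps": 2000,               # ~2,000 steps for a new style, as above
            "trigger_word": "MYSTYLE",   # token you'll later use in prompts
        },
        destination="your-username/your-style-lora",
    )
    print(training.status)  # poll this, or watch the Replicate dashboard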


$2 for 2 minutes? Can't you get an hour for less than $2 using GPU machines from providers like Runpod or AirGPU? I found Replicate and fal a bit expensive after 10 minutes of prompting.

I have not used Runpod or AirGPU, and I'm not affiliated with them.


Yes, renting raw compute via Runpod and friends will generally be much cheaper than renting a higher-level service that uses that compute, e.g. fal.ai or Replicate. For example, an A6000 on fal.ai is a little over $2/hr (they only show you the price per second, perhaps to make it more difficult to compare with ordinary GPU providers); on Runpod, an A6000 is less than half that, $0.76/hr in their managed "Secure Cloud." If you're willing to take some risk of boxes disappearing, and don't need much security, Runpod's "Community Cloud" is even cheaper at $0.49/hr.

Similar deal with Replicate: an A100 there is over $5/hr, whereas on Runpod it's $1.64/hr.

And if you use the "serverless" services, the pricing becomes even more astronomical; as you note, $1/minute is unreasonably expensive: that's over 20x the cost of renting 8xH100s on Runpod's "Secure Cloud" (and 8xH100s are extreme overkill for finetuning image generators: even 1xH100 would be sufficient, meaning it's actually 160x markup).
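For concreteness, here's the markup math above in code (rates rounded from the figures quoted; treat them as approximate):

    # Rough markup math, USD per GPU-hour, from the rates quoted above
    a6000_fal, a6000_runpod = 2.10, 0.76      # fal.ai vs Runpod Secure Cloud
    a100_replicate, a100_runpod = 5.04, 1.64  # Replicate vs Runpod

    print(f"A6000 markup: {a6000_fal / a6000_runpod:.1f}x")     # ~2.8x
    print(f"A100 markup: {a100_replicate / a100_runpod:.1f}x")  # ~3.1x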


Wow, fantastic, thanks! I thought it would be much, much more expensive than this. Thanks for the info!


Happy to help! It's a lot of fun. And it becomes even more fun when you combine LoRAs. So you could train one on your face, and then use that with a style LoRA, giving you a stylised version of your face.

If you do end up training one on yourself with fal, it should ultimately take you here (https://fal.ai/models/fal-ai/flux-lora) with your new LoRA pre-filled.

Then:

1. Click 'Add item' to add another LoRA and enter the URL of a style LoRA's safetensors file. (With Civitai, go to any style you like and copy the URL from the download button; you can also find LoRAs on Hugging Face.)

2. Paste that safetensors URL as the second LoRA, remembering to include the trigger word for yourself (you set this when you start the training) and the trigger word for the style (it's listed on the Civitai page)

3. Play with the strengths of the LoRAs if you want it to look more like you or more like the style, etc. (A scripted version of these steps is sketched below.)
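Here's that scripted sketch, using fal's Python client. The argument names follow my reading of the fal-ai/flux-lora model page, so treat them as assumptions; the URLs and trigger words are placeholders:

    # pip install fal-client; assumes FAL_KEY is set in the environment
    import fal_client

    result = fal_client.subscribe(
        "fal-ai/flux-lora",
        arguments={
            # include both trigger words: yours and the style's
            "prompt": "portrait of MYFACE, SNLSTYLE title card photo",
            "loras": [
                {"path": "https://example.com/my-face-lora.safetensors", "scale": 1.0},  # your LoRA
                {"path": "https://example.com/style-lora.safetensors", "scale": 0.8},    # style LoRA
            ],
        },
    )
    print(result["images"][0]["url"])  # output shape per the model page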

-----

If you want a style LoRA to try, this one of SNL title cards I trained actually makes some great photographic images. https://civitai.com/models/773477/flux-lora-snl-portrait (the download link would be https://civitai.com/api/download/models/865105?type=Model&fo...)

-----

There's a lot of trial and error to get the best combinations. Have fun!


Have you tried img2text when training a style?

I want to make a LoRA of Prokudin-Gorskii photographs from the Library of Congress collection, and they have thousands of photos, so I'm curious whether that's effective for auto-generating captions for the images.


It's funny you should ask. I recently released a plugin (https://community-en.eagle.cool/plugin/4B56113D-EB3E-4020-A8...) for Eagle (an asset library management app) that allows you to write rules to caption/tag images and videos using various AI models.

I have a preset in there that I sometimes use to generate captions using GPT-4o.

If you use Replicate, they'll also generate captions for you automatically if you wish. (I think they use LLaVA behind the scenes.) I typically use this just because it's easier, and seems to work well enough.
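If you do want to roll your own captioning, here's a minimal sketch with the OpenAI Python client (the prompt wording is illustrative, not the preset mentioned above, and the image URL is a placeholder):

    # pip install openai; assumes OPENAI_API_KEY is set
    from openai import OpenAI

    client = OpenAI()

    def caption(image_url: str) -> str:
        # Ask GPT-4o to describe one training image
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Write a one-sentence training caption for this image."},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }],
        )
        return resp.choices[0].message.content

    print(caption("https://example.com/photo-0001.jpg"))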


That's awesome! Thank you for the Replicate link too. I didn't know they also did LoRA training. They've been kind of hitting it out of the park lately.


Thanks for all this! I had created a SD LoRA of my face back in the day, time for another one!


Awesome! :)


In early photography history there was a pictorialist movement, where people used special soft-focus lenses to introduce more of a painterly quality. These lenses, such as the Rodenstock Imagon, still exist and are sought after.


What are some of the most helpful techniques and add-ons that you use?


LoRAs in general have been a game changer. I especially like the VantaBlack LoRAs.

Also, ControlNet is so useful.

I also generate at 512x512 with 1.6 and then upscale to 1024 with iterative upscaling, which adds a ton of detail. Then I can easily upscale to 4K.

If I do the iterative upscale to 4K right away, it takes like 20h on my M1. But that adds even more detail.

And there are negative embeddings, which are great, e.g. badhandsv2.

So those four have been the most impactful for me.
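As a rough illustration of that generate-then-upscale loop (not the commenter's exact setup; the model ID, strength, and pass count are my own illustrative choices), here is a sketch using Hugging Face diffusers:

    # pip install diffusers transformers accelerate
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    device = "mps"  # Apple Silicon; use "cuda" on an NVIDIA box
    model = "runwayml/stable-diffusion-v1-5"  # illustrative checkpoint
    txt2img = StableDiffusionPipeline.from_pretrained(model).to(device)
    img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model).to(device)

    prompt = "a moody forest cabin, detailed illustration"
    image = txt2img(prompt, width=512, height=512).images[0]  # base 512x512 render

    # Each pass: resize 2x, then re-denoise at low strength to add detail.
    for _ in range(1):  # one pass: 512 -> 1024; add passes to go further
        image = image.resize((image.width * 2, image.height * 2))
        image = img2img(prompt=prompt, image=image, strength=0.35).images[0]

    image.save("upscaled.png")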


It is by the same author who created OtterCast[0], and it was apparently developed during GPN[1], a Chaos Computer Club event in Karlsruhe, Germany, that ended two days ago.

[0] https://github.com/Ottercast/OtterCastAudioV2

[1] https://entropia.de/GPN22


I read the name and thought "of course, who else?". They were also involved in hacking "hoverboards", for instance fitting the motors onto bobby cars.


I'll repeat my comment from the previous discussion, as ultrasound can be used to extract many things quickly:

Alternatively, you could use a (quite affordable) ultrasonic machine designed for gentle cleaning of jewelry, dentures, glasses...

I've used one to extract fragrance from biological material for an artistic project[0], and it worked really well. Instead of having to wait a few weeks for a tincture to finish, you put the same tincture (alcohol and the material you want to extract fragrance from) into a plastic bag for just 15 minutes. Sure, it doesn't smell quite the same, but the speed is often worth it. I've even heard about some guy trying to turn vodka into whiskey with an ultrasonic machine and wood chips.

There are quite a few ultrasonic machines on the market. I've tried EMAG and multiple Chinese no-name machines that are just as powerful but cheaper. Sadly, the no-name machines are quite a bit louder; basically, you can't stay in the same room while one is running. Still, they all work well for this kind of fast and dirty extraction.

[0] https://rybakov.com/blog/smelling_cz/ (second half of the page)


A long while back, I looked into how viable ultrasonic acceleration of the "aging" of Dit Da Jow (https://en.m.wikipedia.org/wiki/Dit_da_jow) would be, and came across a paper looking into the same.

If I remember correctly, while ultrasonic treatment did a decent job of quickly extracting the various chemicals from their carriers, there were some caveats. The ratio of chemicals extracted could differ from normal, and it only did a partial job of accelerating the formation of very complex secondary compounds that form when the whole mix is properly aged.

So the difference in smell you found could be some chemicals being preferentially released over others, and/or the lack of those secondary chemicals.


Indeed, it is after all more of a mechanical extraction than a chemical one. I remember seeing tiny bits of grass floating in the tincture after extracting the scent of fresh grass - the cavitation bubbles practically shattered the cell structure of the plant, and a dusty green soup was floating around the blades of grass.


https://www.youtube.com/watch?v=YlQT4ptwLKs might be interesting - it's been a while since I watched it, but IIRC this video covers a variety of ultrasonic infusion experiments. According to the autogenerated timeline in search results, it includes coffee as well.


Why would that be considered a flaw? The rationale behind it is the same as behind the inability to name your child XAe or any random assortment of letters – to not make the future difficult for your child, even if you as a parent might want it. It is also one of the last tools that prevents parallel societies from forming. Not that German schools are ideal - they are a bit stuck in the past - but the common experience they create in each new generation is valuable.


Because my children are likely to have ADHD like I do, and public schools are hell for us. When I was a child I cried every day, hated my parents, and didn't understand what I had done to deserve being forced into that prison. I am not going to put my children through that. I don't care about your social engineering goals; I care about a happy childhood for my children.


You might consider alternative schools like Waldorf or Montessori for your kids. It's not like there is only one kind of school available.


These legally operate as groups of homeschooled children where I live, and there actually is only one kind of "school". Yes, I'm considering that. One thing I definitely don't want is an "institution" that feels like it's entitled to do or require children to do whatever it wants, even if it has generally enlightened opinions and methods. If my child can't come at all for half a year, that's what will happen.


I agree, and in some countries it does feel like your children get 'institutionalized'. Still, even there, there are differences between schools. What my parents did was visit the schools, talk to the teachers, see whether they had a good or a bad feeling, and insist on getting me into a specific class. It does work, but of course all the schools and classes should ideally be good.


That's a good thing to do, for sure. The problem is that my parents did that, and visited more than once, but every time they did, the teachers put up an act of being larger than life and the most enlightened experts in child development under the sun. Unfortunately, my parents believed them, because the act was so good, and because years of life under communism had taught them that going against institutions leads you to bad places.

I don't have the latter problem - I got the opposite one; sometimes I hate the system a little bit too much - but still, remembering how the teachers used to threaten us if we didn't put on our best act, meaning never speaking out about the lies they told when other parents visited, I can't really trust a few visits to give me an accurate impression.


Diversity of thought helps solve problems. It’s useful for the state to allow a small % of experimentation to reduce risk.


> It is also one of the last tools that prevents parallel societies from forming.

Consider German history. They had the Nazis in power. Half of Germany had the Communists in power. So, you don't want Nazis and Communists home schooling their kids - you want those kids in regular school, where they can be taught about democracy and human rights. But if the Nazis or Communists are in power, you absolutely want to be able to homeschool, rather than have that garbage beat into your kids day after day at school.

So given their history, I can absolutely see why they should be afraid of home schooling. But I can also see why they should want the freedom to do it.


From my experience attending four schools in two countries, it was often the quality and personality of the teacher that influenced what I was learning, not so much the curriculum. I see very little chance of radical thought being taught at German schools unless 70%+ of school teachers get replaced.


Or you could use a (quite affordable) ultrasonic machine designed for gentle cleaning of jewelry, dentures, glasses...

I've used one to extract fragrance from biological material for an artistic project[0], and it worked really well. Instead of having to wait a few weeks for a tincture to finish, you put the same tincture (alcohol and the material you want to extract fragrance from) into a plastic bag for just 15 minutes. Sure, it doesn't smell quite the same, but the speed is often worth it. I've even heard about some guy trying to turn vodka into whiskey with an ultrasonic machine and wood chips.

There are quite a few ultrasonic machines on the market. I've tried EMAG and multiple Chinese no-name machines that are just as powerful but cheaper. Sadly, the no-name machines are quite a bit louder; basically, you can't stay in the same room while one is running. Still, they all work well for this kind of fast and dirty extraction.

[0] https://rybakov.com/blog/smelling_cz/


This is what I was thinking. If it's mostly fine-scale mechanical agitation one is after, then there are many ways to do it, an ultrasonic cleaning machine being the closest in spirit to their approach.


Really nice! I did a quick test some weeks ago, and apparently using Google Street View images to generate Gaussian splats works quite well: https://www.instagram.com/p/C2xozxVAdpG/

(I just took screenshots of Street View rather than using the API, so the ground is falling apart a bit because of the street names.)


Agreed, there's tons of potential here. However, we may need to get our own geo data for this: Google was very clear during the API quota approval process that 3D tiles cannot be used for ML training or generation of derivative assets like Gaussian splats, at least with this specific product offering.


Have you looked into Mapillary? Could be a viable alternative, even though the image quality varies a lot.


Great idea... could even be generated "on-demand" only when the user requests high fidelity for a specific area!

