
Climbing route setting (when done at a high level) is constantly in conversation with itself. Many climbers don't realize that there are trends or fads in movement styles that sweep across the industry. This has only picked up steam in an era of social media.

Just like movie dorks will happily spend hours explaining how an individual shot in a movie is actually an insider reference to another movie, and as a result a statement of intent for the movie as a whole, professional route setters will talk your ear off about the way one of their problems embraces or rejects specific kinds of movement trends of the last 6 months.

That intentional rejection is interesting. Many route setters, especially for competitions, are in constant search of novelty. One kind of perfect problem is something that looks confusing and impossible, up until you see it done, at which point it seems almost obvious. It's the feeling of solving a sudoku. But critically, they want climbers to be initially confused.

I wonder if AI might actually be better than humans at sequencing these kinds of problems. Humans bring so much context and experience and expectation to the process that we are easily tricked. AI just looks through a few terabytes of video and says "What about this?".


> I wonder if AI might actually be better than humans at sequencing these kinds of problems

Probably a deeply unpopular take here, but without knowing anything about climbing routes, I'm gonna say no. I'm not saying that they won't have excellent quality output that might even solve problems that human output can't, but the process of creating something is meaningful, even commercially. Surely this will be useful in some respects, but I just don't buy the idea that humanity is destined to passively consume automated algorithm-generated utility products-- especially creative ones-- no matter how smooth, cheap, and clever they might be.


Well, the tricky bit here is that the route setter, a human, is the one actually solving the problem. So the problem as set is (and must be) a human creation first. This is especially true in outdoor climbing, where the first ascent process might involve the installation of anchor fixtures, or the removal of poorly-secured features for safety. You'd need some pretty wild sensor suites to correctly differentiate between a really good hold, and a dangerous flake that will peel off the wall if the slightest force is applied to it. The AI just generates potential solutions to the problem once the holds are found/placed. Certainly, there are some interesting conversations about how satisfying it is to solve a Rubik's cube using somebody's algorithm vs. just figuring it out, but it's not like the computer is inventing a Rubik's cube.

Embedded in your comment is the idea that AI might create boulder problems or routes in climbing gyms, and the human (or eventually robot) just follows that plan in bolting the holds to the wall. I expect that for a long time, AI-generated climbing routes would rarely be good: they would consistently be physiologically impossible, feature uninteresting movement, or be too easy.

It's easy enough to shotgun holds onto the wall based on some imagined sequence; the real skill of route setting is (as the GP pointed out) figuring out what's physically possible and also fun and challenging.


> Embedded in your comment is the idea that AI might create boulder problems or routes in climbing gyms, and the human (or eventually robot) just follows that plan in bolting the holds to the wall.

This would follow the exact path image GenAI evolved through.

Step 1: Teach a model to recognize objects from noisy data.

Step 2: Reverse-feed that model random noise and force it to hallucinate that noise back into likely objects.

As there's probably physics simulation at some point in this particular scenario, there'd probably also be step 3 of simulating a climb through the generated path to validate feasibility / specific qualities.

It doesn't sound impossible.
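
Roughly, a minimal sketch of that generate-then-validate loop (everything below is a hypothetical stand-in for illustration, not a real climbing or physics library):

    import random
    from dataclasses import dataclass

    @dataclass
    class SimResult:
        feasible: bool         # did the simulated climber top out?
        interest_score: float  # rough proxy for "fun / novel movement"

    def propose_route(wall, target_grade):
        # Step 2: a generative model hallucinates a candidate hold layout
        # and sequence from the wall scan (stubbed out here).
        return {"wall": wall, "grade": target_grade, "holds": random.randint(6, 12)}

    def simulate_climb(route):
        # Step 3: physics-simulate a climber on the candidate and score it
        # (stubbed out with random numbers).
        return SimResult(feasible=random.random() > 0.5,
                         interest_score=random.random())

    def generate_valid_route(wall, target_grade, max_tries=100):
        # Keep sampling candidates until one survives the feasibility check.
        for _ in range(max_tries):
            candidate = propose_route(wall, target_grade)
            result = simulate_climb(candidate)
            if result.feasible and result.interest_score > 0.7:
                return candidate
        return None  # nothing acceptable within the budget

    print(generate_valid_route("spray_wall_scan.ply", "V5"))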


It's certainly not impossible, but that physics simulation is the biggest obstacle I can see.


Spotting a refugee from rockclimbing.com on hacker news was not on my bingo card for the day. But I guess if I'm here (writing novels about route setting) then I shouldn't be surprised other people are too.


Outside of a short-lived usurper on instagram selling pet food dishes, I am still the only petsfed on the internet, since 1997.

What's really wild to me is how somebody would recognize a mid-tier poster from a website I thought effectively defunct for nearly 10 years now.


It just goes to show how impactful online communities are capable of being. rockclimbing.com was in its heyday right as I was discovering climbing. I was a bored kid constructing my entire identity around climbing and there was no other place to do that outside of the gym. No Mountain Project. No YouTube. No social media. I spent a lot of hours lurking those forums. There are only a handful of users I could still name, but I bet I would recognize a lot of them.


> You'd need some pretty wild sensor suites to correctly differentiate between a really good hold

Ah, I see.

> The AI just generates potential solutions to the problem once the holds are found/placed

Yeah, and I think that's really going to be the sweet spot for generative tools for the foreseeable future.

> It's easy enough to shotgun holds onto the wall based on some imagined sequence; the real skill of route setting is (as the GP pointed out) figuring out what's physically possible and also fun and challenging.

Right, right. I have a feeling that the main obstacle to making a more convincing substitute is having less data to work with than, say, paintings and photography, which are certainly not less nuanced than this creative task. But as I said, a lot of people care about how something was made, too. I'll bet that's going to be a much bigger factor, at least in marketing, than many realize in the near future.


> the process of creating something is meaningful, even commercially.

That's true, but why does it mean that the answer to the more or less objective question "will AI actually be better than humans at sequencing these kinds of problems?" is no? (As stated it's not really objective, but one could easily come up with metrics like, say, total time to a correct solution, or time spent observing the route or other climbers, or ….) One can imagine other, related questions that are less objective (like "will it be a good idea to integrate this AI assistance into climbing competitions?"), but, to me, the answer to the (implicit) original question has nothing to do with whether or not the activity is meaningful, or with humans' destiny one way or the other.


Sure, they may well be quantitatively better. If you were to create metrics to measure the number of mistakes or weird spots or overly annoying things, it's quite possible that the output from the algorithm could score better than human output, and the throughput would obviously be incomparable. But whether or not something is qualitatively better is far more subjective-- it's influenced by our culture, our experiences, and everything else that creates the lens through which we see the world. Something's origin absolutely affects the way people experience it, be it a physical object, story, experience, etc. Don't get me wrong-- I realize there's real value in affordable quantity with mediocre quality-- how many restaurants in the world are McDonald's? But then again, how many restaurants in the world aren't McDonald's? If McDonald's could sell you a ribeye comparable to Capital Grille, I'd be astonished if it put a dent in Capital Grille's bottom line. Applebees, however...


I think your arguments against AI being better at creating routes reduce to:

1. You don’t like what that would mean about the destiny for humanity.

2. A human making it makes it inherently better.

3. If it’s lower quality, it’s lower quality.

I get why this would lead to strong beliefs. But these arguments aren’t very convincing.


I think you probably need to re-read what I wrote. I'm not against AI at all-- I use it all the time. I just don't think it's going to be qualitatively better than human creative output because how something was made matters to people. I also think the tech world thinks they're a lot better at judging creative output than they are.


Right, those are important considerations, but I don't think that they're the same consideration. If you ask whether a celebrity chef is better at cooking a certain dish than I am, then the answer (no matter the dish, for I'm not much of a chef) is almost certainly "yes." If I answered instead no, that the provenance of a dish matters, that the celebrity-chef culture is bad for fine dining, and that my wife would rather have the dish made by me, then I think I'd be regarded as missing the point of the question, even if my objections were true on their own.


Well, I think that celebrity vs. husband is not a good analog for human vs. generated. I also think that in this case, origin has broader connotations than provenance, in that it includes context. What if the celebrity chef was your wife's brother? What if you had a degenerative neurological disease? What if it was the dish you cooked for her on your second date? Anything can seem very cut and dried if you impose artificial restrictions and examine it in a vacuum. And, because you can't easily define and quantify things like context, the engineering and tech worlds tend to disregard them because they don't really factor into the engineering equation... and that's why designers exist.


You mean AI might be better at setting or at reading? I feel like your question is getting misinterpreted.


One of the last bastions of ugly/functional design I encounter on a regular basis is restaurant/bar Point of Sale systems. Look over the host's shoulder at a busy restaurant and you'll see something that looks like it just woke up from a nap it took in 1995. Do an image search for "restaurant POS" and you'll see what I mean.

Even this is changing, though. Online SaaS POS platforms have to be "beautiful" and "modern" to sell to startup restaurateurs, so they're slowly just becoming fashionable websites. But in most large/busy/established restaurants, you'll see people running software that would make an art school grad weep.


I think I know what you mean. I guess one reason could be what a restaurant, especially an established one, prioritizes.


The idea of incorporating actual hold data and "recognizing" specific holds is interesting, but I'm not sure it completely solves the problem.

The "Boss" from Pusher is arguably the most famous climbing hold ever made. For a decade or more, every gym had one, but they were all unique. Lots of them had micro chips that became critical to usage of the hold. Some had decent texture and some were glassy smooth from years and years and years of use. A lot of the accidental variation in new holds has gone away as the industry has standardized around a handful of industrial fabricators like Aragon, but even over the course of a single indoor boulder problem's life, the accumulation of chalk, sweat, and shoe rubber can have a significant impact on how a hold climbs.

I guess the real question is, do these changes just make routes harder or do they make them fundamentally different? Do they actually change the set of moves that constitutes the easiest way to the top? To be honest, I'm not entirely sure. But it's something interesting to think about.


Exactly, holds will evolve as they get used and more polished, even indoors. Climbing a Moonboard with a new set of holds is quite different from climbing on one with older, more polished holds, even if it's the exact same problem and the same holds.

It's an interesting project and it could be fun to watch, but it's completely useless.


Couldn't you reverse-reason about that?

To me, the customer here would be climbing gyms, offering a service to climbers.

   1. Set up camera on routes
   2. Record all climbs
   3. Reason through hold details
   4. Generate potential movements
   5. Show climbs vs ghost movements
   6. Feedback to tune model
3 being accomplished by reasoning "if a movement should be possible using the identified hold, but no one successfully does it, the hold must be misidentified or have different properties."
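
A toy sketch of what that backwards reasoning could look like (all names and numbers below are made up for illustration):

    from collections import defaultdict

    def update_hold_beliefs(attempts, prior=0.8, learning_rate=0.1):
        # attempts: (hold_id, move_predicted_possible, move_succeeded) tuples.
        # Returns an estimated "usability" score per hold in [0, 1].
        usability = defaultdict(lambda: prior)
        for hold_id, predicted_possible, succeeded in attempts:
            if predicted_possible and not succeeded:
                # The model thought the move was on, but nobody does it: the hold
                # is probably worse (greasier, smaller) than its label implies.
                usability[hold_id] -= learning_rate * usability[hold_id]
            elif succeeded:
                usability[hold_id] += learning_rate * (1.0 - usability[hold_id])
        return dict(usability)

    observed = [
        ("boss_sloper", True, False),  # predicted possible, never landed
        ("boss_sloper", True, False),
        ("jug_12", True, True),
    ]
    print(update_hold_beliefs(observed))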


But what is the point? Finding the optimal movements that are needed to complete a climb is not useful if you are not strong enough to execute it.


The point in this thread seemed to be "real world holds have different properties, and that defines possible approaches to holds."

To which I pointed out that, with enough data, you could reason backwards to figure out their properties.

Assuming that's solved, if the question is "What is the point?" then I'd answer the same point as golf swing analysis -- structured comparison feedback for continual improvement.

"Have you thought about trying X move at Y point?" or "You're trying X move at Y point, but here's how you differ from someone successfully doing it" both seem useful feedback.

And that's essentially what's manually generated now, by someone watching and then providing feedback.

With regards to strength, hell, if you wanted to get fancy you could also deduce a specific user's strength, comparing their moves against others' moves on the same features.
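
To make that concrete, here's a toy sketch of the "how you differ from someone successfully doing it" comparison, assuming body keypoints have already been pulled from the video (all data below is invented):

    import numpy as np

    def attempt_divergence(user_track, reference_track):
        # Mean per-frame distance between a user's hip trajectory and a
        # successful climber's, truncated to the shorter attempt.
        n = min(len(user_track), len(reference_track))
        user = np.asarray(user_track[:n], dtype=float)
        ref = np.asarray(reference_track[:n], dtype=float)
        return float(np.linalg.norm(user - ref, axis=1).mean())

    # Hypothetical hip positions (x, y) over a short attempt.
    user_hips = [(0.0, 0.0), (0.1, 0.3), (0.1, 0.5), (0.2, 0.6)]
    ref_hips = [(0.0, 0.0), (0.2, 0.3), (0.3, 0.6), (0.3, 0.9)]

    print(f"divergence: {attempt_divergence(user_hips, ref_hips):.2f}")
    # larger = the attempt differs more from the successful climb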


Huh. I recognize it but didn't know its name. (I don't know any of the names.) Route setting sounds fascinating.


It would be really interesting to build an instance of this model trained on world cup footage for a few reasons.

- Like all of these things, your training data matters and the internet is awash with videos of people climbing badly. A lot of people specifically post "I can't climb this, what am I doing wrong?" videos. World cup climbers are, by the nature of the competition, extremely talented and technically proficient climbers. Even when they fail, they fail in smart interesting ways.

- There's lots of high-quality video footage out there. Heck, the problems are even set with visual clarity in mind, which would help when parsing that footage. There's potentially enough video to train instances on individual climbers. You could run side-by-sides like "How would Tomoa climb this and how would Janja climb this?".

- World cup problems are stylistically distinct. They involve lots of moves "typical" climbers will never ever encounter. Many climbers will look at a typical gym problem and think "I have an idea of how to climb this" but will look at a world cup problem and just think "????????". An app that told you how a problem like that should be climbed might be useful.

There are drawbacks too.

- World cup climbers are outliers, whose physical ability (strength, flexibility, etc.) gives them access to kinds of movement that other climbers just don't have. No amount of "knowing the sequence" will get me up a climb that requires a full bat hang (look it up) because I just don't have the ankle strength to do the movement.

- World cup "style" is only commonly used at high level comps and in very large commercial gyms. It's probably not extremely relevant to a typical climbing session.

- World cup problems are very hard. Mostly v10 and up? It would be hilarious to watch a model trained on genetic monsters crushing the world's hardest boulder problems try to tell a doughy office worker (me) how to climb v2.


This was exactly my thought. See https://www.instagram.com/p/Cvg3bJkp30a/ on a super-hard slab problem at Innsbruck 2023 (Sorato Anraku couldn't do it, so you know it's hard...)

