Hacker News

> I'm beginning to think that the best way to approach a problem is by either not being aware of or disregarding most of the similar efforts that came before. This makes me kind of sad, because the current world is so interconnected, that we rarely see such novelty with their tendency to "fall in the rut of thought" of those that came before. The internet is great, but it also homogenizes the world of thought, and that kind of sucks.

I think this is true only if there is a novel solution that is in a drastically different direction than similar efforts that came before. Most of the time when you ignore previous successful efforts, you end up resowing non-fertile ground.






Right, the other side is when you end up with rediscoveries of the same ideas. The example that comes to my mind is when a medical researcher rediscovered the trapezoidal rule for integration[1].

[1]: https://fliptomato.wordpress.com/2007/03/19/medical-research...
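For context, the rediscovered rule just sums trapezoid areas between adjacent sampled points. A minimal sketch in Python (function name and sample data are mine, for illustration):

```python
def trapezoid_auc(xs, ys):
    """Approximate the area under a sampled curve by summing the
    area of the trapezoid between each pair of adjacent points."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))

# Area under y = x on [0, 4], sampled at integers; exact answer is 8.
print(trapezoid_auc([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # → 8.0
```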


That's not really a problem.

On one hand, it shows the idea is really useful on its own.

And on the other hand, it shows that currently forgotten ideas have a chance of being rediscovered in the future.


It is not a problem if you are a student learning how to solve problems. Solving previously solved problems is often a good way to learn: because a solution exists, you know you are not attacking something unsolvable, and your teacher can guide you if you get stuck.

For real-world everyday problems, it is normally either an application of already solved theory or it isn't worth working on at all. We still need researchers to examine and expand our theory, which in turn allows us to solve more problems in the real world. And there are real-world problems we pour enormous effort into solving despite lacking theory, but these areas move much more slowly than the far more common application of already solved theory, and so are vastly more expensive. (This is how we get smaller chip architectures, but that is a planet-scale problem to solve.)


If I have seen what others have seen, it is by replicating the work of giants

I agree: even if someone spends their time "rediscovering" an existing solution, I think the learning experience of coming up with a solution without starting from the current best one is really valuable. Maybe that person doesn't reach the local maximum on that project, but having a really good learning experience maybe enables them to reach a global maximum on one of their next projects.

If I want some novel ideas from a group of people, I'm going to give them the framework of the problem, split them into groups so that they don't bias each other, and say: go figure it out.


It is, because you are wasting time reinventing the wheel. Also, if something is already well researched, you might miss intricacies, traps, optimizations, etc. that previous researchers have stumbled upon.

It isn’t necessarily “wasted” time. There are more ways to look at it, as well as 2nd order and 3rd order effects (and so on).

It’s a powerful skill to be able to try to solve things from first principles. And it’s a muscle you can strengthen.

It would be a bit silly to never look anything up, but it isn’t so black and white.


You need to be able to do both.

Only reading the existing literature is not good enough.

The capacity to create ideas is also something that needs to be practiced.


This is a good example of how the most obvious intuition can be wrong, or at best incomplete.

I find a lot of the time concepts and patterns can be the same in different fields, just hidden by different nomenclature and constructed upon a different pyramid of knowledge.

It's nice we have a common language, mathematics, the science of patterns, to unify such things, but it's still going to be a challenge because not everyone is fluent in all the various branches of mathematics.

It's even mind-blowing how many ways you can approach the same problem with equivalencies between different types of mathematics itself.


I think that shows how great the trapezoidal rule is. I feel like this is brought up so many times that it is now used to make fun of people. It is 18 years old at this point.

I mean, it sort of deserves being made fun of. 18 years ago Google existed, surely you'd search for "area under a curve" before going through all the effort of writing a paper reinventing integrals?

Edit: actually the paper was written in 1994, not sure what the "18 years" was referring to. But still, peer review existed and so did maths books... Even if the author can be excused somewhat (and that's already a stretch), peer reviewers should definitely not let this fly.


Unfortunately, it is quite common to see serious mathematical issues in the medical literature. I guess it's due to a combination of math being essential to interpreting medical data and trial results, and most practitioners not having much depth of math knowledge. Just this week I came across the quote "Frequentist 95% CI: we can be 95% confident that the true estimate would lie within the interval." This is an incorrect interpretation of confidence intervals, but the amusing part is that it is from a tutorial paper about them, so the authors should have known better. And it's been cited 327 times! https://pmc.ncbi.nlm.nih.gov/articles/PMC6630113/
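The correct frequentist reading is about the procedure, not any single realized interval: across many repeated experiments, roughly 95% of the intervals the procedure produces will contain the true value. A quick simulation sketch (normal data with known sigma; all parameters here are made up for illustration):

```python
import random

random.seed(0)
TRUE_MEAN, SIGMA, N, TRIALS = 10.0, 2.0, 50, 10_000
half_width = 1.96 * SIGMA / N ** 0.5  # 95% CI half-width, known sigma

covered = 0
for _ in range(TRIALS):
    # One "experiment": sample N points, form a 95% CI around the sample mean.
    xbar = sum(random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)) / N
    if xbar - half_width <= TRUE_MEAN <= xbar + half_width:
        covered += 1

# The long-run coverage of the *procedure* is ~0.95; no probability
# statement is made about any one realized interval.
print(covered / TRIALS)  # close to 0.95
```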

Does it make you less of a peer to others who found it before? At least the author showed the ability to think creatively for himself, not paralyzed by the great stagnation like the rest of us.

Herself. Mary Tai.

And what makes you less of a peer is not knowing the basics. And being so unaware of not knowing the basics, and/or so uninterested, that you don't bother to check something that is highly checkable.


Even worse: you didn't just think so in private, but you decided to publish your 'great' discovery.

The blame is on the reviewers.

This is why peer review exists. One cannot know everything themselves. It's fairly common for CS paper submissions to reinvent algorithms and then tone down the claims after reviewers point out that variants already exist.


> highly checkable

in 1994?


Calculus textbooks existed in 1994. It just took me 30 seconds to find “area under a curve” in the index of the one I own, and another 30 seconds to find the section on numerical integration, which describes the trapezoidal approximation.

So you already know the particular area of the larger topic of mathematics you need to look in, you already have a textbook for that particular subject in your possession (meaning you don't need to go to the library and somehow figure out the right book to choose from the thousands in the 510 section), you know what you are looking for exists, and then you aren't surprised you can find it?

I know how to find the area under a curve, but there's so much biology I don't know jack shit about. Back in 1994, it would have been hopeless for me to know the Michaelis-Menten model even existed if it had been relevant to my studies in computer science. That you can right-click on those words in my comment in 2025 and get an explanation of what that is, and can interrogate ChatGPT to get a rigorous understanding of it, shouldn't make it seem like finding someone in the math department to help you in 1994 was easier than just thinking logically and reinventing the wheel.


There is a thing called "higher education". Ostensibly one of its chief purposes is to arm you with all the interconnected knowledge and facts that are useful in your chosen field of study. To boot, you get all of that from several different human beings you can converse with, to improve the scope and precision of the knowledge you're receiving. You know, "standing on the shoulders of giants" and all that?

ostensibly.

> So you already know the particular area of the larger topic of mathematics that you need to look for

So did the author of the paper. The paper’s title itself mentions the area under a curve. It would not have been difficult to find information about how to calculate an approximation of the area under a curve in the library.


I'd argue this is an argument against relying purely on peer review, as her peers also weren't mathematicians.

Some of us, when learning calculus, wonder whether, had we been alive before it was invented, we'd have been smart enough to invent it. Dr. Tai provably was (for the trapezoid rule, anyway). So I choose to say xkcd 1053 to her, rather than bullying her for not knowing advanced math.


> Dr. Tai provably was.

No, we have no proof of that. We just know that she published a paper explaining the trapezoidal rule.

(A) That approximation for 'nice' curves was known long before calculus. Calculus is about doing this in the limit (or with infinitesimals or whatever) and also wondering about mathematical niceties, and also some things about integration. (B) I'm fairly certain she would have had a bit of calculus at some point in her education, even if she remembered it badly enough to think she found something new.


I mean, it's possible she reinvented the wheel because what she really needed in her life is for the math department to laugh at her, but that seems far fetched to me.

The "18 years" probably refers to the time since the linked blog post was published, in March 2007.

This is the strongest argument for not shaming reinvention...

Unless the victims are world-class..? (Because it's not entirely not self-inflicted)

https://news.ycombinator.com/item?id=42981356

Shades of the strong-link weak-link dilemma too


> This is the strongest argument for not shaming reinvention...

Sounds like a pretty weak argument? I'm sure there are some good arguments for re-invention. But this ain't one of them.

Basically, re-invention for fun or to help gain understanding is fine. But when you publish a 'new' method, it helps to do a bit of research about prior work. Especially when the method is something you should have heard about during your studies.


There might be a decent amount of "survivorship bias" too, meaning you only hear of the few events where someone starts from first principles and actually finds, be it luck or smarts, a novel solution which improves on the status quo, but I'd argue there are N other similar situations where you don't end up with a better solution.

That being said, I so disagree with just taking the "state of the art" as written in stone, and "we can't possibly do better than library x" etc.


Plenty of "state of the art", at least a decade ago, that was not very state of anything.

I think bias is inherent in our literature and solutions. But I also agree that the probability of a better solution degrades over time (assuming that the implementations themselves do not degrade; building a faster hash table does not matter if you have made all operations exponentially more expensive for stupid, non-computational reasons).


In 1973 Clifford Cocks solved, for the first time in history, the problem of public-key cryptography that no one at GCHQ had managed to solve in the previous 3 years. He jotted down the solution in half an hour after hearing about the problem, then wondered why it was such a big deal for everyone else. A fresh view unclouded by prejudice can make all the difference.

Or worse, pursuing something already proven not to work.

Viewed through the lens of personal development I suppose one could make an argument that there wasn't much difference between rediscovering an existing valid or invalid solution. Both lead to internalisation of a domain's constraints.

Outside of math and the computational sciences, nothing is proven not to work, because scientific research doesn't deal in proofs. Even in math and computer science, there are fields dedicated to researching approaches already proven not to work, because sometimes there are interesting findings, like hypercomputation.

That's how the best discoveries are made.

Or how a lot of time is wasted. For example on perpetual motion machines and infinite data compression.

A lot of major scientific discoveries were made while people were trying to turn base metals into gold; also known as alchemy.

Some examples include discovering phosphorus, the identification of arsenic, antimony, and bismuth as elements rather than compounds, and the development of nitric acid, sulfuric acid, and hydrochloric acid. Alchemy ultimately evolved into modern chemistry.

I think the key is that thinking that something is a waste of time is the type of mentality that prevents individuals from pursuing their interests to the point where they actually make important discoveries or make great inventions.

If you put enough time and energy into anything you're bound to learn a lot and gain valuable insights at the very least.


The problem is when the "proof" is wrong. In this case a related conjecture had held up for 40 years, which is not a proof per se, but is still ostensibly an extremely reliable indicator of correctness.

Another example is when SpaceX was first experimenting with reusable self landing rockets. They were being actively mocked by Tory Bruno, who was the head of ULA (basically an anti-competitive shell-but-not-really-corp merger between Lockheed and Boeing), claiming essentially stuff along the lines of 'We've of course already thoroughly researched and experimented with these ideas years ago. The economics just don't work at all. Have fun learning we already did!'

Given that ULA made no efforts to compete with what SpaceX was doing it's likely that they did genuinely believe what they were saying. And that's a company with roots going all the way back to the Apollo program, with billions of dollars in revenue, and a massive number of aerospace engineers working for them. And the guy going against them was 'Silicon Valley guy with no aerospace experience who made some money selling a payment processing tool.' Yet somehow he knew better.


All the cases you bring up are not "proofs": a conjecture is very much not one; it's just that nobody bothered to refute this particular one, even though there were results proving otherwise (cited in the paper).

Similarly, ULA had no "proof" that this would be economically infeasible: Musk pioneered using agile ship-and-fail-fast for rocket development which mostly contradicted common knowledge that in projects like these your first attempt should be a success. Like with software, this actually sped things up and delivered better, cheaper results.


The Apollo missions, in which Boeing was a key player, were also a 'ship and fail fast' era. It led to some humorous incidents, like the strategy for astronaut bathroom breaks simply being 'hold it', later followed up by diapers once we realized on-pad delays happen. Another was the first capsule/command module being designed without even a viewport. Of course it also led to some not-so-humorous incidents, but such rapid advances rarely come for free.

In any case Musk definitely didn't pioneer this in space.


> Of course it also led to some not so humorous incidents, but such rapid advances rarely come for free.

Luckily, you can run a lot higher risks (per mission) when going unmanned, and thus this becomes a purely economic decision there, almost devoid of the moral problems of manned spaceflight.

Manned spaceflight has mostly been a waste of money and resources in general.


The first man on Mars will likely discover far more in a week than we have in more than 50 years of probes.

There's a fundamental problem with unmanned stuff - moving parts break. So for instance Curiosity's "drill" broke after 7 activations. It took 2 years of extensive work by a team full of scientists to create a work-around that's partially effective (which really begs a how many ... does it take to screw in a light bulb joke). A guy on the scene with a toolkit could have repaired it to perfection in a matter of minutes. And the reason I put drill in quotes is because it's more like a glorified scraper. It has a max depth of 6cm. We're rather literally not even scratching the surface of what Mars has to offer.

Another example of the same problem is in just getting to places. You can't move too fast for the exact same reasons, so Curiosity tends to move around at about 0.018 mph (0.03 km/h). So it takes about 2.5 days to travel a mile. But of course that's extremely risky, since you really need to make sure you don't bump into a pebble or head into a low-value area, meaning you want human feedback with about a 40-minute round-trip latency on a low-bandwidth connection, while accounting for normal working hours on Earth. So in practice Curiosity has traveled a total of just a bit more than 1 mile per year. I'm also leaving out the fact that the tires have also, as might be expected, broken. So its current traveling speed is going to be even slower.

Just imagine trying to explore Earth traveling around at 1 mile a year and, once every few years (on average), being able to drill hopefully up to 6cm! And all of these things, btw, are bleeding edge relative to the past. Moving parts breaking is just an unsolvable issue for now and for the foreseeable future.

----------

Beyond all of this, there are no "moral problems" in manned spaceflight. It's risky and will remain risky. If people want to pursue it, that's their choice. And manned spaceflight is extremely inspiring, and really demonstrates what man is capable of. Putting a man on the Moon inspired an entire generation to science and achievement. The same will be true with the first man on Mars. NASA tried to tap into this with their helicopter drone on Mars but people just don't really care about rovers, drones, and probes.


For the cost of sending a guy, you can probably just send ten probes.

You get extremely diminishing returns with probes. There's only so much you can do from orbit. Rovers are substantially more useful, but extremely expensive. Curiosity and Perseverance each cost more than $3 billion. As the technology advances and we get the basic infrastructure set up, humans will rapidly become much cheaper than rovers.

A big cost with rovers is the R&D and one-off manufacturing of the rover itself. With humans you have the added cost of life support, but 0 cost in manufacturing and development. The early human missions will obviously be extremely expensive as we pack in all the supplies to start basic industry (large scale Sabatier Reactions [1] will be crucial), energy, long-term habitation, and so on.

But eventually all you're going to need to be paying for is food/life support/medicine/entertainment/etc, which will be relatively negligible.

[1] - https://en.wikipedia.org/wiki/Sabatier_reaction


Yeah, but then you are going to get very little return from those 10 probes.

Sending a person there on a one-way mission would probably give us more data than 100 probes. And I have a feeling that there are a lot of people willing to go on such a mission.


I don't share your optimism.

Have a look at https://www.nasa.gov/humans-in-space/20-breakthroughs-from-2... and keep in mind that those are already the highlights. The best they could come up with.


What sort of things would you expect on the list? A lot of those are critical prerequisites for humanity's advancement. They also left out some really important stuff like studies on sex in space, exercise in space, effects of radiation in space (as well as hardening electronics), and so on.

A space station on Mars would probably not provide much more than that so should be a low priority, but obviously the discoveries to be made on land trounce those to be made in space.


Eventually you cannot run high risks even unmanned. If a rocket fails getting a satellite to orbit, just build a new one. However, missions to the outer planets are often possible only once every several hundred years (when the orbits line up), so if you fail you can't retry. For Mars you get a retry every year and a half (though the launch window is only about a month). If you want to hit 5 planets, that is a several-hundred-year event. And the trip time means that if you fail once you reach the outer planet, all the engineers who knew how the system works have retired, so you start from scratch on the retry (assuming you even get the orbits to line up).

> However missions to the outer planets are often only possible once every several hundred years (when orbits line up) and so if you fail you can't retry.

Just send ten missions at the same time. No need to wait until you fail.


Ten that fail in the exact same way is a real possibility.

Sure, it's better to frame it as "reintroduction": for those early attempts to be successful, with the Soviets pushing on the other side as well, it was the strategy that worked fastest.

Thanks for the funny incidents as well, and my empathy for the not so funny ones!


Also, SpaceX was exactly one failed launch away from bankruptcy (every prior launch having been a failure).

Had that one also been a failure, he wouldn't be running the US government and we'd all be talking about how obviously stupid reusable rockets were.


Had they received the same grant money as Boeing ($4.2b vs $2.6b), it wouldn't have been such a close call.

I'd also note that they were also late by 3 years or so: this did not produce miracles, it was just much cheaper and better in the end than what Boeing is still trying to do.


He is talking about Falcon 1, not CCDev. There was no close call at CCDev, nor any grant money for Falcon 1.

Thanks for the correction/clarification.

Still, I would be surprised if SpaceX did not greatly benefit from knowledge gained in Falcon 1 development when building their Falcon 9 rocket and then optimizing it for reusability — they started development of Falcon 9 while Falcon 1 was still operating.


This illustrates beautifully how stupid labeling ideas stupid is.

To know that an idea or approach is fundamentally stupid and unsalvageable requires a grasp of the world that humans may simply not have access to. It seems unthinkably rare to me.


On the other hand, I knew from the beginning that the Space Shuttle design was ungainly, looking like a committee designed it, and unfortunately I was right.

(Having a wing, empennage and landing gear greatly increased the weight. The only thing that really needs to be returned from space are the astronauts.)


It was designed to support a specific Air Force requirement: the ability to launch, release or capture a spy satellite, then return to (approximately) the same launch site, all on a single orbit. (I say 'approximately' because a West Coast launch would have been from Vandenberg Air Force Base, returning to Edwards Air Force Base.)

The cargo bay was sized for military spy satellites (imaging intelligence) such as the KH-11 series, which may have influenced the design of the Hubble Space Telescope. Everything else led on from that.

Without those military requirements, Shuttle would probably never have got funded.

I'm listening to "16 Sunsets", a podcast about Shuttle from the team that made the BBC World Service's "13 Minutes To The Moon" series. (At one point this was slated to be Season 3, but the BBC dropped out.) https://shows.acast.com/16-sunsets/episodes/the-dreamers covers some of the military interaction and funding issues.


You're saying the same thing he is, but with more precise examples. There were also plenty of more useless requirements which is what he was getting at with it being 'designed by committee.' It was also intended to be a 'space tug' to drag things to new orbits, especially from Earth to the Moon, and this is also where its reusable-but-not-really design came from.

It's also relevant that the Space Shuttle came as a tiny segment of what was originally envisioned as a far grander scheme (in large part by Wernher von Braun) of complete space expansion and colonization. The Space Shuttle's origins are from the Space Transportation System [1], which was part of a goal to have humans on Mars by no later than 1983. Then Nixon decided to effectively cancel human space projects after we won the Space Race, and so progress in space stagnated for the next half century and we were left with vessels whose design and functionality no longer had any real purpose.

[1] - https://en.wikipedia.org/wiki/Space_Transportation_System


Let alone on launch. It's amusing that NASA is supposed to be this highly conservative, safety-first environment, yet it went with a design featuring two enormous solid rocket boosters. We knew better than this even during the Saturn era, which was very much a 'move fast and break things' period of development.

There is nothing wrong with solid rocket boosters for that application. The issue is they failed to figure out the limits and launched when it was too cold. (They also should have investigated when they saw unexpected non-fatal seal issues.)

Solid boosters are more complex, and so Saturn could not have launched on time if they had tried them. So for Saturn, with an (arbitrary) deadline, not using them was the right call. Don't confuse the right call with the best call though: we know in hindsight that Saturn launched on time; nobody knows what would have happened if they had used solid boosters.


I wasn't referencing Challenger in particular. I'm speaking more generally. SRBs are inherently fire and forget. This simply increases the risk factor of rockets substantially, and greatly complicates the risks and dangers in any sort of critical scenario. In modern times when we're approaching the era of rapid complete reuse, they're also just illogical since they're not meaningfully reusable.

The SRBs were reused. Like everything on the Shuttle, there was far more rebuilding needed before they were reused, but they were reused.

Yeah, but that qualifier you put there means I think you need to frame it as "reused." They dragged a couple of giant steel tubes out of the ocean after a salt water bath and then completely refurbished and "reused" them. It's technically reuse, but only just enough to fit the most technical definition of the word, and certainly has no place in the modern goal of launching, landing, inspecting/maintaining (ideally in a time frame of hours at the most), and then relaunching.

The only real benefit of SRBs is cost. They're dirt cheap and provide a huge amount of bang for your buck. But complete reuse largely negates this benefit, because reusing an expensive product is cheaper, in the long run, than repeatedly disposing of (or "reusing") a cheap product.


Do we know that the economics work for SpaceX? It's a private company and its financials aren't public knowledge; it could be burning investor money. E.g. Uber was losing around $4B/yr, give or take, for a very long time.

You can't know anything for certain, but nearly every analysis corroborates what they themselves say: they're operating at a healthy (though thin) margin on rocket launches and printing money with Starlink.

The context of this, of course, is that they've brought the cost of rocket launches down from ~$2 billion per launch during the Space Shuttle era to $0.07 billion per launch today. And the goal of Starship is to chop another order of magnitude or two off that price. By contrast, SLS (Boeing/NASA's "new" rocket) was estimated to end up costing around $4.1 billion per launch.


To be fair, cost per launch was in that ballpark already ($0.15-0.05 billion) with the non-reusable Ariane, Atlas and Soyuz vehicles. SpaceX maintains the cost just low enough to undercut the competition.

I think they maintain the price there. They'll want to drive the cost as low as possible, because price - cost = profit for them. A penny saved is a penny earned.

They undercut every other launch provider. There's no way they're burning investor money to achieve that at the scale of their operations. This is all due to the cost savings of reusable F9. If they wanted to they could jack up their rates and still retain customers and still be the cheapest. There is no reason to believe they are unprofitable.

Most of their income comes from government subsidies and grants. So, it is rather funny to see the owner of the company running around the government and "cutting" costs.

SpaceX's total funding from government grants and subsidies is effectively $0. They do sell commercial services to the government and bid on competitive commercial contracts, but those are neither grants nor subsidies.

Ummm... you know the government granted them a bunch of money to go to the moon, right?

No, they didn't. The government wants to get to the Moon via the Artemis program (which will never go anywhere, but that's a different topic) and so NASA solicited proposals and bids for a 'human landing system' [1] for the Moon. SpaceX, Blue Origin, and Dynetics all submitted bids and proposals. SpaceX won.

Amusingly enough Blue Origin then sued over losing, and also lost that. They were probably hoping for something similar to what happened with Commercial Crew (NASA's soliciting bids from companies to get astronauts to the ISS). There NASA also selected SpaceX, but Boeing whined to Congress and managed to get them to force NASA to not only also pick Boeing, but to pay Boeing's dramatically larger bid price.

SpaceX has since not only sent dozens of astronauts to the ISS without flaw, but is now also being scheduled to go rescue the two guinea pigs sent on Boeing hardware. They ended up stranded on the ISS for months after Boeing's craft was deemed too dangerous for them to return to Earth in.

[1] - https://en.wikipedia.org/wiki/Starship_HLS


If they can keep raising money from investors, that seems proof enough to me that the economics must be good enough.

I.e. investors would only put up with losing money (and keep putting up money) if they are fairly convinced that the long run looks pretty rosy.

Given that we know that SpaceX can tap enough capital, the uglier the present-day cashflow, the rosier the future must look (so that the investors still like them, which we know they do).


The economics very likely didn't work. It'd be irresponsible for a launch company to model Starlink without a customer knocking on the door with a trailer full of dollars to sponsor the initial R&D, and another bus full of lawyers signing long-term commitments. Vertical integration makes the business case much more appealing.

Does SpaceX have any investors other than Musk? I thought he bootstrapped it.

Musk owns 42% of SpaceX's total equity and 79% of the voting equity.

The non-Musk shareholders range from low-level SpaceX employees (equity compensation) through to Alphabet/Google, Fidelity, Founders Fund.

There are actually hundreds of investors. If you are ultra-wealthy, it isn't hard to invest in SpaceX. If you are the average person, they don't want to deal with you: the money you can bring to the table isn't worth the hassle, and the regulatory risk you represent is a lot higher.


Thanks, that's interesting!

> Musk owns 42% of SpaceX's total equity and 79% of the voting equity.

How much of their balance sheet is debt vs equity?

Eg in theory you could have lots and lots of (debt) investors and still only a single shareholder.


> How much of their balance sheet is debt vs equity?

I believe it is almost all equity, not debt.

There is such huge demand to invest in them that they are able to attract all the investment they need through equity. Given the choice between the two, like most companies, they prefer equity over debt. Plus, they have other mechanisms to avoid excessive dilution of Elon Musk's voting control (non-voting stock, giving him more stock as equity compensation).


> Given the choice between them, like most companies, they prefer equity over debt.

What do you mean by 'most companies'? Many companies use debt on their balance sheet just fine, and even prefer it. Banks, famously, have to be restrained from making their balance sheet almost all debt.


The easiest way to get upside exposure to Starlink and wider SpaceX is to buy Alphabet.

I think Musk had a better imagination and the money to fund that imagination without constraints or internal politics.

I'm not sure this is true, though I think it looks true.

I think the issue is that when a lot of people have put work into something, you think that your own chances of success are low. This is a pretty reasonable belief, too. With the current publish-or-perish paradigm, I think this discourages a lot of people from even attempting. You have evidence that the problem is hard, and even if it's solvable, solving it will probably take a long time, so why risk your entire career? There are other interesting things that are less risky. In fact, I'd argue that this environment in and of itself results in far less risk being taken. (There are other issues too, and I laid out some in another comment.) But I think this would look identical to what we're seeing.


Right. FWIW, Feynman predicted that physics would become rather boring in this regard, because physics education had become homogenized. This isn't to propose a relativism, but rather that top-down imposed curricula may do a good deal of damage to the creativity of science.

That being said, what we need is more rigorous thinking and more courage pursuing the truth where it leads. While advisors can be useful guides, and consensus can be a useful data point, there can also be an over-reliance on such opinions to guide and decide where to put one's research efforts, what to reevaluate, what to treat as basically certain knowledge, and so on. Frankly, moral virtue and wisdom are the most important. Otherwise, scientific praxis degenerates into popularity contest, fitting in, grants, and other incentives that vulgarize science.


I think that's why most innovative science today happens at the intersection of two domains: that's where someone from a different field can have unique insights and try something new in an adjacent field. This is often hard to do when you're in the field yourself.

But how can you ever discover a novel solution without attempting to resow the ground?

Run a mile in 4 minutes. Eat 83 hot dogs in 10 minutes.

Everything is impossible until someone comes along that's crazy enough to do it.


So if there's no solution in a particular area, you won't find it? You may be on to something there! :-)


