This sort of behavior is only going to worsen in the coming decades as academics become more desperate. It's a prisoner's dilemma: if everyone is exaggerating their results you have to as well or you will be fired. It's even more dire for the thousands of visa students.

The situation is similar to the "Market for lemons" in cars: if the market is polluted with lemons (fake papers), you are disincentivized to publish a plum (real results), since no one can tell it's not faked. You are instead incentivized to take a plum straight to industry and not disseminate it at all. Pharma companies are already known to closely guard their most promising data/results.

Similar to the lemon market in cars, I think the only solution is government regulation. In fact, it would be a lot easier than passing lemon laws since most labs already get their funding from the government! Prior retractions should have significant negative impact on grant scores. This would not only incentivize labs, but would also incentivize institutions to hire clean scientists since they have higher grant earning potential.


My recommendation is for journals to place at least as much importance on publishing replications as on the original studies.

Studies that have not been replicated should be published, but clearly marked as preliminary results. And then other scientists can pick those up and try to replicate them.

And institutions need to give nearly equal weight to replications as to original research when deciding on promotions. It should be considered every researcher's responsibility to contribute to the overall field.


We can solve this at the grant level. Stipulate that for every new paper a group publishes from a grant, that group must also publish a replication of an existing finding. Publication would happen in pairs, so that every novel thing would be matched with a replication.

Replications could be matched with grants: if you receive a $100,000 grant, you'd get the $100,000 you need, plus another $100,000 which you could use to publish a replication of a previous $100,000 grant. Researchers can choose which findings they replicate, but with restrictions, e.g. you can't just choose your group's previous thing.

I think if we did this, researchers would naturally be incentivized to publish experiments that are easier to replicate and of course fraud like this would be caught eventually.

I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.


Replication is over-emphasised. Attempts to organise mass replications have struggled with basic problems like papers making numerous claims (which one do you replicate?), the question of whether you try to replicate the original methodology exactly or whether you try to answer the same question as the original paper (matters in cases where the methodology was bad), many papers making obvious low value findings (e.g. poor children do worse at school) and so on.

But the biggest problem is actually that large swathes of 'scientists' don't do experiments at all. You can't even replicate such papers because they exist purely in the realm of the theoretical. The theory often isn't even properly written down! They will tell you that the paper is just a summary of the real model, which is (at best) found in a giant pile of C or R on some github repo that contains a single commit. Try to replicate their model from the paper, there isn't enough detail to do so. Try to replicate from the code, all you're doing is pointlessly rewriting code that already exists (proves nothing). Try to re-derive their methodology from the original question and if you can't, they'll just reject your paper as illegitimate criticism and say it wasn't a real replication.

Having reviewed quite a lot of scientific papers in the past six years or so, the ones that were really problematic couldn't have been fixed with incentivized replication.


So then, how on earth does this stuff even get published? What exactly is it that we're all doing here?

If a finding either cannot be communicated enough for someone else to replicate it, or cannot be replicated because the method is shoddy, can we even call that science?

At some level I know that what I'm proposing isn't realistic because the majority of science is sloppy. P-hacking, lack of detail, bad writing, bad methods, code that doesn't compile, fraud. But maybe if we tried some version of this, it would cause a course correction. Reviewers, knowing that someone actually would attempt to replicate a paper at some point down the road, would be far more critical of ambiguity and lack of detail.

Papers that are not fit to be replicated in the future, whose claims cannot be tested independently, are actually not science at all. They are worth less than nothing because they take up air in the room, choking out actual progress.


That's correct. Fundamentally, the problem is that foundations and government science budgets don't care. As long as voters or Bill Gates or whoever believes they're funding science and progress, the money flows like water. There's no way to fix it short of voting in a government that totally defunds the science budget. Until then everyone benefits from unscientific behaviour.


> can we even call that science?

The amazing thing is that it all works out in the end and science is still making (quite a lot of) progress.

That's also the reason why we shouldn't spend all of our time and money checking and replicating things just to make sure no one publishes fraudulent/shoddy results. (We should probably spend a little more time and money on that, but not as much more as some people here seem to suggest.)

Most research is in retrospect useless nonsense. It's just impossible to tell in advance. There is no point in checking and replicating all of it. Results that are useful or important will be checked and replicated eventually. If they turn out to be wrong (which is still quite rare), a lot of effort is wasted. However, again, that's rare.

If the fraud/quality issues get worse (different from "featuring more frequently and prominently in the news"), eventually additional checks start to make sense and be worth it overall. I think quite a lot of progress is happening here already, with open data, code, pre-registration of studies, better statistical methods, etc, becoming more common.

I think a major issue is the idea that "papers are the incontestable scientific truth". Some people seem to think that's the goal, or that it used to be the case and fraud is changing that now; however, this was never the case and it's not at all the point of publishing research. I think a major gain would be to separate, in the public perception, the concepts, understanding and reputations of science vs. scientific publishing.


> many papers making obvious low value findings (e.g. poor children do worse at school) and so on.

Why are these obvious low value papers a) getting grants, b) getting published, c) not permanently damaging the researchers' careers?

If you do bad work you eventually get fired, why don't we do the same thing with research academics who do bad work?


Isn't that the point: if they couldn't have been fixed, they were problematic in the first place?


There would still be incentives for collusion (I "reproduce" your research, you "reproduce" mine), and researchers pretending to reproduce papers but actually not bothering (especially if they believe that the original research was done properly).

Ultimately, I'm not sure how to incentivize reproduction of research: it's very easy to fake a successful reproduction (you already know the results, and the original researcher will not challenge you), so you don't want to reward that too much. Whereas incentivizing failed reproductions might lead some scientists to sabotage their own reproduction efforts in ways that are subtle enough to have plausible deniability.

Proceeding by pairs is probably not enough. You probably need 5-6 replications per paper to make sure that at least one attempt is honest and competent, and make the others afraid to do the wrong thing and stand out.


You could randomize replications a bit, take away the choice. Or make it so that if you replicated one group's result, you can't replicate them again next time. The key is a bit of distance, a bit of neutrality. Enough jitter to break up cliques.

I don't work in academia but in my experience professors are basically all intellectually arrogant and ego-driven, and would relish having time and space to beat each other at the brain game. A failed replication is their chance to be "the smarter guy in the room" and crack open some long-held belief. A successful replication would probably happen most of the time and be far more boring.

I could imagine, if such a thing were mandated and in place for a while, one could build her career on replications, as a prosecutor or defense. She would publish new research solely to convince her colleagues that she is sharp enough to play prosecutor or defense.

Anything has got to be better than what we have now, where apparently you can cheat and defraud your way through an entire decades-spanning career.


The tricky thing with randomizing is that science gets very specialized, both with equipment required and knowledge. So there may only be a handful of people whose work you can competently replicate.

And those same people are reviewing the papers you publish and will not hesitate to sabotage your career if you have made them look bad by failing to replicate their papers.


It is much much harder to sustain a conspiracy among many distributed people over time, than it is to fake your own research results.

Making fraud much less convenient will greatly reduce the amount of it.


So does increasing the penalties.

If you publish a paper with fraudulent data, methods, or results, and you received any state or federal funds for it, there should be prison time. You stole taxpayer money.

I'm not saying this for when people are wrong, I'm saying it for when you can prove someone knowingly lied. It won't catch everyone, and you need the bar to be high enough that people don't go to jail for being bad scientists, but right now there is zero social, professional, or legal risk in just lying your ass off to get the next grant and keep the spice flowing.

Nobody's going to do that when changing the numbers in your Excel sheet carries a risk of a decade or two in a minimum security prison.


I think it would be better to have separate grants for replication studies. If something becomes a mandatory administrative burden, people will see it as low-prestige work and try to avoid it. And the kind of people who are good at novel research are often also good at ignoring duties they don't like, or completing them with minimal effort if forced to.

But if there is separate funding for replication studies, it will become something people compete for. Some people will specialize in replicating others' work, and universities will pay attention, as they care about grant overheads.


> But if there is separate funding for replication studies, it will become something people compete for.

It would need to be very good funding on par with what's offered for "novel research".

In addition, we would need increased prestige (e.g. awards, citations) for replicated studies as well for this to be effective. For many academics funding is merely a means to that end.


Another reason for doing this is that if the people doing replication also do original research then calling out someone’s work as bad incentivizes them to sabotage your work when they inevitably review your papers.

You can avoid that to some extent by having replication and original work be separate specialities - and making sure that replication gets prestige so good people do it.


> I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.

It might actually improve the pace of science, if the half eliminated were not replicable and the remaining half were written by researchers knowing that they would likely face a replication attempt.


> I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.

Undoubtedly this is true. The problem, like with advertising, is identifying _which_ half to cut.


That doesn't fix the incentives; people will just commit fraud when replicating results so they can perform more original research.


Don't require the replication to be successful. Failure to replicate a result is just as valuable as successfully replicating a result.


It is a lot easier to just falsely confirm the experiment, since the data is already there and the publisher of the paper is not going to push back if you confirm it.

Why go through all the work of actually proving/disproving the experiment when you can just tweak the numbers of the original experiment, say you actually reproduced the experiment, and then move on?


> say you actually reproduced the experiment

And get a nice $100k too for close to zero effort!

(Having in mind: "plus another $100,000 which you could use to publish a replication")


Would this not incentivise the forming of groups that replicate each other's work? If you're already committing wilful fraud on your own papers, why wouldn't you commit a bit more for another researcher willing to do the same for you? With >2 parties, it won't be immediately obvious that this trading has occurred.


This stuff happens in Computer Science too. Back around 2018 or so I was working on a problem that required graph matching (a relaxed/fuzzy version of the graph isomorphism problem) and was trying algorithms from many different papers.

Many of the algorithms I tried to implement didn't work at all, despite considerable effort to get them to behave. In one particularly egregious (and highly cited) example, the algorithm in the paper differed from the provided code on GitHub. I emailed the authors trying to figure out what was going wrong, and they tried to get funding from me for support.

My manager wanted me to write a literature review paper which skewered all of these bad papers, but I refused since I thought it would hurt my career. Ironically, the algorithm that ended up working best was from one of the lesser-known papers, with few citations.


Beautiful. And thanks for the testimony. Ironically, this may have helped your product or research: yes, you spent more time on the BS, but in the end you found and used an algorithm both better and more obscure, while your competitors struggled with worse ones. Messed up incentives again.


Name and shame :)


Calling out bad work is career suicide. You are defecting on your tribe. That’s half of the problem.


can't you do it anonymously?


You should be able to build an entire career out of replications: hired at the best universities, published in the top journals, social prestige and respect. To the point where every novel study is replicated and published at least once. Until we get to that point, there will be far fewer replications than needed for a healthy scientific system.


> social prestige and respect

This one is the showstopper. No matter what you do with rules and regulations, if people aren't impressed by it in a watercooler conversation, or when chatting at a cocktail party at a conference, or when showing a politician around your lab, then nothing else matters.

How prestigious something is is not a lever you control.


There absolutely exist skilled scientists that would happily make a living unglamorously replicating studies, if the money was there.

Prestige is a nice motivator, but making a living at all is always the baseline, and is often sufficient.


There needs to be a new norm where prestige and respect are conveyed by having a replicated result. Not just the initial paper.


Similarly, ambitious and difficult experiments that don’t pan out should also be richly rewarded. You just did all of science the service of clearly marking that tempting path with a big “don’t bother” sign, thus saving resources and pointing the ship a little closer to the direction of truth.


Yeah, this is something I don't fully understand. It's work to format and package everything for publication - and it's work which by then may have lost funding, since it failed. And by that time you might be discouraged. BUT, like you say, all the science has been done, and getting one more serious publication out of it should be rewarding. It's also a chance for the scientist to show that they were serious, competent, and diligent in doing the work. It should count as well as a standard publication. It's a chance to collect consulting contracts later. Etc. Management and support should encourage these publications.


Haven't you now created an incentive for replication fraud?


Replications are not very scientifically useful. If there were flaws in the design of the original experiment, replicating the experiment will also replicate the flaws.

What we should aim for is confirmation: a different experiment that tests the underlying phenomenon that was the subject of the first paper.


Replications don't frequently get published but they do get attempted, because any decent researcher is going to replicate a result they rely on to build the next step. Unfortunately, you can get stuck in the mud as I did and be unable to replicate the prior findings. Is it technique or were the original results in error? We'll never know.

Building more results without replications is what caused the psychology crisis. Apparently every lab accepted the p<0.05 results or stated correlations of prior studies and just ran more studies until they got their own that was publishable. Since everyone "knew" that the prior result was true, like priming or whatever, they could conclude anything they wanted, because ex falso quodlibet.


Reproducibility should be a fundamental quality of published experiments.

If the published work under-specifies the experiment such that it is unreproducible, that means the results can't be reliably extrapolated, because there are unstated conditions.


Well, yeah, but there are established techniques that are still very finicky. For example, staining frozen sections with fluorescent antibodies can go wrong in many ways and favors the experienced. Electron microscopy can take a lot of training to get right, and also requires careful staining techniques to get meaningful results. RNA work (e.g. FISH) is very sensitive to the presence of RNase which is ubiquitous and difficult to exclude from preparations. So a procedure can be specified that is reproducible but getting the same conditions is more difficult than, say, using Nix.


This happens more often than one would expect.

No researchers are going to invest the time needed to replicate complex neurobiological experiments such as those Masliah conducted.

However, if the results are sound, others will be able to build upon them, and this happens a lot more often than fraud does.


I'd be careful about that. Faking replications is even easier than faking research, so if you place a lot of importance on them, expect the rate of fraud in replication studies to explode.

This is a very difficult problem to solve.


To elaborate, you need to give equal value to well-implemented studies that fail to replicate a result as you do to studies that replicate a result.


Well of course, but I don't think that would necessarily help much. The point is that you don't really need to do anything: you know what the results should be, and you know you are unlikely to get pushback, so there's only an incentive to do the strict minimum to create plausibility that you ran the experiments.

Basically, I think there is a sizable risk that a large number of replications would be fraudulent or half-assed, which dilutes their value. Paradoxically, the more this policy suppresses fraud or mistakes in original research, the less people will perform replication in good faith.

I could be wrong, but people are endlessly creative at subverting systems when the stakes are high, so I'm wary of simple solutions. To be fair, it's probably better than the current system, just not as much as we'd like.


The problem with putting the onus on the journals is that there is no incentive for them to reward replications. Journals don't make money on replicated results. Customers don't buy the replication paper; they just read the abstract to see if it worked or not.

I do like the idea of institutions giving tenure to people with results that have stood the test of time, but again, there is no incentive to do so. Institutions want superstar faculty; they care less about whether the results are true.

The only real incentive that I think can be targeted is still grant money, but I would love to be proved wrong.


If all that's true, we should just shut down all the science institutions across the board. They're worth nothing if they are not vigorously pursuing the truth about the world.


> And then other scientists can pick those up and try to replicate them.

Unless there are grants specifically for that purpose, it's not going to happen; and it's hard to apply for a grant just to replicate someone else's results verbatim. (Usually you're testing the theory but with a different experiment and set of data, which is much more interesting than simply repeating what they did with their data; in fact, replicating it with a different set of data is important in order to see if the results weren't cherry-picked to fit the original dataset.)


I think it’s a great idea. It would also give the army of phds an endless stream of real tangible work and a way to quickly make a name for themselves by disproving results.


journals have zero incentives to care about any of this.


It seems surprisingly hard to counter scientific fraud via a system change. The incentives are messed up all the way around.

If the older author is your advisor and you feel one of their juniors is cutting corners, or the elder is cutting corners, you'd better think twice about what move will help your career. If confirming a recent result counts toward tenure, then presto, you have an incentive for fraudulent replication (what's the chance it's incorrect anyway? The original author is a big shot). Going against the previous acclaimed result takes guts, especially in a small field where it might kill your career if YOU got it wrong somehow - so you need to have much stronger results than the original research, and good luck with that. We might say "this is perfect work for aspiring student researchers, and done all the time" - to reimplement some legendary science experiment - but no, not when it's a leading-edge, poorly understood experiment, and not when that same grad student is already running to try and produce original research themselves.

The big funders might dedicate money to replicating research that everybody is enthusiastic about (before everyone relies on it). But some research takes years to run. Other research is at the edge of what's possible. Other research is led by a big shot nobody dares to take on. Etc. etc. So where is the incentive then? The incentive might be to take the money, fully intending to return an inconclusive result.

Some research is taken on now. But only AFTER it's relied on by lots of people. Or much later, when better ideas have had the time to emerge on how to test the idea more cleverly, i.e. cheaper and faster. And that's not great, because it's costly in all the wasted effort by others based on a fraudulent result. And all the mindshare the bad result now has.

This is messed up.


While Akerlof's Market for Lemons did consider cases where government intervention is necessary to preserve a market, like with health insurance markets (Medicare), he describes the "market for lemons" in the used car market as having been solved by warranties.

If someone brings a plum to a market for lemons, they can distinguish the quality of their product by offering a warranty on its purchase, something that sellers of lemons would be unwilling to do, because they want to pass the cost burden of the lemon onto the purchaser.

The full paper is fairly accessible, and worth a read.

Not sure how this could be applied to academia; one of the problems is that there can be significant gaps between perpetrating fraud and having it discovered, so the violators might still have an incentive to cheat.


> if everyone is exaggerating their results you have to as well or you will be fired.

Is this really the case, though? Isn't the whole point of tenure (or a big selling point at least) insulating academics from capricious firings?

The big question I have is that there are names on these fraudulent papers, so why are these people still employed? If you generate fictitious data to get published, you should lose any research or teaching job you have, and have to work at McDonald's or a warehouse for the rest of your life. There are plenty of people who want to be professors that we can eliminate the ones who will lie while doing it without losing much (perhaps anything). If your job was funded by taxpayer funds there should be criminal charges associated with willfully and knowingly fabricating data, results, or methods. At that point you're literally lying in order to steal taxpayer funds, it's no different than a city manager embezzling or grabbing a stack of $20 bills out of the cash register.


Well, you aren't going to get tenure unless you distort your results, and it's hard to change established habits.

That, and you select for the kind of people who are willing to fake results to further their own careers.


Yeah I agree with you, I guess I just keep coming back to "make the punishment so eye-wateringly harsh very few people are stupid enough to try it."

Most types of fraud carry a $100-250k monetary penalty and up to 20, 25, 30 years in prison.

The number of people willing to fabricate research data decreases dramatically if you're going to have to pay the grant back from your $20/hr warehouse job after you spend the better part of a decade in a minimum security prison.


The flip side is that if punishments are eye-wateringly harsh, then people will be even less willing to inflict them.

Bear in mind also that the vast majority of academic fraud isn't cut-and-dried or easily proved. It's p-hacking, or "accidental" flaws in an analysis, or forgetting to mention some important detail.


I wonder if there are any studies on whether fraud increased after the Bayh-Dole Act. There's certainly fraud for prestige, that's pretty expected. But mixing in financial benefits increases the reward and brings administrators into play.


> ... as academics become more desperate.

Yes and ... we're already there.


The incentive structures in science have been relatively stable since I entered the field in 1980 (neuroscience, developmental biology, genetics). Quality and quantity of science are extraordinary, but peer review is worse than bad. There are almost no incentives to review the work of your colleagues properly. It does not pay the bills and you can make enemies easily.

But there was no golden era of science to look back on. It has always been a wonderful productive mess—much like the rest of life. At least it moves forward—and now exceedingly rapidly.

Almost unbelievably, there are far worse crimes than fraud that we completely ignore.

There are crimes associated with social convention in science of the type discussed by Karl Herrup with respect to 20 years of misguided focus on APP and abeta fragments in Alzheimer’s disease:

https://mitpress.mit.edu/9780262546010/how-not-to-study-a-di...

This could be called the “misdemeanors of scientific social inertia”. Or the “old boys network”.

There is also an invisible but insidious crime of data evaporation. Almost no funders will fund data preservation. Even genomics struggles but is way ahead in biomedical research. Neuroscience is pathetic in this regard (and I chaired the Society for Neuroscience’s Neuroinformatics Committee).

I have a talk on this socio-political crime of data evaporation.

https://www.youtube.com/watch?v=4ZhnXU8gV44&embeds_referring...


You don't need regulation for a stable durable goods market. Income and credit shocks cause turnover of good-quality stock in the secondary market.


It could also have a chilling effect on a lot of breakthrough research. If people are no longer willing to put out what they mostly think is right, it might set back progress by decades.


BS governmental desperation to show any "result" (even if it is fake) is what brought us here, as scientists have to show more fake results to get more grants.

Removing the government from science could help, not the other way around.


Good luck with that sentiment here.

People just went through the last five years and will go to their graves defending what they saw first hand. To admit that maybe those moves and omissions weren't helpful would be to admit their ideology was wrong. And that cannot be.


You know you wrote this on the Internet? The thing the government created to do science?


a) Anything the government did for the internet was done before science was corrupted by government. b) What the government did for the internet was 1% of it, at the very, very best. And there is a chance so close to 100% that a similar thing would have been done without government that even the 1% does not matter.


HN has always had limitations on all posts - see the second paragraph of the FAQ:

> Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.

I personally appreciate that political news is limited on HN. It's reasonable to create communities where things stay technical, just like many people don't discuss politics in the workplace.


> In 2025, creators will also be able to use Dream Screen for generating 6 second standalone video clips for their Shorts, like a cinematic underwater reveal of the Golden Gate Bridge.

While this is clearly an announcement for investors (see how they bring up the Transformers paper again), I fail to understand the value add for Youtube content.

Just as scrolling through AI-generated Facebook photos is not engaging, neither will be the glut of AI-generated YouTube shorts.


Super excited about this! I hope more people will pick it up in the hobbyist space now that Fusion costs money.

I'm not sure how popular these different CAD packages are. I've seen quite a few hobbyists use Onshape recently, and a few people use OpenSCAD. I don't think I've seen another FreeCAD user in real life though.


I use FreeCAD on a very regular basis and can understand why it's not more popular: it's very powerful but has some very sharp edges that will often have me using it in a state of near rage. Topological naming comes to mind, but there are various other issues that I've hit like a brick wall (in that you can't work around the bugs/limitations so much as you must rework your design to avoid them, which can be tedious and frustrating) when designing something non-trivial.

That said, each release continues to improve; it just has further to go than most open source projects.


OpenSCAD is definitely very popular in the maker/microcontroller/electronics world, which is both a good and bad thing, because it is accessible but also limited/frustrating. It enables some good stuff on Thingiverse but it becomes extremely mathematics-focussed quite quickly.

I do wish more of the code-CAD people would look at Replicad, Build123D and CadQuery.

I personally like FreeCAD a lot, but I won't push people onto it; if they like TinkerCad that's fine.


I got into making all kinds of stuff because of OpenSCAD. It's just enough for 3D printing functional mechanical parts. It's still my first go-to for designs. The downside is OpenSCAD doesn't support import or export of STEP files... So I've also added FreeCAD to my toolbox. But I really wish OpenSCAD would/could do whatever refactor it needed to support STEP.


Yes -- the STEP thing was a big part of why I wanted to switch.

I actually switched via CadQuery: a few minutes with that made it clear that the bits I didn't understand (edges, faces, planes, all that stuff that freaked me out) were simple and logical and had a sort of common sense integrity, and that I might as well try to learn them in the context of FreeCAD.

Had Build123D existed at that point, or Replicad, maybe I'd have pushed on for longer. Build123D is my "fallback toolbox" at this point.

I don't think OpenSCAD can produce STEP, ever. Importing it is another matter; that's a one-way meshing operation. But creating it means having a kernel that understands more than CSG operations -- a bRep kernel like OpenCASCADE, which FreeCAD/Replicad/CadQuery/Build123D etc. use.

You can of course run your OpenSCAD in FreeCAD, but certain operations (hulls, Minkowski I think?) end up as meshes, because there is no easy equivalent. Still, that's better than every operation ending up a mesh.


I just looked at those other code CAD programs, and I don't see the appeal over OpenSCAD.

I have no interest in browser based CAD programs because as models become complex, that platform is too limited in performance.

Python and stateful CAD drawings sound like a nightmare to me.

OpenSCAD has limitations for sure, but I think a better tool will look different.

I do wish OpenSCAD used a more general-purpose programming language.


Replicad is quicker to render complex things than OpenSCAD -- significantly quicker. It uses an emscripten port of OCC.

It's also embeddable as a library, which means being able to make web-based object customisers: client-side, script-driven tools that don't require CAD knowledge for the user. Like the Thingiverse customiser but on steroids. It's a fascinating project.

And I think it's not the statefulness that is the significant thing about CadQuery and Build123D. It's the access to a bRep kernel, so you can do operations with faces and vertices, you can reflect (analyse, measure) the model, etc.

Being able to do operations on a generated face or edge means not needing to know (or recalculate) the location of that face in 3D space; it saves you so much in the way of maths.

If you have very simple (or very mathematical!) models, OpenSCAD can help. But once things get complex you just have file after file of variable definitions.

Functional flows on vertices, edges and faces created by previous operations are much closer to a code equivalent of GUI CAD.
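To make that concrete, here is a minimal CadQuery sketch of the kind of face-based operation described above (the shape and dimensions are just illustrative):

    import cadquery as cq

    # Make a box, pick the face the previous operation produced, and cut a
    # hole centered on it, with no need to hand-calculate where that face sits.
    result = (
        cq.Workplane("XY")
        .box(40, 40, 10)
        .faces(">Z")       # select the topmost face of the box
        .workplane()       # start a new workplane on that face
        .hole(6)           # drill a 6 mm hole through the part
    )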


> Replicad is quicker to render complex things than OpenSCAD -- significantly quicker. It uses an emscripten port of OCC.

OpenSCAD integrated manifold into its codebase, though you would need to use a development build to actually use it, since the last release was in 2021. I heard manifold is significantly faster than CGAL.


That's good to know.


> Python and stateful CAD drawings sound like a nightmare to me.

Please correct me if I'm wrong, but it doesn't appear stateful to me. The context managers mostly make the organization of objects mirror the organization of the code.

They’re stateful in the sense that some bits are part of a larger assembly, but I think that’s inherent in the domain. The features of the object have to relate to each other so it knows how to stitch the object together (eg which side of a face is external and which is internal).


If OpenSCAD had STEP file support, I could do all my design work in it. But it can't, so I can't.


OpenSCAD basically has no tools to aid complex modeling. You have to know trigonometry and often use pen and paper to calculate points.

Build123d has a stateless algebra mode. And you can replace the math with simple construction elements and simply ask for intersection points.
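A minimal sketch of that algebra mode (names and dimensions are illustrative): parts are ordinary Python objects combined with operators, and you query the resulting geometry instead of calculating points by hand.

    from build123d import Box, Cylinder, Pos, Axis

    plate = Box(40, 40, 5)
    hole = Pos(10, 10, 0) * Cylinder(radius=3, height=10)  # position it, no trig
    part = plate - hole                                     # boolean subtraction

    # Ask the model for geometry rather than computing it on paper
    top_face = part.faces().sort_by(Axis.Z)[-1]
    print(top_face.center())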


OpenSCAD is a counterfeit CAD! It doesn't Aid your Design so much as render one the user has to already understand. I do like it for simple parametric changes to existing models though.

I wish we had something like it that could be used to create FreeCAD macros, as in "Here's a sketch, which FreeCAD translates to OpenSCAD arrays, then runs a script that can do stuff with this model as input".


Is that really "counterfeit"? As you mentioned, CAD is Computer-Aided Design, and OpenSCAD is certainly aiding in the design process by interpreting higher level commands about where to place geometry.

I have a lot of criticisms for OpenSCAD but I wouldn't call it a counterfeit, it's just a code-based approach to constructing something vs. a GUI-based approach.


It's more of a joke/exaggeration, but it does explain why I find it to be so hard to use.

It's much more of a one-way conversation: if you can't imagine all the rotations to make a part do something, trial and error is very slow.

Whereas in GUI CAD you mostly only have to be able to think in 2D.

And without a constraint solver, you have to have a much deeper understanding of all the spatial relationships involved.


Right -- OpenSCAD is an object compiler. You give it code, it gives you an object.

Your object is not something that can then be used to iterate on, except by placing it in space and adding or subtracting other stuff to/from it.

Have you looked at Build123D or CadQuery?

Both are Python packages (different API styles, compatible underpinnings) that do OpenSCAD-type things, but using the OpenCASCADE bRep kernel, so it is less "counterfeit" -- if you want to do something based on a face or edge or vertex that was the product of a previous operation, you can. Both have some constraints support.

In many ways they are both just a prettier alternative to the FreeCAD Python APIs -- indeed there was a CadQuery workbench for CadQuery 1.x.


The problem with anything other than OpenSCAD is it's somewhat nonstandard and often has sandboxing issues.

It's like the BASH of 3D: if I'm doing anything with code CAD, it's probably trivial enough that just using what everyone else uses makes sense, even if almost any other alternative is much nicer.


I agree that it's more difficult to manage Build123D or CadQuery due to their status as Python packages with heavier dependencies. (Less of a problem with Replicad, which is a client-side JS package)

This is a little bit of why I jumped to FreeCAD from OpenSCAD -- the existence of prebuilt distributions of FreeCAD, and the realisation that I'd always be able to script FreeCAD if I needed it.

Though I think Build123D has the beginnings of momentum (I also think it's not hard to see why):

https://github.com/phillipthelen/awesome-build123d

But OpenSCAD is a terrible, obstinate "standard" choice; I wish it were not seen that way, don't you? Because it's holding everyone back.

(This does make me realise that maybe working on an Electron-based Replicad desktop app would be a good use of my time.)


Yeah, it's not a great standard choice, pretty much any alternative is going to be nicer.

But the difference is relatively small, because I'd rather be using GUIs for anything nontrivial.


I like the idea of OpenSCAD but the language is too functional/immutable for my taste. It's interesting but having to rethink even algorithms with simple loops gets very tiring over time.

A debugger would be very helpful to be able to step through the code.


There is now a Python-enabled version:

https://pythonscad.org/

Using the # operator to make things transparent red helps a lot when stepping/iterating through code.


I tried that but couldn’t make it work on my M1 MacBook. Not sure why.


Please check in with the developer --- probably best to create an issue at Github:

https://github.com/gsohler/openscad/issues


The rendering is also very slow, even on powerful machines.


Have you tried a recent (nightly) build and enabled Manifold?


JSCAD is a thing:

https://openjscad.xyz/

But I really only fight with it because I know JS moderately well.


Have you looked at Replicad?

https://replicad.xyz

Similar principles, but a bRep kernel so a much richer API.


A few weeks ago I was planning to design a model I could send to a local 3d printer to replace a broken piece in the house for which I knew it would be impossible to find something that would fit exactly.

I looked around through a couple of open source/free offerings and found them all frustrating. Either the focus on ease of use was too limiting, the focus was too much on blob, clay-like modeling rather than strong parametric models (many online tools), or they were too pushy about making you pay, or the UI was not intuitive (FreeCAD).

OpenSCAD was the one which allowed me to get the model done, and I loved the code-first, parametric-first approach and way of thinking. But that said, I also found POV-Ray enjoyable to play around with back in the 2000s. Build123D looks interesting as well, thanks for recommending that.


The major advantage of Build123D for your use case -- sending it to someone else to fabricate it -- is STEP output support.

This really expands your options for what you can make and who you can ask to make it. There are now some online fabrication places that will do CNC from mesh formats, but really the only way to have proper control is sending them a STEP file.


> now that Fusion costs money

I know they've been obnoxiously chipping away at the features available in their Personal edition and introducing artificial limitations. But my free installation still works and I haven't seen any indications that it's going away.


Fusion as a CAD engine is great. I've not used the CAM side, and while I used to use Eagle a lot I've tried to invest more energy into Kicad. The online limitations are frustrating though. Randomly and inconsistently not being able to export STLs because of a "translation service error" (when it could 2 minutes ago), or the inability to make drawings with the free edition. I mostly use it because there isn't anything else half as good for OS X that works offline.


I used it to do some sheet metal modeling, then sent the models off to a laser cutting/bending service that shipped me the pieces. Then I went back to Fusion to 3d print some brackets/scaffolding using the same sheet metal models as a reference, to assemble the pieces into the finished product. This was during a 3 month leave from work, starting from zero knowledge beforehand. It was probably the most fun I've had in years, and mostly thanks to how slick Fusion is and how many tutorials there are out there.

There are some export formats that it uses cloud machines for, which I think is silly and arbitrary. It's probably done that way to upsell their premium product for faster wait times or unlimited quota. For my uses I was able to select formats that didn't require the cloud.

Fusion is much more polished compared to FreeCAD and so I'm not sure if I'll ever end up making the switch. But I'm glad to see a free alternative, just in case.


Most of the common translation options should work offline (ie Fusion is capable), but Fusion sometimes gets stuck in a weird state where it insists it needs connectivity. Perhaps it's a quota thing but I've never found it to be consistent. This happens fairly often with STLs for 3D printing.

Once it's gotten into that hole it will often refuse to export any other format until connectivity is restored, even if the app is restarted. It's known behaviour, for example the official guidance is that changing binary to ascii might help, or you shouldn't export directly to a slicer when offline, or don't use certain menus. But it seems like a wontfix.


Fusion 360 CAM is great for me (hobbyist doing CNC with wood and other materials). It's handled some pretty tough jobs, like a full topo map of california. It's why I pay for the product. I tried the electronics stuff in Fusion and decided not to use it because it didn't work nearly as well as Kicad.


I'm also happy for this. I'm an EE with limited MCAD experience, so I usually hop onto Onshape when I need a custom trinket to 3D print. I did use FreeCAD for a small fixture for my day job earlier this year and I was pleasantly surprised. For someone with no experience, it worked very well and when I lose access to Onshape I'll definitely pick up more with FreeCAD.


Hope that the new changes make FreeCAD a little more accessible. Coming from Fusion, I really tried to make it work for me, but the UI is so awkward and abstruse I quickly gave up.


Beautiful, this person can do some really good web design.


Cool visualization!

It would've been nice to not assume only one lottery winner. People tend to pick numbers that are meaningful for them: birthdays, favorite numbers, lucky numbers. Thus it actually significantly increases your EV if you pick unusual numbers, which is not reflected here.
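A toy calculation of the effect (all figures below are made up purely for illustration, except the jackpot odds, which are the real Mega Millions number):

    # Expected value of a jackpot ticket when the pot may be split with
    # other winners who picked the same numbers.
    jackpot = 500_000_000
    p_win = 1 / 302_575_350           # Mega Millions jackpot odds

    def ev(expected_co_winners):
        # Your share is the jackpot divided among you and the expected
        # number of other players holding the same combination.
        return p_win * jackpot / (1 + expected_co_winners)

    print(ev(1.0))   # "popular" pick, e.g. all birthday numbers
    print(ev(0.1))   # unusual pick that few other players choose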


So, for Mega Millions:

For 1-70: Consider numbers above 31 (days of month)

For 1-25: Similarly, numbers above 12 might be less common (months of year)

What other numbers above 31 would you want to avoid? 33, 44, 50, 69, 70? And you might want to avoid sequences as well.



Maybe this is in the supplement of the whitepaper [0], but I would have loved to see more analysis of how novel the designed proteins really are.

In the whitepaper they mention that they are novel compared to other in silico design techniques, but to my knowledge other binders to VEGF and the Covid spike protein exist and would already be found in the PDB database that DeepMind trained the model on.

This is not to minimize the results- if the history of ML is anything to go by, even if AlphaProteo does not currently beat the best affinity found by in vitro screens, I do not doubt that it soon will!

[0] - https://storage.googleapis.com/deepmind-media/DeepMind.com/B...


Might depend on what your measure of 'novelty' is in protein structure. A single residue change (for example) would not normally be considered a novel structure - it's just a mutation.

However, a new fold - that is, the shape that the backbone folds into - would be novel. Potentially also novel would be 'chimeric' structures with parts from other structures, as with chimeric domain swaps.

There was a structure designed by the Baker lab called 'Top7' - https://pubmed.ncbi.nlm.nih.gov/14631033/ - that I remember as groundbreaking at the time :) (in the ancient days of 2003, it seems ...)


Exactly. If the proteins suggested in this paper are very similar to known good binders in PDB then I am much less impressed by the results. You could argue they are generating a structure from the training set.

I want more info about how novel these proteins are.


They must be somewhat novel in that the wet lab work verified up to 10x stronger binding, as predicted. I agree it would be interesting to see how they compare to known binding proteins.


We've been able to design tight binders for quite some time now - the issue with synthetic designs is that they tend to bind a little too tightly. You want to have a reasonable off-rate, and the ligand protein should do more than just bind; it needs to effect some sort of response from the bound protein.

When you look at these synthetics they often maximize for interactions of hydrophobic areas on the surface.


This essay resonated with me, speaking about the "mediocrity" of our future relationship with computer assistants.

I loved the paragraph about feeling "scammed" though I would've called it being "faked". The AI doctor can never use the stethoscope around her neck. She is hijacking totems of professionalism to appear more comforting, without the capabilities to back them up. The fake veterinarian can suggest a diagnosis but can't actually treat anyone. The real-estate chatbot cannot try to help a domestic violence victim.

Maybe that's why I interact with AI assistants the same way I interact with psychopaths. I'm comfortable interacting with them in jobs where the law will incentivize them to behave well. But for things like teaching, or medicine, or personal matters, I prefer someone with empathy.


“It” feels more appropriate than “she” for an ethereal entity.

Besides, attributing a gender to an AI can be misleading and anthropomorphizes a non-human entity, creating unrealistic expectations from the start.


That seems like an odd thing to point out considering that the point of GP's post surely indicates that he agrees with you.


In case anyone is looking for the youtube video, it's pretty impressive:

https://youtu.be/abi84lnjNV4

