Hacker News

We've talked before about our respective molecular graphics background.

I started with Unix on a Personal IRIS as an undergrad working in a physics lab which used it for image capture and analysis. I was the nominal sysadmin, with one semester of Minix under my belt and knowing just enough to be dangerous. (I once removed /bin/cc because I thought it was possible to undelete, like on DOS. I had to ask around the meteorology department for a restore tape.)

The summer before grad school I got a job at the local supercomputing center to work on a parallelization of CHARMm, using PVM. I developed it on that PI, and on a NeXT. That's also when I learned about people at my future grad school working on VR for molecular visualization, in a 1992 CACM article. So when I started looking for an advisor, that's the lab I chose, and I became the junior co-author and eventual lead developer of VMD.

With a Crimson as my desktop machine, a lab full of SGIs and NeXTs, and the CAVE VR setup elsewhere in the building. Heady times.

I visited SGI in 1995 or so, on holiday, thinking that would be a great place to work. They even had an Inventor plugin for molecular visualization, so I thought it would be a good lead. I emailed and got an invite to visit, where the host kindly told me that they were not going to do more in molecular visualization because they wanted to provide the hardware everyone uses, and not compete in that software space.

In the early 1990s SGIs dominated molecular modeling (replacing Evans & Sutherland), so naturally the related tools, like molecular dynamics codes, also ran on them. But we started migrating to distributed computing, where it didn't make sense to have 16 expensive SGIs, leaving them more as the head node .. which, as you pointed out, was soon able to run just fine on a Linux machine.




Pardon my ignorance, but what is so unique or special about molecular visualisation compared to, say, Quake - or CAD? If you’ll permit me to reduce it down to “just” drawing organo-chem hexagons, lines, and red/grey/black spheres connecting those lines (and a 360-degrees spin animation for the investor-relations video) - where’s the room for the rest of CG? E.g., texture-mapping, fragment shaders, and displacement mapping?


Quake came out in, what, 1996? And was written by some of the foremost practitioners of computer graphics?

We were a couple of physics grad students working on a side project in late 1993. My background was a semester course based on Foley & van Dam. Hardware gave us a 5-10 year lead over what we could have done with consumer tech.

There wasn't really a "rest of CG". Only the highest-end SGI machines at the time had hardware texture mapping - most did it in software (see https://en.wikipedia.org/wiki/Extreme_Graphics).

We aren't talking 2D organo-chem hexagons, but 3D spheres and cylinders. Back around 1995 I posted some benchmarks to Usenet about the different approaches I tried (including NURBS), but I can no longer find a copy of it.

The straightforward way is to render the spheres as a bunch of triangles, so, what, 50 polygons per sphere? Times 100,000 spheres = 5 million polygons. That was large for the time, but doable. Plus, during movement we used a lower level of detail.
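To make that arithmetic concrete, here's a toy back-of-the-envelope sketch (not the actual VMD code, and the tessellation parameters are my own illustrative picks): a UV-tessellated sphere with two triangle-fan caps and quad bands in between, plus the total polygon budget for a 100,000-atom system.

```python
def sphere_triangles(stacks, slices):
    # top and bottom caps are fans of `slices` triangles each;
    # the (stacks - 2) middle bands are quad cells, 2 triangles per cell
    return 2 * slices + 2 * slices * (stacks - 2)

per_sphere = sphere_triangles(6, 5)   # coarse sphere: 50 triangles
total = per_sphere * 100_000          # 100k atoms -> 5,000,000 triangles
print(per_sphere, total)              # 50 5000000

# during motion, drop to an even lower level of detail, e.g.
lod = sphere_triangles(4, 4)          # 24 triangles per sphere
```

Halving the stack/slice resolution roughly quarters the triangle count, which is why a level-of-detail switch during movement pays off so quickly.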

What was Quake's polygon count?

Oh, and we're displaying animated molecules, including interacting with a live physics simulation, so no pre-computed BSP either.

Rasterizing spheres quickly on a PC was also possible then, which was RasMol's forte, but the result was flat compared to having a couple of hardware-based point lights plus ambient lighting.
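The core trick behind fast software sphere rasterization is that a sphere's surface normal falls straight out of the pixel position on its projected disc, so shading is cheap per pixel. A toy sketch of that idea (not RasMol's actual code; the light direction and ambient term are illustrative):

```python
import math

def shade_sphere_pixel(x, y, light=(0.577, 0.577, 0.577), ambient=0.2):
    """Lambertian intensity at (x, y) on a unit sphere's projected disc.

    x, y range over [-1, 1]; `light` is a unit vector toward the light.
    Returns None for pixels outside the sphere's silhouette.
    """
    r2 = x * x + y * y
    if r2 > 1.0:
        return None                       # outside the projected disc
    z = math.sqrt(1.0 - r2)               # surface normal is simply (x, y, z)
    diffuse = max(0.0, x * light[0] + y * light[1] + z * light[2])
    return min(1.0, ambient + (1.0 - ambient) * diffuse)
```

Since every sphere shades the same way, a renderer can evaluate this once into a small shaded-disc template and blit scaled copies per atom, which is how flat-looking but fast sphere rendering was done without lighting hardware.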

Interestingly, AutoCAD (RIP Walker) tried to get into molecular modeling, but it didn't work out. https://www.fourmilab.ch/autofile/e5/chapter2_82.html


3D games use a lot of smoke and mirrors to make you believe you're seeing a lot.

Not looking it up right now but the original Q1 had a very low poly count.



