Lately I've been wanting a VR tool that does interactive 3D projections of 4D geometry. It's already so counterintuitive that I'm curious whether the additional information available when projecting to 3D rather than 2D would allow better intuition for 4D transforms, volume-as-surface, etc. For some reason 4D geometry has always been an annoying obsession for me, like an itch that I can't scratch.
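To make concrete what I mean by projecting to 3D, here's a minimal numpy sketch; the camera placement along w and the function name are just illustrative, not from any existing tool:

```python
# Minimal sketch: perspective projection of 4D points onto the w = 0
# hyperplane, analogous to the usual 3D -> 2D perspective projection.
import numpy as np

def project_4d_to_3d(points, camera_w=3.0):
    """Project Nx4 points from a 'camera' at w = camera_w onto w = 0.

    Each point's xyz is scaled by camera_w / (camera_w - w), so geometry
    farther away along w appears smaller. Assumes all w < camera_w.
    """
    points = np.asarray(points, dtype=float)
    scale = camera_w / (camera_w - points[:, 3])
    return points[:, :3] * scale[:, None]

# Example: the 16 vertices of a tesseract (hypercube centered at origin).
verts = np.array([[x, y, z, w]
                  for x in (-1, 1) for y in (-1, 1)
                  for z in (-1, 1) for w in (-1, 1)], dtype=float)
print(project_4d_to_3d(verts))
```

Rendering those projected vertices (plus edges) in a headset, and letting a 4D rotation update them each frame, is essentially the interaction I'm after.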
Have you seen 4D Toys (http://store.steampowered.com/app/619210/4D_Toys/)? It's definitely a toy and not a serious visualization tool, but it still might help you build some intuition about 4D space.
From the readme it looks like the visualisation is off to the side in a self-contained box. What I'd really want is walk/fly-through capability for visualising projections of higher-dimensional data. Is this possible with your tool?
Second question: how many data points can a visualisation contain before you start to see noticeable lag?
A user can reach out, grab the graph, and scale/position it to their preferences. You can walk/fly through your visualizations and get intimate with each data point.
Re: second question, I'm unsure, but I'll ask the team for some stress tests.
I partially disagree. While I don't have VR goggles myself, and agree with you that so far I haven't seen a reason to buy them, a visualisation tool like this is a use case I could see being genuinely beneficial.
With that being said, it has to be done right. Not only that, I'd expect it to be 'VR-only', not even just 'VR-first': the kind of visualisation tool I'm imagining would have difficulty conveying the same data on a flat screen. Sure, one could argue that you're still only looking at flat screens inside the goggles, but the multiple axes of movement afforded by head translation and hand movement give you superior control compared to mouse input alone.
> Sure, one could argue that you're still only looking at flat screens inside the goggles, but the multiple axes of movement afforded by head translation and hand movement give you superior control compared to mouse input alone.
If you were looking at two screens with identical images, that might be a valid argument, but VR headsets provide stereoscopic viewing by presenting a different image of the virtual scene from each eye's viewpoint; the difference between the two images is known as binocular disparity. It's the same principle used in 3D TVs and anything else that requires you to wear special glasses.
The "head translations" you're talking about gives the visual system depth cues via motion parallax, where objects in the foreground appear to move faster than those in the background when the head is moved from side-to-side.
These two cues together yield a very strong sense of "3D depth" (the component derived from binocular disparity is called stereopsis). Having controllers with six degrees of freedom (6DOF: translation along and rotation about the x-, y-, and z-axes) to manipulate and interact with 3D data should be superior, since it's no longer necessary to map 2D mouse inputs onto 3D operations; in theory, that should also decrease cognitive load.
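To make the 6DOF point concrete, here's a minimal numpy sketch (function names and the rotation convention are mine, not from any particular SDK): a single controller pose maps directly onto one rigid transform of the data, with no 2D-to-3D mapping step in between.

```python
# Sketch: one 6DOF controller pose applied directly to a point cloud.
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation about x, then y, then z (angles in radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_6dof(points, translation, rotation):
    """One pose = one rigid transform; no 2D->3D input mapping needed."""
    R = rotation_matrix(*rotation)
    return points @ R.T + np.asarray(translation)

cloud = np.random.rand(1000, 3)
moved = apply_6dof(cloud, translation=[0.1, 0.0, -0.2],
                   rotation=(0.0, np.pi / 8, 0.0))
```

With a mouse you'd have to invent some mapping (drag = rotate? scroll = zoom?) for each of those six axes; with a tracked controller the mapping is the identity.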
How would AR make this more interactive? I haven't seen one AR device with an interaction model as good as VR's. I also don't know why you'd want to see the real world while visualizing data. Maybe so you could interact with other people IRL who are also wearing AR headsets?
Calcflow team member here. We are working on something very special involving AR :) It should be released in the coming weeks. We think EDU users may benefit greatly from AR collaboration.
VR is for toys. The context switch between whatever you are working on and visualizing it in VR is too expensive. Or at least you'd need radical advantages from VR to make the trade-off worthwhile.
A combined AR/VR headset would be amazing for this. I would love AR for a workspace: I could do my work and interact with my team, then switch to VR for something more complex.
I've hacked together some small VR visualizations for multidimensional data with particularly hard-to-understand structure, and my experience with that was:
1. Moving through the data set in room-scale VR is completely different from seeing and manipulating a 2D projection of it on a screen. Exploration is much more intuitive (just move your head!) and the perfect depth information you perceive feels almost like an additional input channel to the brain.
2. Collaboratively analyzing visualized data in VR would require hacks that render abstract avatars of co-workers and strip away a lot of the informational content of direct communication. With AR and a direct line of sight, all of these obstructions just go away.
As someone building an open-source data tool using VR, I like that people like CalcFlow are experimenting with this. This is my project: https://github.com/zubairq/gosharedata
I agree that the project is cool. But I just want to give the answer that always ought to be given whenever someone on HN says "I wish we had X when I was learning this stuff", where X is usually some visual way of illustrating a concept. And that is simply that there is no good substitute, when learning mathematics, for doing a bunch of problems with good old dry symbols. Visualizations are great, and they can aid intuition, but they are not the be-all and end-all. Struggling through a large number of problems really is necessary to internalize this stuff deeply enough that you maintain a solid understanding ten years after you were first exposed to it.
Sorry for the skepticism, but what’s the advantage of this being in VR? Seems like a 3D visualization that I can see on a monitor / iPhone just as well?
Digging a bit, it appears this is funded by an ICO, or at least created by a company currently running an ICO? Too bad that basically short-circuits to "smells fishy" to me right now; hopefully they can pull through and build a track record of credibility.
Nanome team member here. Can't blame you for the skepticism; the blockchain space is full of nonsense these days. We are, in fact, running a token sale for our decentralized R&D platform, Matryx. Calcflow will be one of the first dApps to integrate Matryx. By open-sourcing Calcflow, we hope to establish our commitment to open-access STEM and build said "track record."
So, a 3D graphing program competing with the TI-84+ (per the project's README)... is going to be notably tied to a custom Ethereum token?
That idea does seem to contribute to the nonsense you mention.
Especially considering the README complains about competitors being "unintuitive", but then just slaps `blockchain` onto a visualization program.
- - -
Also, pardon my pedantry, but this is "source-available": most people consider "Open Source" (as a branding) to mean open to access, changes, and use. You have released the code, sure, but you are massively restricting how that code can be used (including preventing the program from being used with other versions of the same program??)
Do you know why the project is released under a custom license rather than something familiar? If this could be summarized in a few simple sentences, I'd appreciate it. Ain't nobody got time to read all that legalese.
For example: is the purpose of the license to prevent commercial re-use? In other words: what does this custom license accomplish that no normal OSI-approved license could?
The primary function is to prevent commercial reuse. We work with a lot of institutional and enterprise clients, and it's a bit complicated to protect their purchase of our software while also letting others modify the source and granting free personal use.
I just posted on the main thread, but you can make your project work with mouse and keyboard as well as VR, as I have done with my project here: https://github.com/zubairq/gosharedata
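In case it's useful, the core pattern is just abstracting the input device behind one interface so the same visualization code runs with either VR controllers or mouse + keyboard. Here's a hypothetical Python sketch; none of these class names come from gosharedata or any real VR SDK:

```python
# Hypothetical sketch: one input abstraction, two backends.
from abc import ABC, abstractmethod

class InputSource(ABC):
    """One interface for 'where is the user's hand/cursor this frame'."""
    @abstractmethod
    def pose(self):
        """Return (position_xyz, rotation_xyz) for this frame."""

class VRController(InputSource):
    def __init__(self, runtime):
        self.runtime = runtime  # hypothetical handle to a VR runtime binding

    def pose(self):
        return self.runtime.controller_pose()  # full 6DOF pose (hypothetical call)

class MouseKeyboard(InputSource):
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)

    def pose(self):
        # Mouse deltas would drive two axes, keys the third; fixed here for brevity.
        return self.position, (0.0, 0.0, 0.0)

def pick_input(vr_runtime=None):
    # Fall back to mouse + keyboard when no VR runtime is detected.
    return VRController(vr_runtime) if vr_runtime else MouseKeyboard()

print(pick_input().pose())
```

The visualization layer only ever sees an InputSource, so adding a new device (or a non-VR fallback) doesn't touch the rendering code.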