
> It would make a lot of sense to have something like StarLisp or APL for CUDA right now. Trying to do data parallelism in C is about the most brain-damaged idea ever. I don't know if anyone is working on that or interested in it, though.

You may well be right, but I challenge you to prove it. I myself am very interested in whether this would work. I have spent many sleepless nights programming GPU algorithms and wondered whether ideas from other programming languages and paradigms (especially functional programming) could be applied to make GPU programming easier and more elegant.

The nested data-parallelism approach does look promising on paper, and many people are well aware of the theoretical possibility of it working on GPUs (including people at Nvidia itself), but so far nobody has succeeded in making it practical.
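To make that concrete, here is roughly what a GPU programmer writes by hand today for one classic "nested" problem, sparse matrix-vector multiply: the ragged list of rows is flattened into CSR arrays and the inner reduction is serialized per thread. This is my own illustrative sketch, not code from the thread; a NESL-style nested data-parallel language would let you express it as a map of dot products over a nested array and perform the flattening for you.

    // Illustrative sketch: scalar CSR SpMV in CUDA C, one thread per matrix row.
    // The "nested" structure (a ragged list of rows) has been flattened by hand
    // into row_ptr/col_idx/val arrays.
    __global__ void spmv_csr(int n_rows,
                             const int   *row_ptr,   // n_rows + 1 row offsets
                             const int   *col_idx,   // column index per nonzero
                             const float *val,       // value per nonzero
                             const float *x,         // dense input vector
                             float       *y)         // dense output vector
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n_rows) {
            float sum = 0.0f;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                sum += val[j] * x[col_idx[j]];   // inner loop serialized per thread
            y[row] = sum;
        }
    }

The flattening, the load balancing across uneven row lengths, and the launch geometry are all the programmer's problem here; that is exactly the work a practical nested data-parallel compiler would have to take over without losing performance.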

So, doing data parallelism in CUDA may be brain-damaged, but the flexibility and performance it delivers are mind-blowing. If you can achieve something comparable using a higher-level functional approach, I will be among your first users, and I'll tell everyone I know.
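For reference, this is what the "flat" CUDA style looks like at its simplest: a saxpy kernel where every thread handles one element, and the index arithmetic, bounds check, and launch shape are all spelled out by hand (again an illustrative sketch, not code from the thread).

    // Minimal flat data-parallel CUDA C kernel: y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)                                      // guard the ragged tail
            y[i] = a * x[i] + y[i];
    }

    // Launch: the programmer, not the language, picks the grid/block shape.
    // saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

Verbose, but the mapping from code to hardware is completely explicit, which is where the flexibility and performance come from.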

MultiLisp (AFAIK) offers traditional task parallelism rather than data parallelism, and would not map well onto a GPU.




I'm not saying data parallelism is bad; I'm saying C is a bad data-parallel language.


Of course... please read my reply again.

I am not disagreeing with you; I am saying that I would love to see a better data-parallel language than C (or CUDA, to be precise), but it does not exist right now (at least not a practical one that runs on a real GPU with reasonable performance and allows non-trivial nested and hierarchical algorithms).



