
We might as well embrace them: https://github.com/DivergentAI/dreamGPT


Consider adding visuals to each node. Check out: https://explorer.globe.engineer/


Very cool website! Adding visuals dynamically is costly and complex, but I'll try harder.


I am fascinated with this. What is it used for?


I use it to play music.

Short demo: https://youtu.be/G1ftvw-Y6pk

Open mic set: https://youtu.be/nKFK_OhQv3k


Did you transform a coat stand into an awesome futuristic instrument??? So glad I asked. Brilliant work. This should be on the HN front page!


Yup. That's exactly what I did.

I made an album for falling asleep to that you can find on various streaming music services. Just search "Autonomous Drone Lullabies" by Stefan Powell. It's an album that was made autonomously using an algorithm in Pure Data. It's also here: https://stefanpowell.bandcamp.com/album/autonomous-drone-lul...


This is seriously so cool. Thank you.


Purchased! Excellent find on HN today.


Right?! I'm about to purchase a copy too.


Reminds me of Electronicos Fantasticos (Japan), who re-purposes old items (TVs, old fans, barcode scanners, etc.) and makes electronic music. Super interesting stuff!

https://www.youtube.com/watch?v=A0VYsiMtrNE


These are some great insights! Thanks for summarizing them. It will save me a lot of time in the future. I would love to see this digital ad! Is it public?


Perhaps this link works: https://t.me/dubec/300

* the images are generated with Midjourney
* the transition in this particular one was exported from Processing
* the track is from Babe Roots' incredible Sufferation Time - https://baberoots.bandcamp.com/track/sufferation-time-babe-r...

The p5.js part was more challenging, as I had to draw only every 5th pixel each frame to get decent performance. But since this is a transition, you can barely notice it.
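
For anyone curious what that kind of pixel-skipping looks like, here is a minimal p5.js-style sketch (TypeScript, global mode) of the every-5th-pixel idea. The constant, colour, and drawing logic are purely illustrative and not taken from the dubsinth repo:

```typescript
// Illustrative sketch only: updates just every 5th pixel per frame,
// rotating which subset gets touched so the whole canvas is covered
// over STEP frames. Not the actual dubsinth code.
const STEP = 5; // hypothetical stride; the comment above mentions "every 5th pixel"

function setup(): void {
  createCanvas(400, 400);
  pixelDensity(1); // keep pixels[] 1:1 with the canvas size
}

function draw(): void {
  loadPixels();
  // frameCount shifts the starting offset, so a different fifth of the
  // pixels is rewritten on each frame.
  const offset = frameCount % STEP;
  for (let i = offset; i < width * height; i += STEP) {
    const idx = i * 4; // pixels[] is a flat RGBA array
    pixels[idx] = 255;     // R
    pixels[idx + 1] = 0;   // G
    pixels[idx + 2] = 128; // B
    pixels[idx + 3] = 255; // A
  }
  updatePixels();
}
```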

The whole loop is not much, but it took a good 6 hours to put together, then another 4-5 to rewrite it for the web.

Repo: https://github.com/stelf/dubsinth


This reminds me of the famous "King - Man + Woman = Queen" embedding example. The fact that embeddings encode semantic properties explains why simple linear functions work here as well.
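
A toy sketch of that linearity, using made-up 3-D vectors in place of real embeddings (the vocabulary and values are invented purely for illustration):

```typescript
// Toy demo of "king - man + woman ≈ queen": embedding vectors combine with
// plain linear arithmetic, and the result is compared by cosine similarity.
type Vec = number[];

// Invented 3-D vectors; real embeddings have hundreds of dimensions.
const vocab: Record<string, Vec> = {
  king:  [0.9, 0.8, 0.1],
  man:   [0.1, 0.9, 0.0],
  woman: [0.1, 0.1, 0.9],
  queen: [0.9, 0.0, 1.0],
};

const add = (a: Vec, b: Vec): Vec => a.map((x, i) => x + b[i]);
const sub = (a: Vec, b: Vec): Vec => a.map((x, i) => x - b[i]);
const dot = (a: Vec, b: Vec): number => a.reduce((s, x, i) => s + x * b[i], 0);
const cosine = (a: Vec, b: Vec): number =>
  dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));

// king - man + woman
const target = add(sub(vocab.king, vocab.man), vocab.woman);

// Nearest remaining word by cosine similarity, excluding the query words.
const nearest = Object.entries(vocab)
  .filter(([w]) => !["king", "man", "woman"].includes(w))
  .sort((a, b) => cosine(b[1], target) - cosine(a[1], target))[0][0];

console.log(nearest); // "queen" with these toy vectors
```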


Hallucinations are essential for divergent thinking. Not everything is solved by following goal-driven approaches. Check out DreamGPT: https://github.com/DivergentAI/dreamGPT



Thanks for the post - I put together the tool above. I tried to strike a balance between being concise and capturing all the important details. For that reason, the tool is hit or miss on longer (> 45 min) videos - the summary of this video is good, but I've seen it omit important details on other long videos. The tool also captures relevant screenshots for each section.

Hopefully it's helpful. You can summarize additional videos by submitting a YouTube URL in the nav bar or on the home page. Also, feedback welcome!


Are you using an LLM to summarize? If yes, can you share the prompts used?


Any idea how the author was able to have multiple desktop screens? What app is he using for this? I thought it only allowed you to mirror one screen per Mac and from my experience it is quite laggy.


This is the problem (with Apple): this device, like iOS devices, is fenced off, so you cannot run vscode etc. unless you connect a MacBook. For this device it’s actually worse than for an iPad to disallow that: it’s 4x as expensive as a MacBook Air here, but I cannot run most of the apps I want on it, while being able to would make me buy one today. I don’t want another iPad (which I bought because it’s nice and small with great battery life, but if I cannot code on it normally, what’s the point?).


>is fenced off, so you cannot run vscode

No, the only reason you can't run vscode is that no one has put in the effort to port it. It's a problem of financial incentives and not a problem of "fences."


"fences" is definitely not the best analogy. "spikes and minefields" would be more appropriate as Apple has explicit rules against apps that compile and run code, third party extensions would also probably be prohibited, and the terminal would have little use.

At the end of the day, what's possible under the current rules isn't that different from just running it in Safari, so why bother?


So visionOS is more open than iOS? That would be good. I read it was similar.

Edit: so it is indeed similar/the same; it cannot run normal desktop software even though it has a desktop CPU.


iOS could get vscode too

Edit: Desktop software can be ported to other operating systems. Just because an operating system can't run a different desktop operating system's software doesn't mean it can't run desktop software. The CPU is used in iPads too; it is shared across 3 different form factors.


Only the ‘shell’, not all the rest that makes it practical. There are a lot of blog posts (a yearly one here) of people trying to code on iPads; they all end badly because Apple allows nothing. It’s fenced off. It has nothing to do with a lack of incentive to port Xcode; there are plenty of code apps on the iPad, they just cannot run real envs (Docker, anything other than toy interpreters, etc.) without rooting.


Port it? What's different about the hardware on this device that requires ports of existing software? Is it a completely new CPU with a new architecture?


It is a new operating system with a different API and security model. Binaries built for one operating system do not necessarily run on another operating system, even when the hardware is the same.


Interesting, visionOS. Strange choice when they could've just wrapped the "windows" in iOS and left the rendering to the device.

That's probably what they do, and it's most likely more marketing than a truly new OS with breaking changes to how apps run, but I haven't looked into it.


I use the iPad Blink Shell app; it can have multiple windows and renders using the visionOS UI, making it sharper than mirroring a Mac's display.

It helps that I have optimised a great part of my workflow due to me liking tablets.

As others have said, the AVP is like strapping an iPad to your head - and I happen to like my iPad, too.


That's great! It even supports VS Code. How did you manage to use the mouse? My AVP refuses to pair with a bluetooth mouse.


It only supports the Apple Magic Trackpad: https://support.apple.com/en-us/HT213998


Does the "multiple window" support of Blink work seamlessly on the helmet?


Looks like iPad apps!

You can tell by the icon floating at the top right of the windows, which appears for "compatible apps" aka iPad apps.


Yes, you can only mirror one screen from a Mac, but he's using the AVP as the computing device itself.

So it's not multiple desktop screens; it's multiple windows from locally running apps.


Thanks for the feedback. You do bring up a good point there. I have updated the readme with a screenshot and more details on what it is doing.


Thanks a lot! I highly recommend that you give it a try. You should be able to run it on any PC/Mac; no GPU is required. The quality of the ideas it generates is fascinating. You can see a sample of what you get just from the first step (the "dream" phase): https://github.com/DivergentAI/dreamGPT/blob/main/docs/img/o...

