This is so cute and cool! While I don't know whether I want to play long Quake sessions on my watch, it shows that the Apple Watch is quite a powerful compute device. It should actually be more powerful than most workstations of the '90s, as it has a dual-core 64-bit processor.
It also shows how much the watch and some other Apple devices are held back by software restrictions. Basically any software released in the '90s should run easily on the watch. Of course, due to the small screen size and the lack of a keyboard, a totally different UI would be needed, but if people were really free to experiment and distribute, a huge field of new software could open up.
TBH I would say in the case of the watch, it's held back much more by energy budgets than software; the software is just enforcing that. It's cute that you can run Quake on it, but for how long before it killed the battery? 20 mins? You can see Apple pushing the bounds of what it can get away with and still maintain acceptable battery life from generation to generation.
Yeah, you can’t directly use Metal or MetalKit on the watch, so you can’t do completely custom rendering of anything on the GPU (which is why this port has to use software rendering), but even if you could, you’d probably just blow through the battery in under an hour.
SceneKit is available. Not sure what the battery life is like when it’s used for anything other than displaying a simple 3D model briefly (e.g. like the fitness medals).
Who is stopping you from doing what exactly? What would be true if you were not stopped? Who is not free to experiment and distribute? What does the huge field consist of?
I second this - people keep making this claim, but I haven’t heard anyone articulate what would come into existence, especially since whatever it is could presumably be made on Linux.
> To continue on this topic, blue light filters may look pleasing to some but it’s not sure whether they do have an effect
I don't know if they have an effect or not, but I much prefer having it on. It feels easier on the eyes. When I turn it off I immediately feel a kind of discomfort.
I don't believe it actually has an effect on my health. But the warmer colors definitely have an effect on my comfort.
I had constant migraines until I started wearing blue light coated (plain glass, no prescription) glasses. Before that I was wearing dark glasses indoors on gloomy days.
Certain lighting (especially fluorescents) and just about any screens always set me off.
True. We can tolerate orange screenshots if it makes some people feel better. Even if it’s a placebo effect.
One study found that people using such blue-light filters tend to have more screen exposure at night. Perhaps they think the filter allows them to use the screen more, which may not be good.
> Lastly, we analyzed the use of blue-light filters, according to an answer to a simple question: whether participants did or did not use a specific filter for filtering blue-light on their screens. Only 10.6% of the sample (N=74) reported using filters, whereas 622 participants did not. The most prevalent means of filtering blue light were f.lux (Windows) and Twilight (Android) software. No significant differences were observed for all sleep-related variables mentioned in the previous analyses. A statistical trend was found for the duration of sleep on workdays (489 vs. 461 minutes, t=3.595, p=0.058, Cohen's d=0.23), meaning those that used blue-light filters slept in average approximately 28 minutes longer on a workday than those that did not use any means of filtering blue-light. No other differences were observed. An interesting finding, however, is that the group of people who use filters had more (albeit without statistical significance) total screen exposure (8.6 vs. 8.3 hrs) on average and more screen exposure on PC (4.3 vs. 3.6 hrs) and mobile devices (3.6 vs. 3.4 hrs), but less exposure to TV (0.7 vs. 1.2 hrs).
The article is quite positive about blue-light filters and offers explanations for its findings in the discussion. The authors call for a better study with more participants and more samples.
People don't use filters just for sleep-related benefits. My optician advised me to stay away from blue light and sunlight because of my eye conditions.
> Extract game music from the gog game files:
bchunk -w "game.gog file location" "game.cue file location" track
(Music tracks will extract into the current working directory (track02.wav–track11.wav).)
Assuming for the sake of argument that playing back MP3s during gameplay is beyond this device, resampling the WAV files to halve the sampling rate when extracting them certainly isn't. You'd get half the storage back, the audio presumably wouldn't suck any worse than it already does on that little device, and it should all work, since the 44.1 kHz sampling rate isn't baked into the WAV file standard (but who cares, you've got the Quake source code anyhow).
It'd be an interesting exercise to see how much you could lower the sample rate without it sucking. On second thought, you could convert stereo to mono first, without even worrying about the sampling rate, and get half the space back. But I'm sure you could do more.
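Both ideas together can be sketched with nothing but the Python standard library. This is a quick-and-dirty illustration, not a proper resampler (it decimates without an anti-alias low-pass filter, so high frequencies will alias; the function name is mine):

```python
import array
import wave

def downmix_and_halve(src_path, dst_path):
    """Turn a 16-bit stereo WAV into mono at half the sample rate.

    Averaging channels halves the size; dropping every other sample
    halves it again, so the output is a quarter of the original.
    """
    with wave.open(src_path, "rb") as src:
        if src.getsampwidth() != 2:
            raise ValueError("expected 16-bit PCM")
        nch = src.getnchannels()
        rate = src.getframerate()
        samples = array.array("h", src.readframes(src.getnframes()))
    if nch == 2:
        # average left/right samples into a mono track
        samples = array.array(
            "h",
            ((samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)),
        )
    half = samples[::2]  # crude decimation: keep every other sample
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(rate // 2)
        dst.writeframes(half.tobytes())
```

Run over track02.wav–track11.wav this turns 44.1 kHz stereo into 22.05 kHz mono. A real resampler (sox, ffmpeg) would sound noticeably better for the same size.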
How about Quake on Amazon Echo + Alexa? Build it like a text adventure, but use NLP + computer vision to describe the scene, and you shoot and move by describing what you want. Great for people with poor vision or nostalgia for the '80s.
Quake 1 port for Apple Watches that uses software rendering and has mostly working audio playback. Runs shareware and registered versions of the game with optional “cd” audio.
This port started from the original Quake Watch port by Tomas "MyOwnClone" Vymazal.
Changes by ByteOverlord:
Save and load game on watchOS
Music playback ("cd" audio)
Camera look and tweaked controls
Autosaving options
Map quick select and cheats screens
Automatic native resolution on watches
How long can I go before I have to rebuild the app to my watch?
This is one of those things I'd love to install and play around with like, twice a year, but I suspect I have to reinstall it every 7 days to keep it working unless I pay an Apple dev tax.
I suppose because I've seen my Apple Watch struggle with the most basic of things. Like displaying a text message, or scrolling through a full set of apps. It's never felt like a speedy device, and seeing something render so smoothly like this gave a contrast to how I usually feel. That help?
The first gen watch was very much like the first gen iPhone - a cool tech demo on underpowered hardware.
It was supported far too long, and with its final updates it was barely suitable as a timepiece. It couldn’t handle the most basic of tasks, like receiving messages or playing MP3s on your AirPods.
Could be. The date I mentioned is when I could afford to buy a P166, and I didn't bother to check the interwebs for the release date at the time. Now that I've gone and looked it up: depending on the exact variant of the P5 or P6, it could have been released between 1995 and 1997.
Indeed. Currently playing Prodeus, which is made by a couple of people using Unity. It runs at about 100 FPS, whereas Doom Eternal hits about twice that while looking much, much better.
In fact, there's a recent game called HOAT or something, built by a single guy using the Quake engine. I had to return it because it was running at 30 FPS on my machine.
I have some experience with ML and I have no idea what you're talking about. It kinda sounds like neural architecture search and sparse models created using weight pruning. Lots of people are working on both of those things, but IMO the latter (if that's what you mean by "Can you add weights non uniformly?") is a dead end for most use cases where you have some sort of accelerator or deep learning instructions available like on every modern desktop, laptop, phone, and server. Models that use their weights "efficiently" in terms of FLOPs tend to perform like crap on real hardware.
I’m unfortunately the sickest I’ve been in years, so this will have to wait. Maybe it’s part of why my comment sounded strange.
There is an idea here, and it’s a mistake to dismiss it out of hand. Adding weights non-uniformly during training (not after) is the key to smaller models that outperform present-day GPT-3.
A sketch of the algorithm is to start with a 2x2 block of weights, sum the gradients across 10 training steps, then subdivide the quadrant with the highest delta.
Doing this recursively is prohibitive, which is where megatexture comes in.
Many advantages. At runtime you don’t need weight compression because you can simply switch to a lower miplevel if running on a phone. Different accelerators during training can focus on different areas of the network. Weight dropout is an automatic feature. Etc.
If you’ll excuse me, it’s back to hugging the porcelain bowl.
You mention this technique has been shown to outperform GPT3, do you have a citation for that? Would love to read more details about this interesting concept.
"Adding weights non uniformly during training (not after) is the key to smaller models that outperform present day GPT3." seems to greatly imply a certainty of result that has already been discovered. Though this commenter is familiar to me, and I know he has made silly claims in other threads throughout the years here.
I ended up calling an ambulance so I’ll postpone this until later. Feeling a little better but a full explanation will have to wait.
The answer is that of course it’s not proved yet, since no one has implemented it (or at least efficiently). It’s fine to be skeptical.
Current techniques are blocked by the technical challenge of getting 10GB+ to fit on a pod. Very few people have those skills. If there’s even a chance that this will work, it’s worth exploring, so I will be.
Sounds kinda like progressive growing except you're not doubling the resolution uniformly. See ProGAN and its successors. You'd still need to add a large block of weights at a time for performance reasons.
Edit: Ah I checked your profile and you already know all this. You probably should have mentioned that lol
Network compression is already a thing that is studied and forms of it are already used in production neural net models where latency / cost is important.
The way pruning works is not like how a "megatexture" works.
For those that prefer not to leave a mess on their file system, nor have default permissions changed and security degraded, nor have Google Analytics snooping on them; who would prefer access to upwards of six times as many packages, an easy choice between a binary install and a full source build (dependencies included), and a more recent version (1.6.1 vs. 1.6); and/or are recovering alcoholics: innoextract has been available on MacPorts[1] since January 2019.
> For those that prefer not to leave a mess on their file system ... innoextract has been available on MacPorts[1] since January 2019.
Speaking of making a mess, instead of somewhat vague sniping, you could have made it clear that your beef is with Homebrew, rather than risking the impression that it's with this neat Quake 1 port. Because at the moment you have the top rated comment and it looks like you're slagging off this project rather than making a potentially valid argument for Macports over Homebrew.
The project author probably chose Homebrew because they already knew how to use it, it works, and they were more interested in making their Quake port than with evaluating different package managers. People make these kinds of choices all the time, particularly with side projects where they only have limited time to work on them.
> it looks like you're slagging off this project rather than making a potentially valid argument for Macports over Homebrew.
It looks like you're personally attacking me rather than speaking to any argument I may have made.
> The project author probably chose Homebrew because they already knew how to use it, it works, and they were more interested in making their Quake port than with evaluating different package managers. People make these kinds of choices all the time, particularly with side projects where they only have limited time to work on them.
As I made no mention of the author nor the project, and no one but the author knows the reasons for their choices, this is a mind-reading fallacy wrapped in a straw man.
And for a few years now it has done everything as the user, not su. (If you were using it before M1, it's actually a good idea to `brew bundle dump`, remove your installs and brew itself, then reinstall from the dumped Brewfile.)
The issue is that by default on install Homebrew uses Google Analytics.
> And in general it does everything as user not su for a few years now.
Homebrew always did everything as the admin user and not as root, and that was always the problem. It munges permissions of /usr/local; all files, not just directories. And it's not so much a package manager as a needless frontend for git. MacPorts, alternatively, respects default file system ownership, is a full-featured package management system, and, like pkgsrc, is based on the FreeBSD ports system.
And when uninstalled, MacPorts doesn't leave a mess. With these 3 commands, it leaves no trace (with the exception of the /opt directory in the event it is used for something else):
sudo port -vfp uninstall --follow-dependencies installed
sudo port -vfp uninstall all
sudo rm -rf /opt/local /Library/Tcl/macports*
If the day ever comes, good luck uninstalling Homebrew.[1][2]