Quake III bounty: we have a winner (raspberrypi.org)
288 points by benn_88 on March 31, 2014 | 82 comments



"We need plenty of space to build the kernel. Compiling will take around 12 hours, so it is helpful to overclock the Pi for this task."

Or, you know, use a cross-compiler?


You probably already know this, but setting up a cross-compiler toolchain is a super annoying task, and is very dependent on the distro you're using. I'm sure the author knows about cross-compiling, but he probably preferred clear instructions that work for any RPi user.

There might be even stronger reasons not to use cross-compilers, such as weird bugs or compiler-version compatibility issues.


Indeed. Either you give distro-specific instructions (long), make people build a GCC cross-compiler from scratch (also long), or gloss over that bit (prone to failure).

Many Pi users' only Linux system is the Pi.


VirtualBox is free. Debian is free. Code Sourcery ARM/GCC Lite is free. I've set up ARM cross-compile environments this way in under an hour. This isn't really a super-hard thing to do.

[EDIT] Vendors do this too. I went to a workshop on the Freescale i.MX line and guess how everyone got the development system? A VMware image on a thumb drive. You don't have to be a hero when it comes to cross-compiling, just get the work done.
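
To give an idea of the scale of the task, the whole thing boils down to roughly this once the VM is running - a minimal sketch, assuming you've downloaded some prebuilt arm-linux-gnueabihf toolchain tarball (the tarball name, install path and target triplet below are placeholders):

  # Unpack the toolchain somewhere and put it on the PATH (names are placeholders)
  mkdir -p $HOME/x-tools
  tar xjf arm-toolchain.tar.bz2 -C $HOME/x-tools
  export PATH=$HOME/x-tools/arm-toolchain/bin:$PATH

  # Sanity check, then cross-build something trivial for the Pi
  arm-linux-gnueabihf-gcc --version
  arm-linux-gnueabihf-gcc -o hello hello.c
  scp hello pi@<your-pi>:   # copy it to the Pi and run it there to confirm it links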


This is anecdotal evidence from my own experience, so take it with a pinch of salt. I work on an open hardware project[1], and I believe that we lose a significant portion of users (~50%) at each step deeper into the development process. It seems that the steps are along the lines of: binaries - source releases - code from GitHub - updating firmware - compiling firmware - debugging hardware.

I think it would be useful to have a VirtualBox image available with everything set up for cross-compiling for the Pi, BBB, etc. The thought of setting up non-standard toolchains seems to put off a lot of users.

I've also been working on a project[2] involving the BeagleBone Black and I saw a sharp increase in users when I began distributing an OS image with everything pre-installed.

[1] https://github.com/greatscottgadgets/ubertooth [2] https://github.com/dominicgs/USBProxy


I'm surprised it hasn't been done already. Wasn't the RPi supposed to be an "educational" product, or is this just a case of "shut up and give me my toys already"?


The whole intention of the Pi as an educational tool was to provide a clean-slate platform where users could experiment without worrying about breaking things on their OS install, or having to slog through layers of virtualisation, build systems and compiler configs before doing any programming for the target system. If what you're suggesting is that RasPi users in this original target group ought to be plumbing together virtualised cross-compilation for their Pi with their own hands, that would defeat the purpose.

Of course, not everyone who wants to play with early versions of the new Q3A port falls into that category. But the fact that the Raspbian developers themselves have steered clear of x86->RasPi cross-compilation suggests that it's not necessarily straightforward even for experienced people.


"We want to break the paradigm where without spending hundreds of pounds on a PC, families can’t use the internet. We want owning a truly personal computer to be normal for children, and we’re looking forward to what the future has in store"

That's from RPi's mission statement. It's on their website.

RPi had nothing to do with being a cheap Linux platform for hardware hackers and people who wanted a cheap XBMC or MAME box. But now it's 99% that.


"RPi had nothing to do with being a cheap Linux platform for hardware hackers and people who wanted a cheap XBMC or MAME box."

Indeed; I didn't suggest otherwise. Mind you, getting families on the Internet for $50 or $300 wasn't really in the original mission either:

"The idea behind a tiny and cheap computer for kids came in 2006, when Eben Upton, Rob Mullins, Jack Lang and Alan Mycroft, based at the University of Cambridge’s Computer Laboratory, became concerned about the year-on-year decline in the numbers and skills levels of the A Level students applying to read Computer Science. From a situation in the 1990s where most of the kids applying were coming to interview as experienced hobbyist programmers, the landscape in the 2000s was very different; a typical applicant might only have done a little web design.

Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on.

There isn’t much any small group of people can do to address problems like an inadequate school curriculum or the end of a financial bubble. But we felt that we could try to do something about the situation where computers had become so expensive and arcane that programming experimentation on them had to be forbidden by parents; and to find a platform that, like those old home computers, could boot into a programming environment. From 2006 to 2008, we designed several versions of what has now become the Raspberry Pi; you can see one of the earliest prototypes here."

http://www.raspberrypi.org/about

It was meant to be a direct, hands-on programming environment with nothing valuable to break for programming beginners - hence David Braben and all the BBC Micro nostalgia.


Well, then I'm glad we're all busy porting Quake to this device instead of all the noble things you listed above.


The point wasn't to port Quake (it already ran on the device), the point was to port the recently open-sourced BCM21553 graphics driver to run on the Pi's VideoCore IV. Getting Quake III running on the resulting stack was just the defined endpoint.


And, once again, who does this satisfy other than the people seeking a cheap XBMC platform?


People who just want a cheap XBMC platform already had the proprietary driver.

This satisfies people who want to be able to read, understand and hack on the video driver.


...to make XBMC and Quake faster.


I hope your OS image is GPL compliant.


It's probably a good thing to do if you're doing a lot of ARM kernel development, but for a one-shot thing? For someone unfamiliar with those tools?

It would be interesting to race someone unfamiliar with that toolchain installing it and cross-compiling versus someone just following the instructions.


If it really takes 12 hours to compile the kernel and application on the target, I doubt any amount of instruction-following could end up slower than that.

And if it's a one-shot thing, why make people compile the kernel at all?


Start the compile, eat dinner, go to bed, breakfast, go to school, eat lunch, done. Beats fighting with a cross-compiler toolchain for a one-off job by miles.

(Reminds me of a povray benchmark I did some time ago: start the render, go on a holiday trip, wait some more, done: https://groups.google.com/d/msg/de.comp.os.linux.misc/XmdkN1... )


In the 90s I built a FreeBSD firewall using discarded PC parts. It took 10.5 hours to build world and kernel. There were 2 power outages that forced me to start over each time. I bought my first personal UPS to fix that problem. I would learn how to cross compile instead of waiting 12 hours.


At the moment this is the winning result of a competition (first person to run Q3 with the open-source drivers at 1080p(?) at 20 fps), so there are further optimisations to be made.

Once the code has been tweaked there'll probably be an image for it somewhere.


Considering this really isn't a "have to have it right now" type of feature, it's simply easier to kick off the compile and walk away.


That's great.

I haven't looked, but is there documentation that would let 14-year-olds who have never done this before follow along?


So not hardware coding by any means, but I use VirtualBox to build LibreOffice on Ubuntu.


> cross-compiler from scratch (also long)

Building GCC shouldn't take more than a couple of hours on a modern machine, and it's a one-time cost for a drastic speedup of every subsequent RPi build. So the aggregate time should still be considerably less than 12 hours.
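
For the curious, the first stage of that build is roughly the following - a sketch only, with illustrative versions, paths and triplet; a genuinely usable toolchain also needs kernel headers and a C library on top of this, which is where most of the pain lives:

  # Stage-1 cross binutils + gcc only (no libc); versions/paths are illustrative.
  # Assumes GMP/MPFR/MPC are already installed on the build machine.
  export TARGET=arm-linux-gnueabihf
  export PREFIX=$HOME/x-tools

  tar xf binutils-2.24.tar.bz2 && cd binutils-2.24
  ./configure --target=$TARGET --prefix=$PREFIX --disable-nls
  make -j$(nproc) && make install
  cd ..

  tar xf gcc-4.8.2.tar.bz2 && mkdir gcc-build && cd gcc-build
  ../gcc-4.8.2/configure --target=$TARGET --prefix=$PREFIX \
      --enable-languages=c --without-headers --disable-nls
  make -j$(nproc) all-gcc && make install-gcc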


I wonder why it's so hard. I once built a modern gcc + glibc in my home directory on an older Linux (to be able to run modern programs), and it was mostly "relocatable" or "portable" (what I mean is that you could copy it around and run it from other directories).

Couldn't somebody build a full cross-compiler toolchain as "relocatable" binaries, depending on an older kernel, and then just offer that as a binary download to run on most recent distributions? It's not a typical way to distribute a Linux application, but it should work in principle.


I suggest looking at Linux From Scratch[0], especially section II.5. It's hard to do a "real" cross-platform compiler because your target system might not only be a different architecture but also have different libraries that the compiler has to work with. All in all, doing it right and being able to ship binaries is a lot of work and constant maintenance, as your target system - in this case Raspbian - also changes its libraries.

[0] http://www.linuxfromscratch.org/lfs/view/stable/


Some issues:

Your GCC build has to match the userspace C library (uClibc or otherwise). If it doesn't, you'll need to do all the path-passing and usually link manually as well.

It also has to match the kernel somewhat (not so critical for userspace apps, but critical if you want to build a module or something similar).

So basically what you "should" do is build the whole system together (cross-compiler + userspace + kernel). Some tools do that, building first a "raw" cross-compiler, then the C library and kernel, then a full compiler with all the options applied.


Here is a great explanation of why setting up a cross compiler is so difficult.

http://www.airs.com/blog/archives/492


I use crosstool-ng to do the heavy lifting for me. [0] But lately I've been trying out musl-cross to get nice, self-contained static binaries. [1]

[0]: http://crosstool-ng.org/

[1]: https://bitbucket.org/GregorR/musl-cross
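
For anyone who hasn't seen it, the crosstool-ng workflow looks roughly like this (a sketch; the sample name below is illustrative and varies between releases):

  ct-ng list-samples                 # preconfigured target tuples that ship with ct-ng
  ct-ng arm-unknown-linux-gnueabi    # start from the sample closest to your target
  ct-ng menuconfig                   # tweak libc, kernel headers, hard-float, etc.
  ct-ng build                        # long build; result lands in ~/x-tools by default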


Crosstool-ng is nice, but if you have even a mildly out-of-the-ordinary arch/libc/kernel-version combination, it is still a massive pain in the ass. Typically I'd rather build the kernel on the device overnight than spend man-hours getting crosstool-ng to work the way I want just so I can build on beefy hardware.

It tends to only be worth it if you are going to be using it more than a few times.


Wouldn't an emulator actually be faster on high-end hardware? Then you could run the native tools at much better than Pi execution speed, right?


"You probably already know this, but setting up a cross-compiler toolchain is a super annoying task"

Yes, I agree 100% - building a cross-compiler is a very complicated task.

But I think that with the weight of the Raspberry Pi project behind it, this should have been easier. Buildroot (http://buildroot.uclibc.org) is a good project, but unfortunately it's very unstable.
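
In fairness, when Buildroot does work, the flow is pretty short - roughly this, with the defconfig name from memory and possibly different in current releases:

  git clone git://git.buildroot.net/buildroot && cd buildroot
  make raspberrypi_defconfig    # defconfig name may differ in your release
  make menuconfig               # optional: adjust toolchain and package selection
  make                          # builds the cross toolchain, kernel and rootfs
  ls output/host/usr/bin/       # the cross compiler ends up here
  ls output/images/             # kernel + root filesystem images end up here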


Maybe it's because I'm a tub of noob (EE here, I only write code when my hardware demands it), but setting up cross-compiling for the Android kernel was physically painful, and it's one of the best-documented things out there.


Cross compiling the Android kernel is fairly straightforward. Download the whole Android source tree, source build/envsetup.sh, lunch the target you want, export ARCH=arm, SUBARCH=arm, CROSS_COMPILE=arm-eabi- and then build away in the kernel repository. Honestly, the easiest cross-compiling I've had to do this side of Go.
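
Condensed into commands, that's roughly the following (the lunch target and kernel path are placeholders for whichever device you're actually building):

  # Assumes the AOSP tree is already downloaded; names below are placeholders
  source build/envsetup.sh
  lunch aosp_arm-eng                 # pick whatever target matches your device
  export ARCH=arm SUBARCH=arm CROSS_COMPILE=arm-eabi-
  cd kernel/<vendor>/<board>         # wherever your device's kernel sources live
  make <your_device>_defconfig
  make -j8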


It's easy because the cross-compiler is included and everything has been set up to make it easy.


Sure, but most things are easy because someone else has done the heavy lifting. I was just pointing out how easy building the Android kernel is, something he described as "physically painful".


An alternative to a cross-compiler is to use distributed compilation and set the current box's job count to zero, so all jobs get distributed to other, faster x86 boxes. In the past I regularly used this to build ARM on x86 and, more fun, to build for PPC OS X on x86. ./configure and the basic make orchestration happen on the "slow" ARM/PPC machine, but the actual compile jobs are sent out to the farm.
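
Roughly, the setup looks like this with distcc (hostnames and the compiler tuple are made up; the x86 boxes need a cross-compiler installed under that exact name, and the slow box needs a native compiler answering to it too, so preprocessing and local fallback still work):

  # On the slow ARM/PPC box: list only the fast machines, not localhost,
  # so every compile job gets shipped out; configure and linking stay local.
  export DISTCC_HOSTS="fastbox1/8 fastbox2/8"

  # Call the compiler by its full target-prefixed name so the remote distccd
  # daemons resolve their *cross* gcc rather than their native x86 one.
  ./configure CC="distcc armv6j-hardfloat-linux-gnueabi-gcc"
  make -j16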


Just to separate the problems:

Building a cross-compiling toolchain is hard - totally understand that. Android [0] and Linaro [1] both provide pre-made, pre-tested builds you can just pull and use. I recommend using them.

Building the kernel itself: as mdwrigh2 pointed out, this is actually pretty easy. The kernelnewbies community is also there to help!

[0] http://s.android.com/source/building-kernels.html

[1] https://launchpad.net/linaro-toolchain-binaries
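
As a rough example of both halves together - a prebuilt Linaro toolchain plus a Pi kernel cross-build (tarball and directory names are illustrative; bcmrpi_defconfig is the defconfig in the raspberrypi/linux tree, other boards will differ):

  # Prebuilt toolchain: unpack and put on PATH (exact names vary by release, see [1])
  tar xf gcc-linaro-arm-linux-gnueabihf.tar.xz -C $HOME/opt
  export PATH=$HOME/opt/gcc-linaro-arm-linux-gnueabihf/bin:$PATH

  # Cross-build the Pi kernel with the prebuilt toolchain
  git clone --depth 1 https://github.com/raspberrypi/linux.git && cd linux
  make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcmrpi_defconfig
  make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j8 zImage modules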


A cross-compiling toolchain is fairly easy - you should try Plan 9's. What is hard is coping with the layer upon layer of the Linux toolchains, where the solution is considered by many to be worse than the original problem.


You've got twelve hours. Today's 4/6/8-core CPUs with an SSD can build a Linux kernel in what, 5 minutes tops?

I built one on an old Xeon in something like 4 minutes, and it took 3x as long to upload/download the files to the system I wanted to use it on as it did to build it.


Compiling a vanilla kernel on x86 hardware is one thing; building an ARM kernel is quite another in terms of performance, or at least that's been my experience.

To give you an idea of how big the difference is, here's what happens when compiling the Linux kernel defconfig using 2 distcc boxes, each with an AMD FX-8320 @ 3.5GHz (8 cores) + 32GB RAM + gigabit LAN. The x86 machines boot off RAID 1+0, the Pi obviously off its SD card:

  * Compile for armv6 on pi host + distcc compile farm: 19 min 11 sec
  * Compile for armv6 on qemu host + distcc compile farm: 29 min 36 sec
  * Compile for armv6 on x86_64 hardware + cross toolchain + distcc compile farm: 1 min 59 sec
  * Compile for x86_64 defconfig on off-the-shelf x86_64 hardware (also fx8320) + distcc compile farm: 1 min 31 sec

That's compiling "linux-3.6.9999-raspberrypi.tar.bz2" with -j20 across the board. I realise there are a number of problems with this "test" - obviously the last two are unfair, and the Pi's laughable shared bandwidth across network & USB gimps it horribly. I have no idea why qemu is so bad, though; I pointed it at an image taken from a running Pi vOv, and I've probably done a million other things wrong getting the numbers.

But to anyone intending to compile ARMv6 crap on a Pi- or qemu-based compile farm: unless it's a one-off thing or your time is worthless, don't do it. Invest the time in getting crossdev, crosstool-ng, etc. + distcc working properly. I only bothered because I was looking for a better way to build stuff for Android and used this setup as a testbed.


From the article: "If you cross-compile you can get better frame times."


I'm not familiar with the fine print, but given the nature of the challenge it certainly seems a lot cooler to me that it can be done fully by the Raspberry Pi.


They explicitly recommend using a cross-compiler.


Very cool - hope this becomes a template for other companies as to how to properly handle open sourcing the various important parts of the stack in the future. Seems like this is a very smart move by Broadcom - as demonstrated by the community response, which is mostly positive.


Intel / AMD should be the template - actually hiring FOSS hackers to make you a proper driver for whole-stack support.

It is nice that they released an older chip's driver code - it's better than nothing (and better than just programming manuals) - but we have companies like Qualcomm with the Atheros drivers, Intel with their WiFi, GPU and Ethernet drivers, and AMD with their chipsets, GPUs, etc., all contributing engineers to the kernel to make FOSS drivers, and we shouldn't give any company too much credit for doing any less than the same.


It is nice, but a little late for many people. A couple of years back, when I was investigating the RPi for building a video recorder, the lack of open video drivers prevented me (and apparently many in the forums) from integrating camera modules. I ended up using TI boards, which have had open-source stacks for a long time now.


The RPi VideoCore driver might not be fully open, but it has a usable camera API and video compression API.

Which boards are those? I didn't find a single board that had an open video driver - not the Allwinner-based ones, not the BeagleBone Black (no encoding/decoding hardware at all).


I am not familiar with the BeagleBone, but if it doesn't have encoding/decoding hardware, how did you expect a driver for it?

The APIs in the RPi that you mention are high-level APIs. Before Broadcom open-sourced the implementation (which they called IDL, I think), it was not possible for RPi users to interface a custom camera module with the RPi (without signing NDAs with Broadcom).

After some investigation, I went with the LeopardBoard. I haven't progressed far on the project [1], but AFAIK it and other boards had completely open stacks.

[1] I am stuck on a simple issue: not being able to get the right serial cable to connect to the board.


I did not expect drivers for the BeagleBone, but I was preemptively responding to the "But BeagleBone is better" responses, which I've gotten every single time I've discussed RasPi limitations.

> it was not possible for RPI users to interface a custom camera module with RPi

That's correct. But the RPi's camera modules (standard and NoIR) are very well supported (even if the source is not all open), and they work very well - reasonable-quality 5MP @ 15fps, Full HD @ 30fps, 720p @ 60fps, with access to the encoding/decoding pipeline, which allows you to e.g. insert an overlay before H.264-encoding the original stream (in fact, that's the most cost-effective way to add overlays to an already encoded 1080p H.264 video even if you don't connect a camera).

I find that it's pretty hard to beat the RPi on price, support, functionality-per-price, community, openness, etc. in general - to the point that if the RPi is not the right solution for a project, it is unlikely that any of its competitors (low-power, sub-$100 CPU+GPU+network in a small form factor) is.


Quote "Very cool - hope this becomes a template for other companies as to how to properly handle open sourcing the various important parts of the stack in the future."

I DON'T. It was A) not a driver - they should have made the driver and released it themselves - and B) not for the actual chipset; it was for a chipset one step removed, and they had people hack on it for money.


How very cool. I didn't see any mention of the frames per second that can be expected other than the 133 FPS in the screenshot, but I assume that's not from a Raspberry Pi!


The screenshot comes from here:

http://blogs.wefrag.com/cedzekill3r/2010/04/24/fps-engine-id...

Definitely not a Pi.


Haha, is that the day HN meets NoFrag? :)


IIRC it was entirely possible to get 60 FPS even on old hardware ~10 years ago.


I remember playing Quake III at 60 FPS on a 450MHz machine. So yes, it might well be possible.


I remember playing Quake 3 in 1999 on a 233MHz Pentium II with a Voodoo 3D graphics card (30+ fps).

This was the first game that had no software renderer.

All other games (until ~2001), including Unreal Tournament, ran just fine on a 133MHz Pentium with a software renderer (other exceptions were Outcast and Ultima 9).


> "This was the first game that had no software renderer."

Descent 3 came out a few months earlier, and also had no software renderer.


I had a Pentium 233 and couldn't run Half-Life at any playable FPS.

Hell, I had to run Quake 2 at 512x384. SiN was only playable at 384x288, and it looked awful.


I played Half-Life 1 on a Pentium 133 at 640x480 in software mode, and it ran fine (Win95). It also depends on your memory: I had 40MB, but I remember that my initial 8MB of RAM wasn't enough to play even Age of Empires back in 1997 (horrible mouse lag on bigger mid-sized maps).


I think my machine had 32mb, so that was probably it.


Yep, a P2 @ 450MHz + dual Voodoo2 cards gave me a lot better than 60fps at the time (don't remember the exact numbers).

The RPi's GPU is much better than a Voodoo2, so I can believe 133fps.


That 450MHz CPU of yours is very likely much faster than the 700MHz CPU on the Pi.


What interests me is the implications for general computing performance when using a GUI/X server on Raspbian or RaspBMC. I was told the RPi has a ridiculously powerful GPU compared to its processor, but that it's basically unusable due to lack of proper driver support. With the quantity of Raspberry Pis sold, one would expect more support from Broadcom.


Not sure if you saw http://www.raspberrypi.org/archives/6299 but even before that I wouldn't say that Broadcom hasn't been supportive. The thing was designed by someone employed full-time as a Broadcom engineer, after all.


Such quantity? It's something like 2 million units. That's roughly equivalent to a mildly popular Android phone model.


I don't know much about the Q3 infrastructure - does Q3 use DLL mods or some kind of bytecode? I'm curious whether this means that all the Q3-based standalone games - OpenArena, Smokin' Guns, Q3Rally, etc. - can get a RasPi distribution; that would breathe some new life into those old projects.


You've got the option. IIRC, Q3 itself generally runs bytecode blobs rolled by id and compiled with their special tool that does QuakeC -> QVM. Q3-based games generally have compiled DLL or .so files that may or may not make external OS or library calls.

Source: I cloned Orange Smoothie Pro for JK3.


That is so damn cool! I need to get started on my Gameboy Color emulator.


This is really great! I'd be interested to know what framerate the Pi can output at 1080p.

Someone ported Quake 3 to ARMv6 Symbian smartphones in 2008 [0]. I remember running this on a Nokia E90 Communicator and the framerate was over 15fps most of the time.

[0] http://koti.mbnet.fi/hinkka/Download.html


Not sure about the Pi, but the Pandora, which has a similar class of ARM hardware, can run Quake 3 at 25-30+ fps at 800x480 resolution. The Pi's GPU may be more powerful while its CPU is weaker.


The competition rules said at least 20 fps, which seems playable.


I used to run Doom 2 on my 386sx. Doom 2 required a math co-processor, which the sx didn't have, so I had to use a TSR which emulated one, slowing my CPU down even more. It probably ran at 1-2 FPS, but it was "playable" enough for me to spend countless hours playing it. Ahhh, memories...

So yeah, 20 FPS is definitely what I would call playable. ;)


Doom (and Doom 2, which had basically the same engine with just bugfixes and support for Doom 2 entities) didn't actually need an FPU. Even 386DXs and 486SXs had no FPU, so no games from the pre-1995 era really depended on one.

I remember Falcon 3.0 and perhaps some other simulation games having FPU support for the few who had one, however...


On an N95 I remember it running even better than that. That port was actually amazing, and it even supported a full KB & Mouse via Bluetooth.


Shocking that the Pi can produce 133 fps while my 27" quad-core iMac monster gets a variable and feeble 70-90 fps.

Nice work though, Simon.


If you're only getting 70-90 FPS there must be something badly broken with your iMac. The thing should be able to run at 125 FPS (capped) on high res without breaking a sweat.


Yeah, I agree - I did quite a bit of looking into it as well. In Boot Camp I was able to manage a solid 125fps, yet in OS X it varied between 70 and 90.

Although I performed many tests, I was never able to get to the bottom of it.


Apple's OpenGL drivers are known to have sub-par performance compared to those for Linux and Windows.


That image is not from a Pi.


Fair enough.


Check if your iMac defaults to software rendering for some reason.



