I'm really surprised Scribd seems to have had little public trouble with copyright.
I used to find myself there all the time for various hard-to-find, not-for-public-consumption automotive docs. The site was full of obviously infringing content.
I can hardly understand anything in that doc. I'm a decent-enough iOS programmer from a LAMP and design background. It makes me wonder whether, in 20 years, people like me will look back at the iOS programming guide and be equally confused. As programming gets abstracted out, put into frameworks, and simplified at the user level, it's sure to go even more visual.
I like that you said you are an "iOS programmer"; that's the problem with programming today. You are not a programmer. If you were a programmer, you would be able to make sense of that document. So to answer your question: 20 years from now, programmers of that era will be able to look at the iOS programming docs and understand them, while the "20-years-in-the-future language programmers" won't. I say this because I learned Game Boy programming in my first year of college (1997) by learning Z80 and reading Game Boy programming text files that were probably around 5 pages. That low-level programming serves me well to this day.
Truly spoken like someone who has never written iOS programs. Some iOS programs are actually truly inspiring, and I know of many people who haven't gotten into that market because it's too competitive (programming-wise).
FWIW, I know my x86 assembly and am not an iOS programmer. But I have huge respect for good iOS programmers. Some of those apps are phenomenal.
He's not claiming that someone who programs for iOS is a bad programmer. At all.
He's claiming that people who identify themselves as an "iOS programmer", rather than just a "programmer", aren't really programmers. A "programmer" might be able to write inspiring stuff for iOS, but they haven't pigeon-holed themselves into being an "iOS programmer".
> I know of many people who haven't gotten into that market because it's too competitive (programming wise).
Or because it's a market where really awesome programmers can't distinguish themselves from good-enough average ones, since few customers can tell the difference.
The framework is glorious; the underlying user-base code can still be utter horse shit. Any programmer who doesn't know assembly is worth his weight in BASIC.
Hmmm. A bit harsh and inaccurate to call him "not a programmer." We are all really just digital stage managers.
Here's the secret to understanding the difference between APIists and low-level programmers: time-to-market.
iOS programming is deeply mainstream in the market right now, so elaborate frameworks are available for it. The Game Boy was waaaay ahead of its time and had no APIs for development.
The same is true today: if you want to get ahead of the market, you can take the hit in complexity and develop for a lower level of hw/sw, like the Parallella, in exchange for a short-term advantage, but there's no mainstream market yet. Or you can write a great iOS app today, but you won't be the first; that initiative was lost a long time ago.
It's all about withstanding the headaches of bleeding-edge HW and SW, versus gaining the edge of being earlier to market.
If you're assembling a team, you want a range of abilities, from lowest-level people who can tell you what's really going on down deep inside up to the APIists who can leverage the work of many others to get access to widely-used features.
What makes you think that a "low-level programmer" can't do time-to-market stuff? That's like saying that a professional writer and proofreader can't fire off a quick SMS to his friends (sometimes with a typo, but nobody cares).
Ah, I can see how my comment could create confusion.
I was actually intending to agree with you: I meant that the low-level programmers are precisely the ones who achieve the time-to-market. They can wade through the specs and configure devices before the polished API becomes available.
I don't see any reason why "future" programmers couldn't just learn the low-level stuff. Your reasoning is entirely subjective. Programming is something more than just specific knowledge.
Programmer, from the dictionary:
>>a person who writes computer programs; a person who programs a device, especially a computer.<<
OP is referring to programmer in the same vein as someone refers to an 'artisan.' You wouldn't call someone an artisan if all they do is manage the inventory order from Baked-Cakes-For-Fakes. There are far too many 'programmers' who can't code their way out of a paper bag. If you don't know assembly you probably blow at debugging, and that's amateur.
Great attitude. That's like saying people that program in ASM on graphical displays aren't real programmers due to not using the old punchcard system. Get over yourself.
This, 100%. It pains me to see people call themselves .NET programmers or iOS programmers. You're either a general programmer or you're a class act. I know about 20 languages at this point and they all serve me well. You can bet you're going to need to know the hardware, as well as the assembly translation, when you're not designing a stupid social app. Or, well, don't learn assembly and continue to be mystified by the lldb assembly dump in Xcode: "that's just low-level tundra tuft, lemme just close that and add some print statements..." Sigh.
I'm sure there are many talented .NET programmers who could quickly pick up assembly and become much more productive than you in that language... if they had a good reason to learn it.
That's not how it works. There's a reason unintelligent folk stick to one thing: their aptitude does not stretch well across multiple venues. Programming is like chess; most good chess players play bughouse, loser's chess, etc. .NET is like sticking to 20-minute chess your whole life. You might get good at it, but you have no dynamic range...
What has changed over time are the constraints. Nowadays if you want to store a number you can just do so. NSDictionary deals with mappings. You can just serialise data out and back again. You don't need to worry about memory until a crash says you are using too much, at which point Instruments points the finger. You are unlikely to encounter most boundary conditions in your code (I don't remember the last time malloc-equivalents returned no memory).
Back in the "olden" days, storing a number required thought. 8-bit processors could often deal with 16-bit values, but beyond that you were on your own. You had to be very careful designing your data structures so they used the least memory possible. Saving data was unreliable. There were often no debugging tools at all. Some lucky folks had an in-circuit emulator (ICE)[1], but most of us had to guess why the system hung. All this meant you had to get low level because of the constraints.
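To make that concrete, here's a minimal C sketch (purely illustrative; the struct and names are mine, not from any real codebase) of the kind of packing and hand-carried 16-bit arithmetic that was routine on 8-bit targets:

    #include <stdint.h>

    /* Squeeze an entity's state into a handful of bytes instead of
       letting the compiler pick comfortable-but-wasteful widths. */
    typedef struct {
        uint8_t x, y;     /* 0-255 covers the whole screen          */
        uint8_t flags;    /* bit 0: alive, bits 1-2: facing, etc.   */
        uint8_t hp;       /* hit points                             */
        uint8_t frame;    /* animation frame                        */
    } Entity;

    /* A 16-bit score on an 8-bit CPU is really two bytes the
       programmer carries around, adding the carry by hand. */
    static uint8_t score_lo, score_hi;

    static void add_score(uint8_t points)
    {
        uint8_t old = score_lo;
        score_lo += points;
        if (score_lo < old)   /* wrapped: propagate the carry */
            score_hi++;
    }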
But you know back in those olden days we didn't have to have a comprehensive help system. We could assume users were proficient. We didn't have to deal with security and attacks. No networking. Undo was rare, and if it existed was a single step. Your code only ran in one human language. So while we did have to care about the low level implementation details of the code, we didn't have to worry too much about the rest.
The low level constraints are pretty much gone now. The tools are great. And you do have to worry about a lot beyond the actual implementation like usability, smooth animations (games had to worry about that in the olden days but in a different way), multiple languages, coordinating with other systems over a hostile network, attacks and a whole bunch more. I'm glad for all this, because worrying about fitting data structures and a framebuffer in 16kb of memory didn't make for a better app.
Being familiar with the low level is still sometimes helpful, but not that often. It is also a heck of a lot more complicated: compiler-produced code is not straightforward and logical, and multiple levels of cache, cache policies, speculative execution, etc. make reasoning about data more difficult. The majority of the time it doesn't matter.
Right now the majority of mobile apps behave rather poorly if you have more than one device. In the future that will be a given. UI is more likely to be goal driven, with the system figuring out best practices for meeting the goals, versus nudging things by pixels as is done today. (Heck, a user will probably do some combination of pointing, speaking, thinking, and just plain old looking.) I'd expect a more functional approach where immutable states are the unit of an app, and new states are produced via interaction and external updates, with states syncing everywhere relevant instantly.
I think this document is nowadays more suited to embedded or microcontroller programmers. When it was released it was common to get close to the hardware, and most people probably had some experience or exposure; nowadays, not so much.
Unless you know low-level programming, you're not really a programmer -- at least not one who won't commit a whole host of errors and mistakes. Know your hardware, know your assembly, know your C, and know your higher-level languages and frameworks. Being ignorant of the base layers is sloppy, mate.
From what I've gathered (8-bit era CPU and console hardware videos), the very concept of programming has evolved. Nowadays you use general languages to interact with a system or a set of virtual devices (iOS, its subsystems, etc.), whereas in the GB era the system was a set of concrete hardware chips connected by buses and you used mnemonics to orchestrate them (via low-level byte messages). My conclusion is that you don't understand it because it's a very alien-looking framework. I'd also bet $5 that in 20 years the changes won't be as dramatic as the 80s-00s shift.
> I'd also bet $5 that in 20 years the changes won't be as dramatic as the 80s-00s shift.
There's been a "dramatic change" between the 80s and the 00s? If so, I must have missed it. I look at, say, the IBM 1620 versus the machines designed in the 1980s, and that is change to me. By the 1980s, essentially everything we use now in personal computing architectures had already been invented and put to use. Essentially, by the time you get to the 1980s, everything had been so homogenized and regularized that I really don't see anything new that has happened since then.
Perhaps some change is going on now: AMD's hUMA seems to me to be the first major departure from the 1980s machine model in those twenty-five years or so, but even then, I'm not sure how "new" that is. IBM has had heterogeneous CPUs on their mainframes for quite some time, so conceptually it's just another idea getting to the PC world from the world of mainframes.
Back when I was teaching English in Japan, I stayed up late hacking on Game Boy code to keep that part of my brain active. It was during that time that I really realized that I wanted to work in software, or at least something with some creative output.
As I understand CS, it deals a lot with data structure construction and management.
In my school, we began with high-level functional languages, and it was only after 3 years of programming that we discovered low-level programming and could write a kernel.
Can you explain how a RISC assembly language can help you learn CS?
I'm not sure if I define CS that narrowly, but anyway I was working with the C compiler, not the asm code. It was a good learning tool for me because of the simple output and the slow processor, which meant that I didn't need to worry much about the graphics and that poorly performing code was immediately, painfully noticeable. There were plenty of data structures to work with sprites, characters, screens, etc.
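For a sense of how simple those data structures are, here's a rough sketch assuming a GBDK-style C environment (the struct and names are mine, not from any official header). OAM at 0xFE00 holds 40 hardware sprite entries of 4 bytes each:

    #include <stdint.h>

    typedef struct {
        uint8_t y;     /* vertical position + 16        */
        uint8_t x;     /* horizontal position + 8       */
        uint8_t tile;  /* index into the 8x8 tile table */
        uint8_t flags; /* palette, flip, priority bits  */
    } OamEntry;

    #define OAM ((volatile OamEntry *)0xFE00)
    #define OAM_COUNT 40

    /* Park every sprite off-screen, a typical init step.  (Real code
       keeps a shadow copy and DMAs it to OAM during VBlank.) */
    static void hide_all_sprites(void)
    {
        for (uint8_t i = 0; i < OAM_COUNT; i++)
            OAM[i].y = 0;  /* y == 0 keeps the sprite above the screen */
    }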
My CS course (in the late 90s) kind of went the other way. One of the earliest courses was one that taught how a CPU operates -- what a register is, what the accumulator is, etc. There was no actual programming, just working through the steps of what the computer does to perform a given operation. After that there were a number of directions to go in.
I've really been wanting to get into GBA programming. I would like to buy a cart to be able to develop directly on my DS. (I thought I would be able to find one in Akiba, but they seem nearly impossible to find in person, and all the online stores I've seen look mega shady.)
Practically speaking, most of your iteration will be on emulators anyway, so don't let not having a cart stop you from getting started. GBA programming is a ton of fun!
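If it helps anyone get started, this is roughly what a first GBA program looks like in C, using tonc-style register names (the usual homebrew conventions, not an official SDK); build it with devkitARM or similar and run it in an emulator:

    #include <stdint.h>

    /* Mode 3 is a 240x160, 15-bit-colour linear framebuffer, so drawing
       is just writing halfwords into VRAM. */
    #define REG_DISPCNT (*(volatile uint16_t *)0x04000000)
    #define MODE3       0x0003
    #define BG2_ENABLE  0x0400

    #define VRAM        ((volatile uint16_t *)0x06000000)
    #define SCREEN_W    240

    static inline uint16_t rgb15(int r, int g, int b)
    {
        return (uint16_t)(r | (g << 5) | (b << 10));
    }

    int main(void)
    {
        REG_DISPCNT = MODE3 | BG2_ENABLE;

        /* Plot three pixels, then spin forever. */
        VRAM[80 * SCREEN_W + 120] = rgb15(31, 0, 0); /* red   */
        VRAM[80 * SCREEN_W + 136] = rgb15(0, 31, 0); /* green */
        VRAM[96 * SCREEN_W + 120] = rgb15(0, 0, 31); /* blue  */

        for (;;) { }
        return 0;
    }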
I didn't know that GB consoles didn't have a flat memory model. That was horrible to program with x86/DOS and must have sucked even more on a tiny hand-held.
x86/DOS actually was much nicer than most consoles, as it had a 20-bit address space. All memory in the machine could be addressed, even if you had to juggle NEAR and FAR pointers.
The 8/16-bit consoles could only address 64k, which had to cover program, data, VRAM and memory-mapped I/O. So you were forced to partition your code and data into 8k or 16k banks and manually swap them in and out. If bank 1 was swapped in, there was no way to access bank 2...
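For the curious, a rough sketch of what that bank juggling looks like on an MBC1-style Game Boy cart, assuming a GBDK-like C environment (function names are mine): writing a bank number anywhere in 0x2000-0x3FFF selects which 16 KB ROM bank appears at 0x4000-0x7FFF, and you can never see two switchable banks at once.

    #include <stdint.h>

    #define ROM_BANK_SELECT (*(volatile uint8_t *)0x2000)

    /* The MBC register is write-only, so software has to remember the
       current bank itself. */
    static uint8_t current_bank = 1;

    /* Call a routine living in another bank, then restore the caller's
       bank.  This trampoline must itself live in the fixed bank at
       0x0000-0x3FFF, or it vanishes out from under its own feet. */
    static void call_in_bank(uint8_t bank, void (*fn)(void))
    {
        uint8_t saved = current_bank;

        ROM_BANK_SELECT = bank;   /* map the target bank in     */
        current_bank = bank;
        fn();                     /* fn must live in that bank  */
        ROM_BANK_SELECT = saved;  /* put the old bank back      */
        current_bank = saved;
    }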
Right - everything has to be done with tiles and sprites. That way you can represent the screen with something like 360 (20x18) tile entries instead of 23,040 (160x144) pixels. The tiles are also drawn by dedicated display hardware; the Z80 CPU presumably couldn't update a 160x144 screen at a playable framerate if it had to go pixel by pixel.
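A sketch of why that's cheap for the CPU (GBDK-style C, names assumed): the visible background is a 20x18 window into a 32x32 tile map at 0x9800, one byte per entry selecting an 8x8 pixel pattern, so changing a single byte redraws 64 pixels' worth of screen.

    #include <stdint.h>

    #define BG_MAP ((volatile uint8_t *)0x9800)
    #define MAP_W  32  /* the map in VRAM is 32 tiles wide */

    static void put_tile(uint8_t x, uint8_t y, uint8_t tile_index)
    {
        /* Real code waits for VBlank before touching VRAM. */
        BG_MAP[y * MAP_W + x] = tile_index;
    }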
It's always interesting to see game console programming. On a similar note tonc http://www.coranac.com/tonc/text/ is a great intro to homebrew GBA programming.
"2. CPU
2.1 OVERVIEW OF CPU FEATURES
The CPUs of DMG and CGB are ICs customized for DMG/CGB use, and have the following features.
CPU Features
Central to the 8-bit CPU are the following features, including an I/O port and timer.
127 x 8 bits of built-in RAM (working and stack)"
How much memory is needed today to make games that are only half entertaining?
Of course modern games use more memory, as they are running on completely different platforms. Memory usage has nothing to do with entertainment level. An 8-bit chess game might be as entertaining to someone as Call of Duty 26 is to someone else.
(The Game Boy has 8192 bytes of working RAM; the 127 x 8 bits is only for the stack.)
Wow. My childhood dissected into something I can (to a marginal extent) comprehend. It's quite the fascinating reality check. It isn't really magic after all.
Not particularly... The Game Boy's processor is more like an i8080 than a Z80. It lacks the second register set used for interrupt handling, and the IX and IY registers and the super-sweet addressing mode they enable.
Game Boy assemblers do use the Z80 syntax rather than the crazy i8080 syntax, thankfully.
>>To download this document you must become a Premium Reader<<
EDIT:
http://www.romhacking.net/documents/544/
https://www.google.com/search?hl=en&safe=off&q=GameBoy+Progr...