The original link seems to be a search result, which makes me wonder whether this could be used as some sort of SEO technique.
I'm not implying that this was OP's intention, just that, from the basic understanding of SEO I have, Google takes click-through rates into consideration, with higher click-throughs resulting in more favorable rankings.
So, theoretically, if you could get a lot of people to visit an article using Google's search link, instead of a direct link, would that somehow convince Google to rank it more favorably?
Again, not something I'm accusing the OP of, just generally wondering.
You're right, looks like it's been at least since 2009. I recall they used to not have any redirects, and then they started inserting them randomly, for only maybe 5% of search results. Then at some point they got phased in for all results. Thought that phase-in was more recent, but seems not.
Simple and valuable to Google, perhaps, but a major sign that they're no longer doing what's best for users. It breaks the web in at least two ways: makes it slower and hides the true links. I hate that they do this. Nothing has been more destructive of the warm feeling I used to have toward Google. I still use them out of habit but eventually irritation and the rise of some credible competitor will coincide and cause me to switch.
> The proper URL is the green line under the title.
If the URL is short enough that works fine, but if it's longer you can't copy the URL from there, because Google will insert an ellipsis in the middle of the long URL. For example, an article I was searching for just now shows up on the green line as ssdi.di.fct.unl.pt/masters/mcl/content/.../Robinson-CL2000.pdf. Can't copy that, and can't get the original URL from the right-click either!
Isn't this occasional problem worth having in return for better search results? How often do you copy the link URL instead of checking what's on the other side?
Having grown up on Borland Pascal, I have a soft spot for Pascal and its derivatives. I enjoyed programming in BP, and I'm not too happy that Pascal derivatives lost their popularity to C derivatives. (I was reading a bit about Modula-3 and got quite intrigued by its module system.)
But users are few and fragmentation is high (Borland Pascal/Delphi, Object Pascal, Component Pascal; Modula-2 and Modula-3; Oberon, Oberon-2, Oberon-07, Active Oberon, Zonnon). How the heck are you supposed to choose when all those languages taken together probably have a user base about as "large" as that of Common Lisp? Plus, the Oberon webpages at ETH look abandoned.
(I'll leave it to others to correct my Google Trends query to the proper search terms for Lisp variants -- or it could be that Lisp programmers tend not to Google their own language much :) )
In two decades as a professional programmer, I have only ever heard glowing remarks from Delphi developers about their environment. Certainly Eclipse and even Smalltalk can't match that.
Delphi 6 and 7 were the best IDEs of their time, way better than Visual Studio 6. The language was great for the time, there was a vibrant community around it, and you could find a component for everything. Borland also shipped the sources for their main library, the VCL, which helped a lot when debugging. It was the best way to do Rapid Application Development.
Unfortunately, the company lost its focus; they tried to move to .NET, and the next few versions of the IDE were really buggy. The language also lagged behind C#/Java; they didn't add full Unicode support until 2009, for example.
They're no longer lagging. The most recent version has a new UI framework that is hardware accelerated and cross-platform (a pretty unique combination). You can even make iOS apps with it.
Since Pascal lost, we gained buffer overflow exploits everywhere, thanks to a language that does not provide proper safe constructs, coupled with bad developers everywhere.
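To make that concrete, here's a contrived sketch of my own (not from any real codebase): this compiles without complaint and silently overruns the buffer, where a range-checked Pascal array or counted string would have trapped at run time.

    #include <string.h>

    void greet(const char *name) {
        char buf[8];
        strcpy(buf, name);   /* no bounds check: a longer name silently overruns buf */
    }

    int main(void) {
        greet("definitely more than eight bytes");   /* compiles cleanly, undefined behavior at run time */
        return 0;
    }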
The problem is, you can't do any real work in Pascal without resorting to the same unsafe constructs.
Delphi is a kind of useful Pascal and it totally does have those.
I said created, not maintained; if you read the book you will see that later generations were rewritten in C.
Of course you have to write on top of C, because that is what, sadly, most operating systems are still written in today.
But even that is slowly changing when we look at the way operating system development is evolving, with safe layers on top of minimal kernels, as Google is doing with Android or Microsoft with WP/WindowsRT/Singularity.
Anyway, I am stopping here, as I won't convince you of anything and you won't convince me.
> Macro is okay. Procedure is not okay. Break is a procedure.
Why do you believe this to matter? Smalltalk's signaling and exception handling (both the `catch` and `finally` equivalents) use regular messages and blocks (with hooks into the interpreter, obviously). What value do you see in giving special syntax to things which need not have it? Ease of "optimizations" (such as compiling a block to a jump)?
Don't you understand the sheer beauty of the undefined behavior? It is not the beauty of a great painting, it's the beauty of the winter mountain: shining, tantalizing, with the stability of a marble on a needle and ready to take your life at the lightest sneeze.
Pascal has had labeled goto from its conception. Break is just an extension / syntactic convenience. Even in C and C++ I still use goto when I have to break out of multilevel loops.
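Something like this, say (a contrived example, but it's the shape of what I mean):

    /* Search a 2-D grid; goto is the cleanest way out of both loops at once. */
    #include <stdio.h>

    int main(void) {
        int grid[3][4] = { {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12} };
        int target = 7;

        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 4; col++) {
                if (grid[row][col] == target)
                    goto found;          /* break would only exit the inner loop */
            }
        }
        printf("not found\n");
        return 1;

    found:
        printf("found %d\n", target);
        return 0;
    }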
I remember playing around with several different versions of Oberon on my Mac back in the day. It was intriguing but not really useful except for working on Oberon (fine for a comp sci course). The one lasting impact it had on me was that its system font was a really nice sans serif face called Syntax, which I continue to use for many purposes today.
I'm very thankful for that contribution! Yeah it is overlooked, that system was definitely superior to what we have today. I hope one day new machinery can be reverse-engineered by the computer itself (evolutionary algorithms), instead of reverse-engineering closed-source drivers by hand.
Oberon was the language used in my high-school courses (this was only 7 years ago). I had a hard time understanding the choice at the time and no one was able to explain it to me, but now it seems it was the logical decision if you're trying to move away from Pascal as a learning language.
The resources available on the web at the time certainly didn't reinforce my confidence in the choice, so I went to the C derivatives and never looked back. If there had been better resources and better 'publicity', my choice might have been different.
Did anyone else find the story about switching from a tree structure to a list kind of weird? On the one hand, that's a pretty trivial sort of optimization, and hardly seems worth making a fuss about.
But on the other hand, isn't he assuming that any future users of his compiler will have source code that looks like his? Sounds like all it would take is one person using an automatic code generator to generate Oberon code that's not smart about reusing variable names to bring the compiler to its knees...
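To make the worry concrete, here's my own rough sketch of a linear-scan symbol table (not Wirth's actual code): lookup walks the list front to back, so hand-written code with a handful of names per scope is fine, but machine-generated code with thousands of distinct identifiers turns every lookup into a long walk.

    #include <stdlib.h>
    #include <string.h>

    /* Toy symbol table as a singly linked list, newest entry at the head. */
    struct sym {
        const char *name;
        struct sym *next;
    };

    struct sym *define(struct sym *table, const char *name) {
        struct sym *s = malloc(sizeof *s);
        s->name = name;
        s->next = table;
        return s;                       /* new head of the list */
    }

    struct sym *lookup(struct sym *table, const char *name) {
        for (struct sym *s = table; s != NULL; s = s->next)
            if (strcmp(s->name, name) == 0)
                return s;               /* O(n) scan: fine for n ~ 50, painful for n ~ 50,000 */
        return NULL;
    }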
http://www.ics.uci.edu/~franz/ - "Franz received a Dr. sc. techn. degree in Computer Science (advisor: Niklaus Wirth) and a Dipl. Informatik-Ing. ETH degree, both from the Swiss Federal Institute of Technology, ETH Zurich."
So Michael Franz can hardly be considered impartial to this matter.