Objects vs closures (csail.mit.edu)
74 points by prog on Dec 8, 2010 | 17 comments



I invoke the ko rule...

Situational Cycle: a move sequence that starts and ends at the same position, such that at the end of the cycle it is the same player's turn as at the start. If cycles are allowed without restriction, it is possible for a game to go on indefinitely. [1]

The 'ko' situation is the most common cycle that occurs in the game of Go.

Ko: a situation where two alternating single stone captures would repeat the original board position. The alternating captures could repeat indefinitely, preventing the game from ending. The ko rule resolves the situation. [2]

----

[1]: http://senseis.xmp.net/?Cycle

[2]: http://senseis.xmp.net/?Ko


I like the approach in Scala: every function is an object, and every object that provides an apply() method can be used as a function.


Same as in Python, except apply() is called __call__().
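
For instance, a minimal Python sketch of the shared pattern (class and names made up for illustration):

    class Greeter:
        # An instance becomes callable by defining __call__,
        # the same role apply() plays in Scala.
        def __init__(self, greeting):
            self.greeting = greeting

        def __call__(self, name):
            # greet("world") dispatches here
            return "%s, %s!" % (self.greeting, name)

    greet = Greeter("Hello")
    print(greet("world"))   # Hello, world!
    print(callable(greet))  # True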


I like the koan at the end. Now if only someone could hit me on the head so that I understand the duality of electricity and magnetism...


Practically, electromagnetism is straightforward. Imagine a straight wire with electrons flowing through it at a constant rate. The moving electrons produce a magnetic field that is always perpendicular to the electric current (which runs along the wire). What is perpendicular to a line in three dimensions, though? It looks something like this: http://en.wikipedia.org/wiki/File:Electromagnetism.svg. This phenomenon is governed by Ampère's law (http://en.wikipedia.org/wiki/Amp%C3%A8re%27s_circuital_law). A handy trick for figuring out the direction is the right-hand rule: http://en.wikipedia.org/wiki/Right-hand_rule.
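
For reference, the textbook statement: Ampère's law in integral form, and the field magnitude it gives at distance r from a long straight wire carrying current I:

    \oint \mathbf{B} \cdot d\boldsymbol{\ell} = \mu_0 I_{enc}
    \qquad\Longrightarrow\qquad
    B = \frac{\mu_0 I}{2 \pi r}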

Now, you can arrange your wire in any number of ways to produce a more interesting field. For example, you can arrange it in a loop (or several loops). Since the magnetic field is, once again, perpendicular to the wire, at the center of the loop it will be perpendicular to the plane of the loop. This way you can approximate a constant field along an axis (just in that small space at the center of the loop/coil); the formula is below.
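
At the center of a single circular loop of radius R this works out to the standard result (a flat coil of N turns just multiplies it):

    B = \frac{\mu_0 I}{2R}, \qquad B_{coil} = \frac{\mu_0 N I}{2R}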

The fun part is that the relationship also works in reverse: a changing magnetic field produces an electric field. Thus, moving a magnet inside a coil sufficiently fast produces an electric field, which compels the charges in the coil to move, producing an alternating current. The magnetic field integrated over the area enclosed by the coil is called the magnetic flux; it is the change in this flux that drives the current. This process is described by Faraday's law: http://en.wikipedia.org/wiki/Faraday%27s_law_of_induction#Th...
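
Quantitatively: the flux is the field integrated over the surface bounded by the coil, and the induced electromotive force is the (negated, per Lenz's law) rate of change of that flux:

    \Phi_B = \int_S \mathbf{B} \cdot d\mathbf{A}, \qquad
    \mathcal{E} = -\frac{d\Phi_B}{dt}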

Lastly, Maxwell combined these two equations with the two other fundamental equations of electrodynamics (namely, Gauss's law and Gauss's law for magnetism), adding a correction to Ampère's law (the displacement-current term): http://en.wikipedia.org/wiki/Maxwell%27s_equations. In physics, laws are very often named not after the first person to discover something but after the last. From these equations you can derive that light is nothing more than an electromagnetic wave (which of course leads into the whole duality of light).
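
In differential form, the four equations read:

    \nabla \cdot  \mathbf{E} = \rho / \varepsilon_0                  % Gauss's law
    \nabla \cdot  \mathbf{B} = 0                                     % Gauss's law for magnetism
    \nabla \times \mathbf{E} = -\,\partial \mathbf{B} / \partial t   % Faraday's law
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t   % Ampere--Maxwell

In vacuum (no charges, no currents) they combine into a wave equation whose propagation speed is c = 1/\sqrt{\mu_0 \varepsilon_0}, which is how the identification with light was made.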


Feynman explains it nicely in his Lectures on Physics. The key is relativity. (Magnetism is a relativistic effect of electricity.)

http://en.wikipedia.org/wiki/Classical_electromagnetism_and_... may be interesting. Especially the section "Relationship between electricity and magnetism".


Can someone summarize how magnetism is a relativistic effect?


The explanation given here is pretty decent:

http://en.wikipedia.org/wiki/Classical_electromagnetism_and_...

The main thing is to consider how charge densities change as we switch reference frames. Because lengths contract when we switch to a reference frame in which things are in motion, we see higher charge densities (the same charge in less length).

A simple demonstration: say we have a charge-neutral, current-carrying wire. We model it as a bunch of positive charges staying still and some negative ones moving. The positive and negative charges balance, but there is still a current because only the negatives are moving. Now imagine we switch to a different reference frame, one where the negative charges aren't moving but the positive ones are: we are flying along parallel to the current. Now the positive charges contract relativistically, so the density of positive charges is greater than that of the negatives, and hence the wire appears to carry a net charge.
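
A back-of-the-envelope version of that argument: let the lab-frame line densities be +λ (ions, at rest) and −λ (electrons, drifting at speed v), and boost to the electrons' rest frame, with γ = 1/√(1 − v²/c²):

    \lambda'_{+}   = \gamma \lambda       % ion spacing length-contracts
    \lambda'_{-}   = -\lambda / \gamma    % electron spacing relaxes to its rest length
    \lambda'_{net} = \gamma\lambda - \lambda/\gamma = \gamma \lambda \, v^2 / c^2 \;>\; 0

So in that frame the wire carries a net positive charge, and the plain electrostatic force on a charge at rest in that frame is exactly what the lab frame describes as a magnetic force.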


A closure is also a poor man's class (sort of).

I've often wished that C# let me implement interfaces with anonymous classes. Since it doesn't, I get around it by creating AdHoc classes for interfaces that I use a lot.

For instance, if I have an interface Foo defined as:

    interface Foo
    {
        string Bar(int x);
    }
then I also create a corresponding class called AdHocFoo defined like so:

    class AdHocFoo : Foo
    {
        Func<int, string> _bar;

        public AdHocFoo(Func<int, string> bar)
        {
            _bar = bar;
        }

        public string Bar(int x)
        {
            return _bar(x);
        }
    }
I've written a little program that creates these AdHoc class definitions from interfaces.


"so closures can be, and are, used to implement very effective objects with multiple methods"

I don't really believe this to be efficient. The linked text by Kiselyov implements the dispatch via Scheme's (case ...) expression. Efficient dynamic dispatch means one indirection through a vtable, i.e. one more load instruction than a normal function call. Which compilers for functional languages can perform this optimization?

Since we are talking about Scheme here, we could compare the dispatch to dynamic languages like Python or Ruby, where the dispatch means looking up a string in a hashmap. I'm willing to believe that Scheme's case can keep up with that.


    (define (error . args) #f)
    (define (class x y z)
     (lambda (d)
      (case d ((x) x) ((y) y) ((z) z) (else (error "Invalid slot ~a" d)))))

    (define obj (class 'this 'is-a 'test))
    (display (obj (read)))
    (newline)
The Stalin Scheme compiler, if I read the output correctly, makes a separate type for 'x, 'y, and 'z. Then it does the dispatch using a switch statement (in the C code) over the three symbol types. This seems quite reasonable to me; the C compiler will probably produce a jump table.

Note that I was unable to get the compiler to produce dispatch code at all without dispatching on the result of READ.


Clojure. It has a fairly interesting feature called reify:

  (defprotocol Foo
    (bar [this])
    (baz [this]))

  (defn make-foo [a]
    (reify Foo
      (bar [this] a)
      (baz [this] a)))

  (dotimes [_ 10]
    (let [x (make-foo 'x)]
      (time
        (dotimes [_ 1e8]
          (bar x)
          (baz x))))
So 2 methods called 1 billion times only takes ~500ms on my 2.66GHz i7 MacBook Pro.

(of course, Clojure prefers values+behaviors, aka objects, without mucking it up with state)


It seems very likely to me that the JVM is recognizing that "bar" and "baz" don't do anything and (after some warm-up) optimizing them away. Microbenchmarking the JVM is hard.


Not true. Timings are completely different if you take a method out. I'm not saying that the JVM is not doing any magic here - but here is a closure that gets method dispatch as fast as the host can provide.


You're probably right. I took 500ms for 1B iterations and saw that you're looking at ~0.25ns a call, and that seemed a bit low. However, based on your code, you ran it 100M times, not 1B (1e8 vs 1e9). That changes it to 2.5ns per call. I ran the code on my machine (similar to yours, a 2.66GHz MacBook Pro) with Clojure 1.2.0, and I got just over 2500ms for 1e8 iterations, which is about 12.5ns per call.

For comparison, looping 1e8 times with two calls to empty functions in a static language takes ~639ms, which gives me ~3ns per call. So you can see why my first suspicion was that the JVM was doing something like inlining the methods and avoiding the call altogether. Considering the differences in our reported numbers, you may have a newer JVM than mine, and if it is beating simple CALL instructions, it must be inlining them or avoiding some of the looping.


A Sufficiently Smart Compiler(tm) should be able to optimize a `case` that switches over a known set of symbols quite well. A hash lookup is certainly not necessary.


I wish _why hadn't left potion behind. One of the coolest things about potion is that everything is a function. For example, if you have an integer named a with a value of one, then a() will return 1. I haven't seen that in any other language. Are there other languages with this feature?



