Defining instance-specific behavior of any kind is catastrophic to method caching. JRuby has a hierarchical method cache so it can clear only what's needed, but MRI does not:
The late, great James Golick had a patch to add one once, but it never got merged upstream.
If you care about performance even the tiniest bit at all whatsoever, please don't use the techniques discussed in the OP in production code or in your gems. It may make your memos 30% faster on a microbenchmark... while causing the rest of your program to run considerably slower.
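To make the warning concrete, here is a sketch of the kind of instance-specific behavior being discussed: memoization by redefining a method on one object's singleton class. The `Config` class and its methods are invented for illustration; the performance cost refers to MRI's global method cache, which (especially before Ruby 2.1) was invalidated by any singleton-method definition like this.

```ruby
# Hypothetical instance-specific memoization: the first call does the
# work, then shadows #settings on just this one object. In MRI this kind
# of `def` historically blew away method caches globally, not just here.
class Config
  def settings
    loaded = expensive_load
    # Replace #settings on this instance with a constant-time version.
    define_singleton_method(:settings) { loaded }
    loaded
  end

  private

  def expensive_load
    { verbose: true } # stand-in for real work (file read, network call...)
  end
end

c = Config.new
c.settings # first call computes and redefines the method
c.settings # later calls hit the singleton method
```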
Cool article about the craziness of Ruby. Ruby is a frustrating language. The oauth gem, for instance, redefines `==(val)` on AccessToken as `Base64.encode(self.signature) == Base64.encode(val)`.
This stuff feels really dangerous and unnecessary. I spend a lot of time on code reviews pointing out bad features of Ruby (and Rails) that we shouldn't be using because they break application flow and make it significantly harder to reason about the code for the small benefit of decreasing a few lines. But it's certainly fun to talk about :)
Ruby is a wonderful (and wonderfully powerful) language. Unfortunately, some of the popular Ruby libraries (gems) are... problematic. To put it nicely. The web/Rails gems in particular can be nasty minefields.
However, the language itself is great. The trick is to remember the usual advice that just because you can doesn't mean you should. Too many gems add "clever" metaprogramming (such as the def tricks in this article), when it isn't actually making the program simpler.
The beauty of Ruby is that it can act almost like a Lisp when you want powerful metaprogramming features, while also allowing simple shell- or C-style imperative code when that is more appropriate.
In its place, being able to redefine equality on value types really helps clarify code. Misused, it creates confusion. The problem is that it's easy to think you've got a case where it helps, when actually it's not well-defined. Usually that revolves around there being state you care about which is missed from the equality comparison.
Another trap is redefining #== without also looking at #eql?, which means Hash doesn't behave like you expect. It's just another bit of mental trivia you've got to Just Know...
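A hypothetical Point class makes the trap concrete: Hash looks up keys via #hash and #eql?, never #==, so overriding only #== leaves Hash lookups silently broken.

```ruby
# Overriding only #== means two "equal" objects are still different Hash keys.
class BrokenPoint
  attr_reader :x, :y

  def initialize(x, y)
    @x, @y = x, y
  end

  def ==(other)
    other.is_a?(self.class) && x == other.x && y == other.y
  end
end

# The fix: keep #eql? and #hash consistent with #==.
class Point < BrokenPoint
  alias eql? ==

  def hash
    [x, y].hash
  end
end

{ BrokenPoint.new(1, 2) => :a }[BrokenPoint.new(1, 2)] # => nil (surprise!)
{ Point.new(1, 2) => :a }[Point.new(1, 2)]             # => :a
```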
Another example (even simpler than Money) comes from the standard library classes.
For example, BigDecimal. With operator overloading, I can easily compare a BigDecimal object to an integer or float, the same way I can already compare them to each other:

  BigDecimal.new("10.0") == 10.0
  10 == 10.0
Without operator overloading, this would become needlessly messy (and require explicit handling of nils):
  d = BigDecimal.new("10.0")
  !d.nil? && d.value_equal_to(10.0)
Ruby has always been about readability of the code, and avoiding unnecessary repetition, and I think operator overloading (when used correctly) is a great example of this.
Right, this seems like exactly why we shouldn't be allowed to redefine it. As a reader of your code that redefines ==, I think == means we are talking identity until I find your function that redefines ==. It has made it so I need to understand more things in order to be able to reason about your code. That seems like a negative to me.
But even in core Ruby, == is not always identity; hashes, arrays, ranges, all of them are compared by value and not by identity. The assumption that == is comparing identity is broken, not the code that implements it differently.
For "primitives" yes, for objects no. What would be primitives in Ruby extend Object, because everything in Ruby does, but (sigh) they redefine == so they act more like primitives in other languages. At least there is a clear-cut rule, but it's pretty much turtles all the way down.
That's because you are among the people who don't like abstractions. I disagree with that opinion, but that's fine, Ruby is just definitely not for you.
Don't try to change or complain about Ruby; you would likely be happier with a language like Go.
It is. By redefining the equality method (operators are really just regular methods), you abstract away how this kind of object needs to be compared with its peers.
The fact that it's an operator or a method doesn't change a thing. For example, in Java many classes redefine the `equals` method. Its default behavior is just like Ruby's: comparing identity. It's not an operator, but the effect is exactly the same. And IMO it's worse, because now you have a leaky abstraction with types you need to compare with `==` and others with `.equals`.
It's just you who has this expectation of operators not being redefinable. When I read Ruby code, `==` or `+` are just regular methods like any others, with just a bit of syntactic sugar.
It also allows for greater polymorphism. Like the Money object from before. If I couldn't redefine `+`, then `[Money.new(20), Money.new(22)].sum` wouldn't work.
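A minimal sketch of such a Money type (single-amount, details invented for illustration). One wrinkle: Enumerable#sum starts from the Integer 0, so you either seed it with a Money zero, as below, or teach Money to coerce.

```ruby
# Hypothetical Money value type: `+` and `==` are ordinary methods,
# which is exactly what lets generic code like Enumerable#sum use it.
class Money
  attr_reader :cents

  def initialize(cents)
    @cents = cents
  end

  def +(other)
    Money.new(cents + other.cents)
  end

  def ==(other)
    other.is_a?(Money) && cents == other.cents
  end
end

# sum's default initial value is the Integer 0, so seed it with Money:
total = [Money.new(20), Money.new(22)].sum(Money.new(0))
total == Money.new(42) # => true
```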
If somebody wants object identity rather than semantic equality, they should be using `equal?`. The fact that different types have different equality semantics is just kind of inherent in the idea of a type.
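The distinction is easy to see with strings:

```ruby
a = "money"
b = "money".dup # dup guarantees a distinct object

a == b          # => true  (value equality; freely overridable)
a.eql?(b)       # => true  (stricter value equality; used by Hash)
a.equal?(b)     # => false (object identity; by convention never overridden)
a.equal?(a)     # => true
```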
And then a reader of the calling code might not have to look into the implementation to understand what the function is doing, whereas you definitely would when using ==, because == now means "whatever it's overridden to mean".
> Should Money.new(20, 'USD') == Money.new(2000, 'US Cents')?
Like someone else pointed out in this thread, designing APIs requires consistency and good taste.
If I had to implement this API, your code above would raise `ArgumentError: unknown currency "US Cents"`.
> Or Money.new(20, 'USD') == Money.new(125, 'CNY') when ExchangeRateManager.getExchangeRate('CNY', 'USD') == 0.16?
Again, with me designing this API, it wouldn't be equal. Why? For the same reason `1 != "1"`: if you cast them, yes, they are equal, but implicit casting (a.k.a. weak typing) is not idiomatic in Ruby. It's possible, but very rare.
> At this point you might as well do `m1 == m2.convert_to(m1.currency)`, because "HaveSameWorth" might mean many different things too.
I personally hate that last style, because it's obvious that the "HaveSameWorth" relation is intended to be symmetric, and by writing it as m1 == m2.convert(...) you're preferring one side over the other. It looks bad to me :).
Also, in the case of real-life objects it makes sense to spell out what you mean by 'equality' (or 'equivalency'), and leave the default implementation to represent the philosophical concepts of "the same" and "equivalent to".
But you almost certainly are logically preferring one side over the other! You do usdValue == cadValue.convert_to(usdValue.curr) because one of those currencies is the one your transaction is working with (in this case, USD is your goal).
Without explicit comments/documentation it is hard to imagine a good example. It's a longstanding issue. Even Lisp has `eq` and `equal`, which is an abomination. One looks for identity of object; the other for identity of value (if I understand it right). These kinds of things are bug factories.
Kent Pitman has always been a good read on the problems of equality in dynamic languages. The typical Ruby or JS programmer stumbles through the day just "getting by" when it comes to comparing objects.
Ehh, there are good and bad uses. Yes, dot product should not be the overloaded asterisk operator, because elementwise multiplication of vectors is also a thing; and if you overload multiplication for matrices, then dot product should be u-transpose-times-v for consistency. But what about vector addition? There's no ambiguity. Try writing any 3d graphics code without overloaded vector addition. It sucks.
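A hypothetical three-component vector shows why graphics code wants this; the class and names are invented for illustration.

```ruby
# With `+` overloaded, vector math reads like math; without it, every
# expression becomes a pile of nested method calls.
class Vec3
  attr_reader :x, :y, :z

  def initialize(x, y, z)
    @x, @y, @z = x, y, z
  end

  def +(o)
    Vec3.new(x + o.x, y + o.y, z + o.z)
  end

  def *(s) # scalar multiply: unambiguous, unlike vector-times-vector
    Vec3.new(x * s, y * s, z * s)
  end

  def ==(o)
    o.is_a?(Vec3) && [x, y, z] == [o.x, o.y, o.z]
  end
end

position = Vec3.new(0, 1, 5)
velocity = Vec3.new(1, 0, 0)
position + velocity * 2 # vs. position.add(velocity.scale(2))
```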
I generally disagree with people who dislike a language feature only because of its abuse potential. Good programmers should not have to suffer for the sake of damage controlling bad programmers.
Edit: I do think it's stupid to make the identity-equals operator overloadable. Identity-equals and value-equals are separate concepts. In C++ this isn't an issue because == is the value-equals operator.
> Edit: I do think it's stupid to make the identity-equals operator overloadable. Identity-equals and value-equals are separate concepts. In C++ this isn't an issue because == is the value-equals operator.
In Ruby, == is value-equals as well (identity-equality being such a rarely needed concept in Ruby that it wouldn't make sense to privilege it with its own operator).
I disagree, operator overloading often makes the code much more readable. I don't see the problem unless you're trying to write C in Ruby. As long as the overloaded implementation keeps the expected semantics as described in the language documentation, it's fine.
Most problems people have with operator overloading seem to be caused by the same issue that makes them whine about "debugging metaprogramming" (I referenced that in other comment here). Namely, instead of trying to understand the code and the model behind it, they try to bring over their own assumptions about how the code should work.
That 5-component object compared by ==? How does it work? Sit down, read the code and find out. The answer depends on what exactly the object represents and what makes sense in the domain model.
That's exactly the issue. When it's a small program, that works fine. A large one? It's a matter of time and effort. Anything can be looked into, given time. Which is money. And effort over time roughly equals mistakes.
It's too easy to do victim-blaming here. You don't understand my code? Well, just read it all so you can see how clever I was.
If you want to write code that can be easily assimilated, which most readers would think they understood from its source and not by reading it at some meta-level, then you have to code with one hand behind your back.
It's not ever 'fine'. It's confusing at best. Imagine a component with 5 attributes. Does '==' match them all? Some of them? Loosely or tightly? I'm afraid just seeing '==' in the code is never going to be informative.
Instead, maybe a method MatchAttr1And2(v1, v2) would certainly tell a subsequent reader a little more about what's going on.
> Imagine a component with 5 attributes. Does '==' match them all? Some of them? Loosely or tightly? I'm afraid just seeing '==' in the code is never going to be informative.
This is, fundamentally, a disagreement about the value of encapsulation. With an opaque, encapsulated type, '==' should mean whatever makes the most sense in the context of that type. For a pointer, "equality" might mean "same memory address", whereas for a vector it might mean "equal components". As a user, "equality" should match an intuitive understanding of what it means for two things of this type to "be the same". It's an art form. Like many things in programming, doing it well requires good taste.
Operator overloading is a powerful technique for preserving encapsulation. It's the polar opposite of:
> Instead, maybe a method MatchAttr1And2(v1, v2) would certainly tell a subsequent reader a little more about what's going on.
This leaks implementation details like a sieve. It's a great recipe for encouraging dependency on a particular implementation detail across module boundaries, and rolling yourself a great heaping ball of mud.
Only if you do it that way. Make it MatchForParticularPurpose(v1, v2) instead, and voila no leak.
Operators are unique in the language. They hold a special place. They deliberately are written to imply something we already understand. No fair lumping them in with every other attribute or method of an encapsulated type.
Intuition is a very, very poor thing to depend upon in a programming language. I disagree heartily that it should be the solution to disambiguating any operation.
> Only if you do it that way. Make it MatchForParticularPurpose(v1, v2) instead, and voila no leak.
Making the name more opaque won't save you at all. You're making what should be a local detail -- how your type implements equality -- into something that only works when it is global knowledge.
Consequently, your type is brittle and incomposable with types that aren't infected with this knowledge:
  [instance1, instance2, instance3].sort
won't do anything sensible until we infect either the Array type or the call site with non-local knowledge about your type.
Every type you build in this manner will find knowledge about itself diffusing throughout your application, like children peeing in a pool. Composition will be limited, inflexible, and require manual insertion of type-specific knowledge, because you have failed to encapsulate knowledge about equality.
Everything needs to know about everything else, and in the end you've built a tightly coupled ball of mud.
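The point can be made runnable with a hypothetical Version type: define <=> once inside the type, include Comparable, and generic code like sort, min, and max works without the call site knowing anything about the internals.

```ruby
# Encapsulated comparison: one <=> definition, and every generic
# algorithm that relies on it composes with the type for free.
class Version
  include Comparable
  attr_reader :parts

  def initialize(str)
    @parts = str.split(".").map(&:to_i)
  end

  def <=>(other)
    parts <=> other.parts # Array#<=> compares element-wise
  end

  def to_s
    parts.join(".")
  end
end

[Version.new("1.10"), Version.new("1.2"), Version.new("1.0.1")].sort.map(&:to_s)
# => ["1.0.1", "1.2", "1.10"]
```

Note that plain string sorting would put "1.10" before "1.2"; the type fixes that in exactly one place, and no caller ever needs to know how.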
> Operators are unique in the language. They hold a special place. They deliberately are written to imply something we already understand. No fair lumping them in with every other attribute or method of an encapsulated type.
I would argue that a CORE value of Ruby is that everything is an object, and objects communicate by message passing. Treating an operator as anything other than a message between objects is fundamentally wrong.
If you expect operators to do anything other than call the appropriate message on an object, you're misunderstanding the syntax.
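This is directly observable:

```ruby
2 + 3             # => 5
2.+(3)            # => 5, the operator is just method-call sugar
2.send(:+, 3)     # => 5, an explicit message send
2.respond_to?(:+) # => true, `+` is an ordinary method on Integer
```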
Agree about debugging. To my thinking, though, there are two types of "programming": professional (for day jobs, where debugging, testing, and maintainability matter), and recreational (where the point is to just explore new things and try crazy stuff that no one in their right mind would ever "really" do). This "def" stuff falls into the latter category.
Honestly, I wish there were more people doing posts about recreational programming topics. WhyTheLuckyStiff was one of the last great recreational Rubyists. I miss that kind of no-holds-barred exploration.
Debugging Ruby, in general, is often a pain in the ass. It stems from an awful combination of terseness and metaprogramming. The terseness comes from using an identifier to represent both variable reference and method invocation. Contrast this to Lisp-like languages, where function (or macro) invocation only happens in the first position of an S-expression. In Lisp, it's clear that an identifier is either a variable or a function depending on where it's located.
In Ruby, no visual indication exists. Worse, things like attr_reader blend instance variables with local variables and method identifiers. Throw in a method_missing and inheritance, and you can easily lose weeks just tracking down where an identifier is even coming from. Throw in a gem or two, and all hope is lost.
> Contrast this to Lisp-like languages, where function (or macro) invocation only happens in the first position of an S-expression. In Lisp, it's clear that an identifier is either a variable or a function depending on where it's located.
Both `foo' and `+' are variables here, even while appearing at one point in the first position of an S-expression. That holds even if you're running a Lisp-1 (like Scheme), i.e. one where functions and variables share a namespace. In a Lisp-n (like Common Lisp), a symbol can have separate function and value bindings, and you can still have a variable slot bound to a function value (i.e. lambda).
But one thing Lisp does have, which impacts readability significantly IMO, is simple and consistent syntax. Contrast to some Ruby-like languages which let you skip braces when working with dictionaries, making you stop and wonder how the hell a given piece of code is going to be parsed by the interpreter. Or Scala, which has so much context-dependent meaning bound to non-letter characters that I finally start to understand why people were afraid of C++ operator overloading. Both examples are, in my opinion, cases of syntactic sugar leading to cancer of the semicolon.
The example you gave is clear, though. They are variables because they appear in a LET form. It's clear from the context what is going on. What I'm referring to, in Ruby, is something like:
  def some_method
    what_is_this
  end
You don't know what what_is_this is. It could be an instance variable, a method (anywhere in the inheritance tower), or an autogenerated method from method_missing. It's impossible to tell without digging through the code. But the problem there is, you can't simply grep the code for things like this. You could end up with 100s of uses of the identifier and never find the source. Especially if you inherit a class from a gem, or method_missing has been used.
> In Lisp-n (like Common Lisp) you can have a variable slot bound to a function value (i.e. lambda).
Yes, and Lisp-1 vs Lisp-n has been a hot debate for decades in both Lisp and Scheme communities. I'm not about to claim it's a completely solved problem there.
> You don't know what what_is_this is. It could be an instance variable, a method (anywhere in the inheritance tower), or an autogenerated method from method_missing. It's impossible to tell without digging through the code.
We know it's not an instance variable -- that would be @what_is_this. It would have to be a local variable, but plainly there is no such variable local to this method. So we know it's a method. Let's go hunting (we could do this via binding.pry, or byebug, or in IRB with an instance of whatever defined some_method):
  method(:what_is_this) rescue false    # if false, this is coming from method_missing
  method(:what_is_this).source_location # there's your definition location, if it wasn't coming from method_missing
In general I used to find debugging ruby hard, coming from a background in more static languages. Once I learned the debugging facilities it provides, finding things got a lot easier.
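A runnable sketch of that hunt, using a made-up class where one method is real and one comes from method_missing:

```ruby
# Hypothetical class mixing a real method with a method_missing "ghost".
class Mystery
  def real_method
    :real
  end

  def method_missing(name, *args)
    name == :ghost_method ? :ghost : super
  end
end

m = Mystery.new
m.ghost_method                          # => :ghost, works at runtime
m.method(:real_method).source_location  # file and line of the `def`

begin
  m.method(:ghost_method)
rescue NameError
  :defined_via_method_missing # no Method object exists for it
end
```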
True. You have to learn to treat programs written in dynamic languages as living, mutable, interactive things instead of designs set in stone. However, when a simple syntax derails your reading, it is a thing of concern.
Then again, you could pull similar shenanigans in Common Lisp with `symbol-macrolet', but it's an obscure feature that pretty much by definition will be used only by people who know when to use it :).
> However, when a simple syntax derails your reading, it is a thing of concern.
Definitely agreed! It's rare that I've come across a set of code that is _so_ over-abstracted or dynamic that reading the code doesn't usually shed light on the issue I'm after, and usually it's a "dense" language (Scala, Lisps, etc.) that manages to achieve that.
Last time I was really, seriously confused about code was when going through a Scala codebase written by a person deep in love with traits. My problem was less with the syntax or "denseness" and more with nonlocality - every conceptually "whole" algorithm was split across 10 or 20 files and 2 class hierarchies, each with its own hierarchy of traits.
It made some sense in the end, but it took me a lot of time to figure out - not because it was complicated, but because it didn't fit in my head. That the original author was uncooperative and didn't want to explain things too much didn't help either.
> But the problem there is, you can't simply grep the code for things like this. You could end up with 100s of uses of the identifier and never find the source.
Personally, I see this as the entire point of the original Smalltalk-esque OOP paradigm: an object is a living, mutating black box that responds to messages, where an object's "type" is just "object": a thing with a protocol for sending and receiving arbitrary messages, not a thing with particular messages it is lexically known to respond to.
In other words, an object is like a remote server on the Internet. You can no more introspect an object by looking at the source of the classes it was originally constructed from, than you can introspect a remote web service by reading the source of the frameworks it was based on. The set of messages an object will respond to, and how it will respond to them, is part of the object's state, not part of its definition.
This "absolute encapsulation" forces the producer to publish an API if they want anyone to consume their object/service. This is great! If the API is machine-readable, and the object/service publishes it as a response to a message, an OOP runtime can even perform runtime introspection on the object/service, enabling consumers to configure themselves to speak another object's protocol, to reconfigure when that protocol changes, and even for two objects to negotiate a communications protocol among many options.
People have rediscovered the benefits of black-boxes with published, oft-machine-readable APIs in the last few years, calling the new version of OOP "microservices." It's the same idea: objects with no lexical type beyond "thing that responds to messages encoded in this format", being used as black-box interfaces into libraries which may be running locally or remotely.
This is also, effectively, the definition of a "process" in Erlang: something that will (asynchronously) receive messages if you send them, and might send you a response, or might not, with the interpretation of a given message depending entirely on the process's state. (People who say Erlang is a functional language are looking on the wrong level of abstraction. Processes are objects!)
---
To get back to Ruby, though: imagine an object which serves as a REST client, where the method_missing of that object translates the {method_name, *args} into a GET request to an API server, with the method name becoming the path and the arguments becoming query parameters. This is an idiomatic kind of Ruby object, because Ruby is actual-OOP rather than the "classes are types, right?" faux-OOP of static languages.
The messages this REST-client object responds to depend entirely on code running somewhere else that could be modified at any time. There is nothing static analysis tools could do to figure out what this object will or won't respond to. The only way to introspect its operation at all is at runtime.
And yet, I would argue that this implementation of such an object is the best, most 1:1 translation of the concept of "REST client" into a programming language. Errors are propagated from the remote server, through HTTP, into local dynamic-dispatch errors, through the defaults of Ruby's runtime. You're not having to go against the grain, writing a "get" method and then making up all sorts of custom exceptions it can raise. You just make a local object, that stands in for a remote object, and then you interact with it as an object.
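For the curious, a hedged sketch of such a client. The class name and URL are invented, and a real implementation would add error handling and timeouts:

```ruby
require "net/http"
require "uri"

# Hypothetical REST client: unknown messages become GET requests, so the
# set of "methods" this object responds to is decided by the remote server.
class RemoteObject
  def initialize(base_url)
    @base = URI(base_url)
  end

  def method_missing(name, **params)
    uri = @base.dup
    uri.path = "/#{name}"
    uri.query = URI.encode_www_form(params) unless params.empty?
    Net::HTTP.get(uri) # whatever the server decides to answer
  end

  def respond_to_missing?(_name, _include_private = false)
    true # only the server knows what it really responds to
  end
end

# api = RemoteObject.new("http://example.com")
# api.users(page: 2)  # GET http://example.com/users?page=2
```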
> People who say Erlang is a functional language are looking on the wrong level of abstraction. Processes are objects!
People thinking those two are mutually exclusive are not getting the difference between the map and the territory :).
I worked a bit with Erlang professionally. Erlang is both functional and object-oriented, if you think about Smalltalk-style OOP and not Java-style OOP (the constant confusion between the concepts of those two is incidentally why I think that most things written about OOP are bullshit - they focus on wrong details; I've seen even university courses confusing the shit out of students by calling C++ methods 'message passing' and objects as equivalent to 'actors').
RE the problem of black boxes - black boxes are cool; what's not cool is if the interface you use to talk to black boxes is itself confusing. In the example the GP posted, it's not clear on first or second thought what exactly the middle line means - whether it is a value, evaluates to a value, evaluates to a value with side effects, possibly taking your control flow for a sightseeing trip around the Moon, etc.
Also, in the real world, we have to have some idea of what the black box is really doing. The only value that comes from assuming something is like "a remote web service" is knowing that it probably sucks: it's totally unreliable, if it responds at all it's after a delay noticeable to the end user of your program, and you have to plan for it disappearing at any moment because the founder takes exit money or forgets to renew a domain. Yes, you could program treating everything like a web startup, but a few simple assumptions like "this is inside my program so it responds fast and lives as long as the rest of the program does" can tremendously improve the speed of coding and the speed of the program itself.
> Yes, you could program treating everything like a web startup, but a few simple assumptions like "this is inside my program so it responds fast and lives as long as the rest of the program does" can tremendously improve the speed of coding and the speed of the program itself.
I think the distinction between regular Smalltalk OOP (designed before distributed programming was a thing) and what people actually mean when they call a language "actor-modelled" is that, in a language like Erlang, you get to lean on the language itself (or in Erlang's case, the OTP framework) to be pessimistic about other objects' behaviors for you.
One thing that I think is missing from today's OOP language landscape, though, is a concept of protocol parameter adjustment for long-lived peers, a la TCP window scaling. While I wouldn't expect this of anyone's one-off RPC protocol, frameworks like OTP, specifically made to cleverly handle OOP RPC, should be capable of multiple levels of "formality" in how objects speak to one another, where an object that e.g. repeatedly sends messages to a named service should eventually be JIT-optimized into one that grabs the PID of that named service and messages it directly. Of course, if that named service dies, the process will crash; but like any other JITed code, that's just the point at which the JIT abort-traps back into the un-optimized codepath and re-runs the function. This can be generalized to an arbitrary degree; you can go so far as to imagine e.g. distributed Erlang nodes that pass bytecode to one another and gossip about code changes, in order to be able to JIT-inline remote (pure) function calls.
A side point here is that it's incredibly hard to make people stick to abstractions when not sticking to them gives a competitive edge.
Say we've implemented your entire idea with JIT-optimized message passing. What would happen is people learning the ins and outs of a particular JIT implementation, and JIT-hacking becoming a required topic in job interviews, just like today knowing a million ways of hacking the CPU cache is expected of a professional (non-web) programmer (web programmers are expected to know there is something called a 'processor' and that it doesn't like nested for loops; source: I was a professional web programmer).
Anyway, while OTP does a pretty good job of papering over some of the local/remote objects difference and helps you keep the whole thing running even if something external breaks, I think it would be cool to go further in the direction of the ideas you just described.
I don't see anything hard to debug in this code. Honestly, if this kind of code makes debugging hard for you, it means you're assuming too much about how the code should behave instead of looking at how it actually behaves.
I hate this "dynamic programming[0] = hard to debug" meme. Truly hard-to-debug code is code that's nonlocal (you have to read 20 files to follow the execution flow (hello, Scala trait abusers)) or that displays random behaviour (e.g. threading, dependence on external resources). This one? Every "tricky" thing is contained in a block of 10-20 lines. Just isolate the endpoints and follow the execution until it does something you think it shouldn't.
In a way, the biggest enemy of a bug hunter is their own assumptions.
[0] - examples here aren't really metaprogramming, and even the latter isn't that hard to debug if you actually sit down and read the code.
Zork-alikes aside... I'm looking for practical applications :)
I recently re-watched Yaron Minsky's "Effective ML" talk where (towards the end) he talks about making read-only and read-write types. That's where I thought Jamis was going with the state machine example: one could tell an object "yo, make yourself immutable!" (i.e. redefine your methods so that you can't change yourself). But in an OO world that seems more neatly achieved with subclassing. [To the extent that anything can be faked as "immutable" in Ruby.]
There's #freeze I suppose... and no #unfreeze. Which is perhaps sensible :) And #freeze only guards against new assignment to instance variables; it doesn't guard against an instance method mutating the contents of an instance variable (def esquirify!; @name << ' Esq.'; end). So there's that. But I'm far from convinced...
Python has this "inner methods" capability (with saner scoping), which is a great antidote for Python's limited lambdas. But that's not a problem Ruby has.
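The shallowness of #freeze is easy to demonstrate; Person and esquirify! are hypothetical names following the example above.

```ruby
# #freeze prevents reassigning instance variables, but not mutating the
# objects those variables already reference.
class Person
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def esquirify!
    @name << " Esq." # mutates the String in place; no ivar assignment
  end
end

person = Person.new(+"Jamis") # unary + ensures an unfrozen String
person.freeze
person.esquirify!   # allowed: the Person is frozen, its String is not
person.name         # => "Jamis Esq."

begin
  person.instance_variable_set(:@name, "x") # reassignment IS blocked
rescue FrozenError
  :frozen
end
```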
It's a neat trick (and nice to read something from Jamis again). Are there any sensible use cases?
One use might be to create simple singletons. That said, I'm not entirely sure how you would do it, maybe have `Object#initialize` redefine `Object#new` to always return the singleton.
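One hedged sketch of that idea (class name invented): rather than touching `Object#new` globally, have the class replace its own `new` after the first call.

```ruby
# Hypothetical singleton via self-redefinition: the first call to .new
# builds the instance, then shadows .new with a method that returns it.
class AppRegistry
  def self.new(*args)
    instance = super
    define_singleton_method(:new) { |*_args| instance }
    instance
  end
end

a = AppRegistry.new
b = AppRegistry.new
a.equal?(b) # => true, same object every time
```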
Ha! An IRB-based interactive adventure is actually really, really cool and clever. It never fails to amaze me how much Rubyists abuse metaprogramming and language quirks.
Somebody now shall goeth forth and implement Zork...
Thanks! I'm not the first to talk about using IRB for interactive fiction, but I think I might be the first to do so using nested defs. :) IRB-based Zork would be awesome! I hope someone does that.
I've somehow always found Ruby to be opposite to the Unix philosophy of doing one thing and doing it well. While Ruby may seem trivial and fun in the beginning, it tends to become cumbersome and unmaintainable as the size of the repository grows. Coming from a Python world, my first reaction to Ruby was that it was more like Perl, where there are many ways to achieve the same thing, and no, that is not really helpful if you inherited poorly written code.
  $ foo()
  {
    bar ()
    {
      echo I am bar
    }
  }
  $ bar
  bar: not found
  $ foo
  $ bar
  I am bar
Pretty much the same thing as the Ruby example.
This is simply because defining a function is a kind of statement or expression with a side effect, which must be evaluated. The side effect is global (a name is globally associated with a function). So if the definition is in a function body, its evaluation is delayed until that function is called, and even then its effect is still global.
If you're a sufficiently advanced Unix purist (aka a Plan 9 user), the Bourne shell is a violation of Unix principles. Contrast the v6 shell, where this construct is impossible because there are no functions. Now that's do-one-thing-do-it-not-quite-terribly.
This is indeed the basic feature of any language that's evaluated at runtime. When working with such a language, one needs to learn the program as a dynamically growing construct instead of a vision cast in stone when you press the "compile" button.
Or rather, one needs to learn which constructs destructively manipulate a global environment, and which perform lexical binding.
In Python, an inner def will lexically bind a function, creating a closure. Python is not less dynamic than Ruby.
In Common Lisp, a defun inside a defun will behave similarly to Ruby; but if you want lexically scoped local functions, you use a different operator, namely flet or labels.
Scheme has a define which is lexical: it brings a lexical identifier into the scope for forms which follow.
Lexically scoped items, even in a dynamic language, in fact can be "cast in stone when you press the compile button"; they are cast in that stone which is the entire compiled environment of the surrounding function.
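Ruby's inner def, for contrast, is the destructive-global-environment kind, which is what makes the shell analogy apt:

```ruby
# An inner `def` in Ruby is a side effect on the enclosing class, not a
# lexical binding: nothing exists until the outer method runs, and then
# the inner method exists for every instance.
class Greeter
  def outer
    def inner # defined on Greeter itself, the first time outer runs
      "hi"
    end
    :outer_ran
  end
end

g = Greeter.new
g.respond_to?(:inner)  # => false, not defined yet
g.outer
g.respond_to?(:inner)  # => true
Greeter.new.inner      # => "hi", even brand-new instances have it
```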
> Just because you CAN do something doesn't mean you SHOULD
This is hacker news, not "production code" news. Abusing things in unexpected ways is pretty much the definition of hacking.
Many people seem to be jumping on this article as if the author were suggesting a new programming pattern we should start using, rather than offering an interesting look at some of the lesser-known quirks of Ruby. I'm pretty sure even the author would agree that triple-nested method definitions are not something we should use in production code.
This is a fundamental part of the language, the idea that objects and classes are dynamic, not static.
What I would say is that although we should have an extremely good reason to employ these techniques in production, I believe that every professional Ruby programmer should be able to understand them and/or figure them out in a few minutes.
Agree with you 100%, and the ability to dynamically define methods is one of Ruby's great strengths.
However, metaprogramming is a power that should be used wisely. When used unnecessarily, it reduces readability (and probably performance) for no real gain.
The devise source contains some great examples of metaprogramming used properly.
Readability is in the eye of the beholder. For those unfamiliar with higher-order programming, maps are just a 'too clever' form of a for loop.
A clean and understandable solution is one matching the problem being solved in a precise way. In search for simplicity one can't forget that programming is a trade, and one should be expected to actually learn some shit.
To that point, Jef Raskin famously said that "intuitive == familiar", and all too often, that's exactly what people mean when they talk about "intuitive" code and/or user interfaces.
Indeed. Looking over the article, the first example is sort of obvious if you've ever worked more than a few hours with a decent dynamically typed language, and the rest expose interesting functionality that could be papered over with a macro in order to build something useful. Like, you know, object-oriented programming can be built by hiding lexical closures under a macro or two, and was in fact built that way in the past.
In other words - just because you don't understand something doesn't mean it's "clever code".
http://jamesgolick.com/2013/4/14/mris-method-caches.html