This shouldn't be surprising to anyone who knows that the Brouwer/Kakutani fixed-point theorem (the basis of Nash's formulation of game theory and of General Equilibrium Theory in neoclassical economics) was proved non-constructively. The proof assumes a continuous endomorphism of a compact convex set with no fixed points, and shows that such a map could be used to build a continuous retraction onto the boundary of that set [1][2]. No such retraction exists, so the assumption fails, and by contradiction every continuous endomorphism must have a fixed point. But because the proof starts by presuming the non-existence of fixed points, it gives you no procedure for actually arriving at one. Nash uses a clever argument to show these fixed-point equilibrium strategies exist in games, and Arrow and Debreu use a similar argument in the space of preferences to find equilibrium prices [3]. Unfortunately, these theories have been used to promote the myth that markets are self-regulating, despite all evidence to the contrary [4][5].
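To make the non-constructiveness concrete, here is a toy sketch in Python (my own illustration, not part of the proof): a rotation of the unit disk about its center is a continuous endomorphism, so Brouwer guarantees a fixed point (the center), but naively iterating the map from any other starting point just orbits forever and never homes in on it.

    import math

    def rotate(p, theta=1.0):
        # Continuous self-map of the unit disk: rotate by theta radians about the origin.
        x, y = p
        return (x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta))

    p = (0.5, 0.0)              # start anywhere except the center
    for _ in range(10_000):
        p = rotate(p)           # iterate the map

    # Brouwer says a fixed point exists (the origin), yet iteration never approaches it:
    print(p, math.hypot(*p))    # still at distance 0.5 from the fixed point

The theorem tells you the fixed point is there; it hands you nothing that behaves like a search procedure.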
A more precise title for the article would be: "In Game Theory, No Clear, Universal, Fast Path to Equilibrium".
And I'm not an expert in either field, but the article seems to steer pretty close to the P vs. NP question. It acknowledges that "brute force" communication is a generic, universal way to solve every game, but one that could “take longer than the age of the universe”, thus being “completely useless, of course.”
On the other hand, a lot of games have "additional structure that greatly reduces the amount of information each player must communicate", so you can apply simplifications to solve them. In other words, use heuristics.
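To put a rough shape on the brute-force baseline the article dismisses, here is a minimal sketch (my own, not from the article) that exhaustively checks every pure-strategy profile of a tabulated game for Nash equilibria. It works for any game you can write down, but the number of profiles grows exponentially with players and strategies, which is why the generic method becomes useless at scale and structure/heuristics matter.

    from itertools import product

    def pure_nash_equilibria(payoffs, strategy_counts):
        # payoffs[profile] -> tuple of payoffs, one per player; profile is a tuple of strategy indices.
        n_players = len(strategy_counts)
        equilibria = []
        for profile in product(*(range(k) for k in strategy_counts)):
            # A profile is an equilibrium if no player gains by a unilateral deviation.
            if all(
                payoffs[profile][i] >= payoffs[profile[:i] + (d,) + profile[i + 1:]][i]
                for i in range(n_players)
                for d in range(strategy_counts[i])
            ):
                equilibria.append(profile)
        return equilibria

    # Prisoner's dilemma: 0 = cooperate, 1 = defect; (defect, defect) is the unique pure equilibrium.
    pd = {
        (0, 0): (3, 3), (0, 1): (0, 5),
        (1, 0): (5, 0), (1, 1): (1, 1),
    }
    print(pure_nash_equilibria(pd, (2, 2)))   # [(1, 1)]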
TL;DR: if you don't know the rules of the game, you cannot solve it.
A bit more detail: most games make use of some form of utility function, which assigns a numeric value to a given outcome. Utility functions are one of the sticky human aspects of games that are difficult to model accurately. This is often papered over by assuming that players' utility functions are linear, monotonic, or identical.
For example, where applicable, the monetary value of an outcome is often used in place of a player's utility function. However, it is well documented that people's utility functions with respect to money are usually both non-linear and non-monotonic.
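A tiny illustration of why the linear shortcut matters (the numbers are made up): under a concave utility such as log, a risky gamble and a sure payment can rank in the opposite order from their expected monetary values.

    import math

    def expected_utility(lottery, u):
        # lottery: list of (probability, monetary outcome) pairs
        return sum(p * u(x) for p, x in lottery)

    gamble = [(0.5, 0), (0.5, 100_000)]   # coin flip: nothing or $100k (EV = $50k)
    sure   = [(1.0, 40_000)]              # guaranteed $40k

    linear = lambda x: x
    log_u  = lambda x: math.log(x + 1)    # concave, risk-averse utility

    # Linear utility prefers the gamble (50,000 > 40,000)...
    print(expected_utility(gamble, linear), expected_utility(sure, linear))
    # ...but log utility prefers the sure thing, reversing the ranking.
    print(expected_utility(gamble, log_u), expected_utility(sure, log_u))

Which ranking is "right" depends entirely on the player's actual utility function, which is exactly the part that's hard to know.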
If you don't know enough about the utility function of a player in a given game, there is very little you can infer about the structure of optimal strategies.
Player preferences aren't within the bounds of the rules of the game, though.
You can have 100% knowledge of the game's rules and still have tremendous difficulty ascertaining player preferences. The article makes it clear that iteratively playing computationally complex games does not guarantee even a decent approximation of those preferences.
In other words, you need to do something more than just play in order to reliably reach a place close to equilibrium.
The key quote in the article is: "there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other virtually everything about their respective preferences."
This is a layman's way of saying that "if you don't know the utility functions of the players, it's hard to make computational inferences". The example in the next paragraph with a game with 2^100 leaf nodes carries with it an implicit assumption that the utility function for each of the leaf nodes is arbitrary.
Compare this to the game of Go, whose game tree has in excess of 2^2000 leaf nodes. The success of AlphaGo indicates that sheer size is not the barrier; what matters is being able to parametrically express the utility function for arbitrary nodes.
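A rough sketch of that distinction (my own illustration; AlphaGo's actual value network is vastly more elaborate): a parametric evaluation maps any position's features to an estimated value with a handful of weights, where an explicit table would need one entry per leaf.

    # A hypothetical linear evaluation over hand-picked features, standing in for a learned
    # value function. The point is size: a few weights versus 2^100+ table entries.

    def features(position):
        # Map a position string to a small, fixed-length feature vector (details are game-specific).
        return [position.count("X") - position.count("O"),   # crude material balance
                position.count("XX"),                         # crude pattern count
                len(position)]                                # board-size proxy

    WEIGHTS = [0.8, 0.3, 0.01]   # in practice fit from (self-)play data, not set by hand

    def value(position):
        # Estimated utility of the position for player X under the parametric model.
        return sum(w * f for w, f in zip(WEIGHTS, features(position)))

    print(value("XXO.XO..."))    # one cheap evaluation, reusable for astronomically many positions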
I don't think you've really understood, because your contradictory response supports mine: even when fictitious play is a good approximation of a real system (and it usually isn't, because firms do not adopt static strategies), strategies do not always converge, which means iterative play does not necessarily lead you to terminal equilibrium states.
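A minimal sketch of the non-convergence point, using pure best-response dynamics (a simpler cousin of fictitious play) on matching pennies, my own choice of example: the players chase each other in a cycle and never settle, even though a mixed equilibrium exists.

    # Matching pennies: the row player wants the coins to match, the column player wants a mismatch.
    # There is no pure equilibrium, only the mixed one (each plays H/T with probability 1/2).

    def best_response_row(col_move):
        return col_move                            # copy the opponent to match

    def best_response_col(row_move):
        return "T" if row_move == "H" else "H"     # flip the opponent to mismatch

    row, col = "H", "T"
    history = []
    for _ in range(8):
        row, col = best_response_row(col), best_response_col(row)
        history.append((row, col))

    print(history)   # (T,T) -> (T,H) -> (H,H) -> (H,T) -> (T,T) -> ... a cycle, never settling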
But maybe we can salvage equilibria as a functional tool. Can these systems arrive at "almost equilibrium" states? Knowing this would still give us some predictive oomph. The article states that even approximations might be out of reach, but that there are ways to speed the development of the metagame along.
On your other point, you've missed half of the article's content. It isn't saying "if you don't know the utility functions of the players it's hard to make computational inferences". It is saying that the amount of information required to obtain the players' utility functions, even for trivial games, is well beyond the scope of most systems we have, and accordingly the assumption that even an approximate equilibrium will be reached requires a good independent rationale. This is a MUCH larger issue than the base computability of the problem and jumps into the epistemology of economics and policy.
This is expanded upon because those assumptions regarding reaching equilibrium are often used in justifying financial, policy and other decisions where billions of dollars are at stake. See the Cournot and Bertrand competition models and their successors for a view into how that faulty assumption can lead you to terrible policy decisions.
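For a concrete sense of how much the textbook models take for granted, here is a sketch of Cournot best-response iteration with made-up demand and cost numbers. It converges neatly to the Cournot equilibrium, but only because each firm is assumed to know the entire demand curve, its rival's cost, and its rival's exact output, which is precisely the informational assumption at issue.

    # Cournot duopoly with linear inverse demand P = a - b*(q1 + q2) and common marginal cost c.
    # Firm i's best response to the rival's quantity q_j is (a - c - b*q_j) / (2*b), floored at zero.
    # All numbers are illustrative only.

    a, b, c = 100.0, 1.0, 10.0

    def best_response(q_rival):
        return max(0.0, (a - c - b * q_rival) / (2 * b))

    q1, q2 = 0.0, 0.0
    for _ in range(50):
        q1, q2 = best_response(q2), best_response(q1)

    print(q1, q2)   # both approach (a - c) / (3*b) = 30, the Cournot equilibrium output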
This is of more practical importance than it looks at first glance.
Take software development, for example. End users, marketing, sales, stakeholders and tech people are all playing a complicated game. Let's assume everyone knows what they want.
Virtually no one is interested in withholding information, but communicating it somewhat efficiently is next to impossible.
If a mediator-based equilibrium were achievable in a shorter timeframe (and you could prove that), it would benefit everyone.
On a similar note, recently a lot of research has looked at the extent to which optimizing a utility function fails to explore a space, and how the introduction of novelty is a crucial alternative strategy:
"Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. "
[1] Lawvere, Conceptual Mathematics
[2] https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem#A_...
[3] https://en.wikipedia.org/wiki/Arrow%E2%80%93Debreu_model
[4] http://coin.wne.uw.edu.pl/mbrzezinski/teaching/HE4/BlaugForm...
[5] Weintraub, Stabilizing Dynamics: Constructing Economic Knowledge