I think you have misunderstood Searle's Chinese Room argument. In Searle's formulation, the Room speaks Chinese perfectly, passes the Turing test, and can in no way be distinguished from a human who speaks Chinese - you cannot "pop the illusion". The only thing separating it from a literal "robot that speaks Chinese" is the insertion of an (irrelevant) human into the room - one who does not speak Chinese and whose brain plays no part in the symbol manipulation. "Internal cause and effect" has nothing to do with it. Rather, the argument speciously equates the human's lack of understanding with a lack of understanding on the part of the room (robot) as a whole.
The Chinese Room thought experiment is not a distinct "scenario" - it is simply an intuition pump of a form common among philosophical arguments: "what if we made a functional analogue of a human brain that works in a bizarre way, therefore <insert random assertion about consciousness>".