If you are okay with that, you no longer need to upload anything, and your problem becomes one of imitation rather than continuity. You can dispense with the philosophical question of identity entirely and sidestep the problem of interfacing with a mind.
Find a way to decompose an individual's personality into progressively more granular dimensions of behavior. Model stimulus -> response reactions as transformations. Sample enough responses from each type of behavior, make linear approximations of the transformations until you achieve a 1:1 simulacrum, and derive a basis for the response space. Your human mind will be a matrix representation of the map between the stimulus and response spaces.
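For what it's worth, here is a minimal sketch of that fitting step, assuming (and it is a very big assumption) that stimuli and responses can be encoded as fixed-length vectors; the dimensions, the noise level, and the random linear "ground truth" standing in for a person are all toy choices:

    import numpy as np

    rng = np.random.default_rng(0)
    d_stimulus, d_response, n_samples = 64, 32, 10_000   # arbitrary toy sizes

    # Hypothetical "person": a fixed linear map from stimuli to responses.
    M_true = rng.normal(size=(d_response, d_stimulus))

    S = rng.normal(size=(n_samples, d_stimulus))          # sampled stimuli
    R = S @ M_true.T + 0.1 * rng.normal(size=(n_samples, d_response))  # noisy responses

    # Least-squares estimate of the stimulus -> response map.
    M_hat, *_ = np.linalg.lstsq(S, R, rcond=None)
    M_hat = M_hat.T

    print(np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))  # relative error

The hard part, of course, is everything this glosses over: defining the encodings, sampling enough of a real person's behavior to fill S and R, and the question of whether the map is anywhere near linear.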
As unrealistic as all of that sounds, it still seems significantly easier to me than uploading (or even interfacing with) a mind.
Modeling a mind as a giant stimulus -> response map is a similar issue to "breaking" a block cipher by building an exhaustive input -> output table, which for 256-bit blocks almost certainly requires more resources than are contained in the whole known universe.
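Rough numbers behind that intuition (a back-of-the-envelope sketch in Python; the atom count is the commonly cited ~10^80 order-of-magnitude estimate, not a precise figure):

    # Storage needed for one exhaustive 256-bit codebook, at the physically
    # impossible budget of one bit per atom in the observable universe.
    entries = 2**256                           # distinct 256-bit input blocks
    bits_to_store = entries * 256              # one 256-bit output per entry
    atoms_in_observable_universe = 10**80      # rough order-of-magnitude estimate

    print(f"codebook entries: {entries:.2e}")          # ~1.2e77
    print(f"bits to store:    {bits_to_store:.2e}")    # ~3.0e79
    print(f"fraction of all atoms at one bit per atom: "
          f"{bits_to_store / atoms_in_observable_universe:.2f}")  # ~0.30

Even at one bit per atom you would tie up a sizable fraction of every atom in the observable universe just storing a single key's codebook, before doing any of the work of collecting the pairs.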
On the other hand, while uploading minds probably has some issues, it involves replicating a thing that we can assume resides in physical space; the questions that remain are whether we can measure its internal structure with sufficient accuracy at all, how we would do so, and whether such a measurement is nondestructive.
And in the end, the problem of somehow interfacing with a mind can mean many things, ranging from a solved issue (if typing this comment counts as "interfacing") to something that to me seems like a purely engineering issue (building new neurally attached "peripherals" for the brain, or emulating existing ones).
I can see why you're drawing a comparison between enumerating stimulus -> response correspondences and "breaking" encryption using a lookup table. But I don't agree that the two would be comparable in difficulty a priori. First, encryption schemes are deliberately designed to hide any exploitable linear structure, such that inverting the sequence of transformations can't be done without (ostensibly) secret information. Second, encryption schemes also approximate maximal randomness, which is entirely foreign to human activity.
For a basic example off the top of my head, consider a hash function. A hash function h : {0, 1}^* --> {0, 1}^n is not actually injective (it can't be, since the domain is infinite and the codomain is finite). However, cryptographic security mandates that it should be infeasible to find a preimage message for any digest in the codomain. Moreover, messages with very small differences in the domain should be mapped to digests with very large differences in the codomain. This artificial noise and complexity doesn't resemble human reactions whatsoever; it's fairly easy to say two different things which will each elicit your region-specific greeting. In general, many human responses have common triggers, and I would further conjecture that you could simplify this categorically by reducing human responses to broader equivalence classes based on language, geographical region, mood, etc.
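A quick standard-library illustration of that contrast (the strings are arbitrary; SHA-256 stands in for "well-designed hash"):

    import hashlib

    # Two messages differing by a single character...
    a = hashlib.sha256(b"hello there").digest()
    b = hashlib.sha256(b"hello thers").digest()

    # ...produce digests differing in roughly half their bits (avalanche effect).
    differing_bits = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    print(f"{differing_bits} of 256 bits differ")

    # By contrast, "howdy" and "how's it going?" are nothing alike as strings,
    # yet both plausibly land in the same equivalence class of human responses:
    # whatever your region-specific greeting happens to be.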
This is not to say it isn't challenging. I would expect building a linear approximation of a human mind using input -> output mappings to be extraordinarily difficult. But it's not artificially difficult, like well-designed cryptography. Breaking well-designed cryptography is intended to be, in a mathematical sense, a maximally difficult endeavor - much more difficult than basically anything else you can possibly do in nature.
More to my original point, you and I can at least have a coherent conversation about building a matrix representation of a human mind using finite stimulus and response spaces. We're in familiar territory, even if it's not ultimately possible or feasible. It's mathematically sound to approach this, given a few well-defined assumptions.
In contrast, I don't know (nor am I confident anyone else knows) how to 1) replicate a human mind via as of yet nonexistent direct brain-interface technology, or 2) upload a human mind using even more nonexistent technology so as to preserve continuity of consciousness. Not only are there rampant unknown unknowns involved in the engineering efforts entailed here, there are also unresolved questions and deep philosophical disagreements in the fundamental assumptions. We're not in familiar territory here.
how to 1) replicate a human mind via as of yet nonexistent direct brain-interface technology
I’d say this is an engineering problem, unlike #2, which is philosophical. For #1, all you really need is to measure all brain cells accurately enough, then recreate the whole thing in a simulation. That could probably be achieved with nanobots or very advanced scanners within a couple of centuries. It might be acceptable to destroy the original in the process, if that makes it more feasible.
This assumes that the entire state space of a human's mind is observable.
Some (most?) people have a rich internal world which can never be captured by looking at I/O relations.
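A toy way to see the worry (the agents and their behavior are made up for illustration): two systems can be indistinguishable through any amount of I/O sampling while one of them carries internal state the other lacks.

    class PlainAgent:
        def respond(self, stimulus: str) -> str:
            return stimulus.upper()

    class InnerWorldAgent:
        def __init__(self):
            self._diary = []  # internal state that never reaches the output

        def respond(self, stimulus: str) -> str:
            self._diary.append(f"what I really thought about: {stimulus}")
            return stimulus.upper()  # identical observable behavior

    a, b = PlainAgent(), InnerWorldAgent()
    probes = ["hello", "how are you", "what is 2 + 2"]
    assert all(a.respond(p) == b.respond(p) for p in probes)  # no I/O difference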
Also, do you believe that by asking e.g. a person like Albert Einstein a reasonable number of questions (a number that a person can answer without getting seriously annoyed or fatigued), you would be able to reconstruct their problem-solving skills? Sounds unlikely to me.
I'd push back against the idea of rich interiority personally, but ultimately I think that's more of a philosophical question. Practically speaking, since we're engaging with this idea without the requirement to achieve continuity of consciousness, I'd argue it's potentially an unnecessary concern. If you will not continue after death, do you personally care whether your replacement has interiority? On the other hand, if you want the things you care about doing to continue getting done by a replacement, you do care that it is capable of everything you are and responds exactly as you would to every situation.
That being said, I think your second challenge is more interesting. Albert Einstein's achievements are the product of extreme knowledge and specialization, his experience, and his personality. I think you'd likely need a "knowledge space" which can be mapped to the human "mind matrices." That complicates things a fair bit, but in the abstract I think a "vanilla human" could plausibly be seeded with Einstein's personality and as much knowledge as you want.
Or we can relax the requirement that “it remains you”, and be content with the idea of cloning the mind.
How do we know that the incomprehensibly weird and advanced minds running our ships and habitats in those unimaginably distant points of space and time won't just decide that virtual smiley faces with our names written on them are close enough to "it remains you"? How do we know they won't one day do a "slightly lossy compression" of the human race?
I left it up to the imagination of the reader. It could be one of our minds after an imperfect cloning, modified by someone for "efficiency." It could be super-optimizing AIs. Tens of thousands of years from now, thousands of light-years distant, who knows what it will be like?
It seems the consensus in this subthread is that gradual replacement involves a potentially unsolvable philosophical problem, unlike cloning. How can you be sure you remain you during gradual replacement, and don't turn into some kind of a philosophical zombie?