My (speculative) master plan for immortality

This is my master plan for immortality (as of 2025). Lots of this has been developed with Augustus Odena. The standard Eisenhower disclaimer about planning applies here: “Plans are worthless, but planning is everything.”

Roughly, the plan is:

  1. Solve continual learning
  2. Build AI-powered glasses
  3. Connect these glasses to the brain
  4. Upload human minds to silicon

I am serious about the 4th stage!

Who am I?

tl;dr: I am an ML researcher and entrepreneur. I invented the Scratchpad technique for enabling transformers to perform multi-step reasoning. I co-founded Adept, which exited to Amazon. See my homepage for more.

The plan

1. Solve the machine learning problem of continual learning

Current machine learning models don’t learn from experience once they are deployed. Models have a training phase, where they learn and acquire knowledge, and a deployment phase, where they interact with users but don’t acquire new knowledge. They do accumulate context in the KV cache during deployment, but relying on the KV cache as memory is too expensive to scale in its current form.

This is clearly not how humans work—we continuously act in the world and acquire new knowledge and skills as we go. In order to effectively learn new skills on the fly (as well as become effective personalized agents), LLMs need to learn continuously like humans.
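To make the gap concrete, here’s a toy sketch (my own illustration in PyTorch, nothing like a real LLM pipeline) contrasting a model whose weights are frozen at deployment with one that keeps taking gradient steps on new experience:

```python
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_batch(shift: float, n: int = 64):
    # Toy data whose underlying function drifts after "deployment".
    x = torch.randn(n, 1)
    y = 2.0 * x + shift
    return x, y

model = nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# "Training phase": learn the world as it looks before deployment (shift = 0).
for _ in range(200):
    x, y = make_batch(shift=0.0)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

frozen = copy.deepcopy(model)  # how LLMs are deployed today: weights never change again

# "Deployment phase": the world drifts (shift = 3). A continual learner keeps
# taking small gradient steps on the experience it encounters; the frozen model doesn't.
for _ in range(200):
    x, y = make_batch(shift=3.0)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

x, y = make_batch(shift=3.0, n=1024)
print("frozen model error:   ", loss_fn(frozen(x), y).item())  # stays high
print("continual model error:", loss_fn(model(x), y).item())   # adapts to the drift
```

Real continual learning is much harder than this (the difficulty is updating on new experience without degrading everything learned before), but the frozen-versus-updating distinction is the crux.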

In this blog post, Dwarkesh Patel gives a good non-technical explanation of the problem and speculates that without this missing piece, scaling up current models may not lead to short AGI timelines. I also suggest here that this problem is solved in human brains via fast updating in the hippocampus.

Chatbot systems such as ChatGPT have incorporated forms of memory, presumably by summarizing past chats and injecting the summaries into the context (and thus the KV cache), but I suspect that this post-hoc workaround is brittle and incomplete.
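To illustrate the kind of workaround I mean, here’s a sketch of the general pattern (my guess at its shape, not OpenAI’s actual implementation): saved memories are just text that gets prepended to every new prompt, not anything the model’s weights have learned.

```python
# Hypothetical "memory" workaround: memories live outside the model as plain text.
saved_memories = [
    "User prefers concise answers.",
    "User is training for a marathon in October.",
]

def build_prompt(user_message: str) -> str:
    # Prepend summarized memories to the context on every request.
    memory_block = "\n".join(f"- {m}" for m in saved_memories)
    return (
        "Known facts about the user:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}\nAssistant:"
    )

print(build_prompt("What should I eat before a long run?"))
```

Because the memories only ever occupy context, they compete for space with everything else and the model never actually internalizes them.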

I won’t discuss technical details of possible solutions, but I believe this is the most important problem in machine learning today.

Side note: I suspect this problem is also related to a) the fact that curriculum learning doesn’t work yet for LLM training, and b) the fact that models seem very token-inefficient at pre-training time. Currently, LLMs prefer to be trained on nicely shuffled, iid data (at least within each training phase). This is very different from how humans learn—humans build up knowledge incrementally. To learn math, you’d first read the grade-school algebra book and do some problems, and only after you’ve learned that content would you move on to the calculus textbook + problems. Imagine trying to learn math by reading a random blend of shuffled chunks from grade-school algebra textbooks and calculus textbooks all at once! Imagine also that you’d need to read every math textbook ever written multiple times. Both of these seem wrong, and neither seems like it would necessarily hold in a world with good continual learning algorithms.
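As a trivial sketch of the data-ordering point (my own illustration, with placeholder chunk names):

```python
import random

# Placeholder "documents" standing in for real training data.
algebra = [f"algebra_chunk_{i}" for i in range(5)]
calculus = [f"calculus_chunk_{i}" for i in range(5)]

# How LLMs are trained today: one big, roughly-iid shuffled mix of everything.
shuffled = algebra + calculus
random.shuffle(shuffled)

# How humans learn: prerequisite material first, then the harder material.
curriculum = algebra + calculus

print("shuffled order:  ", shuffled)
print("curriculum order:", curriculum)
```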

2. Build the next generation of personal hardware: AI-powered glasses

Now that modern AI systems can understand and produce natural language, human-computer interfaces can be much more natural. This means we can finally build a personal computer you can talk to and that can talk back to you (in addition to the standard visual interfaces).

Where should this personal computing system live? Ideally, it should be built into glasses. I’m already annoyed at how often I have to pull my phone out of my pocket to talk to AI chatbots. It’s doubly annoying when I have to open the camera app, take a picture, send it to the chatbot, and then keep the phone out while I wait for a response. Glasses solve all of this by a) sitting right next to your human i/o interfaces (eyes, ears, and mouth) for easy communication, and b) seeing and hearing everything you see and hear. This gives glasses context that a handheld device lacks when it’s not out and actively recording. Others have attested to this. Plus, I already wear glasses, so there might be no additional overhead.

I’m particularly inspired by Accelerando, a novel about the singularity, in which AI glasses feature prominently. Written twenty years ago, it describes an “exocortex”, centralized in a pair of smart glasses, that controls a host of AI agents acting on behalf of the user.

Continual learning is critical for the optimal experience with AI glasses: if the model can learn facts about you, your preferences, and new skills over time, it can be much more useful than a stateless question-answer machine.

I’ve made small angel investments into two startups in this space: Mentra and Raven.

3. Improve these glasses by integrating signals from the brain

Disclaimer: I’m not an expert, so I may well get some of the neuroscience wrong here.

Imagine you had perfect AI glasses. How would you make an even better user experience? If the glasses are already effectively interacting with your human i/o interfaces, the next logical step is to push deeper into the stack and read/write signals directly from/into the nervous system. In other words, I want a computer that can read my thoughts and directly send me information via thought.

Academic research groups and companies such as Neuralink have shown that this is possible in principle by placing electrodes inside the brain to record and stimulate neural activity. However, this requires brain surgery, so right now it’s only used in cases where there’s a medical need that outweighs the downsides of surgery.

One important question is whether non-invasive techniques can capture neural signals well enough to be useful. In an ideal world, lightweight, non-invasive methods such as EEG could be made reliable enough to decode signals from the brain without surgery, and these devices could be incorporated into the AI glasses. Here’s one study by a team at Meta using EMG and EEG to non-invasively read actions out of the brain. If EEG works well, I could easily imagine incorporating technology like this into a glasses product. Here’s a cheap around-the-ear EEG device you can buy now that seems like it would pair well with glasses, and here’s a bulkier cap device, which is perhaps a bit less stylish but presumably gets better signal.
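To be concrete about what “decoding signal” means, here is roughly the simplest possible pipeline (a sketch on synthetic data, my own illustration; real EEG decoding is far messier): window the multi-channel signal, extract crude band-power-style features, and train a classifier to map each window to an intended action.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, n_channels, n_samples = 400, 8, 256  # e.g. 1-second windows at 256 Hz

# Synthetic "EEG": class-1 windows carry slightly more power in a few channels.
labels = rng.integers(0, 2, size=n_windows)
eeg = rng.normal(size=(n_windows, n_channels, n_samples))
eeg[labels == 1, :3, :] *= 1.3

# Crude feature: log variance per channel (a stand-in for band power).
features = np.log(eeg.var(axis=2))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out decoding accuracy:", clf.score(X_test, y_test))
```

The hard part isn’t the classifier; it’s that real scalp EEG is noisy, low-bandwidth, and varies across sessions and people.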

I’ve talked to a few people about decoding signal reliably from EEG. Some are optimistic, some are pessimistic, and others are genuinely unsure if it will work.

Invasive brain-computer interface (BCI) techniques using electrodes or ultrasound might be necessary to extract signal reliably, in which case I might need to sign up for surgery in order to get my desired mind-reading capabilities! Ultrasound-based BCIs could also be exciting because, while they do require placing a device beneath the skull, they don’t require placing anything inside the brain itself, which makes them less invasive than electrode-based systems. See here for a discussion of this.

So the big question is: can signals from the brain be used to improve the user experience with glasses? Can this be done non-invasively? If so, you could have an even tighter loop between humans and machines. As an intermediate step to full mind uploading, I want telepathic control over all my computers, and it seems like this is the correct path to get there.

4. Upload human minds to silicon to achieve immortality

This is even more speculative, but if/when it happens, it will obviously be the most important thing to ever happen in the history of humanity.

There’s a lot to say about mind uploading, but very briefly: if we can record and integrate signals from the brain to improve smart glasses, while recording all the inputs and outputs the person experiences, then we can start to collect data on the mapping between the world and brain state. Using this data, we can build simulated models of mappings from inputs to brain state and from brain state to actions. In my view, a perfect version of these mappings (one that is indistinguishable from the subject) constitutes an uploaded human mind.
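Here’s a cartoon of those two mappings (my own sketch with synthetic stand-in data; real neural and behavioral recordings would be vastly higher-dimensional and messier): learn f from sensory inputs to recorded brain state and g from brain state to actions, so that the composition g(f(·)) is the candidate upload.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

input_dim, brain_dim, action_dim = 32, 64, 8

# Pretend dataset: paired (input, brain state, action) recordings, e.g. from glasses + BCI.
n = 1024
true_encode = nn.Linear(input_dim, brain_dim)
true_act = nn.Linear(brain_dim, action_dim)
with torch.no_grad():
    inputs = torch.randn(n, input_dim)
    brain_states = torch.tanh(true_encode(inputs))
    actions = true_act(brain_states)

# f: world -> brain state, g: brain state -> behavior.
f = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, brain_dim))
g = nn.Sequential(nn.Linear(brain_dim, 128), nn.ReLU(), nn.Linear(128, action_dim))
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
mse = nn.MSELoss()

for step in range(500):
    opt.zero_grad()
    pred_state = f(inputs)
    loss = mse(pred_state, brain_states) + mse(g(pred_state), actions)
    loss.backward()
    opt.step()

# The candidate "upload" is the composition g(f(.)): world in, behavior out.
upload = lambda x: g(f(x))
print("action reconstruction error:", mse(upload(inputs), actions).item())
```

“Indistinguishable from the subject” is then a claim about how well g(f(·)) matches the person’s behavior across every situation you can test.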

There are exciting projects to map the neuronal circuits in the brain, including the FlyWire project (using electron microscopes) and mammalian brain circuit mapping work at E11 Bio (using expansion microscopy and light microscopes). The hope is that once it is collected, this brain circuit data can be used to build simulations of neural activity. One important caveat is that these techniques require preserving and sectioning the brain matter in question, which means they can’t be done on live humans!

It would be great if we could upload a mind without killing the biological person. It’s entirely unclear whether that will be possible, but if it is, it would likely require collecting lots of embodied data, with the subject observing, thinking, and acting in a variety of circumstances.

It’s possible that with enough data you can skip the intermediate brain state and just learn a perfect mapping from inputs to outputs. Here is some super early work in this direction, led by Wai Keen Vong in Brenden Lake’s group, showing that head-mounted camera recordings from a single young child could be used to learn some critical aspects of human language.

In either case (with or without brain recordings), an embodied recording device is necessary, and souped-up AI glasses are a perfect way to collect this kind of embodied data.

It’s worth saying that there’s a huge amount of complexity not being discussed here—for example, ensuring that uploaded minds can change and learn over time. However, I’ll leave the discussion here for now and just reiterate how excited I am about the possibility of mind uploading.

End

Anyway, that’s the plan.

Reach out if you want to chat about any of this!