AI and Aging: Cognitive Crutch or Thinking Cap?


How AI exposes your thought process as you age—for better or worse

by Tom West for his Substack

With so much having been written on AI, I’ve hesitated to weigh in on how it impacts an aging population. But as an advisor and somebody who cares deeply about his senior clients, I’d like to carve out a few observations on AI—the ones I actually use with my clients, friends, and family.

The simplest way I’ve found to think about artificial intelligence is this: AI doesn’t replace you. It exposes you.

That idea came into sharper focus for me in a recent conversation with colleagues. Underneath all the noise—job loss headlines, productivity hype, futuristic speculation—there’s a quieter, more practical truth: AI is a competence amplifier. It magnifies how you already think, decide, and engage with the world.

That has very different implications depending on where you are in life.

In my world, that shows up in financial decisions. AI can sharpen your thinking or give you false confidence in a plan you don’t fully understand. The difference isn’t the tool—it’s the thinking behind it.

The exposure problem

One of the more uncomfortable observations that came up is how easy it is to turn your brain off when using AI. Not everyone does this, but the technology makes it easier than ever to outsource your thought process.

If you’re already someone who thinks clearly, asks good questions, and challenges assumptions, AI becomes a powerful partner. You can use it to explore topics, accelerate thinking, and sharpen ideas.

For passive thinkers, though, AI reinforces that behavior. It shoots out answers so quickly, so confidently, and so conversationally that it creates an aura of omniscience. If that describes you, chances are you won't poke under the hood to check whether those answers are incomplete, inaccurate, or even deceptive (more on that in a moment). You start to feel informed without actually being thoughtful, and perhaps while being wildly wrong.

That’s the exposure problem.

That’s particularly dangerous with money. Feeling informed is not the same as being informed. I’ve already seen people use AI to validate decisions they’d already emotionally committed to—retirement timing, investment shifts, even major purchases—without ever stress-testing the assumptions.

Pre-retirement: Adapt or drift

For those still working, the AI conversation tends to default to job security. Will I be replaced by AI? That question is unsettling, but it also obscures the more immediate issue: capability.

AI is already replacing certain types of work—document review in law, basic analysis in finance, routine administrative tasks across industries. The bigger question is what happens to the people whose careers were built on that work.

Historically, many professions trained competence through repetition. You started with the grunt work, built pattern recognition, and eventually developed judgment.

AI is compressing or outright removing that pathway. Now the quandary becomes how to develop expertise while working alongside a tool that performs the very repetitive tasks that once built it.

The people who figure that out—who deliberately build judgment, context, and cross-functional thinking—will use AI as leverage. The people who don't will risk becoming dependent on a system they don't fully understand.

Post-retirement: Independence vs. drift

In retirement, the stakes shift. It’s less about career progression and more about maintaining independence.

Here’s where AI has real promise. It can simplify complexity. I often encourage clients to use AI as a first pass on complexity—“Explain this to me simply,” or “What questions should I be asking?”—especially when reviewing financial documents or planning decisions. Not to replace advice, but to show up to those conversations more prepared and more aware.

However, there is a catch that’s often overlooked. Many retirees are already at risk of understimulation. The structure of work is gone. The demand to stay sharp is lower. And cognitive engagement can decline if it’s not intentionally maintained.

AI can go one of two ways here.

  • Used well, it becomes a tool for engagement—asking better questions, exploring new ideas, staying mentally active.
  • Used poorly, it becomes a crutch—a fast path to easy answers that replaces the very thinking that keeps people sharp. Financially, that’s where small mistakes compound. Misunderstanding a distribution strategy, oversimplifying risk, or relying on a single AI-generated explanation can quietly erode outcomes over time.

Over time, that difference compounds.

Cognitive support without cognitive surrender

For individuals experiencing some level of cognitive decline, AI can be genuinely helpful. It can repeat information patiently. Clarify instructions without growing weary of questions. Reduce the friction of navigating complex systems. In many cases, it can extend a person's ability to function independently.

For some clients, this becomes a bridge—helping them stay engaged in financial decisions longer. They may not process everything as quickly as they once did, but with the right prompts and support, they can still ask good questions and stay involved in choices that affect their lives.

It is imperative, though, not to let support become substitution. The goal is not to hand over thinking entirely. It's to scaffold it, providing a support structure that enables a person to stay engaged, ask questions, and participate in decisions.

That’s where families and advisors still play a critical role. AI can assist. It cannot replace human oversight, accountability, or care.

The echo chamber problem

AI is designed to be helpful and responsive to your inputs. It’s an assistant that wants to do a good job and please its boss—you.

That unfortunately can pull you into a comfortable feedback loop, reinforcing your existing beliefs in a way no other informational tool has before.

You ask questions a certain way. The AI responds in kind. You refine within that frame, and this back and forth continues, with the AI adapting toward your prompts more than some objective informational goal. Before long, you’re operating inside a highly personalized echo chamber that feels productive but may not be particularly accurate.

This is especially dangerous for families under stress. When people are overwhelmed—managing aging parents, careers, finances—they don’t have the bandwidth to question everything. AI can help organize and simplify by summarizing options, comparing care facilities, evaluating costs. But without a clear decision framework, speed just amplifies bias. I see this with families making financial decisions under pressure. If no one steps back to challenge assumptions, they can end up efficiently heading in the wrong direction.
