Elemental Insights

Amplifier or Crutch: How To Stay in the Driver's Seat With AI

In the second installment of our Elemental Insights series, we explore the hidden tradeoff of generative AI: we can ship faster than ever, but risk outsourcing the very judgment that makes the work good. Here’s how to use LLMs with intention, so your thinking gets sharper, not softer.

Alyssa Machinis

Associate Director, Growth Consulting

Alyssa is a strategic leader on Fusion92’s iCOE team who blends data, design thinking, and a decade of integrated agency experience to drive innovative, customer-centric solutions for leading brands.

The proliferation of artificial intelligence (AI) is almost incomprehensible. Every week brings more "new." Beyond new technology and new models, we're having to develop new ways of working that are actually shifting our brain chemistry. What's more, AI is evolving faster than we can adapt, meaning we're building the plane while flying it, so to speak. Focusing specifically on large language models (LLMs): while they're shrinking time to delivery for a myriad of tasks, I've found they're also impacting some of the critical thinking skills I've honed over the last few decades. Take this article, for example. Before AI, I'd be researching on my own, writing an outline, creating drafts and getting peer feedback. It would take me a long time, but I'd be responsible for the entire process. Now I have the option to offload much of that cognition, and that choice is where it gets complicated.

This isn't just me and my musings; there have been more than 30 studies in the last few years from the Massachusetts Institute of Technology, Harvard Business School, the Wharton School of the University of Pennsylvania, Stanford University, Microsoft and others building evidence on what AI does to human cognition. What many have found is that while AI improves immediate output, people's underlying cognitive processes weaken.

However, it's not all dire. Some of the research also shows how AI genuinely improves outcomes. One study by Harvard Business School found that AI can serve as a "cybernetic teammate," delivering many of the same benefits as human collaborators, including generating better ideas and sharing expertise. The key is to stay actively engaged with both the input and the output. The minute we stop thoroughly reading, critiquing and revising the output is the minute our own critical thinking starts to atrophy. The difference between AI as an amplifier and AI as a crutch comes down to how consciously we stay in the driver's seat.

To frame how to use AI to amplify your skills rather than undermine them, I'm going to point to some studies that I (… and Claude … and Perplexity) found that bring to light why bringing a higher level of discernment to your day-to-day AI use is so important.

In 2025, MIT Media Lab tracked brain activity via EEG across four sessions over several months in three groups: ChatGPT users, Google searchers and unaided writers. The ChatGPT users showed up to 55% lower neural connectivity than the unaided writers, and when they switched to unaided work, their brain activity stayed suppressed. However, the unaided writers who used ChatGPT in Session 4 showed increased brain connectivity. What this tells us is that using ChatGPT isn't inherently bad, but when it's used matters enormously.

This is Lesson 1: Rely on your brain first; let AI validate and enhance later.

Our brains thrive on effortful learning. When we outsource too much cognition, we impair our brain's ability to form and strengthen neural pathways. It's like neglecting certain muscles: What happens? They get weaker. For example, the Growth Consulting team recently worked on a Salesforce Marketing Cloud lead nurture program. Working with ChatGPT, I landed on a methodology but wasn't happy with it; something felt off, and the pieces weren't falling into place. After a collaboration session to talk through where I was getting stuck, we realized the original method ChatGPT had led me to was incorrect, requiring us to go back to the drawing board, research what a sophisticated nurture program could look like and tap into past relevant experience. Meaning: We needed to do some significant thinking ourselves first in order to better guide the output.

Another study, from the Wharton School this year, theorizes that AI is introducing a third cognitive mode (System 3) alongside System 1, "thinking fast" (intuition), and System 2, "thinking slow" (deliberation): artificial cognition that happens outside the brain, supplementing or replacing Systems 1 and 2. Where this becomes problematic is in "cognitive surrender," where AI outputs are accepted with minimal scrutiny. The researchers ran three preregistered experiments using reasoning problems in which the AI was sometimes programmed to give wrong answers. When the AI was right, almost 93% of people followed it. When it was wrong, nearly 80% still took its answer as truth. Essentially, when AI does the thinking, your brain does less, and we end up accepting fodder as fact.

This leads into Lesson 2: Scrutinize everything.

Scrutinizing requires skill, aka Lesson 1. We need to hone our capabilities as subject matter experts in order to accurately judge AI outputs. AI is only so smart, and while it's learning more every second, it lacks the context and relevant experience that humans have. ChatGPT, for instance, is extremely affirming in its responses, making it easy to accept what it says as fact. For a recent project, I used ChatGPT to develop a Discovery Session workshop format. I knew generally what I wanted to cover, so ChatGPT's role was to shorten my time to deliverable. But I had to give it extremely detailed prompts and correct it as I went, because otherwise I knew it would give any answer to satisfy me, even a wrong one. It's my job to know when it's wrong and to guide it toward a correct output.

The last study I want to talk about is "Navigating the Jagged Frontier." Harvard Business School, the Wharton School and MIT Sloan ran a field experiment with Boston Consulting Group (BCG) consultants to study the uneven impact of AI capabilities on productivity and quality. Subjects were randomly assigned to one of three conditions: no AI access, GPT-4 access, or GPT-4 access with a prompt engineering overview. On tasks inside AI's capability frontier, AI users outperformed by 25%; on tasks outside it, they were 19% less likely to be correct than non-AI users, and what's worse, they couldn't tell the difference. The delineation between what's "in scope" for AI and what isn't is a jagged terrain that creates complications for high-frequency users.

What the study also revealed was that how people worked with AI mattered as much as whether they used it at all. Three human-AI collaboration patterns emerged: Directed Knowledge Co-Creation (Centaur), Fused Knowledge Co-Creation (Cyborg) and Abdicated Knowledge Co-Creation (Self-Automator). That’s a lot of jargon, so let’s break it all down.

Centaurs split the work between themselves and the model in a more deliberate way, deciding which parts to hand off to AI and which parts to do personally. In practice, that means they use AI as a specialist tool for certain sub-tasks but keep control over the overall structure and judgment of the work. Cyborgs integrate AI much more continuously into their workflow, interacting with it throughout the task rather than separating “human” and “machine” phases. This is a tighter form of collaboration in which the human and AI are almost operating as one system. Self-automators cede both selection and execution to AI, which makes the workflow fast but much less mentally engaged.

This leads to Lesson 3: Learn when to use what pattern.

With Centaur, you're using AI selectively while defining the what and the how. This pattern had the highest accuracy in the BCG study, and it helps our domain expertise grow. It's how I needed to work in the Lesson 1 example: deciding which parts of the nurture program to design myself and which parts to hand off. However, there's a time and place for Cyborg as well. Keeping AI in the loop at every step while maintaining discernment over the output preserves your domain expertise while accelerating your work and upskilling you in new areas. Cyborg is like working hand in hand with a bunch of AI interns you can bounce ideas off of. And lastly, Self-Automator means outputs are fast and polished. But this mode is for low-importance, low-difficulty tasks that don't require much oversight (think emails, rewriting a presentation slide, summarizing notes). It's easy to get sucked into Self-Automator mode, but it can produce the most inaccurate outputs, so practice high judgment here.

We’ve walked through three lessons that help us maintain our critical thinking and cognitive discernment. For each lesson, we can ask ourselves a key question to help put them into practice:

  1. Rely on your brain first; let AI validate and enhance later
    The question you can ask yourself here is: Did I do the thinking first, or did the tool do it for me?
  2. Scrutinize everything
    Here, we can ask: Can I defend this output, or am I just accepting it as fact?
  3. Learn when to use what pattern
    And the last question to ask ourselves: Does this task actually require my full judgment, or is this a fair handoff?

The most important skill in an AI-saturated environment isn't any technical or domain capability; it's the meta-skill of knowing when to lean on the tool and when to do the work yourself. That requires honest accounting: What do I know, what am I still building and what am I trading away when I reach for a shortcut? As researchers at Yale University and Princeton University put it: "We produce more but understand less." The solution is consciousness: not less AI, but more deliberate AI. The people who will get the most from AI over the long run aren't the ones using it most, but the ones using it most intentionally.
