
Three Years of ChatGPT: What I Wish We’d Framed Differently From the Start

  • Writer: Carolina Milanesi
  • 4 days ago
  • 5 min read

As ChatGPT turns three, I find myself thinking less about the milestones we’ve celebrated and more about the moments where a slightly different framing could have created even greater impact. Hindsight is, of course, a luxury, but it also offers clarity about how narratives shape behavior, how early choices ripple outward, and how the way we introduce a technology often determines how people experience it for years to come.

Looking back, there are three areas where I wish the conversation had started differently: how we talked about productivity, how we explained the tech layoffs, and how we prepared our education systems for an AI-powered world. None of these reflections are criticisms.


They are simply acknowledgments that the framing we choose matters. It influences whether people feel empowered or threatened, curious or defensive, ready or caught off guard. And in the case of AI, perhaps more than with any other technology we’ve seen, framing has been everything.


1. We framed AI around speed, not possibility


In the earliest months of generative AI’s rise, the dominant narrative was one of time savings. Shave off minutes here, automate tasks there, do what you already do, but faster. That message landed quickly because it was practical, tangible, and easy to demonstrate. You could show someone how to turn a messy draft into clean prose in seconds, or how to summarize a document in one click. People understood it immediately.


But looking back, I wish we had started with a different question, not “How can AI make your work faster?” but “How could AI help you work differently?” Because those two ideas lead to profoundly different mindsets.


When people are told a tool will save them time, they look for ways to squeeze it into their existing workflow. Efficiency becomes the goal, and the measure of success becomes: How much quicker can I get this done? But efficiency alone doesn’t unlock imagination. It doesn’t push people to reconsider how work could be redesigned, reimagined, or redistributed. It doesn’t invite exploration. It simply optimizes what already exists.

If, from the very beginning, the message had centered on transformation rather than speed, people might have approached AI with a deeper sense of agency and creativity. Instead of asking, “How can this help me finish my tasks faster?” they might have asked, “What new value can I create if some tasks no longer consume my time?” That shift, from optimization to reinvention, would have sparked a broader mindset change much earlier in the cycle.


And I believe it would have led to more experimentation, more openness, and more willingness to rethink old processes rather than trying to supercharge them. 


Recognizing early on that AI demanded new workflows and opened the door to entirely different ways of doing business could have also driven a much stronger focus on company culture. So much of AI’s success hinges on people’s willingness to change—and on organizations being ready to rethink how they value, support, and measure talent. It’s not just about upskilling or reskilling. It’s about seeing people for the full value they bring, not just the tasks they perform. When companies embrace that mindset, AI becomes not a threat, but a catalyst for a healthier, more human-centered workplace.


Of course, it’s not too late. In fact, I’d argue that as we enter this new wave of AI agents, the greatest returns will only come when we stop trying to fit them into old processes. To unlock their full value, we need to think differently—reorganizing workflows, and in many cases creating entirely new ones.


2. We allowed tech companies to pin layoffs on AI instead of acknowledging the real cause


The second thing I wish we had handled differently is the narrative around the major tech layoffs of the early 2020s. Many companies were eager to be let off the hook for a period of aggressive, unsustainable overhiring during the COVID boom. Admitting that they had over-expanded, or misread the market, was uncomfortable. Allowing AI to absorb the blame was much easier.


And so the idea quickly took hold that AI was already eliminating jobs at scale, long before that was even remotely true. Headlines amplified it. Public conversations simplified it. And the nuance of what actually happened, a market correction after hiring bloat, was lost.

By the time we tried to clarify the picture, the damage had been done. People were already linking AI with job loss, feeding a fear-based narrative that shaped early perceptions of the technology. Instead of entering the AI era with curiosity, many workers entered with anxiety. Instead of seeing possibilities, they saw threats.


Had we pushed back more forcefully against this convenient but misleading narrative, the transition into AI-assisted work could have felt very different. People might have approached these tools with more openness, more willingness to experiment, and more trust that AI was being developed to augment them, not replace them.


3. We weren’t early or bold enough in bringing AI into education


The last area where I wish things had unfolded differently is education. For many institutions, the initial reaction to AI was to restrict it, ban it, or pretend it didn’t exist. Understandably, educators were cautious. They feared plagiarism, shortcuts, and unknown consequences. But in trying to protect learning, we inadvertently delayed preparing students for the world they were about to enter.


Imagine if, from day one, AI had been invited into the classroom as a learning partner rather than treated as a threat. Students could have been taught how to analyze, question, and improve AI-generated content. They could have learned to use these tools for ideation, research, critical thinking, and creativity. They could have understood both the capabilities and limitations early on, building the AI literacy that is now becoming essential across nearly every field.


If schools had opened their doors to AI sooner, we would now have a generation entering the workforce already comfortable with these tools, ready not just to use them efficiently, but to integrate them thoughtfully and responsibly into their work.


Fortunately, we’re starting to see a real shift. Some colleges are now actively encouraging students to use AI as a tool for testing theories, supporting research, brainstorming ideas, and deepening their critical thinking. Instead of treating AI as a shortcut, they’re positioning it as an intellectual partner, something that can push students to question more, explore more, and ultimately learn more.


Looking forward


Three years in, these reflections aren’t regrets. They are lessons, valuable ones. AI has already transformed how we work, learn, and create, and it will continue to reshape industries and societies in ways we are only beginning to understand.


But the next three years don’t have to repeat the narrative of the first three. We can reframe the conversation toward possibility, clarity, and preparedness. We can encourage people to approach AI with curiosity rather than fear. And we can ensure the next generation enters the world not intimidated by new tools, but empowered by them.


Anniversaries are a time to celebrate, but they’re also a time to course-correct. And as ChatGPT turns three, I’m reminded that the story of any technology is written not just by what it can do, but by how we choose to introduce it to the world.



©2023 by The Heart of Tech
