How We Use AI Without Losing Ourselves

Saying what matters, when it matters

 

We get asked about AI constantly at Ladyship. How we're using it, whether it's changing our work and whether we're worried it will replace the things that make what we do distinctive.

We all know that when it comes to the impact of AI, the ground is constantly moving under our feet. And it’s happening faster than governance, norms or best practices can keep pace, let alone interrogate thoroughly. In moments like this, the collective instinct is often to rush toward adoption in order to prove relevance and avoid being left behind.

But speed alone isn’t a strategy. In our view, the real leadership test isn’t whether your company uses AI (we all do at this point) but how deliberately you choose and guide where it belongs in your practice, and where it doesn’t.

At Ladyship, we don’t see AI as a shortcut to thinking or a replacement for judgment. For us, it’s a tool that can support clarity without flattening it and, when used well, the technology can free up space for the work that truly matters. Used poorly, it risks accelerating sameness at scale. The key difference is intent.

 

AI is Not the Author


We use AI across our current workflows and everyday business infrastructure at Ladyship. Perplexity, Claude, ChatGPT, Granola – and our own agents – help us remove friction, speed up the mundane and serve as an initial sounding board for ideas. We might also use platforms like Figma Make to sketch, sense-check, explore possibilities and to get thoughts out of our heads and onto the page faster. As the tools evolve, we evolve with them.

What we don’t use AI for is authorship. We think about AI as belonging in the black and white – contracts, audits, synthesis, cleanup – not the gray, where judgment, taste and strategic ambiguity live. As Harvard Business Review recently argued, generative AI may accelerate competence, but it doesn’t close the gap between novices and experts.

In our work, AI doesn’t originate strategy or generate the first creative leap. It doesn’t decide what matters, what’s worth fighting for, or what a brand should stand behind. We fiercely believe that work still requires human judgment, cultural awareness and lived experience. Things no model can replicate.

AI becomes a crutch for conformity when teams use it to generate an answer faster, when the pressure to produce something – anything! – outweighs the willingness to sit with uncertainty long enough to unlock something genuinely good. We see it everywhere: work that’s technically competent but strategically inert and completely interchangeable.

But used intentionally, AI can lift the administrative fog so that more of our energy, time and headspace can go towards rigorous strategic and creative thinking.

 

Where AI Adds Value


AI genuinely adds value when it tackles repetition, aggregation and busy work (even though it can make errors here too). All those tasks that drain time without adding depth. Used as such, it creates breathing room, which gives senior minds more space to think, challenge, refine and sharpen.

AI also has a role to play in how we help clients build confidence in our strategic recommendations and creative solutions. As more of our clients use agents in their own day-to-day work– including how they review ideas and give feedback– we’re exploring how AI-powered research, testing and creative tools can make our thinking more legible, tangible and convincing without compromising its originality.

However, where we choose not to use AI is just as important. We don’t use it to make taste calls or resolve ambiguity that needs a human lens. While AI can help us move faster in some moments, others don’t call for acceleration at all; they call for human expertise and discernment.

In a world where it’s increasingly easy to generate something “good enough,” the real concern isn’t that AI replaces creativity but that leaders stop pushing for what’s genuinely original, distinctive and brave. “Good enough” used to be the floor. Now it risks becoming the ceiling.

 

Protecting Craft Requires Backbone


One of the most important things we’ve learned is that AI needs to be challenged rather than accepted at face value. It’s fast, useful and occasionally very insightful. But it’s never in charge.

As a team, we argue with it. We tussle with its assumptions. We reject its first answers more often than we accept them. Our instincts, experience and gut always override any output that doesn’t feel right. All of this matters because AI naturally optimizes toward the center. Left unchallenged, it will almost always return you to the mean: competent, coherent and quietly forgettable.

Maintaining this posture matters. After all, we can’t blame AI tools for diluting our craft on their own. We, as strategists, thinkers and creators, give over agency when we surrender our own hard-earned, well-honed judgment for convenience.

The backlash to hollow, AI-generated work is coming from audiences who can sniff out when work feels empty, derivative or unconsidered. That pushback isn’t anti-innovation or technophobic. It’s simply pro-human. And a reminder that in a world where anyone can generate slop, the people who can genuinely think are worth more than ever. As Business Insider recently reported: “the hottest job in tech is writing words.”

 

Ethics Aren’t Optional


We try to be honest about something the industry tends to sidestep, which is that AI also has a physical cost.

Data centers consume enormous amounts of energy. The infrastructure behind these tools is not immaterial, even if it's invisible. We acknowledge that. We support organizations working in regions disproportionately affected by the environmental impact of digital infrastructure. And we treat our AI usage as a strategic decision. It’s not a default behavior or one-time policy but something that requires ongoing scrutiny.

Internally, we keep the conversation live. We ask ourselves: what should AI be used for this week, on this brief, at this stage? These are questions that need to stay open, not something we check off once and file away. That’s why we’ve developed internal guidelines – and continue to refine them – to ensure coherence and accountability in how we use these tools.

We're also watching the bigger picture with clear eyes: The concentration of power in the hands of AI's inventors and owners. The widening gap between those who shape these tools and those who absorb their consequences. The very real possibility that we look back at this moment and realize we were trying to make AI do far more than it should.

 

So, What’s Next?


Like most technologies before it, AI is going through a period of overclaiming, followed by a period of recalibration. We've seen it before. We remember the NFT frenzy. We watched the Web3 gold rush. It tends to play out in a similar pattern: novelty surges, saturation follows, then discernment becomes the differentiator.

The brands and consultancies that will define the next decade won't be the ones who moved fastest but the ones who protected what made them special: their human capital.

 

What This Means at Ladyship


AI is a tool, not the driver. That means using it to create more space for the human thinking that genuinely moves business forward. We’re ruthless about where we draw that line because the real differentiator is the quality of judgment, nuance and courage at the heart of every decision. Ladyship believes the future belongs to teams that know what to automate, what to protect, and what should remain unmistakably, wonderfully human.

 

Written by Rana Brightman & Jemma Campbell
