Why Engineers Resist AI Tools (And How to Change That)
When ChatGPT launched, I was close to wrapping up my stint at Hearst. I watched our engineering and product teams start exploring AI use cases, and we experimented with replacing some of the algorithms we'd written at Semantics3. We tried having GPT-3 classify product attributes we'd spent months writing regex for, and it nailed 90% of them on the first attempt. We were floored by how far we could get even with zero-shot prompting. That's when I knew LLMs were going to transform how we approach engineering.
Fast forward to today, and we've witnessed progress at breakneck speed. My teams are putting together POCs faster than ever with tools like Lovable, analyzing transcripts for action items, and embedding powerful use cases into products, and we've even seen non-engineers use ChatGPT as a sounding board or data analysis tool.
As a VP of Engineering leading a team of 20 and working with other engineering and product teams across my organization, I've been a strong advocate of using AI tooling to enhance our day-to-day workflows. But I've also observed resistance or reluctance from some engineers to adopt these tools. I want to share the most common concerns I hear and how we've been working through them - not to dismiss these concerns, but to open up a broader conversation about how engineering teams can navigate this shift together.
"Is my data safe?"
This is not just a fair concern - it's a responsible question every engineer should ask. Skepticism here is warranted.
Here's what we've learned: while several AI tools use data from free-plan usage for training, most (if not all) paid enterprise plans explicitly exclude customer data from training. OpenAI even lets you opt out on the free plan. The key is reading the data usage policies carefully.
Here's what we've done to address this:
- Maintained a list of vetted tools approved by our legal and security teams that fit our risk tolerance
- Deployed company-wide paid plans with centralized access control
- Set up a private LLM deployment hooked up to Open WebUI for sensitive work (we use AWS Bedrock; see the sketch below this list)
- Established clear guidelines: we strictly forbid using client data for analysis even on our paid ChatGPT/Claude plans
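For illustration, here's a minimal sketch of what the private-deployment path looks like from an engineer's seat: a prompt sent to a model hosted inside your own AWS account via Bedrock's Converse API, with Open WebUI sitting in front as the chat interface. The region and model ID below are placeholders, not a recommendation.

```python
# Minimal sketch: sending a prompt to a model hosted in our own AWS account
# via Amazon Bedrock's Converse API, so sensitive text never leaves the account.
# Region and model ID are placeholders; use whatever your team has enabled.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the key risks in this internal incident report: ..."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```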
The goal isn't to eliminate risk entirely - it's to understand and manage it appropriately. Every organization will have different risk tolerances, and it's worth having these conversations explicitly rather than letting uncertainty prevent experimentation.
"Is it going to cost a fortune?"
I've had teammates worry that trying these platforms will rack up unexpected charges, especially with usage-based APIs. "What if it costs more than $10?" they ask.
Here's how I think about it: if spending $10 saves even a few hours of effort, or cuts delivery time from weeks to days, that investment pays for itself many times over. But I understand the anxiety around unpredictable costs.
To address this, we've implemented LiteLLM to manage per-API-key costs within the team. This lets us set budgets, prevent surprises, and, most importantly, allow engineers to experiment without fear of an agent eating up all the credits.
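As a concrete example, here's a minimal sketch of how a budget-capped virtual key might be issued through a LiteLLM proxy's key-management endpoint. The proxy URL, master key, and model names are placeholders, and exact field names can vary across LiteLLM versions, so treat this as a starting point rather than a drop-in script.

```python
# Minimal sketch: issuing a budget-capped virtual key from a LiteLLM proxy,
# so an engineer can experiment freely without running past an agreed spend.
# Proxy URL, master key, and model names are placeholders.
import requests

LITELLM_PROXY_URL = "https://litellm.internal.example.com"  # placeholder
LITELLM_MASTER_KEY = "sk-litellm-admin-key"                 # placeholder; keep in a secret store

response = requests.post(
    f"{LITELLM_PROXY_URL}/key/generate",
    headers={"Authorization": f"Bearer {LITELLM_MASTER_KEY}"},
    json={
        "key_alias": "alice-experiments",            # who the key belongs to
        "max_budget": 10.0,                          # hard cap, in USD
        "budget_duration": "30d",                    # budget resets every 30 days
        "models": ["gpt-4o", "claude-3-5-sonnet"],   # models this key may call
    },
    timeout=10,
)
response.raise_for_status()
print(response.json()["key"])  # hand this key to the engineer
```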
As engineering leaders, it's important to set clear guidelines around experimentation costs and create an environment where teams feel safe trying new approaches without budget anxiety.
"It makes mistakes, so I don't trust it"
This concern often comes from early experiences with LLMs - limited knowledge, inconsistent outputs, the feeling that you could have done it faster yourself, or that the code it produces will be unmaintainable. That's valid.
The landscape has shifted dramatically. With advances in reasoning capabilities and agentic frameworks, first-pass outputs are significantly more reliable. But here's what hasn't changed: engineers still need to verify the output. Think of it less like trusting a senior engineer and more like collaborating with a very capable but inexperienced team member.
The real skill isn't blindly accepting AI outputs - it's learning to:
- Break down complex tasks into smaller, well-defined pieces
- Prompt effectively to get the results you need
- Review and validate outputs critically
- Iterate when something isn't quite right
- Keep your codebase's documentation up to date
We've seen engineers on our team use Claude Code to refactor complex modules, not by accepting what it generates blindly, but by using it to explore different approaches faster, understand trade-offs better, and ultimately reduce timelines significantly while maintaining full ownership of the code.
"I'm worried about becoming deskilled"
This is one of the most thoughtful objections we hear, and it deserves serious consideration. There's a real risk that over-reliance on AI could erode skills, especially for early-career engineers who haven't fully developed their problem-solving muscles yet.
Our perspective: AI should amplify capabilities, not replace thinking. The engineers we see thriving with AI are those who:
- Use it to handle boilerplate and repetitive tasks, freeing time for complex problem-solving
- Leverage it to explore unfamiliar domains faster, then dive deep to truly understand
- Treat AI-generated code as a starting point for learning, not a black box to copy-paste
If you find yourself blindly accepting AI outputs without understanding them, that's a red flag. The goal is "you plus AI" being more effective than "you alone" - not "AI instead of you."
"I need to stay ahead of this"
I won't sugarcoat it: AI is reshaping what engineering work looks like. But the future isn't "AI replaces engineers" - it's "engineers who effectively leverage AI will be far more productive than those who don't."
The engineers who will thrive aren't necessarily the most senior or experienced today. They're the ones who are curious, adaptable, and willing to evolve their workflows. This doesn't mean everyone needs to become an AI expert overnight. It means staying open to experimentation.
"This feels overwhelming - where do I even start?"
If you're feeling this way, you're not alone. The pace of change can be daunting, especially if you're earlier in your career.
My advice: start small.
Phase 1: Low-risk assistance
- Use AI for writing documentation or test cases
- Have it explain unfamiliar code or concepts
- Generate boilerplate code for well-understood patterns
Phase 2: Collaborative development
- Pair-program with AI tools like Cursor or GitHub Copilot
- Use it to explore different implementation approaches
- Let it handle repetitive refactoring while you focus on architecture
Phase 3: Advanced workflows
- Experiment with agentic frameworks
- Build custom RAG systems for your domain
- Design AI-augmented development workflows
You don't need to master everything at once. Pick one small, repetitive task you do regularly and see if AI can help. Learn from that experience, then gradually expand.
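To make "start small" concrete, here's a minimal sketch of a Phase 1 task: asking a model to draft pytest cases for an existing function, which you then review and edit before committing. It assumes the OpenAI Python SDK with an API key in the environment; the model name and the example function are illustrative only.

```python
# Minimal sketch of a Phase 1 task: ask a model to draft pytest cases for an
# existing function, then review and edit the output before committing anything.
# Model name and the example function are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source = '''
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
'''

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your team has approved
    messages=[
        {"role": "system", "content": "You write concise, idiomatic pytest test cases."},
        {"role": "user", "content": f"Write pytest tests for this function, including edge cases:\n{source}"},
    ],
)

print(response.choices[0].message.content)  # a starting point, not a finished test suite
```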
Creating space for different adoption speeds
What we've learned as a leadership team: this isn't a mandate to transform overnight. Different people will adopt at different paces, and that's okay. What I'm suggesting is openness to experimentation.
If engineers are skeptical, we encourage them to start with low-stakes tasks. If they're concerned about data security, we review our policies together. If they're worried about costs, we help set up appropriate limits. If they're anxious about deskilling, we build in review processes that ensure understanding.
The role of engineering leadership isn't to force adoption - it's to create an environment where teams feel safe exploring these tools, learning from failures, and gradually building confidence.
Approach AI with curiosity and healthy caution
Every time I try a new AI tool, I go through the same cycle: the first two hours leave me in awe of what's possible. Then a sense of unease creeps in as I wonder whether we're moving fast enough, whether we risk falling behind.
I think both reactions are healthy. The awe keeps us curious and experimental. The unease keeps us honest about the pace of change and the need to evolve.
What we should be building in our engineering organizations is a culture where we embrace AI tools not out of fear of irrelevance, but out of excitement for what becomes possible when we augment our capabilities. Where we approach new technologies with both optimism and critical thinking. Where we support each other through the learning curve rather than judging those who adopt more slowly.
The future of engineering isn't AI versus humans. It's humans and AI working together in ways we're still figuring out. And that's a conversation worth having across the industry.
A practical suggestion: Monthly AI exploration day
Here's something we're implementing that might work for your team: set aside one day a month for engineers to explore a new AI tool or capability. No specific deliverables required - just pure exploration.
The rules are simple:
- Pick any AI tool you're curious about
- Spend the day experimenting with a real problem or task
- Share what you learned (even if it didn't work out)
- If there's a cost involved, the company covers it
This removes the financial barrier, creates protected time for learning, and normalizes experimentation. Some explorations will lead nowhere. Others might transform how your team works. The key is building a culture where trying new things is expected, not exceptional.
If your organization doesn't have budget flexibility for this, start even smaller: a monthly lunch-and-learn where someone demos a tool they've been using, or a shared Slack channel where people post their AI experiments. The format matters less than the habit of continuous exploration.
I'd love to hear your thoughts and experiences. What resistance have you encountered? What's worked in your organizations?