AI: You’re Holding It Wrong
We all remember the infamous antenna “scandal” surrounding the iPhone 4. Suddenly, Apple’s prized smartphone had spotty reception if you held it a certain way. The quip attributed to Steve Jobs—“It’s not a bad design; you’re just holding it wrong”—sparked endless debate. Yet if we look closely, that “blame the user” scenario has surprising parallels to how many of us are treating AI today. The technology, while powerful, often fails in the hands of those who don’t know how to properly leverage it.
In the last few weeks, I’ve worked closely with different types of users—from an executive curious about AI’s business potential to an individual contributor wrestling with AI to write basic code. Their experiences taught me one clear lesson: success with AI isn’t just about having access to the tool; it’s about knowing how to “hold it” correctly. If you’re an IT decision maker looking to adopt or expand your use of AI, here’s what you need to know.
A Tale of Three AI Users
Tale #1: The Executive Who Only Wrote Poetry
The first person I spoke with was an executive at a nonprofit organization. Surprisingly, he had been using AI only in his personal life—to craft poems for fun. He was completely unaware of its professional capabilities. When employees began requesting access to ChatGPT, he didn’t see the point.
But once I explained how AI could help augment (and in certain cases, even replace) a grant writer’s efforts—speeding up research, summarizing key findings, drafting well-structured proposals—he suddenly saw the bigger picture. Grants, after all, revolve around data, narrative, and repetitive structuring—perfect tasks for AI to assist with. Walking him through a real use case specific to his nonprofit helped him recognize that AI wasn’t just about creative wordplay; it could accelerate mission-critical processes.
Key Takeaway: If you only see AI as a novelty, you’ll miss out on productivity and cost-saving opportunities. Educate your leaders early—show them real use cases tied to ROI and organizational impact.
Tale #2: The Aspiring Developer with Outdated Skills
Next, I spoke with a friend attempting to build a simple database application using JavaScript and web forms. He hadn’t touched serious coding in 30 years, yet was convinced AI would let him bypass the need for deeper technical knowledge. After all, AI can generate code, right?
While AI can indeed provide boilerplate scripts and suggestions, my friend spent hours refining his prompts because he didn’t know how to talk to AI in “developer language.” He ended up scouring results for errors, rewriting prompts, and verifying code sections bit by bit. Ultimately, he recognized that he needed to invest time in re-learning basic coding principles to even understand AI’s suggestions properly.
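To illustrate the kind of subtle issue he kept running into (a hypothetical sketch, not his actual code), consider an AI-generated form-validation helper that looks perfectly reasonable to a novice but rejects legitimate input:

```javascript
// Hypothetical AI-generated helper for a web form.
// It reads fine at a glance, but treats the number 0 as "missing":
// !value is true for 0, "", false, and NaN alike, so a quantity
// field legitimately set to 0 is rejected.
function isFieldFilled(value) {
  return !value ? false : true;
}

// A corrected version checks explicitly for the cases that
// actually mean "no input" (null, undefined, empty string).
function isFieldFilledFixed(value) {
  return value !== null && value !== undefined && value !== "";
}

console.log(isFieldFilled(0));       // false — the subtle bug
console.log(isFieldFilledFixed(0));  // true — 0 is a real answer
```

Spotting a bug like this requires knowing how JavaScript treats falsy values—exactly the kind of foundational knowledge my friend discovered he still needed.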
Key Takeaway: AI is not a magical replacement for expertise. To use it effectively—whether for coding, analytics, or content creation—you still need a foundational understanding of the domain.
Tale #3: The CTO Who Hasn’t Laid Off a Single Developer
Finally, I met a CTO who proudly shared that AI was now responsible for 50% of his development team’s output. Impressive, right? Given the doomsday headlines about AI replacing jobs, I had to ask: “How many developers have you laid off?” His answer: zero.
Instead of cutting staff, the company simply accelerated its development cycles: projects that once took months now take weeks. AI augments the team’s capabilities, handles routine coding tasks, suggests bug fixes, and frees developers to focus on higher-level work. Developers still need to review the AI-generated code, adjust architecture, and ensure security compliance, so the human element remains essential.
Key Takeaway: A well-orchestrated “human + AI” workflow can dramatically increase productivity without forcing layoffs. AI becomes the accelerator, while your people remain the strategic drivers.
Are We Getting to AGI This Year? Probably Not.
Amid the excitement, there’s a big question on the minds of tech leaders: Are we on the cusp of Artificial General Intelligence (AGI)—the type of AI that can perform any intellectual task a human can, only better? Based on my experiences, we’re still a long way off.
Current AI tools—language models like ChatGPT—excel at pattern recognition, summarization, and even creative tasks, but they don’t possess the broader problem-solving or “common sense” reasoning we associate with human intelligence. They also rely heavily on the quality of the data and the context provided by users. This means they can easily stray into inaccuracy if prompts are imprecise or if the dataset is outdated or biased.
No, AI isn’t “all-knowing,” nor can it autonomously complete complex assignments without humans in the loop. It’s more akin to a sophisticated pattern-matching machine that needs ongoing human guidance—someone to double-check the facts, make strategic decisions, and provide domain expertise.
Will AI Bring 30% Productivity Gains Across the Board?
The second burning question I often hear is about productivity. Many organizations dream of a near-term future where AI delivers double-digit (even triple-digit) efficiency gains. Is that likely to happen for everyone? Unfortunately, the answer is no—at least not in the short term.
Why?
- Data Readiness: Companies with well-documented processes, clean data, and robust governance policies have a head start. AI thrives on structure. If your data is scattered or poorly labeled, any AI model will struggle to generate accurate insights.
- User Training: Like the friend learning JavaScript, end users need to know how to prompt AI, interpret results, and integrate suggestions into workflows. AI “fails” most often when users don’t know how to harness it effectively.
- Guardrails: Summaries and content generation are great, but if you don’t have the right governance and compliance measures in place, you risk misinformation or breaches of sensitive data.
In other words, only the organizations that invest in data management and user upskilling will see dramatic productivity boosts. The rest will either lag behind or experience disillusionment with AI’s real capabilities.
How to “Hold” AI Correctly
Step 1. Educate & Upskill
Provide ongoing training for employees at all levels—from the executive suite to individual contributors. Understanding the basics of prompt engineering, data ethics, and AI’s limitations is crucial for every role.
Step 2. Establish Strong Governance
Before rolling out AI solutions, ensure your organization has robust data governance and compliance frameworks. This not only reduces risk but also helps AI produce more relevant and accurate results.
Step 3. Adopt a “Human + AI” Workflow
Resist the urge to see AI as a direct replacement for staff. Instead, use AI to handle repetitive tasks, free up people for strategic thinking, and accelerate innovation. Keep humans in the loop to sanity-check outputs and guide AI in real time.
Step 4. Tailor Use Cases to Business Outcomes
Don’t introduce AI for AI’s sake. Identify bottlenecks in your workflows or manual tasks that eat up too many hours. Pilot AI in those specific areas where you can measure ROI. Success in targeted areas will build momentum.
Step 5. Iterate & Scale
AI deployments aren’t “fire and forget.” Gather feedback, measure effectiveness, and refine prompts, models, and data sources regularly. As you learn, scale AI to adjacent processes and teams.
Final Thoughts
AI is already here, and its potential is enormous. But as with the iPhone 4’s antenna debacle, sometimes we’re quick to blame the technology instead of examining our own approach. It’s not that AI is “badly designed”; it’s that many organizations—and the individuals within them—are simply holding it wrong.
For IT decision makers, the path forward is clear: invest in the human side of AI (training, governance, strategy) and provide the right guardrails to keep AI outputs relevant and accurate. Whether it’s an executive drafting grant proposals, a novice coder building web forms, or a CTO leveraging AI for half of all code, the message is the same. AI is an augmenter—not an out-and-out replacement. And it’s a powerful one if you know how to hold it.
If you’re ready to stop “holding AI wrong,” start by assessing your organization’s data processes, training needs, and governance frameworks. Identify where AI can make the biggest immediate impact. Then, pilot a targeted solution, learn from it, and scale up. The future of AI in your organization is in your hands—just make sure you’re holding it right.
Keith Townsend is a seasoned technology leader and Chief Technology Advisor at Futurum Group, specializing in IT infrastructure, cloud technologies, and AI. With expertise spanning cloud, virtualization, networking, and storage, Keith has been a trusted partner in transforming IT operations across industries, including pharmaceuticals, manufacturing, government, software, and financial services.
Keith’s career highlights include leading global initiatives to consolidate multiple data centers, unify disparate IT operations, and modernize mission-critical platforms for “three-letter” federal agencies. His ability to align complex technology solutions with business objectives has made him a sought-after advisor for organizations navigating digital transformation.
A recognized voice in the industry, Keith combines his deep infrastructure knowledge with AI expertise to help enterprises integrate machine learning and AI-driven solutions into their IT strategies. His leadership has extended to designing scalable architectures that support advanced analytics and automation, empowering businesses to unlock new efficiencies and capabilities.
Whether guiding data center modernization, deploying AI solutions, or advising on cloud strategies, Keith brings a unique blend of technical depth and strategic insight to every project.