I deliver products that yield a 10x improvement in the lives of users. A product has to solve real problems for real people. I start with discovery and continuous experimentation to develop and ship products that make a substantial improvement in customers' lives.
I build products that endure because they are grounded in real human behavior, measurable outcomes, and disciplined execution. With a foundation in Industrial-Organizational Psychology and nearly two decades leading AI-enabled and mission-critical platforms, I translate complex science and emerging technologies into scalable, market-relevant solutions.
Cookie-cutter PMs bound by inflexible processes lack the vision to create lasting products that matter. My mission: a 10x benefit for users and at least a 2x ROI for the company.
Currently independent, building AI-powered applications with Cursor. Open to Principal PM and Director of Product roles where measurable user impact matters as much as technical sophistication.
Tactics usually cover what to do now, whereas strategy asks "where are we going, and how do we get there?" That may be an oversimplification, but let's not debate definitions. They're well established, and my descriptions are for context, not for Oxford.
So, when presented with a problem, a tactically minded PM will identify solutions that solve the problem. A strategic PM will think about the larger implications of the problem and its possible solutions: How will this problem impact us downstream? How can we turn this into a win? What are the consequences of solution A versus solution B?
But what does it mean to think like an ML engineer? A recent post by Moe Ali describes a hypothetical PM solving AI errors with a test agent and a robust testing scenario. A valid strategy, but is this strategic thinking or tactical thinking?
This PM is surely solving problems and there are a lot of outputs. But what is the outcome? How down in the weeds is this approach and does it set up the product for success, or just move past one pothole on the roadmap? A PM's role extends well beyond testing and validation. And does a heavy focus on engineering skill turn a creative thinker and problem solver into a hammer in search of a nail?
Does getting too technical eliminate strategic thinking? Does getting too strategic overlook the tactics that actually win the daily fights? Where does one need to land to optimize outcomes?
I think the answer lies in your job title and seniority as well as team composition.
Principal PMs, Director level, and up should be focused on 2nd- and 3rd-order effects: very strategic. There should be others on the team, project managers, a lead engineer, etc., who can contribute valid approaches to solve technical problems.
Senior PMs balance tactics and strategy and leverage problem solving. A Senior PM should look past the solution to the intended outcome and any unanticipated consequences that may arise. This is closer to systems thinking and shifts the focus to outcomes versus outputs.
Junior PMs should solve problems, overcome hurdles, and move the product along. There should be senior staff to mentor and guide these PMs to mature their thinking as they move from 0 to 1.
One caveat: all of this hinges on team composition. If the other members of the product team are empowered, good tactical contributions will likely come from those skilled and competent team members. If it's a feature team, or worse, outsourced work, then even a Principal PM will be forced to operate at the tactical level to overcome obstacles.
This is foundational to the Product Model and essential to building a lasting product that customers continue to love.
If your roadmap cannot be changed, if newly discovered, high-value features can't be prioritized over things already on the list, then you're not going to fulfill your customers' unmet needs. They will go elsewhere.
It's funny. I got a very similar question from a recruiter the other day: how would I rectify a disconnect between customer needs and the roadmap? I don't see any tension there. This situation is probably very common in empowered product teams.
CI/CD should include continuous discovery and experimentation. Those who practice continuous experimentation should expect their assumptions to change with new information.
Isn't that the whole point of testing in the first place? To validate or refine our hypotheses?
Act One: Discovery. This is where the plot is revealed. The unmet need, the hidden friction, the problem nobody has named yet. Thrilling work if you let it be.
Act Two: Testing & Experimentation. Each minimally viable experiment (MVE) is a suspect to be interrogated. Those that don't qualify for production are either victims, if we're stretching this into an Agatha Christie story, or are ruled out as suspects.
Act Three: Delivery. This is the big reveal. Poirot, or, more currently, Benoit Blanc, knows whodunit and delivers a grand monologue that explains the murder and the motive and implicates all the characters in some way.
And this is sort of where my analogy falls apart. Delivery is terminal, and no product team wants a short product lifecycle. There's no room for CI/CD in this story.
Unless we're talking sequel. Or trilogy.
Ooh, Dynasty! Let's Friday the 13th this thing. That franchise endured for decades.
I've unlocked $50M in sustained funding and doubled graduation rates from 50% to 100%, not by managing backlogs, but by turning product teams into missionaries and customers into evangelists.
Great products don't happen by accident. They happen when a PM obsesses over the right problem, aligns the team around real customer needs, and maintains a value creation mindset from discovery through delivery.
My practices make missionaries of product teams and evangelists of customers, and they ensure products meet the needs of the business, across segments and across the full product lifecycle.
I'm looking for empowered teams ready to discover their next flagship product. If that's you, let's talk.
How do you want to approach AI? Do you:
1. Lay off anyone not directly contributing to your AI initiative, and use those savings to go all in?
2. Wait, watch the tea leaves, and see how things play out?
3. Invest earnestly (but don't bet the farm) in an AI initiative, starting with discovery and needs assessment; hire (not fire) key people who can further your AI investigation, and keep your talented staff involved so that when you find those winning solutions, you can quickly bring them to market?
Option 1 feels like putting half your money on one number on the roulette table. Option 2 feels like you might miss the boat entirely. Option 3 is the right combination of risk and bold initiative to produce meaningful results.
This AI renaissance is looking more and more like the dotcom bubble. Every organization is going mad with FOMO, or counting the money before it's been made, thinking AI just automatically comes with a value proposition. It's not that the internet didn't add value (it's omnipresent today); it's that so many just assumed the money would flow without asking "how?"
In 2025, companies aren't asking "how?" either. Will there be a similar bubble? Folks are adamant on both sides. Regardless, there will be severe consequences: some good, some bad, some great. And a whole lot of people are going to have an #OpenToWork banner on their profile.
And yes, I used an em dash. On my own. No help from a chatbot. And I am proud of it —
We cannot turn PMs into coders. It's not that PMs can't code; it's that they shouldn't. They shouldn't do what someone else can do better.
Further, a PM is one person. The product lifecycle takes many minds. Taking one away, even if it's just so the PM can run a few experiments, takes away an informed opinion, a creative perspective, a voice of reason, and an opportunity to stop or fix things before time and resources are wasted.
When we make PMs wear more hats, who's to say they should not wear all of them? What's next? PMs must be UX folks too? And PMMs? And Account Managers? Customer Service Reps? Why not just have a PM do it all with their trusty chatbot copilot?
"...Doing A, B, C"
"to x, not just y."
"...at the intersection."
My resume stank of ChatGPT. I'm one of those, now. I don't know how I got into the trap. But I did. It was just so easy. I knew I should be more diligent. I knew if I wasn't careful, it would get out of control. I figured I could quit anytime.
How many did I send out like this? With whom did I blow my chances?
I shook off the stink of AI as best I could. I reviewed and rewrote every bullet. But I fear there's an AI essence that persists.
For all you job seekers out there โ don't let a few job descriptions that reek of AI convince you that you can offer the same. Hiring managers have the power right now. And they're inundated with applications. Don't make it easy for them to dismiss you.
Write your own resume.
I started with formal instruction, taking courses on AI Product Management and trying to add things to my resume that would help me stand out. At the same time, I've been doing a little vibe coding.
Using OpenAI, I've built a Ghost Job Detector Chrome extension, a Resume and Cover Letter Customizer, and an article writer called Product Salad, which helps me write those satire stories, inspired by The Onion, about product management and AI.
Today, I am building an app for Alpine ski racing technicians โ Alpine Prep Pro. It isn't really revolutionary. But it is giving me the opportunity to keep my skills fresh and develop some more nascent ones.
I'm starting pretty basic. The app lets users put in information about skiers, skis, tuning equipment and wax, and events. It provides recommendations for waxing and tuning for an upcoming race or training day โ but it assumes the user knows how to tune and wax. It's really just a database for equipment and events to start.
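To make the idea concrete, here's a minimal sketch of what that equipment-and-events database plus recommendation step could look like. This is illustrative only, not the actual Alpine Prep Pro code; the names (`Wax`, `Event`, `recommend_wax`) and the temperature-range matching rule are my hypothetical simplifications.

```python
from dataclasses import dataclass

# Hypothetical data model; the real app tracks skiers, skis,
# tuning equipment, wax, and events.
@dataclass
class Wax:
    name: str
    temp_min_c: float  # coldest snow temperature the wax is rated for
    temp_max_c: float  # warmest snow temperature the wax is rated for

@dataclass
class Event:
    name: str
    snow_temp_c: float  # expected snow temperature on race day

def recommend_wax(inventory: list[Wax], event: Event) -> list[Wax]:
    """Return the waxes in the user's inventory rated for the event's snow temp."""
    return [w for w in inventory
            if w.temp_min_c <= event.snow_temp_c <= w.temp_max_c]

# Example inventory and an upcoming race (made-up values).
inventory = [
    Wax("Cold Glide", -20.0, -8.0),
    Wax("Mid Glide", -10.0, -2.0),
    Wax("Warm Glide", -4.0, 2.0),
]
race = Event("Saturday GS", snow_temp_c=-6.0)
picks = recommend_wax(inventory, race)
print([w.name for w in picks])  # only "Mid Glide" covers -6 °C
```

Even at this "just a database" stage, keeping the recommendation as a pure function over stored records makes it easy to swap in richer logic (humidity, snow condition, brand charts) later without touching the data model.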
I'll expand the app with more support features next, making it more accessible to the other parents of junior racers who may not consider ski maintenance a fun hobby. The app will become more instructional as I mature it.
The point here is that I'm borrowing from the Lean Startup movement, trying out the build-measure-learn mindset.
I see posts about Claude or OpenAI enabling a PM to produce PRDs during lunch, or produce a prototype in the morning and a second in the afternoon after using that same copilot to test the first. AI is not going to make a PM a one-person product team. It is not going to exponentially accelerate product development.
It may continue this trend of rushed, under-tested, poorly thought-out products that fail. AI may help us automate some things. It may even help us get a prototype in a morning. But it won't do the thinking for us.
AI is the ultimate yes-man. Even when you program your chatbot to be critical and merciless, it is still going to enable you to mature a mistake right into the danger zone.
What makes a good Product Manager valuable is his or her ability to think. Dozens of decisions need to be made every day, in every stage of the product lifecycle. Those decisions cannot be made in isolation. They shouldn't be made with just a chatbot either ("Yes. Brilliant. You're killing it today!"). They should not be made without getting real data from real humans โ users, designers, developers, executive sponsors, competitors.
The decisions we make as product managers come from using messy data, experts' opinions, and our own internal neural network. We make decisions based on experience, data, and the outcomes of the last decision we made.
If you want to be a great PM, or if you want to hire one, prioritize good analytical skills. Anyone can move sticky notes on a Kanban board. But not everyone can figure out the "what" or "why" that differentiates a Claude from a Fraud.*
*"Fraud" being my hypothetical failed AI initiative from a hypothetical company that never got off the ground.
Vibe coding isn't just a skill for your resume. It should be a hobby. If you want to land a job in AI, you have to genuinely like AI. You can't just see its potential for your company. You have to try, experiment, ideate, and reveal that potential.
You should have been doing this in 2022. But it is not too late. You just have some aggressive catching up to do.
So, if you want to be a part of the AI revolution, don't get on the bandwagon. Get in the saddle, grab the reins, and start leading the way.
(Forgive the clichés. This was done without the aid of an LLM.)
If you're building products that must perform in complex, high-consequence environments, and you care about measurable user impact as much as technical sophistication, let's talk.