Original source: saasberry
This video from saasberry covered a lot of ground; six segments stood out as worth your time. Everything below links directly to the timestamp in the original video.
If your organisation has rolled out AI tools without teaching people how to configure them, it has effectively outsourced judgment to a system calibrated for nobody in particular.
Iowa State Course Makes Building a Personal AI Assistant the First Assignment
Most organisations stop at teaching employees how to write a prompt, but Bill Schmarzo argues that training the model itself is what matters — and his Iowa State course acts on that conviction from day one. Every student constructs a personal AI assistant, then feeds it peer-reviewed research, Socratic questioning methods, and a catalogue of nineteen human decision-making biases, so the tool reasons against confirmation bias rather than toward it. The underlying logic rests on three declared realities: AI executes only what it is trained to do, relentlessly optimises whatever variables it is given, and amplifies pre-existing biases.
What this exposes is the structural gap between AI adoption and AI literacy — companies deploying tools without teaching people to shape them are, in effect, delegating decisions to an untrained proxy. That gap compounds risk as autonomy increases.
"AI will only do what you train it to do. If you don't train it to do something, it ain't going to do it."
Treating AI as a Productivity Tool Is the Biggest Strategic Error Leaders Are Making, Schmarzo Argues
Organisations fixated on using AI to work faster are, Schmarzo contends, replicating the trajectory of the spreadsheet — broadly adopted, quickly commoditised, and ultimately generating no sustainable advantage. The more durable path runs through effectiveness first, choosing three high-impact marketing campaigns over ten undifferentiated ones, and then through outright re-engineering, eliminating steps entirely rather than accelerating them. It is at that third stage, doing things differently rather than merely faster or better, that compounding value begins to accumulate.
The structural issue here is that productivity gains are, by definition, available to every competitor simultaneously, whereas re-engineered processes embed institutional knowledge that is harder to replicate and grows more valuable over time.
"Productivity is not a sustainable differentiation. It's like the spreadsheet — soon everybody's using one, there's no advantage."
Schmarzo's 'AI in the Middle' Model Reframes How Sales Forces Are Structured
Rather than assigning salespeople to territories by averaging past performance, Schmarzo describes building detailed causal propensity models on both sales staff and customers — mapping each person across dimensions of growth, financial profile, technology readiness, risk tolerance, and organisational adoption — and then letting AI act as a matchmaker, pairing the right salesperson to the right account. Domain-specific language models then surface relevant content in real time as the customer relationship develops, functioning as what Schmarzo calls a Yoda or Sherpa embedded in the conversation itself.
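The video does not show an implementation, but the matchmaking idea can be sketched minimally: express each propensity profile as a weighted vector over the dimensions Schmarzo names and score salesperson–account pairs by alignment. The dimension names, weights, and scoring rule below are illustrative assumptions, not Schmarzo's actual models.

```python
# Illustrative sketch only: score salesperson/account pairs by how well
# their propensity profiles align. Profiles and the dot-product scoring
# rule are invented for illustration, not taken from Schmarzo's models.

DIMENSIONS = ["growth", "financial", "tech_readiness", "risk_tolerance", "adoption"]

def match_score(salesperson: dict, account: dict) -> float:
    """Higher when the salesperson's strengths line up with the account's profile."""
    return sum(salesperson[d] * account[d] for d in DIMENSIONS)

def best_match(salespeople: dict, account: dict) -> str:
    """Return the salesperson whose profile best matches the account."""
    return max(salespeople, key=lambda name: match_score(salespeople[name], account))

salespeople = {
    "Ana": {"growth": 0.9, "financial": 0.4, "tech_readiness": 0.8,
            "risk_tolerance": 0.7, "adoption": 0.6},
    "Ben": {"growth": 0.3, "financial": 0.9, "tech_readiness": 0.4,
            "risk_tolerance": 0.2, "adoption": 0.5},
}
account = {"growth": 0.8, "financial": 0.3, "tech_readiness": 0.9,
           "risk_tolerance": 0.6, "adoption": 0.7}

print(best_match(salespeople, account))  # Ana: growth/tech profile fits this account
```

A production version would replace the hand-set weights with causal propensity models learned from outcome data, which is the data commitment the next paragraph questions.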
The real question is not whether AI replaces human salespeople but rather how deeply organisations are willing to instrument their own people and customers to make the matching meaningful — a data commitment that most have not yet made.
"AI could act as a matchmaker saying, 'This is the right salesperson for this account.'"
AI Performs Poorly When Optimising for Financial Metrics Alone, Schmarzo Warns
The core argument Schmarzo makes is that ROI, net present value, and internal rate of return are lagging indicators — measurements of outcomes already past — and that AI models optimising for them are working with the wrong instruments. When a high-school class in Des Moines brainstorms the factors involved in choosing a restaurant, they consistently generate between 100 and 120 variables, each shifting in weight depending on intent and context. Organisations that reduce that complexity to a single financial metric are, in effect, asking AI to solve a rich, contextual problem with impoverished inputs.
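The restaurant example can be made concrete with a toy sketch: the same attributes produce different rankings once context shifts the weights, which is exactly what a single financial metric throws away. The attributes, contexts, and numbers here are invented to echo the classroom exercise, not drawn from it.

```python
# Illustrative sketch only: identical restaurant attributes rank differently
# once intent re-weights them. Attributes and weights are invented.

restaurants = {
    "taqueria":   {"price": 0.9, "speed": 0.9, "ambience": 0.3, "novelty": 0.4},
    "steakhouse": {"price": 0.2, "speed": 0.3, "ambience": 0.9, "novelty": 0.6},
}

# Weights shift with context: a quick lunch versus an anniversary dinner.
contexts = {
    "quick_lunch": {"price": 0.5, "speed": 0.4, "ambience": 0.05, "novelty": 0.05},
    "anniversary": {"price": 0.1, "speed": 0.1, "ambience": 0.5,  "novelty": 0.3},
}

def rank(context: str) -> str:
    """Pick the restaurant whose attributes score highest under this context's weights."""
    weights = contexts[context]
    return max(restaurants,
               key=lambda name: sum(weights[k] * restaurants[name][k] for k in weights))

print(rank("quick_lunch"))   # taqueria
print(rank("anniversary"))   # steakhouse
```

Collapsing both contexts into one scalar metric would force a single ranking and discard the contextual signal the class surfaced in its 100-plus variables.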
Broader value definitions encompassing customers, employees, and community generate the hundreds of leading indicators that AI can meaningfully optimise — and the financial results, Schmarzo argues, then follow as a downstream consequence rather than a direct target.
"AI does a terrible job of optimising lagging indicators. What you give it becomes law to it."
Design Thinking Workshops That Generate Hundreds of KPIs Are Becoming the Blueprint for AI Deployment
When Schmarzo's team tackles a problem like nursing retention, the process begins not with data but with a room of thirty to forty stakeholders — nurses, doctors, administrators, insurers — who frequently disagree with one another. Groups of four or five, deliberately mixed for dissent, generate desired outcomes on sticky notes without debate; the resulting 80 to 100 outcomes then cascade into hundreds of KPIs covering benefits, impediments, failure consequences, and unintended effects. The financial case, Schmarzo argues, does not need to be constructed separately: improve nursing retention, and reduced hiring costs, fewer lawsuits, higher patient satisfaction, and increased community economic activity all follow as measurable downstream effects.
What this exposes is a structural mismatch in most AI business cases, which demand financial justification upfront for outcomes that are, by their nature, generated last.
"If we understand the leading indicators, the financial aspects will all spill out — and they're very robust."
Autonomous AI Agents Need Causality Built In, Not Just Probabilistic Averages, Schmarzo Says
The central limitation Schmarzo identifies in current generative AI tools is that they are, at their core, probabilistic averaging machines — useful for approximating what people like a given user tend to prefer, but incapable of capturing what that specific individual actually values and why. Building agents that can serve individuals well requires constructing causal propensity models — explicit, weighted representations of each person's preferences — so that when an agent makes a decision, it can quantify its reasoning and accept correction when the weights prove wrong. His illustration involves a family of five planning a vacation, where the agent must balance five distinct propensity profiles rather than defaulting to a statistical mean.
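The vacation example can be sketched in a few lines: each family member gets an explicit preference profile, the agent scores destinations across all five profiles rather than a statistical mean, and it can itemise its reasoning per person so the weights are open to correction. Every name and value below is an invented assumption for illustration.

```python
# Illustrative sketch only: five distinct propensity profiles, scored per
# destination with an itemised explanation so the agent's reasoning can be
# inspected and its weights corrected. All names and values are invented.

family = {
    "parent_1": {"beach": 0.3, "hiking": 0.9, "museums": 0.5},
    "parent_2": {"beach": 0.6, "hiking": 0.4, "museums": 0.8},
    "teen":     {"beach": 0.9, "hiking": 0.2, "museums": 0.1},
    "kid_1":    {"beach": 0.8, "hiking": 0.5, "museums": 0.2},
    "kid_2":    {"beach": 0.7, "hiking": 0.6, "museums": 0.3},
}

destinations = {
    "coast":     {"beach": 1.0, "hiking": 0.3, "museums": 0.2},
    "mountains": {"beach": 0.0, "hiking": 1.0, "museums": 0.4},
}

def explain(destination: str) -> dict:
    """Per-member contribution to the destination's score — the quantified 'why'."""
    attrs = destinations[destination]
    return {member: sum(prefs[a] * attrs[a] for a in attrs)
            for member, prefs in family.items()}

def choose() -> str:
    """Pick the destination maximising total fit across all five profiles."""
    return max(destinations, key=lambda d: sum(explain(d).values()))

print(choose())            # coast, given these invented profiles
print(explain("coast"))    # itemised reasoning, open to correction per member
```

The point of `explain` is the functional prerequisite the next paragraph names: an agent that can show which person's weights drove a decision can also accept a correction when those weights prove wrong.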
The structural issue here is that explainability and individual calibration are not cosmetic features but functional prerequisites for agents operating with meaningful autonomy.
"Nobody I talk to, especially my students, wants to be average. The challenge with a lot of these AI tools is that they're probabilistic averaging tools — and that gives you averages."
Summarised from saasberry · 39:19. All credit belongs to the original creators. Streamed.News summarises publicly available video content.