Original source: FedEx Institute of Technology
This video from the FedEx Institute of Technology covered a lot of ground. Four segments stood out as worth your time. Everything below links directly to the timestamp in the original video.
Biased AI is not a hypothetical risk — it is already determining who gets hired, housed, and approved for credit. The question is whether the public has the tools to push back.
AI Bias Already Shaping Employment, Housing, and Credit Decisions, Researcher Warns
The debate over whether artificial intelligence poses a future threat misses a more immediate problem: biased algorithms are already making consequential decisions about people's lives across employment, healthcare, housing, college admissions, and credit, with most individuals unaware that any model influenced their outcome. Schmarzo draws on Cathy O'Neil's book Weapons of Math Destruction to argue that models built without sufficient diversity in their variables and metrics are generating outcomes that are neither responsible nor ethical — and that regulatory frameworks like the White House AI Bill of Rights or the EU's recent rules are unlikely to fix this on their own.
What this exposes is a structural gap between those who design and deploy these systems and the broader public they affect — a gap that Schmarzo argues only widespread data literacy can close. Meaningful participation, he contends, requires citizens to understand how their data is collected, how models use it to persuade or constrain them, and what rights they hold to demand transparency when an algorithm shapes a decision that touches their life.
"AI is already here and it's impacting important swaths of our society in very negative ways — we must address this."
Schmarzo's 'Nano Economics' Framework Builds Individual Predictive Models to Beat the Law of Diminishing Returns
The law of diminishing returns — the principle that spending ever more on maintenance, marketing, or care eventually yields less and less improvement — can be circumvented, Schmarzo argues, not by spending more but by abandoning averages entirely. The technique he calls Nano Economics, developed during his time at Yahoo, builds individualized predictive models for every discrete entity in a system: each machine part, each hospital patient, each website visitor. At Yahoo, that meant scoring all 500 million daily visitors across dozens of interest dimensions stored in cookies, then bidding for advertising inventory in under a second based on individual propensity rather than demographic averages — a distinction that made the difference between a viable ad business and irrelevance.
The structural insight here is that Big Data's real value was never volume but granularity: the ability to act on individual-level signals rather than population-level means. Applied to healthcare, the same logic would allow hospitals to match patients with appropriate nurses and treatments based on personal risk profiles, compressing cost while improving outcomes — provided the underlying data is trustworthy.
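The individual-propensity idea in this segment can be sketched in a few lines. Everything below (the interest dimensions, the weights, the bid rule, and all names) is an illustrative assumption for exposition, not a description of Yahoo's actual system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-visitor propensity scoring, in the spirit of
# the "Nano Economics" idea described above. The interest dimensions and
# the linear bid rule are illustrative assumptions only.

@dataclass
class VisitorProfile:
    visitor_id: str
    # Per-visitor interest scores in [0.0, 1.0], as might be stored in a cookie.
    interests: dict = field(default_factory=dict)

def propensity(profile: VisitorProfile, campaign_topics: dict) -> float:
    """Dot product of the visitor's interests with the campaign's topic weights."""
    return sum(profile.interests.get(t, 0.0) * w for t, w in campaign_topics.items())

def bid(profile: VisitorProfile, campaign_topics: dict, max_cpm: float) -> float:
    """Scale the bid by individual propensity rather than a demographic average."""
    return round(max_cpm * propensity(profile, campaign_topics), 4)

# Two visitors who fall in the same demographic segment can receive
# very different bids once scored individually.
alice = VisitorProfile("a1", {"autos": 0.9, "travel": 0.2})
bob = VisitorProfile("b2", {"autos": 0.1, "travel": 0.8})
campaign = {"autos": 1.0}  # a campaign weighting only the "autos" dimension

print(bid(alice, campaign, max_cpm=5.0))  # 4.5
print(bid(bob, campaign, max_cpm=5.0))    # 0.5
```

The point of the sketch is the contrast: a demographic average over Alice and Bob would produce one bid for both, while per-entity scoring separates a high-propensity visitor from a low-propensity one in the same segment.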
"Big Data was not about volume — it's about granularity."
ChatGPT as Research Assistant, Not Oracle: Schmarzo Urges Socratic Approach to Generative AI
The competitive advantage that once came from memorising and recalling information has been effectively eroded by generative AI tools like ChatGPT, which retrieve definitions, theorems, and synthesis faster than any individual can. Schmarzo's argument is that the human role must shift decisively toward knowledge application — using AI as a research assistant guided by iterative, Socratic questioning rather than accepting first-pass outputs as authoritative. He describes prompting ChatGPT to review a draft blog post for omissions as a practical illustration: the tool returned five suggestions, four of which he judged worthless, but the fifth was genuinely useful at negligible cost.
The real cautionary precedent, he contends, is social media — a technology whose capacity to spread disinformation at scale preceded any serious effort to build public critical-thinking habits around it. AI is potentially more consequential, and the lesson is that source verification and structural scepticism need to be embedded before the damage compounds.
"The quality of the ChatGPT responses is entirely dependent on how effective you are in engaging it through a series of prompts."
Healthcare Data Corruption Threatens AI Promise as Hospitals Recode Treatments to Maximise Insurance Payouts
American healthcare has, by Schmarzo's reading, already crossed the tipping point of diminishing returns — spending more while life expectancy in states like Iowa has recently declined — making it among the most promising sectors for AI-driven personalisation of treatment and welfare programs. The obstacle is not primarily regulatory, though HIPAA presents real constraints; it is the integrity of the underlying data. Hospitals, Schmarzo reports, are systematically recoding drug treatments to match reimbursement categories that pay better, meaning the clinical record reflects financial optimisation rather than medical reality — and therefore corrupts the very training data that AI models would depend upon.
The structural issue here is that a payment architecture incentivising miscoding and an AI architecture requiring accurate longitudinal data are fundamentally incompatible. Until transparency in data capture is treated as a precondition rather than an afterthought, the analytical precision that Nano Economics promises in manufacturing or advertising will remain inaccessible to the sector that arguably needs it most.
"We've optimised around money, and what it's doing is taking data that's so valuable that we can learn from and making it worthless."
Summarised from FedEx Institute of Technology · 1:12:40. All credit belongs to the original creators. Streamed.News summarises publicly available video content.