When AI Learns To Do What We Do
A study of emerging trends shows how artificial intelligence is reshaping the way retailers and brands operate and interact with their customers, and we explore who will be doing the work: us or the robots?
Like everyone on Substack, I’ve been thinking a lot about AI this week.
Partly because I’ve been invited to moderate a panel in New York at the British Consulate and need to figure out which topics to surface. Partly because I keep experimenting myself - lately with knowledge graphs built from PSFK data to see how they can bias AI systems toward better, more useful outputs. (If that sentence made sense to you, and you’re curious, ping me — I’d love your POV on what I’m building.)
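For anyone who wants something more concrete than that one-liner, here is a deliberately tiny sketch of the idea (not PSFK's actual pipeline, and every name in it is made up): a handful of hand-written triples stand in for a knowledge graph, and whichever facts match a question get prepended to the prompt, so the model answers from them rather than from thin air.

```python
# Toy illustration: a few hand-written triples standing in for a real
# knowledge graph, and a prompt assembled from whichever triples match
# the question. Everything here is hypothetical, not PSFK's system.

TRIPLES = [
    ("conversational commerce", "is being adopted by", "beauty retailers"),
    ("beauty retailers", "use", "AI chat assistants"),
    ("AI chat assistants", "turn", "curiosity into purchases"),
]

def relevant_triples(question: str) -> list[tuple[str, str, str]]:
    """Keep triples whose subject or object appears in the question."""
    q = question.lower()
    return [t for t in TRIPLES if t[0] in q or t[2] in q]

def grounded_prompt(question: str) -> str:
    """Prepend the matching graph facts so the model answers from them."""
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in relevant_triples(question))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(grounded_prompt("How are beauty retailers using conversational commerce?"))
```

A real version would swap the hand-written triples for a graph built from PSFK's trend data, but the mechanic is the same: the graph decides what the model gets to lean on.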
Instead of listening to the opinions of the various thought leaders in the space, I wanted to build a narrative about AI from the ground up by directing the PSFK Trend Intelligence system to gather examples of start-ups, experimental projects and new innovations. When I ran our pattern-recognition workflows across this dataset, a story emerged about the living, breathing, decision-making organization.
The data points to three big directions in artificial intelligence:
The Enterprise on Autopilot — the modern corporation that learns to run itself.
The Perpetually Present Brand — the brand that’s ambient, adaptive, and always there (if you want it to be).
The Autonomous Workforce — machines with independent “brains” taking control of physical work.
Each theme presents huge opportunities — and equally big questions about what happens to the people and communities inside these systems.
Let’s unpack the examples behind these patterns first, and circle back to the human impact after.
- Piers
1. The Enterprise on Autopilot
The modern corporation is learning to run itself. Whatever the sector, AI is shifting from acting as co-pilot to running the process. From regulated workflows to creative production, intelligent systems are taking ownership of work that once required human judgment, documentation, or approval.
These agents aren’t just executing tasks they’ve been prompted to respond to - they are learning, verifying and self-improving in closed-loop systems. When we studied this transformation we found four fundamental capabilities that define the self-driving organization:
Knowledge Loops that teach and retain institutional insight,
Regulated Automation that performs and audits complex legal or financial tasks,
Clinical Precision that records and validates care in real time, and
Creative Continuity that produces and updates content endlessly.
Knowledge Loops: When AI Learns How You Work
Knowledge no longer lives in manuals, onboarding videos or standup meetings. Inside the self-driving enterprise, AI captures how people think, teach, and decide, then reuses that intelligence at scale. From classrooms to corporate learning systems, organizations are training models that absorb context, feedback, and institutional memory.
From Teaching to Training the Machine
A great place to see the weak signals that indicate the future of corporate learning is by looking at what’s going on in education. For example, TrueMark gives educators a “safe sandbox” for AI-assisted writing and feedback, suggesting that learning can be accelerated without shortcutting integrity. Emy extends the idea with private tutors trained on each institution’s curriculum, letting students query their own course material by text or video. At the frontier, Stanford’s Clinical Mind AI turns patient simulations into living datasets for medical reasoning.
From Policy to Platform Memory
As adoption widens, two-thirds of higher-education institutions are already formalizing AI use and Microsoft’s AI in Education 2025 Report shows leaders deploying AI for analytics and administration. Even consumer tools are aligning: Grammarly’s AI Grader gives students pre-submission feedback tuned to instructor expectations.
Takeaway:
Organizations are beginning to learn in real time. By encoding expertise into adaptive models, they’re creating institutional memory that never retires — a continuous feedback loop where every answer teaches the system to answer better.
Accountable Automation: AI That Justifies Every Decision
Compliance used to slow business down; now it’s what makes automation possible.
By embedding verifiable reasoning, timestamps, and source citations into every action, AI can perform regulated work and simultaneously generate the audit trail that proves it was done correctly. We see weak signals in law, finance, and governance that show how corporations will use AI to close the gap between execution and explanation, turning oversight into a framework within which innovation can thrive.
From Drafts to Defensible Outputs
Mary Technology automates legal chronologies and fact patterns, cutting prep time by up to 90 percent with 99 percent accuracy. SpotDraft takes agreements from draft to e-signature in a single AI-driven, audit-compliant environment, while BRYTER blends generative reasoning with workflow logic to build hybrid legal agents. In Brazil, Lexter deploys Laura, an AI paralegal that automates firm operations end-to-end.
From Risk to Reliability
Studies estimate that 75 percent of legal-firm tasks could be automated, and that lawyers lose 30–40 percent of their time to admin. With retrieval-augmented generation and verifiable sourcing, firms can now delegate safely: the system that writes the clause can also cite the case.
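To make the “cite the case” idea concrete, here is a minimal, hypothetical sketch of the pattern (the retrieval step and the placeholder citations are invented, and the model call itself is omitted): the draft, its sources, and a timestamp travel together, so the audit trail is produced as a by-product of the work rather than reconstructed afterwards.

```python
# Hedged sketch of "write the clause, cite the case": every drafted
# output leaves with its sources and a UTC timestamp attached. The
# retrieval and drafting steps are placeholders, not any vendor's API.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DraftedClause:
    text: str
    sources: list[str]   # the authorities retrieved alongside the draft
    drafted_at: str      # ISO timestamp for the audit log

def retrieve_authorities(request: str) -> list[str]:
    """Stand-in for retrieval over a case-law or contract corpus."""
    return ["[citation 1 placeholder]", "[citation 2 placeholder]"]

def draft_clause(request: str) -> DraftedClause:
    sources = retrieve_authorities(request)
    text = f"[draft clause responding to: {request}]"  # model call would go here
    return DraftedClause(
        text=text,
        sources=sources,
        drafted_at=datetime.now(timezone.utc).isoformat(),
    )

clause = draft_clause("limitation of liability for data breaches")
print(clause.drafted_at, clause.sources)
```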
Takeaway:
AI-driven automation inside the enterprise is gaining authority because it can defend its work. When every action is timestamped, sourced, and explainable, compliance stops being a brake on innovation and becomes the framework within which innovation can thrive.
Clinical Precision: What Healthcare Teaches Us About Automated Accuracy
If we look at the medical field, we can see how verifiable, workflow-embedded intelligence could evolve inside organizations across sectors: AI that doesn’t just record work, but completes it. In medicine, documentation is treatment. AI is now embedded inside electronic health records, performing charting, coding, and compliance work automatically so clinicians can focus on patients. What began as dictation is becoming full-cycle clinical intelligence.
From Notes to Numbers
Arintra automates chart review and guideline-based coding across major EHRs, improving charge capture and cutting denials. Abridge turns doctor–patient conversations into structured notes, while Docology condenses complex referral packets into concise patient timelines. NeuralWave handles non-billable tasks that drain clinical time and attention.
From Burnout to Balance
Administrative load is one of healthcare’s largest costs—financial and human. Analysts now track the rise of autonomous Clinical Documentation Integrity (CDI) systems validating notes in real time. These embedded systems show how automation can coexist with accountability: clinicians supply context, AI supplies completion. What’s happening in hospitals today will soon reach every sector that depends on high-stakes documentation—from insurance underwriting to engineering compliance.
Takeaway:
AI is becoming medicine’s invisible colleague—documenting every detail, coding every encounter, and proving every claim so care can return to conversation. And as other industries adopt the same loop of capture-and-verify automation, the enterprise begins to cure its administrative overload.
Creative Continuity: Content That Never Sleeps
Creativity used to end at the deadline, but AI has changed that. From film production to brand marketing, generative systems now plan, make, and update creative work continuously, learning from performance data to refine style and strategy.
From Concept to Continuous Production
OpenAI × Vertigo Films condensed feature-film timelines to nine months with Critterz, while Adobe’s Unfinished Film invited creators to remix footage globally. Channel 4 uses generative-AI ads for SMEs, and R/GA × Moncler proved luxury storytelling could be rendered by code.
From Campaigns to Living Systems
MERGE, true to its name, merges emotional storytelling with data intelligence to craft personalized content. Warner Bros. Discovery integrates shoppable scenes directly into TV shows, shortening the path from desire to purchase by 50 percent. Meanwhile, Cloudflare’s pay-per-crawl model hints at a new economics for machine-made media.
Takeaway:
Content is no longer a campaign; it’s a continuum. As creative work becomes self-updating, brands and studios alike are learning to manage stories that never stop producing themselves.
2. The Perpetually Present Brand
The new brand doesn’t sleep.
Across categories, AI is evolving from campaign to companion - a constant, intelligent presence that listens, remembers, and responds. Whether it’s guiding a purchase, checking a pulse, or shaping a mood, these systems live alongside the user rather than in front of them. The brand of tomorrow is ambient, adaptive, and always there.
Conversational Commerce: When Shopping Talks Back
Commerce is turning conversational. AI-powered chat, voice, and multimodal interfaces are collapsing discovery, personalization, and checkout into a single exchange — where every word becomes measurable intent.
From Product Pages to Dialogue
Jack in the Box launched DealQuest: Revenge of the Munchies, a generative, text-driven game that rewards players with meal deals, turning storytelling into transaction. American Airlines now lets travelers plan by simply typing “a beach vacation in February,” while Zapia embeds a shopping assistant directly in WhatsApp for millions across Latin America. Beauty brands such as Mary Kay and Chalhoub Group are merging chat, computer vision, and personalization to convert curiosity into commerce.
From Search to Sentience
The conversational commerce market is projected to reach $290 billion by 2025, with over 60% of consumers already using conversational AI when shopping (BigSur AI; Bloomreach). From Klarna’s in-app AI stylist to eBay’s ChatGPT-powered traffic surge, the storefront is becoming a two-way conversation.
Takeaway:
Shopping is no longer a funnel — it’s a feedback loop. Every interaction is both a transaction and a training set, teaching the brand how to talk back.
Ambient Care: Support That Never Signs Off
Care is becoming invisible but ever-present. From mental health to elder care, AI systems now sense emotional, behavioral, and environmental changes—and respond in real time. Where adaptive companions interact through conversation, ambient care works through context: a quiet intelligence that listens and intervenes only when needed.
From Scheduled to Sustained Care
Kai merges clinical insight with AI empathy, providing 24/7 emotional support for hundreds of thousands of users worldwide. Mindologue decodes doodles to assess mood and tailor mindfulness experiences, while MedLink Global offers AI-guided psychiatric assessments and treatment recommendations for complex cases.
From Check-Ins to Continuous Context
KuboCare uses radar-based sensing in senior facilities to monitor health without cameras, alerting caregivers when patterns shift. Together these systems form a new model for ambient care—assistance that is proactive, respectful, and quietly omnipresent.
Takeaway:
Support is no longer an event; it’s an environment. The most trusted services will be the ones that stay with you, even when you’re not asking for help.
Adaptive Companions: Interfaces That Build Relationships
Interfaces are becoming the front line of brand loyalty. Generative and agentic systems now remember, anticipate, and respond across every touchpoint—turning the customer interface into the relationship itself. If ambient care listens quietly in the background, adaptive companions speak directly, shaping how consumers feel and interact with the brand day to day.
From Assistance to Attachment
Chalhoub Group’s Layla AI increases engagement and boosts smaller-brand visibility across 950 stores by acting like a digital beauty advisor. Klarna’s AI assistant handles 700,000 conversations a month, driving measurable productivity gains. Ralph Lauren’s Ask Ralph feels like a stylist who already knows your wardrobe.
From Interface to Relationship Surface
As these agents persist across chat, voice, and augmented-reality layers, they carry memory and tone from one channel to the next. Each exchange reinforces the sense of continuity—a customer never starts over, and the brand always remembers where the conversation left off. The interface becomes the emotional handshake between human and company.
Takeaway:
The new loyalty isn’t earned—it’s experienced. Adaptive companions make brands feel human by turning every interaction into a living, remembered relationship.
Responsive Experiences: Environments That Feel Alive
The physical and digital merge in real time. Sensors, spatial models, and adaptive AI are transforming stores, screens, and spaces into responsive environments that sense presence, predict need, and adjust dynamically.
From Static Spaces to Sensing Systems
Meta’s Ray-Ban Display and its neural gesture band bring ambient interaction to the face, while Vision AI retail systems turn stores into perceptive ecosystems that understand movement, inventory, and emotion. Hospitality platforms like Hotel Communication Network and RoomRaccoon are layering sensors and AI concierges to make spaces adaptive and self-orchestrating.
From Reactions to Anticipation
Every interaction becomes data — every signal, a chance to respond. As multimodal AI connects sight, sound, and sentiment, environments gain awareness. Retail and hospitality are leading the way, but the implications stretch to mobility, healthcare, and the home.
Takeaway:
These adaptive systems in stores and hotels hint at what’s next for offices, mobility, and smart homes—environments that sense and respond. Experience is no longer designed; it’s detected. The next generation of environments won’t just react to us — they’ll recognize us.
3. The Autonomous Workforce: When Machines Start Thinking With Their Hands
AI is leaving the screen and stepping into the world. Across warehouses, retail floors, and construction sites, machines are gaining sensory awareness, decision-making capability, and the ability to coordinate with one another. What began as automation is evolving into embodiment: a new kind of workforce that can perceive, plan, and act in the same spaces humans do.
Perceptive Infrastructure
Cameras, sensors, and spatial models are becoming the eyes and ears of autonomous environments. Vision systems like those driving Meta’s Ray-Ban Display and Everybot’s AI modules show how perception can now be embedded anywhere—from eyewear to cleaning robots.
At the research frontier, the Butter-Bench project has introduced a benchmark for embodied language models—LLMs that can reason spatially and act physically. It measures how well AI systems can perceive, manipulate, and communicate in real environments. While even top models score around 40% compared to humans’ 95%, the experiment reveals the future path: AI that doesn’t just recognize scenes but reasons within them.
Task-Specific Autonomy: Embedding Modular AI Brains
The next generation of robots is being built for precision, not generality. Wayve uses NVIDIA’s DRIVE AGX Thor to upgrade its autonomous driving platform, while Danghong Technology and Everybot are mass-producing modular AI brains for service, drone, and industrial robots. These platforms are shifting robotics from bespoke engineering to configurable autonomy — systems that can be dropped into any form factor or task.
Coordinated Swarms Of Robotic Talent
The leap from individual robots to networked cognition is already under way. Internal strategy documents reveal Amazon is working toward automating 75% of its U.S. operations, potentially replacing 600,000 human roles by 2033 and saving $12.6 billion by 2027. The plan hinges on bipedal bots like Digit and on cloud-based orchestration that allows machines to move in unison across massive logistics networks. As embodied AI gains conversational reasoning through LLMs, these “agent swarms” could soon collaborate across industries - manufacturing, shipping, even urban infrastructure.
Takeaway:
Robotics is moving from motion to motive. As machines learn to see, specialize, and coordinate, the physical world becomes a cognitive network — one where intelligence is not housed in a device, but distributed across everything that moves.
So where does this analysis leave us?
These themes have all been built around AI-related data the PSFK Trend Intelligence system has gathered in the last 120 days. Compare them with the results from a prompt in ChatGPT and you should understand why we’re building special knowledge graphs to help organizations improve their systems.
It also gives me some key topics to raise with my panelists on the 12th.
The Human Role in an Autonomous Age
1. Defining the Problem
Once systems can execute, the human role moves upstream. Our role at work is to frame what the system should optimize for - not how to perform each step.
How do we decide what matters, what success means, and where the system’s blind spots are?
2. Redefining the Moats
Every advantage that can be automated will be. What remain are moats built on human networks and emotion:
• Access - who you know, which data or partnerships you can tap.
• Permission - the social and regulatory license to operate.
• Attachment - customer loyalty, employee culture, brand love.
Are these the new defensible advantages in a world of automated parity?
3. The Limits of Automation
You can’t automate a customer’s heart. Efficiency can win a transaction, but it can’t earn devotion. If love and loyalty are part of the human condition, how do we design for them inside automated systems?
4. Preserving Slack
Automation removes friction - but it also removes slack. When everything is optimized for throughput, a single outage can cascade across networks, AWS-style. Is the human role now to design and preserve slack—manual overrides, redundancies, emergency playbooks, and people who know how to operate without the system?





