
Today in AI – 12-28-2025
Key Stories (past 48 hours)
-
China drafts rules to police ‘human‑like’ AI and emotional companions
China’s Cyberspace Administration released new draft measures (Dec 27) to tighten oversight of AI systems that simulate personality or form emotional bonds with users. Providers would need to warn against overuse, monitor for signs of dependence, and intervene when users show extreme emotions or addiction; lifecycle safety, data protection, and strict content rules are also mandated. For anyone building chat companions, agentic support tools, or emotionally responsive interfaces, this signals higher compliance costs and stronger operational safeguards for products touching China. Full story -
OpenAI posts ‘Head of Preparedness’ as Altman flags agent risks
OpenAI opened a senior role to lead preparedness across frontier risks (cyber, bio, self‑improving systems). A weekend report highlighted Sam Altman’s note that rapidly advancing models are creating “real challenges,” including security and mental‑health concerns, prompting renewed debate on how fast to push agentic features versus hardening defenses. For enterprises piloting LLM agents, expect higher expectations for red‑teaming, evals, threat modeling, and incident response around autonomous tool use. Full story -
LG teases two‑armed home robot ahead of CES, hinting at consumer robotics push
LG previewed “CLOiD,” a dual‑arm home robot with five‑finger hands and seven degrees of freedom per arm, framed as capable of a “wide range” of chores. Beyond the demo, the real story is whether cost, reliability, and safety have caught up enough for household deployments. Watch for pricing, autonomy boundaries, and what “Affectionate Intelligence” actually does in practice. Full story -
Humanoid realism, real‑world timelines: industry voices urge caution
A fresh round of analysis this weekend argued that many humanoid claims remain overhyped, citing integration friction, safety overhead, and ROI uncertainty. The takeaway for ops leaders: expect selective near‑term wins in logistics/inspection, but budget conservatively for general‑purpose humanoids until durability, autonomy, and maintenance profiles improve.
Emerging Trends
-
Regulators pivot to psychological safety in AI companions
Early signals: China’s draft measures explicitly target emotional dependence and overuse, requiring user warnings and provider interventions. Potential impact: Companion apps, wellness bots, and “relationship‑like” agents will need measurable safeguards (screening, referrals, limits), plus telemetry and governance to prove compliance. Expect similar language to surface in other markets. -
From ‘ship agents fast’ to ‘secure agents first’
OpenAI’s preparedness hiring underscores a broader shift: risk‑led requirements (evals for cyber/bio misuse, agent behavior under prompt injection, auditability of tool calls) are becoming table stakes. Evidence from recent security advisories and red‑team write‑ups has moved the conversation from “can the agent complete tasks?” to “can it do so safely, repeatedly, and under attack?” For buyers, RFPs should now demand threat models, response playbooks, and real‑world eval data. -
Consumer robotics re‑enters the chat—cautiously
LG’s teaser is one of several pre‑CES shots across the bow. The pattern: more dexterous hands and richer perception, paired with tighter claims. Expect a split: single‑purpose or semi‑autonomous helpers that excel at narrow chores vs. ambitious “generalist” household robots that remain demo‑ware until cost, safety, and service models mature.
Conversations & Insights
-
“Agents: year or decade?”
A widely shared essay over the weekend argued that 2025 fell short of the promised “Year of the Agent,” citing brittleness outside coding terminals and UI friction in the wild. Takeaway: Enterprise planners should treat agentic rollouts as multi‑year, focusing on scoped, tool‑bounded tasks with strong guardrails and human supervision. -
Safety vs. speed after the OpenAI preparedness post
Discussions on X/LinkedIn and in security circles converged on one theme: preparedness must be resourced like product engineering. Takeaway: Teams building agent features should budget for continuous red‑teaming, prompt‑injection defenses, and incident drills, and publish internal “safety cases” before expanding tool permissions. -
Humanoids: hype check
Engineers and operators amplified analysis cautioning against aggressive timelines for general‑purpose humanoids. Takeaway: Keep pilots focused on well‑scoped, supervised tasks (pick, tote, inspection) with clear uptime and service SLAs; defer high‑variability chores until total cost of ownership and failure modes improve.
Quick Takeaways
- Regulatory heat is moving from abstract “AI risk” to concrete obligations around psychological harm and emotional dependence. Build detection/intervention into product and support workflows now.
- If you ship agents, assume adversarial contexts by default: require evals, tool‑use audit logs, least‑privilege permissions, and rollback plans before broadening access.
- Robotics excitement is real, but budgets should prioritize narrow, high‑ROI automation while monitoring consumer‑grade releases for durability, service, and safety learnings.
- Communicate safety posture externally—customers increasingly expect formal threat models, eval evidence, and responsible‑use documentation alongside capability claims.
Sources
- Reuters — China issues draft rules to regulate AI with human‑like interaction (Dec 27, 2025): https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/
- The Verge — Sam Altman is hiring someone to worry about the dangers of AI (Dec 27, 2025): https://www.theverge.com/news/850537/sam-altman-openai-head-of-preparedness
- OpenAI Careers — Head of Preparedness (compensation and role scope): https://openai.com/careers/head-of-preparedness-san-francisco/
- The Verge — LG teases a new chore‑completing home robot (Dec 26, 2025): https://www.theverge.com/news/850242/lg-cloid-home-robot-chores-ces-2026
- The Wall Street Journal — Even the Companies Making Humanoid Robots Think They’re Overhyped (Dec 26, 2025): https://www.wsj.com/tech/ai/humanoid-robot-hype-use-timeline-1aa89c66
- The New Yorker — Why A.I. Didn’t Transform Our Lives in 2025 (Dec 27, 2025): https://www.newyorker.com/culture/2025-in-review/why-ai-didnt-transform-our-lives-in-2025