sAIfe Hands

Technology. Power. Society.

Interpreting the forces shaping AI, power, and society.

Signal wave

A live read on the latest capability, governance, and societal shifts tracked across the site.

Latest episode


You Are Not Prepared for 2027

A pilot episode about acceleration, institutional unreadiness, and the emotional reality of capability curves that outpace leadership habits.

Key ideas

Acceleration is a cultural event

Attention lag is a governance problem

Preparedness should precede consensus

Signal Room

Signals since the last episode

View all

Editorial

Diary of a CEO • 27 Nov 2025

Tristan Harris - What the World Looks Like in 2 Years

A long-form governance interview in which Tristan Harris argues that frontier-AI incentives are structurally misaligned: labs race capability forward while institutions and labor protections lag behind. The discussion spans US-China strategy framing, labor-transition pressure (including UBI), model-control concerns, and product-level harms such as dependency and delusion loops.

Open lane item

Leaders Watch

OpenAI leadership • 27 Feb 2026

Sam Altman on OpenAI's Pentagon Safety Red Lines

Sam Altman discusses stated red lines on autonomous weapons and domestic surveillance in defense partnerships.

Open lane item

In focus

AXRP • 7 Aug 2025

Tom Davidson on AI-enabled Coups

A conversation with Tom Davidson on how advanced AI could enable coups, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Open lane item

Library

Map context and briefing notes

Explore library

Signatory focus

Geoffrey Hinton - 2023 warning arc

A compact brief on how Hinton's 2023 public warnings changed mainstream risk language and why that still matters for governance.

Read note

Signatory focus

Ilya Sutskever - safety and governance watch

A focused tracking note on Sutskever's importance to post-2023 safety discourse and what evidence should be monitored.

Read note

Signatory focus

Max Tegmark and institutional risk coordination

A concise note on Tegmark's role in existential-risk coordination and how that institutional layer connects to sAIfe Hands coverage.

Read note

Source document

May 2023 AI Risk Statement - Primary document

The primary one-line statement text, with source context and verification links for teams that want first-hand reference.

Read note

Studio

Concepts in development

Visit studio

In My Name

Civic infrastructure for declarations, signatures and public accountability.

View detail page

AllSidesOf

A public interface for contested topics with breadth over simplification.

View detail page

MySideOf / YourSideOf

Structured spaces to represent self and others with more nuance.

View detail page

iSayiSayISay

A lighter comedic wrapper for serious discussions of absurd narratives.

View detail page

About

In a world of AI-generated sameness, we choose signal and craft.

In a market of direct selling, we choose depth.

In a technical industry, we choose clarity, context, and consequence.

Signals since the last episode

An audience with memory, not a marketing list: subscribe for updates, context, references, and new episodes in the same high-trust editorial voice.