Provably Safe AI is written by Helder S Ribeiro (@agentofuser). I'm a software engineer and independent HCI researcher currently "upskilling" into formal methods for AI existential safety and writing about it here at provablysafe.ai.

My main research has applied insights from algorithmic compression and sequence-prediction ML to the development of input-optimized, Turing-complete, compositional, AI-assisted GUIs. I presented the research project Keykapp at the HDC Eclipse conference on Hyperdimensional Computing and AI Safety in October 2023.

At the moment my main interest is applying this research to the creation of predictive and programmable visual interfaces for the Lean theorem prover, with the goal of significantly lowering the barrier to entry for formal verification and scaling up the productivity of research teams.


Why subscribe?

Subscribe to get full access to the newsletter and publication archives.

Stay up-to-date

Never miss an update—every new post is sent directly to your email inbox. For a spam-free, ad-free reading experience, plus audio and community features, get the Substack app.

Join the crew

Be part of a community of people who share your interests. Participate in the comments section, or support this work with a subscription.

To learn more about the tech platform that powers this publication, visit Substack.com.

Subscribe to Provably Safe AI

Learning in public about formal methods, AI, and policy

People

Helder S Ribeiro