Hi! I’m Julian. I work on AI governance and policy at Open Philanthropy, a philanthropic advisor and funder. I live in New York City, but I’m originally from Vancouver, Canada.
Welcome to my new blog, Secret Third Thing!
Here, I’ll be writing about AI — specifically, “transformative AI” (or “TAI” for short) — in a way that is accessible to a general audience.
Compared to most of the people in my immediate social circle, I have a fairly unusual belief underpinning my general worldview.
I think there’s a very real chance that, within the next five or so years, at least one of a select handful of “frontier” AI companies (OpenAI, Anthropic, Google DeepMind, xAI, etc.) will succeed at its goal of building TAI: systems that outperform humans in a number of important intellectual domains, such as writing software, conducting scientific research, planning and executing military operations, running companies, and much more.
I'm excited about TAI's potential to increase liberty, elevate global living standards, advance scientific research, and make humanity wiser and more cooperative. However, I also think systems with these capabilities have the potential to cause extraordinary harm (possibly to the point of causing human extinction).
While I'm optimistic that things will ~mostly turn out okay, I think the world should urgently be preparing for TAI’s arrival.
We should be doing things like:
Designing evaluations that rigorously measure AI systems’ risk-relevant properties
Encouraging AI companies to be more transparent about risk-relevant properties of their systems
Strengthening information security at AI companies so they can’t easily be hacked by adversaries
Fleshing out clear, comprehensive, and respectable “if-then” commitments that AI companies can publicly make: "If our system shows capability X, we will implement safety measure Y"
Developing safety measure Y
And safety measure Z, and…
Building defensive technologies that make society more resilient to catastrophic risks
My hope is that one day in the future, humanity will look back at the development of TAI and think “yep, glad we did that.” Then, I’ll be able to retire from this industry to work on something less stressful, like high-rise window washing or hostage negotiation.
But until that day, this blog is where I'll be sharing my takes as we journey down the road to TAI. As a spoiler, my perspective isn't that we're definitely doomed, nor that everything will automatically work out fine — it's a Secret Third Thing.
So if you’re interested in my $0.02 on how we might navigate this bumpy period, stick around.
It's going to be a wild ride!