Press "Enter" to skip to content

News

I am looking for postdocs, research engineers and research scientists who would like to join me, in one form or another, in figuring out AI alignment with probabilistic safety guarantees, along the lines of the research program described in my keynote at the New Orleans December 2023 Alignment Workshop. If you are a researcher with strong expertise spanning (a) mathematics (especially probabilistic methods), (b) machine learning (especially amortized inference and transformer architectures) and (c) software engineering (especially training methods for large-scale neural networks), please write to me if you are interested and motivated by concern about AI-driven existential risks.

I am also specifically looking for a postdoc with a strong mathematical background (ideally an actual math, math+physics or math+CS degree) to take a leadership role in supervising the research at Mila on probabilistic inference and GFlowNets, with applications in AI safety, system 2 deep learning, and AI for science, which are the main current research areas in my lab.