aintelope
We're an AI safety research team prioritizing impact with short timelines in mind. Our work focuses on the safety of agentic AIs through verification and neurologically inspired solutions.
In project aintelope we're developing a virtual platform that allows experimentation with affective systems in various environments, and the consequent benchmarking of the alignment of these agents. We hope that our work will find solutions for alignment and facilitate further discussion on how cooperation works in theory and practice.