Today we speak with Dr. Roman Yampolskiy. Dr. Yampolskiy is a tenured professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering at the University of Louisville, where he is the founder and current director of the Cyber Security Lab. Dr. Yampolskiy is an expert in AI safety and the author of many books, including “AI: Unexplainable, Unpredictable, Uncontrollable.”
We cover a lot of ground. Is the analogy Meta AI’s Yann LeCun uses for how to build safe AI valid? How can AI optimists go on a podcast and admit that Large Language Model AI is so insecure as to be “exploitable by default,” yet remain enthusiastic boosters of it? Is Gary Marcus right that the performance of LLM AI is topping out? We also touch on the tunnel vision within AI companies around AI risk. What is the role of Silicon Valley venture capitalists, with their religious vision of AI and billions of dollars to push it, versus the beleaguered community advocating for AI safety work that goes beyond lip service?
There was a delay of some months between recording and releasing this episode, which is a long time in the world of AI. Some references may be slightly dated; e.g., when we talked about AI as a learning aid at the beginning of the podcast, the study finding that students who use AI do worse on tests had not yet been released.