Responsible AI: the challenge of ensuring that AI systems work for all of us. With Ray Eitel-Porter
The longer-term concerns of AI safety and AI alignment

Concerns about artificial intelligence tend to fall into two buckets. The longer-term concern is that advanced AI may harm humans. In its extreme form, this includes the Skynet scenario from the Terminator movies, where a superintelligence decides it doesn't like us and wipes us out. But an advanced AI doesn't have to be malevolent, or even conscious, to do us great harm. It just has to have goals which conflict with ours. The paper-clip maximiser is the cartoon example: the AI is determined to make as many paper-clips as...