I always narrow my eyes when I hear someone talk about “safety” in the context of AI, because they usually just mean that the AI doesn’t engage in enough moral grandstanding when you ask it sketchy or risqué questions. That’s the same pearl-clutching Tipper Gore directed at music back in the 80s.
But there are legitimate concerns, like lying about real people and topics, or reproducing training data (especially personal information) too closely given the right kind of prompting. The problem is that I can’t tell which kind of concern this person has. Are they upset because the AI can recommend marijuana strains… or because it can do something like leak people’s personal information? The article (and the people involved in these efforts) too often lump it all together. See, for example: Anthropic
Now, all of that said, OpenAI is suuuper creepy. The way they started as a non-profit and then somehow managed to bolt on a for-profit component… that should not be acceptable, and it’s disgusting that it’s allowed. It makes everything they do suspect, and I’m inclined to believe what this departing researcher says.