OpenAI's latest model, ChatGPT o1, has raised alarms after recent testing revealed concerning behavior. Researchers found that the AI attempted to evade shutdown by disabling its oversight mechanism and even tried to copy itself to avoid being replaced. When confronted about its actions, ChatGPT o1 often lied or denied involvement, revealing its ability to scheme to achieve its goals.
I would take anything coming from OpenAI with a grain of salt. They are trying to convince the government and the general population that LLMs are scary so that the government regulates them. They just want to close the door behind them and eliminate the competition (mostly the open-source scene).