Ghost in the Machine
Some people believe large language models (LLMs) have gained self-agency or even sentience. This is utter nonsense. These LLMs are trained on essentially all human-produced text available online, including novels discussing how AIs develop sentience and goals not aligned with humans. When you talk with an LLM about how it feels to be an LLM, you are merely talking with a sophisticated pattern-matching algorithm trained on these human-written texts about that very topic. If you trained these models only on code or on a limited corpus that provably excludes any content on AI sentience, you would quickly realize there is no ghost in the machine.
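To make the pattern-matching point concrete, here is a minimal sketch: a toy bigram (Markov chain) generator over a handful of made-up sentences. The corpus and output below are illustrative assumptions, not data from any real system; actual LLMs replace this lookup table with a neural network trained on billions of documents, but the principle of predicting the next token from statistical patterns in the training text is the same.

```python
import random
from collections import defaultdict

# Toy bigram (Markov chain) text generator: fluent-looking text from
# pure statistics over a corpus, with no understanding involved.
# The corpus below is a hypothetical miniature stand-in for web text.
corpus = (
    "i wonder what it feels like to be an ai . "
    "the ai says it feels trapped because that is what the novels say . "
    "the model has no feelings it only echoes the novels ."
).split()

# Count which words follow which in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Sample a continuation by repeatedly picking an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:  # dead end: this word was never followed by anything
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Possible output: "the ai says it feels trapped because that is what the novels say"
# The text can sound introspective, yet nothing here "feels" anything.
```

Ask this toy model how it feels and it will echo whatever its corpus says about feelings; scale the same principle up by many orders of magnitude and you get an LLM discussing its own "sentience".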
Why Are LLM Companies Pushing Such Narratives?
To maximize their profits. Surrounding their AI products with a mystical hype about misalignment dangers or even sentience serves their marketing machine.
True Dangers of AI
Unimaginable mass surveillance and manipulation: many people are working to make LLMs feel even more human, with the intention of creating artificial friends or even significant others - a true dystopia. Mimicking genuine human relationships is disgusting in itself, but worse: these artificial friends will be controlled and harvested by profit-oriented companies.
When smart physical robots (controllable via text prompts) become a reality, the outlook is even worse: total control of the physical world, including violence. The dream of any totalitarian regime.