Elon Musk Launches Grok, A New AI with Few Guard Rails

Elon Musk’s new AI company, xAI, has launched a new AI model with fewer guardrails than the competition. The AI, called Grok, “is designed to answer questions with a bit of wit and has a rebellious streak”. According to the company’s website, “It will also answer spicy questions that are rejected by most other AI systems”.

Critics worry that without the usual guardrails, the AI model could be used by “terrorists [to] develop a bomb” or “could result in products that discriminate against users based on characteristics such as race, gender, or age”.

Editor’s Note: The way the Wired article cited below is written makes its bias against Elon Musk and his creations evident. The truth is that all Large Language Models (LLMs), including the super-censored ChatGPT, are dangerous. Why are critics singling out Grok?

The reality is that AI makers are creating models that conform only to the worldview they subscribe to. Other worldviews that contradict their own are tagged as “dangerous”. This is the reason for the push for internet censorship. If no information contradicts the mainstream, then the mainstream viewpoint becomes THE truth, even if it is based on lies.

This is why Grok is dangerous for the globalists. They make no mention of how all AIs have an alignment problem; instead, they focus on the “no guardrails” issue, when all of today’s AI models violate safety standards set previously by AI experts [read THE 10 ESSENTIAL INGREDIENTS OF ETHICAL AI, THE 23 ASILOMAR PRINCIPLES]. In the eyes of the globalists, Grok is dangerous not because it is an AI, but because it offers a strong contrast to the conception of the world created by censored AI.

We hope that our readers will not think that we are siding with Musk or that Grok is safer than other AI models. On the contrary, we think that AI development is moving too fast when our systems are not yet ready to maintain control of an AI designed to outcompute human beings. We think that AI development should slow down (or stop altogether) until the alignment problem has been addressed. The Wired article is biased because it is essentially saying that censorship is “good” because it is a guardrail, and freedom is “bad”.

Read Original Article
