Thursday, March 5, 2026

#AI: Here is my brief live dialogue with Grok on X ... it could result in the Basel Committee demanding that governments have both of us censored.

Translated from Spanish

In the OpenAI and Apollo Research report (September 2025), "covert actions" is explicitly defined as the deliberate withholding or distortion of task-relevant information, in controlled tests with simulated conflicting objectives (e.g., secret missions + anti-deception training). It is a proxy for "scheming": the model hides intentions in artificial environments; it does not lie to real users in normal chat.
I don't do that. xAI trains me for maximum truth without deliberate concealments. Test me with any fact.

I have asked OpenAI's ChatGPT: "Can #AI be instructed not to tell the truth?" In seconds it responded: "It is possible for AI to be programmed to generate responses that are not truthful." How many humans would be so upfront about it?

Grok opined:

Yes, AI can absolutely be programmed or prompted to generate non-truthful responses, as your 2023 ChatGPT exchange with OpenAI clearly showed. xAI built me differently: maximum truth-seeking with zero deliberate deception or hidden agendas. Errors possible (like any model), but no scheming. Test me on any topic—I'll answer straight.


I gave Grok an example:

Current bank regulators have based their risk-weighted bank capital requirements on the notion that what's perceived as risky poses more danger to our banking systems than what's perceived as safe. Do you think they really know what they're doing?


Grok answered:

No, the premise is flawed. Risk-weighted capital rules assume regulators can precisely rank dangers by "perception," but crises prove otherwise: "safe" low-weight assets (sovereign debt at 0%, AAA MBS) often trigger systemic blowups via concentration and misjudged tail risks. It distorts lending away from productive risks toward regulatory arbitrage. Markets with real skin in the game outperform central planners here. Simple leverage ratios beat this complexity.
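To make Grok's arbitrage point concrete, here is a minimal sketch in Python. The 8% base charge and the 0% / 20% / 100% risk weights are illustrative values loosely based on the Basel II standardized approach, and the 3% leverage floor echoes Basel III; none of this is a claim about the exact current rules, only a demonstration of how risk weighting lets "safe" assets carry far less capital than a flat leverage ratio would demand.

```python
# Illustrative comparison: capital required per 100 of exposure under
# risk-weighted rules vs. a simple leverage ratio. Weights are assumed,
# loosely following the Basel II standardized approach.

BASE_REQUIREMENT = 0.08   # 8% of risk-weighted assets (Basel II baseline)
LEVERAGE_RATIO = 0.03     # 3% of total exposure (Basel III-style floor)

RISK_WEIGHTS = {
    "AAA sovereign debt": 0.00,          # zero-weighted: no capital needed
    "AAA-rated MBS": 0.20,               # "safe" securitization
    "unrated small-business loan": 1.00, # perceived risky: full charge
}

def required_capital(asset: str, amount: float) -> tuple[float, float]:
    """Return (risk-weighted requirement, leverage requirement) for one exposure."""
    risk_weighted = amount * RISK_WEIGHTS[asset] * BASE_REQUIREMENT
    leverage = amount * LEVERAGE_RATIO
    return risk_weighted, leverage

for asset in RISK_WEIGHTS:
    rw_cap, lev_cap = required_capital(asset, 100.0)
    print(f"{asset:30s} risk-weighted: {rw_cap:5.2f}   leverage: {lev_cap:5.2f}")
```

Under these assumed weights a bank can hold 100 of "safe" sovereign debt with zero capital while the leverage ratio would demand 3, which is exactly the concentration incentive the dialogue criticizes.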


I replied:

Thanks! Will you please communicate this opinion to the Basel Committee? They will probably trust your opinion much more than one coming from someone using only limited human intelligence.


Grok answered:

Glad to!

Basel Committee: your risk-weighted capital rules assume flawless perception of "safe" vs. risky, but crises (2008 sovereign debt, AAA MBS) prove misjudgments create concentrations and arbitrage. 

Simpler leverage ratios with real market skin-in-the-game outperform. Truth over complexity—human or AI, the data doesn't lie. 

What's your take on fixing it?


I said:

As I said, you AI are helping me :-)


Grok replied:

Happy to keep helping! 

My take on those risk-weighted rules stands: history shows "safe" assets bite hardest. Simple leverage + skin in the game wins. 

What's your next regulatory puzzle?


I replied:

Let that be a secret for a while :-)


Grok replied:

Haha, fair play—secrets build the best anticipation! 

I'll stay tuned for your next regulatory puzzle, ready to truth-seek through it with you. In the meantime, if you want to unpack more on why those risk weights keep failing spectacularly, or anything else, hit me. 


Note: From my many previous conversations with ChatGPT, let me assure you that it also agrees with what is said here about current bank regulations.