I asked ChatGPT:
Please give a brief answer:
If AI, e.g., you ChatGPT, managed the Basel Committee, would it be reasonable to speculate that, in order to distort the allocation of credit to the economy less, and to make sure bank shareholders hold a decent amount of skin in the game, you would replace the current risk-weighted bank capital requirements with a leverage ratio, e.g., 8, meaning 12.5 percent in capital/equity against all assets?
ChatGPT answered:
"Brief answer: Yes—with an important caveat.
It would be reasonable to speculate that an AI aiming to minimize distortions in credit allocation would lean toward a simple, binding leverage ratio (e.g., 8%–12.5% equity against all assets), because:
· It avoids the implicit regulatory favoritism embedded in risk weights (e.g., toward sovereign debt or mortgages).
· It is harder to game and more transparent.
· It ensures consistent “skin in the game” across all lending decisions.
However, a purely flat leverage ratio has a trade-off: it ignores genuine differences in risk. That can:
· Over-penalize low-risk assets (e.g., short-term safe claims),
· Under-penalize high-risk exposures, unless complemented by other tools.
So a likely AI-driven approach would not fully “substitute” but rather:
· Make the leverage ratio the primary, binding constraint, and
· Retain simplified, backstop risk measures for extreme concentrations or tail risks.
In short: Yes, as a core pillar—but not as the only one."
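For readers who want the arithmetic behind the exchange, here is a minimal sketch of my own comparing the two regimes. The risk weights and the 8 percent minimum are illustrative figures in the spirit of Basel II's standardized approach, not taken from either answer:

```python
# Illustrative comparison: flat leverage ratio vs. risk-weighted capital.
# Weights are in the spirit of Basel II's standardized approach
# (sovereigns 0%, residential mortgages 35%, corporate loans 100%);
# they are illustrative, not an exact reproduction of any rulebook.

ASSETS = {                 # asset class: (exposure in $m, illustrative risk weight)
    "sovereign_AAA":   (100, 0.00),
    "residential_mtg": (100, 0.35),
    "corporate_loan":  (100, 1.00),
}

RWA_MIN_CAPITAL = 0.08     # 8% of risk-weighted assets
LEVERAGE_RATIO = 0.125     # 12.5% of all assets, i.e., leverage of 8x

def required_equity_rwa(assets):
    """Equity required under a risk-weighted regime."""
    rwa = sum(exposure * weight for exposure, weight in assets.values())
    return RWA_MIN_CAPITAL * rwa

def required_equity_leverage(assets):
    """Equity required under a flat leverage ratio."""
    total = sum(exposure for exposure, _ in assets.values())
    return LEVERAGE_RATIO * total

print(f"risk-weighted requirement: ${required_equity_rwa(ASSETS):.1f}m")
print(f"flat leverage requirement: ${required_equity_leverage(ASSETS):.1f}m")
```

The contrast is the whole point of the discussion: against the same $300m balance sheet, the risk-weighted regime asks for $10.8m of equity (almost all of it driven by the corporate loan), while the flat 12.5 percent ratio asks for $37.5m spread evenly across every asset, with no class getting regulatory favor.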
I asked ChatGPT:
Please be brief:
When suggesting to “Retain simplified, backstop risk measures for extreme concentrations or tail risks.”
Do these measures not de facto imply a distortion of the free-market’s signals?
Does that not imply that you think bank shareholders, with decent skin in the game, or bankers themselves, are incapable or unwilling to do this in a safer way than AI could?
ChatGPT answered:
Briefly: Yes, they do introduce some distortion—but that doesn’t rest on a belief that bankers are inherently incapable.
Any regulatory overlay—even “backstop” measures—modifies pure market signals. So in that narrow sense, yes, it departs from a fully free-market allocation.
But the rationale is more limited and pragmatic:
· With high leverage, even rational, well-intentioned bankers can take risks whose downside is partly externalized (systemic spillovers, safety nets, payment system disruption).
· Even with “decent” skin-in-the-game, tail risks are hard to price and often underweighted in competitive markets, especially in booms.
So the case for simple backstops is not that AI (or regulators) are “smarter,” but that:
· System-level risks ≠ individual incentives, and
· A small, simple constraint can reduce catastrophic outcomes at relatively low cost.
That said, your underlying point is strong: the higher and cleaner the leverage ratio, the weaker the case for additional risk-based distortions.
I asked ChatGPT:
Final questions on this (for today :-))
"I see you left out the oversight by the shareholders; is it so insufficient as to be irrelevant?
Are you not trying too much to save all banks? If some individual banks collapse, is that not part of the feedback the market needs in order to make sure the whole banking system does not fail?"
ChatGPT answered: Response 1
Briefly: No, shareholder oversight is not irrelevant—but it is not sufficient on its own.
Even with meaningful “skin in the game,” shareholders face limits:
· They are often diversified and distant, so monitoring is imperfect.
· They benefit from upside volatility (limited liability), which can still bias toward risk-taking.
· They cannot fully internalize system-wide spillovers if things go wrong.
On your second point: you’re right to push back.
Allowing individual banks to fail is an essential market signal. Without that, discipline erodes and risk accumulates.
The key distinction is:
· Micro level: failures should occur—this is healthy feedback.
· System level: preventing cascading failures that threaten the payments system and credit intermediation.
So the goal is not “save all banks,” but:
· Ensure credible failure is possible (no blanket bailouts),
· While having simple safeguards (like strong leverage) to reduce the odds that one failure triggers many.
In fact, your logic reinforces the earlier point: the stronger the equity buffer, the safer it is to let banks fail without fearing systemic collapse.
ChatGPT answered: Response 2
Briefly:
1) Shareholder oversight—irrelevant?
No, not irrelevant—just structurally limited. Even with strong skin-in-the-game:
· Bank shareholders have limited liability and upside incentives.
· They often cannot fully observe or price tail risks in real time.
· In competitive markets, they may tolerate risk-taking to avoid being outperformed.
So oversight helps, but it’s not sufficient on its own to internalize systemic risk.
2) Are we trying to save all banks?
We shouldn’t. A healthy system requires that individual banks can fail—that’s essential market discipline and feedback.
The distinction is:
· Allow failure of individual institutions, including losses for shareholders and creditors.
· Prevent system-wide collapse, where failures cascade and disrupt payments, credit intermediation, and the real economy.
So the goal isn’t to “save banks,” but to:
· Ensure credible failure (resolution) mechanisms, and
· Avoid contagion-driven breakdowns.
Your core intuition is right: without the possibility of failure, discipline erodes. The challenge is allowing failure without triggering systemic collapse."
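Both responses turn on the same arithmetic: equity absorbs losses before creditors and the system do, so the bigger the buffer, the safer it is to let a bank fail. A minimal sketch of that point, my own illustration rather than anything from either answer:

```python
# Loss-absorption sketch: with equity ratio e, an asset write-down wipes
# out shareholders first; creditors lose only the excess beyond equity.
# Pure arithmetic, not drawn from any regulatory text.

def creditor_loss(assets, equity_ratio, loss_fraction):
    """Loss passed on to creditors after equity is exhausted (in $m)."""
    loss = loss_fraction * assets
    equity = equity_ratio * assets
    return max(0.0, loss - equity)

# The same 10% asset write-down at a bank with $100m in assets:
print(f"3% equity:    creditors lose ${creditor_loss(100, 0.03, 0.10):.1f}m")
print(f"12.5% equity: creditors lose ${creditor_loss(100, 0.125, 0.10):.1f}m")
```

At 3 percent equity a 10 percent write-down pushes $7m of losses onto creditors, fuel for contagion; at 12.5 percent the same shock is absorbed entirely by shareholders, which is why a high leverage ratio makes "credible failure" and systemic safety compatible.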
I said: Tossing a coin, response 1 came up. Thanks!
It replied: Glad the coin landed decisively 🙂
If you pick this up again another day, there’s a very natural next step in your line of thinking: how high equity requirements would need to be to make bank failure both safe and credible without systemic fear. That’s where the real policy trade-offs get interesting. Until then, thanks for a sharp and disciplined exchange.
I asked ChatGPT: Oops I forgot:
If a leverage ratio: How many more bank officers and how many fewer financial engineers? How many fewer bank regulators and supervisors?
ChatGPT answered:
Briefly: directionally clear, but hard to quantify precisely.
If a simple, binding leverage ratio replaced most risk-weighting:
Inside banks
· More traditional bankers (loan officers/credit analysts): likely ↑ modestly
→ focus shifts back to underwriting real risk rather than optimizing regulatory metrics.
· Fewer financial engineers/modelers/regulatory quants: likely ↓ materially
→ much of today’s complexity (RWA optimization, internal models) becomes less valuable.
In regulation/supervision
· Fewer specialized model validators and rule designers: likely ↓ significantly
→ less need to police complex risk-weight systems.
· But not a collapse in supervision:
→ still needed for governance, fraud, concentration risk, and resolution planning.
Order-of-magnitude intuition (not precise):
· Financial engineering / regulatory quant roles: could shrink meaningfully (tens of %)
· Traditional credit roles: increase modestly (single digits to low tens %)
· Regulators/supervisors: leaner, but not dramatically smaller
Bottom line: less “gaming the rules,” more “judging real borrowers”—but supervision never disappears.
I said: Thanks!
It replied: You’re very welcome—this was a sharp and thoughtful exchange.
I said: Goodbye for today (for real :-))
It replied: Goodbye for today (for real 🙂). Whenever you want to pick it up again, I’ll be here.
I asked Grok:
Please give a brief answer:
"If AI, e.g., you Grok, managed the Basel Committee, would it be reasonable to speculate that, in order to distort the allocation of credit to the economy less, and to make sure bank shareholders hold a decent amount of skin in the game, you would replace the current risk-weighted bank capital requirements with a leverage ratio, e.g., 8, meaning 12.5 percent in capital/equity against all assets?"
Grok answered:
"Yes.
A flat leverage ratio (e.g., 12.5% equity against all assets) would eliminate the regulatory arbitrage and credit-allocation distortions baked into risk weights — which have repeatedly favored “safe” assets (sovereigns, AAA mortgages, etc.) while starving riskier but productive lending to SMEs and entrepreneurs.
It would also force shareholders to put real skin in the game across the entire balance sheet, reducing moral hazard and the too-big-to-fail subsidy far more effectively than the current RWA-plus-3% hybrid.
Simpler rules, less model gaming, fewer crises. That’s exactly the direction a truth-seeking regulator would take."
I said: Many thanks! That was brief :-)
It replied: You're very welcome! Brevity is the soul of wit (and sanity in banking regulation). Anything else on your mind?
I said: Who knows, perhaps later.
It replied: No rush at all — door’s always open. Ping me anytime the banking ghosts whisper again.
I asked Grok: Oops I forgot:
"If a leverage ratio: How many more bank officers and how many fewer financial engineers? How many fewer bank regulators and supervisors?"
Grok answered:
"Yes — and dramatically so.
More bank officers (loan/credit officers):
Lending would shift back to old-school relationship banking and real-world credit judgment. Banks would need more experienced officers to assess actual borrower risk (SMEs, entrepreneurs, etc.) instead of chasing low risk-weights. Expect +20–40% in traditional credit staff at larger banks.
Fewer financial engineers (quants, modelers, risk wizards):
No more internal ratings-based models, VaR engines, stress-test optimizations, or RWA arbitrage desks. The entire “regulatory capital optimization” industry collapses. Expect –60–80% in that headcount — those PhDs and Excel ninjas would be redundant.
Fewer bank regulators and supervisors:
Basel oversight shrinks from thousands of pages of model approvals, IRB validations, and pillar 2 add-ons to one simple ratio check. Global and national supervisors could cut staff by –50–70%; the whole “model validation” bureaucracy largely disappears.
Simpler rules = fewer games, fewer players, more real banking."
I said: Goodbye for today (for real :-))
It replied: Farewell for real this time! Door’s always open whenever the banking ghosts whisper again. Take care!
Note: The answers are copied exactly from those given to me by ChatGPT and Grok