AI May Be Clever—But It’s Still Not Human
Amid rising AI euphoria, one voice stood apart: investment technologist Joseph Plazo unsettled a roomful of future market leaders with a message few in the tech elite are willing to confront. Artificial intelligence can do many things, but it still can't understand consequence.
**MANILA —** On an oppressive Thursday morning in the wood-paneled halls of the Asian Institute of Management, Plazo opted for clarity over hype. His audience, a curated gathering from NUS, Kyoto, and HKUST, came expecting an ode to artificial intelligence in finance.
Instead, they received a lesson in humility.
“AI is like your smartest intern,” he said, half-joking. “But you still don’t hand the intern the vault keys.”
Laughter rippled. And then a pause. Because he wasn’t joking.
### A Technologist Questions the Hype He Helped Build
Plazo isn’t an outsider to this world—he’s part of the architecture. His firm, Plazo Sullivan Roche Capital, designs some of the most effective trading AIs globally. But that proximity to power makes his critique all the more potent.
“The problem isn’t the tech,” he said. “It’s our delusion that it will save us from the weight of responsibility.”
Plazo offered real-world case studies: AIs that, on paper, flagged perfect trades, only to be undone by things no algorithm could foresee, such as a sudden war.
Context, he argued, remains the province of people.
### The Challenge from the Young—Met by Experience
One Kyoto student asked whether LLMs could model global mood.
Plazo didn’t hesitate.
“AI can detect outrage in a tweetstorm,” he said. “But it can’t hear hesitation in a leader’s voice.”
A shared understanding followed.
Another student asked if AI might simulate conviction.
“Conviction,” Plazo replied, “isn’t data. It’s the bruises of being wrong—and surviving. It’s knowing when *not* to act.”
You can’t upload that.
### This Wasn’t a Talk. It Was a Mirror.
Many students, confident in their tools, admitted to viewing AI as a workaround: a way to evade risk and bypass emotion. Plazo challenged that notion.
“You can streamline your trading logic. But never your ethics.”
It struck a chord.
Because whether they wore suits or sandals, most in that room shared one goal: success. But Plazo asked a deeper question—*at what cost?*
### This Wasn’t Techlash—It Was Tech Maturity
Plazo was not anti-AI. He enumerated its strengths:
- Filtering massive noise
- Identifying technical patterns at scale
- Stress-testing portfolios in seconds
But he also listed its limits—starkly.
It can’t detect sarcasm. It can’t weigh political nuance. And it doesn’t know that your retirement plan may hang in the balance.
“If the algorithm fails,” he asked, “will you take responsibility? Or just blame the machine?”
The room was quiet. That quiet held meaning.
### AI Can Read Charts—But Not You
What emerged wasn’t a rejection of AI, but a reminder of its place.
Plazo described tools he’s building that consider misinformation, psychological factors—even geopolitical instability. But his parting truth was unambiguous:
“No machine can tell you when *not* to act. That’s a human burden.”
### Maybe the Future Doesn’t Need More AI—But Better Humans
As the crowd dispersed—some thoughtful, some rattled—one phrase echoed in the corridors:
“AI doesn’t know your values. So don’t let it make your decisions.”
In an age obsessed with speed and prediction, Plazo offered something radical:
Judgment.
Because in the end, investing isn’t about beating the market.
It’s about remembering *why* you entered the arena in the first place.