Joseph Plazo’s Radical Take on the Limits of Intelligence—Artificial and Otherwise

The scene was set for celebration. What unfolded was a reckoning.

At the University of the Philippines' famed lecture theater, delegates from NUS, Kyoto, HKUST, and AIM assembled to witness the gospel of AI in finance.

They expected Plazo to reaffirm their belief that AI would rule the markets.
Instead, they got silence, contradiction, and truth.

---

### When a Maverick Started with a Paradox

Some call him the architect of near-perfect trading machines.

So when he took the stage, the room went still.

“AI can beat the market. But only if you teach it when not to try.”

The note-taking paused.

It wasn’t a thesis. It was a riddle.

---

### Dismantling the Myth of Machine Supremacy

Plazo didn’t pitch software.
He projected mistakes: neural nets falling apart under real-world pressure.

“Most models,” he said, “are just statistical mirrors of the past.”

Then, with a pause that felt like a punch, he asked:

“Can it compute the panic of dominoes falling on Wall Street? Not the charts. The *emotion*.”

You could hear a breath catch.

---

### The Smartest Students in Asia Push Back

Of course, the audience pushed back.

A PhD student from Kyoto noted how large language models now detect emotion in text.

Plazo nodded. “Feeling isn’t forecasting.”

A data scientist from HKUST proposed that probabilistic models could one day simulate conviction.

Plazo’s reply was metaphorical:
“You can simulate weather. But conviction? That’s lightning. You can’t forecast where it’ll strike. Only feel when it does.”

---

### When Faith Replaces Thinking

Plazo’s core argument wasn’t that AI is broken. It’s that humans are outsourcing responsibility.

“Some traders no longer read. No longer think. They just wait for signals.”

Still, he clarified: AI belongs in the cockpit—not in the captain’s seat.

His company’s systems scan sentiment, order flow, and liquidity.
“But every output is double-checked by human eyes.”

He paused, then delivered the future’s scariest phrase:
“‘The model told me to do it.’ That’s what we’ll hear after every disaster in the next decade.”

---

### Why This Message Stung Harder in the East

In Asia, automation is often sacred.

Dr. Anton Leung, a Singapore-based ethicist, whispered after the talk:
“He reminded us: tools without ethics are just sharp objects.”

That afternoon, over tea and tension, Plazo pressed the point:

“Don’t just teach students to *code* AI. Teach them to *think* with it.”

---

### Sermon on the Market

The ending was elegiac, not technical.

“The market isn’t math,” he said. “It’s a tragedy, a comedy, a thriller—written by humans. And if your AI can’t read character, it’ll miss the plot.”

No one moved.

Some said it reminded them of Jobs at Stanford.

He came to remind us: we are still responsible.
