Yhprum
@yhprumslaw
thinking about why llms aren't considered funny. if we look at how kantian humor works (intentional incongruity: laughter arising from recognizing a mismatch between expectation and reality) alongside how llms work, it gets a little clearer why llms struggle to be universally funny 👇

this intentionality means humor depends on context (𝑐), expectation (𝐸), reality (𝑅), and shared knowledge (𝑆). formally, humor 𝐻 β‰ˆ 𝑓(𝑐,𝐸,𝑅,𝑆).
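not in the thread, but one way to make 𝑓 concrete: treat 𝐸 and 𝑅 as distributions over outcomes given the context 𝑐, and 𝑆 as a gate on how much of the mismatch an observer can even parse. a toy python sketch (the kl-divergence choice and all numbers are my assumptions, not the author's):

```python
import math

def humor_score(expectation, reality, shared_knowledge):
    """score humor as recognized incongruity: kl(reality || expectation),
    scaled by how much of it the observer's knowledge lets them see."""
    incongruity = sum(
        r * math.log(r / expectation[outcome])
        for outcome, r in reality.items()
        if r > 0
    )
    return shared_knowledge * incongruity  # shared_knowledge in [0, 1]

# a pun lands only if you know both readings of the word
E = {"literal": 0.9, "absurd": 0.1}  # what the setup leads you to expect
R = {"literal": 0.1, "absurd": 0.9}  # what the punchline delivers
print(humor_score(E, R, shared_knowledge=1.0))  # in on the joke: ~1.76
print(humor_score(E, R, shared_knowledge=0.2))  # missing the reference: ~0.35
```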

a critical challenge: no universal set 𝐻* of jokes exists that maximizes humor for all observers. mathematically, ∄𝐻* s.t. ∀ individuals 𝑖, ∀ joke sets 𝐻: humor(𝐻*,𝑖) ≥ humor(𝐻,𝑖).
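the non-existence claim cashes out as a failed dominance condition. a two-observer toy check (all scores invented for illustration):

```python
observers = ["deadpan fan", "slapstick fan"]
joke_sets = ["dry one-liners", "pratfalls"]
humor = {  # humor(H, i), invented scores
    ("dry one-liners", "deadpan fan"): 0.9,
    ("dry one-liners", "slapstick fan"): 0.2,
    ("pratfalls", "deadpan fan"): 0.1,
    ("pratfalls", "slapstick fan"): 0.8,
}

# H* must satisfy humor(H*, i) >= humor(H, i) for every i and every H
universal = [
    h_star for h_star in joke_sets
    if all(humor[(h_star, i)] >= humor[(h, i)]
           for i in observers for h in joke_sets)
]
print(universal)  # [] -- every joke set is beaten for some observer
```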

humor is subjective, mapping differently onto each observer's internal mental model 𝑀. each observer 𝑖 has their own prior experiences (𝑃ᵢ), culture (𝐢ᵢ), and context (𝑐ᵢ), so 𝑀(𝑖) = 𝑔(𝑃ᵢ,𝐢ᵢ,𝑐ᵢ).

large language models (llms) optimize for likelihood: maximizing the probability of the next token given the preceding tokens. formally, maximize Σₙ log 𝑝(tokenₙ | token₀,…,tokenₙ₋₁).
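a minimal sketch of that objective, assuming a toy bigram table in place of a real network (real llms condition on the whole prefix, but the sum-of-logs shape is the same):

```python
import math

bigram = {  # p(next | prev), invented numbers
    ("the", "cat"): 0.4, ("the", "mailman"): 0.01,
    ("cat", "sat"): 0.5, ("mailman", "sat"): 0.3,
}

def sequence_log_likelihood(tokens):
    """Ξ£ log p(token_n | token_{n-1}) -- the quantity training pushes up."""
    return sum(math.log(bigram[(prev, nxt)])
               for prev, nxt in zip(tokens, tokens[1:]))

print(sequence_log_likelihood(["the", "cat", "sat"]))      # β‰ˆ -1.61, likely
print(sequence_log_likelihood(["the", "mailman", "sat"]))  # β‰ˆ -5.81, surprising
```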

llms thus seek average predictability (high-probability next tokens), which conflicts fundamentally with humor's goal: intentional unpredictability (low-probability next tokens).
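the conflict in one number: surprisal, −log 𝑝(token). greedy decoding minimizes it; a punchline wants to maximize it while staying coherent. a sketch with an invented next-token distribution:

```python
import math

# invented next-token distribution after the prefix "the cat"
next_token_probs = {"sat": 0.62, "slept": 0.30, "sued": 0.08}

surprisal = {tok: -math.log2(p) for tok, p in next_token_probs.items()}
greedy_pick = min(surprisal, key=surprisal.get)  # what likelihood training favors
punchline = max(surprisal, key=surprisal.get)    # where the joke lives

print(greedy_pick, f"{surprisal[greedy_pick]:.2f} bits")  # sat 0.69 bits
print(punchline, f"{surprisal[punchline]:.2f} bits")      # sued 3.64 bits
```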

kantian humor explicitly subverts expected outcomes (a low-probability event). llms inherently struggle here: their optimization biases output toward predictable rather than surprising tokens.

further, intention is absent in llms: there is no true mental model 𝑀(𝑖), merely statistical patterns learned during training. hence, they can simulate incongruity but never intentionally craft it.