Yhprum @yhprumslaw
thinking about why llms aren't considered funny. kantian humor is driven by intentional incongruity: laughter arises from recognizing a mismatch between expectation and reality. put that next to how llms work and it gets a little clearer why llms struggle to be universally funny 👇

Yhprum @yhprumslaw
this intentionality means humor depends on context (c), expectation (E), reality (R), and shared knowledge (S). formally, humor H ≈ f(c, E, R, S).
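a quick toy sketch of what f(c, E, R, S) could look like as code; the scoring rule and every name below are invented for illustration, not from the thread:

```python
# toy version of humor H ≈ f(c, E, R, S); the names and the scoring rule
# are hypothetical, just to make the dependencies concrete
from dataclasses import dataclass

@dataclass
class Observer:
    shared_knowledge: float  # S: fraction of the joke's references the observer shares (0..1)

def humor_score(context_fit: float, expectation: float, reality: float,
                observer: Observer) -> float:
    """Rises with the expectation/reality mismatch, scaled by how well the
    setup fits the context and how much knowledge is shared."""
    incongruity = abs(expectation - reality)   # the E vs R mismatch
    return context_fit * observer.shared_knowledge * incongruity

print(humor_score(0.9, 0.8, 0.1, Observer(shared_knowledge=0.7)))  # ≈ 0.44
```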

Yhprum @yhprumslaw
a critical challenge: no universal set H* of jokes exists that maximizes humor for all observers. mathematically, ∄ H* s.t. ∀ individuals i, humor(H*, i) ≥ humor(H, i) ∀ H.
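a tiny made-up example of the ∄ H* point: with these invented ratings, the best joke differs per observer, so no single choice dominates for everyone:

```python
# invented ratings humor(joke, observer); no single joke dominates
# for every observer, which is the ∄ H* claim in miniature
ratings = {
    "pun":       {"alice": 0.9, "bob": 0.2},
    "absurdist": {"alice": 0.3, "bob": 0.8},
}
for person in ("alice", "bob"):
    best = max(ratings, key=lambda joke: ratings[joke][person])
    print(person, "->", best)   # alice -> pun, bob -> absurdist
```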

Yhprum @yhprumslaw
humor is subjective, mapping differently onto each observer's internal mental model M. each model varies with prior experiences (P), culture (C), and context (c), so M(i) = g(P, C, c).
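a toy construction of M(i) = g(P, C, c); the fields and both example observers are invented just to make the dependencies concrete:

```python
# toy construction of an observer's mental model M(i) = g(P, C, c);
# everything here is made up for illustration
def mental_model(prior_experiences: set, culture: str, context: str) -> dict:
    return {
        "prior_experiences": prior_experiences,  # P
        "culture": culture,                      # C
        "context": context,                      # c
    }

alice = mental_model({"monty python", "dad jokes"}, "uk", "office party")
bob = mental_model({"stand-up", "memes"}, "us", "group chat")
# the same joke lands differently on alice and bob because their M(i) differ
```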

Yhprum @yhprumslaw
large language models (llms) optimize for likelihood: maximizing the probability of the next token given the prior context. formally, maximize Σ log p(tokenₙ | token₀, …, tokenₙ₋₁).
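a minimal sketch of that objective with a hand-written toy distribution standing in for a real model; the probabilities are invented:

```python
import math

# toy version of the objective: sum the log-probability of each token given
# everything before it; the "model" is a hand-written distribution, not a real llm
def sequence_log_likelihood(tokens, next_token_probs):
    """next_token_probs(prefix) returns a dict mapping token -> probability."""
    total = 0.0
    for n in range(1, len(tokens)):
        p = next_token_probs(tokens[:n])[tokens[n]]
        total += math.log(p)   # adds log p(tokenₙ | token₀, …, tokenₙ₋₁)
    return total

def toy_model(prefix):
    # after "knock", the boring continuation is by far the most likely
    return {"knock": 0.80, "who's there": 0.15, "a banana": 0.05}

print(sequence_log_likelihood(["knock", "knock", "who's there"], toy_model))  # ≈ -2.12
```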

Yhprum @yhprumslaw
llms thus seek average predictability (high-probability next tokens), which conflicts fundamentally with humor's goal: intentional unpredictability (low-probability next tokens).
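a made-up next-token distribution showing the tension: greedy decoding picks the safe continuation, while the punchline sits in the low-probability tail:

```python
# invented distribution: the model's preferred continuation and the joke's
# needed continuation sit at opposite ends of the probability mass
next_token = {"the usual ending": 0.84, "a mild twist": 0.14, "an absurd swerve": 0.02}

model_pick = max(next_token, key=next_token.get)
punchline_pick = min(next_token, key=next_token.get)
print("model prefers:", model_pick)       # the usual ending
print("the joke needs:", punchline_pick)  # an absurd swerve
```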

Yhprum @yhprumslaw
kantian humor explicitly subverts expected outcomes (a low-probability event). llms inherently struggle: their optimization biases output toward predictable rather than surprising tokens.

Yhprum @yhprumslaw
further, intention is absent in llms: no true mental model M(i), merely statistical patterns learned during training. hence, they simulate but never intentionally craft incongruity.

Yhprum @yhprumslaw
in short, kantian humor requires intention (conscious incongruity), context sensitivity, and tailored mental models. llms lack all three… limiting their comedic potential significantly.

Yhprum @yhprumslaw
some of the ideas for fixing this are also hard. for example, explicitly model surprise: define a reward function R inversely proportional to token predictability, i.e., reward ∝ 1/p(token | context).
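a sketch of that reward as code; the epsilon guard is an addition to avoid dividing by zero, not part of the formula in the post:

```python
# the proposed reward ∝ 1/p(token | context); eps is an added safeguard so a
# zero-probability token doesn't divide by zero
def surprise_reward(token_prob: float, eps: float = 1e-9) -> float:
    return 1.0 / (token_prob + eps)

print(surprise_reward(0.9))    # ≈ 1.1   predictable token, tiny reward
print(surprise_reward(0.001))  # ≈ 1000  rare token, huge reward
```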

Yhprum @yhprumslaw
this fix fails practically because inverse predictability alone (1/p) encourages randomness, not coherent incongruity. humor requires carefully structured surprise, not noise.
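a toy illustration of the failure mode: maximizing 1/p just selects whichever token is least likely, coherent or not (distribution invented):

```python
# why reward ∝ 1/p degenerates: the argmax is simply the least likely token,
# with no regard for coherence
next_token = {"door": 0.60, "window": 0.30, "xylophone": 0.09, "qzrf": 0.01}

best_under_inverse_reward = max(next_token, key=lambda t: 1.0 / next_token[t])
print(best_under_inverse_reward)  # "qzrf": maximally surprising, not remotely funny
```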

Yhprum @yhprumslaw
maybe introduce an intention mechanism: condition model outputs explicitly on humor-intent prompts, shifting them from pure next-token prediction toward deliberately generated incongruity.
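a rough sketch of what that conditioning could look like; `generate` below is a hypothetical stand-in for whatever model call you'd actually use, not a real API:

```python
# sketch of conditioning on humor intent: prepend an explicit intent instruction
# to the user prompt before generation
def with_humor_intent(user_prompt: str) -> str:
    intent = ("you are trying to be funny: set up a clear expectation, "
              "then subvert it with a coherent but unexpected twist.\n\n")
    return intent + user_prompt

prompt = with_humor_intent("write a one-liner about deadlines")
# joke = generate(prompt)   # hypothetical llm call
print(prompt)
```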

Yhprum @yhprumslaw
intent mechanisms remain simulated, not genuine: llms mimic intention without true semantic grounding, limiting their ability to intentionally craft nuanced incongruity.

Yhprum @yhprumslaw
train specialized modules to learn individualized humor preferences (Mₖ for each user k), ensuring personalization: humor optimized for Mₖ rather than a global optimum.

Yhprum @yhprumslaw
maybe mathematically, humor generation becomes maximizing Σₖ humor(Mₖ, joke) over all users k, instead of chasing a non-existent global frontier joke set.
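a sketch of that objective with invented per-user weights standing in for the Mₖ from the previous post: score each candidate joke with every user's model and keep the one maximizing the sum:

```python
# invented per-user preference models Mₖ (feature weights) and candidate jokes
# (feature intensities); pick the candidate maximizing Σₖ humor(Mₖ, joke)
user_models = {
    "k1": {"pun": 1.0, "absurd": 0.1},
    "k2": {"pun": 0.2, "absurd": 0.9},
}
candidates = {
    "wordplay joke": {"pun": 0.9, "absurd": 0.1},
    "surreal joke":  {"pun": 0.1, "absurd": 0.8},
}

def humor(model, joke_features):
    # simple dot product between a user's weights and the joke's features
    return sum(model[f] * v for f, v in joke_features.items())

def total_humor(joke):
    return sum(humor(m, candidates[joke]) for m in user_models.values())

best = max(candidates, key=total_humor)
print(best, total_humor(best))  # wordplay joke, ≈ 1.18
```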

Yhprum @yhprumslaw
but like... individualized models (Mₖ) suffer from sparsity: there just isn't enough humor-specific training data per user k. fine-tuning each Mₖ accurately at scale is computationally unrealistic.