Yhprum
@yhprumslaw
thinking about why llms aren't considered funny. kantian humor is driven by intentional incongruity: laughter arises from recognizing a mismatch between expectation and reality. put that next to how llms actually work and it's a little clearer why they struggle to be universally funny
this intentionality means humor depends on context (X), expectation (E), reality (R), and shared knowledge (K). formally, humor H ≈ f(X, E, R, K).
a critical challenge: no universal set H* of jokes exists that maximizes humor for all observers. mathematically, ∄H* s.t. ∀ individuals i, humor(H*, i) ≥ humor(H, i) ∀H.
humor is subjective, mapping differently onto each observer's internal mental model M. each model varies with prior experiences (P), culture (C), and context (X), so M(i) = f(P, C, X).
large language models (llms) optimize for likelihood: maximizing the probability of the next token given prior context. formally, maximize Σ log P(token_t | context_{t−1}, …, context_1).
llms thus seek average predictability (high-probability next tokens), conflicting fundamentally with humor's goal: intentional unpredictability (low-probability next tokens).
kantian humor explicitly subverts expected outcomes (a low-probability event). llms inherently struggle: their optimization biases output toward predictable rather than surprising tokens.
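a toy illustration of the bias (prompt and distribution entirely made up): greedy, likelihood-driven decoding picks the safe continuation, while the joke lives out in the tail of the distribution:

```python
import math

# hypothetical next-token distribution after "he walked into a ..."
next_token = {"room": 0.55, "bar": 0.30, "metaphor": 0.01}

greedy = max(next_token, key=next_token.get)     # what likelihood training favors
punchline = min(next_token, key=next_token.get)  # the surprising, joke-making choice

# surprisal in bits: the punchline is the high-surprisal event
surprisal = {t: -math.log2(p) for t, p in next_token.items()}

print(greedy, punchline)
```

the kantian punchline ("metaphor") carries ~6.6 bits of surprisal versus ~0.9 for "room" — the model's objective points away from it by construction.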
further, intention is absent in llms: no true mental model M(i), merely statistical patterns learned during training. hence, they simulate but never intentionally craft incongruity.