
Yhprum

@yhprumslaw

76 Following
6 Followers


Yhprum
@yhprumslaw
one of my favorite listens this year, and i'm now a fan of @jackson and his podcast + the unbelievable site that actually lets me just read transcripts
1 reply
1 recast
2 reactions

Yhprum
@yhprumslaw
yes, i agree, and i actually think the illegibility of intention is often where claims and pushback about what is or is not slop begin to arise. may write a longer response to this, lots of interesting ideas here
1 reply
0 recast
1 reaction

Yhprum
@yhprumslaw
wow, this is one of the first podcasts i listened to because you'd replied to me on fc, but i knew neither the interviewer nor the interviewee. fucking excellent. went down a hole reading your transcripts and, to be honest, i may legitimately start blogging to clear up some of my thinking, but really enjoy your perspective. curious, for you @jackson - how do you think about slop if you look at abstract impressionism? how much of it sits in the consumer's impression of care versus the artist's actual care? like, take beeple: we can argue digital art feels careless, but is beeple's time spent playing in rendering tools slop or not? where do you feel the line is?
1 reply
0 recast
1 reaction

Yhprum
@yhprumslaw
hey fred, new to fc - is there anything like twitter lists here? trying to rebuild a social graph and figure out my way around this place. liked your blog and think i'm going to put my long form on paragraph. feel relatively persuaded that having far more of my future writing be forever accessible beats the alternative.
0 reply
0 recast
1 reaction

Yhprum
@yhprumslaw
earlier today I was playing with Gemini 2.5 Pro, and it's real good. like, really good. but again, if your style of running your company is decision by committee, good luck thinking about the consumer and winning when the innovation curve is an asymptote and virality is everything.
0 reply
0 recast
0 reaction

Yhprum
@yhprumslaw
makes sense, been really enjoying milling about the town. feels less bot-like so far…
0 reply
0 recast
0 reaction

Yhprum
@yhprumslaw
wow, I remember Ello, crazy throwback. would love to learn how people are discovering communities and people here, fwiw
1 reply
0 recast
1 reaction

Yhprum
@yhprumslaw
how does your usage here differ from your X usage? trying to learn the ropes of the different sides of farcaster
1 reply
0 recast
0 reaction

Yhprum
@yhprumslaw
you could say, well, maybe they're just not smart (i think that really means we haven't figured out this type of intelligence), or perhaps llms represent a distinct intelligence altogether: capable yet alien, highlighted vividly by their struggle with humor, our most human cognitive frontier.
0 reply
0 recast
0 reaction

Yhprum
@yhprumslaw
context-aware embeddings (𝑐ₜ) help coherence but struggle to capture subtle context shifts like irony, sarcasm, or social cues without explicit symbolic understanding.
1 reply
0 recast
0 reaction
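
a rough sketch of what 𝑐ₜ means in practice: pull the contextual embedding of the same word from two different sentences and compare. assumes a hugging face bert-style encoder; the model name and example sentences are just illustrative.

```python
# Minimal sketch: extract context-aware embeddings (c_t) for the same word
# in two different contexts and compare them. Model choice is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding c_t of `word` inside `sentence`."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    word_id = tok.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0, 0]
    return hidden[position]

literal = embedding_of("great, the build passed on the first try", "great")
ironic = embedding_of("great, the build broke again right before the demo", "great")

# The two vectors differ with context, but nothing here explicitly marks the
# second use of "great" as ironic -- that is the gap the cast points at.
print(torch.cosine_similarity(literal, ironic, dim=0).item())
```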

Yhprum
@yhprumslaw
statistical optimization of token likelihood (max log 𝑝) inherently conflicts with intentional low-probability joke construction... it's a core tension that remains unresolved.
1 reply
0 recast
0 reaction
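
a small sketch of that tension, assuming gpt-2 via hugging face purely for illustration: the training objective rewards high log 𝑝 per token, so you can score how likely a mundane ending vs. a punchline looks under the same distribution. the setup and continuations are made up.

```python
# Sketch of the tension: the training objective maximizes log p(token | context),
# while a punchline is deliberately improbable under that same distribution.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_logprob(context: str, continuation: str) -> float:
    """Average log p of `continuation` tokens given `context`."""
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(full_ids).logits.log_softmax(dim=-1)
    total, n = 0.0, 0
    for pos in range(ctx_len, full_ids.shape[1]):
        token = full_ids[0, pos]
        total += logprobs[0, pos - 1, token].item()  # p(token | everything before it)
        n += 1
    return total / n

setup = "I told my therapist about my fear of speed bumps."
# Compare how likely the model finds a mundane ending vs. the pun; the objective
# that trained it pushes generation toward whichever scores higher.
print(avg_logprob(setup, " She told me not to worry about it."))
print(avg_logprob(setup, " I'm slowly getting over it."))
```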

Yhprum
@yhprumslaw
and you can keep going: rlhf for humor is bottlenecked by subjective human variability. inconsistent feedback results in noisy reward signals (𝑅), causing suboptimal or bland humor outputs.
1 reply
0 recast
0 reaction
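
a toy simulation of that noise, not any real rlhf stack: a few annotators rate the same joke with different tastes, and the averaged reward 𝑅 for a polarizing joke comes out much noisier than for a safe one. all numbers are invented.

```python
# Toy illustration: inconsistent human ratings make the learned reward R noisy.
# All numbers are invented; the point is the variance, not the exact values.
import random
from statistics import mean, pvariance

random.seed(0)

def annotator_rating(true_funniness: float, taste_spread: float) -> int:
    """One annotator's 1-5 rating: underlying quality plus personal-taste noise."""
    noisy = true_funniness + random.gauss(0.0, taste_spread)
    return max(1, min(5, round(noisy)))

def estimated_reward(true_funniness: float, n_annotators: int, taste_spread: float) -> float:
    """The reward signal R: the average of a handful of noisy ratings."""
    return mean(annotator_rating(true_funniness, taste_spread) for _ in range(n_annotators))

# A "safe" joke most raters agree on vs. a polarizing joke with divided tastes.
safe = [estimated_reward(3.0, n_annotators=4, taste_spread=0.3) for _ in range(1000)]
polarizing = [estimated_reward(3.5, n_annotators=4, taste_spread=1.8) for _ in range(1000)]

print(f"safe joke:       R mean={mean(safe):.2f}  variance={pvariance(safe):.2f}")
print(f"polarizing joke: R mean={mean(polarizing):.2f}  variance={pvariance(polarizing):.2f}")
# With only a few raters per sample, the polarizing joke's reward estimate is far
# noisier: this is the noisy reward signal R the cast blames for bland outputs.
```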

Yhprum
@yhprumslaw
but like... individualized models (𝑀ₖ) suffer from sparsity: there's insufficient humor-specific training data per user 𝑘, and fine-tuning each 𝑀ₖ accurately at scale is computationally unrealistic.
0 reply
0 recast
0 reaction
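
a back-of-the-envelope sketch of the scale problem: storing a fully fine-tuned 𝑀ₖ per user vs. a small per-user adapter (a lora-style workaround, not something proposed above). model size, user count, and adapter fraction are all assumptions.

```python
# Back-of-the-envelope: storage for a fully fine-tuned model per user (M_k)
# versus small per-user adapters on one shared model. Every number is an assumption.
PARAMS = 7e9              # assumed base model size: 7B parameters
BYTES_PER_PARAM = 2       # fp16 weights
USERS = 1_000_000         # assumed user count (k = 1..USERS)
ADAPTER_FRACTION = 0.001  # rough LoRA-style adapter, ~0.1% of base parameters

TB = 1024 ** 4
full_per_user = PARAMS * BYTES_PER_PARAM                      # one complete copy of M_k
adapter_per_user = PARAMS * ADAPTER_FRACTION * BYTES_PER_PARAM

print(f"full fine-tune, all users: {full_per_user * USERS / TB:,.0f} TB")
print(f"adapters only, all users:  {adapter_per_user * USERS / TB:,.0f} TB")
# Even before counting training compute, a full M_k per user is petabyte-scale
# storage, which is the "computationally unrealistic at scale" point above.
```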

Yhprum
@yhprumslaw
maybe, mathematically, humor becomes optimizing ∑ humor(𝑀ₖ, joke) over all users 𝑘, instead of chasing a non-existent global frontier joke set.
2 replies
0 recast
0 reaction
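
a tiny sketch of that objective: score each candidate joke under each user's 𝑀ₖ and pick per user, instead of one global winner. the taste table and humor() function are invented placeholders.

```python
# Sketch of the objective above: maximize the sum over users k of humor(M_k, joke)
# by picking the best joke *per user*, instead of one global "frontier" joke.
# USER_TASTES and humor() are invented placeholders, not a real scoring model.
USER_TASTES = {
    "k1": {"puns": 0.9, "absurdist": 0.2, "deadpan": 0.4},
    "k2": {"puns": 0.1, "absurdist": 0.8, "deadpan": 0.6},
    "k3": {"puns": 0.5, "absurdist": 0.5, "deadpan": 0.9},
}
JOKES = {"joke_a": "puns", "joke_b": "absurdist", "joke_c": "deadpan"}

def humor(user_model: dict, joke_id: str) -> float:
    """Stand-in for humor(M_k, joke): how much user k enjoys this joke's style."""
    return user_model[JOKES[joke_id]]

# One joke for everyone: maximize total humor with a single global choice.
global_best = max(JOKES, key=lambda j: sum(humor(m, j) for m in USER_TASTES.values()))

# Personalized: each user k gets the argmax over jokes under their own M_k.
personalized = {k: max(JOKES, key=lambda j: humor(m, j)) for k, m in USER_TASTES.items()}

print("single global joke:", global_best)
print("per-user jokes:    ", personalized)
print("total humor (global):      ", sum(humor(m, global_best) for m in USER_TASTES.values()))
print("total humor (personalized):", sum(humor(m, j) for m, j in zip(USER_TASTES.values(), personalized.values())))
```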

Yhprum
@yhprumslaw
train specialized modules to learn individualized humor preferences (𝑀ₖ for each user 𝑘), ensuring personalization: humor optimized for each 𝑀ₖ rather than for a global optimum.
1 reply
0 recast
0 reaction
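
one possible shape for those specialized modules, as a generic pytorch sketch (not anything proposed above): a frozen shared encoder plus a tiny trainable head per user 𝑘, so personalization lives only in the small heads. dimensions and the encoder are placeholders.

```python
# Sketch of "specialized modules per user": one frozen shared encoder plus a
# tiny trainable head per user k. Dimensions and the encoder are placeholders.
import torch
import torch.nn as nn

class PerUserHumorHead(nn.Module):
    """M_k: a small per-user module scoring how funny user k finds a joke."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, joke_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(joke_embedding)

shared_encoder = nn.Sequential(nn.Linear(512, 256), nn.Tanh())  # stand-in for a frozen LM encoder
for p in shared_encoder.parameters():
    p.requires_grad_(False)  # the shared part is trained once, then frozen

heads = {f"user_{k}": PerUserHumorHead() for k in range(3)}  # one M_k per user

joke_features = torch.randn(1, 512)  # stand-in for encoded joke text
shared = shared_encoder(joke_features)
scores = {k: head(shared).item() for k, head in heads.items()}
print(scores)  # personalization lives entirely in the small per-user heads
```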

Yhprum
@yhprumslaw
intent mechanisms remain simulated, not genuine: llms mimic intention without true semantic grounding, limiting their ability to intentionally craft nuanced incongruity.
1 reply
0 recast
0 reaction

Yhprum
@yhprumslaw
maybe introduce an intention mechanism: condition model outputs explicitly on humor-intent prompts, shifting from purely predictive to generative incongruity.
1 reply
0 recast
0 reaction
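
the simplest version of that conditioning is just prepending an explicit humor-intent instruction at generation time. the sketch below assumes gpt-2 via hugging face purely for illustration; a model that small won't actually be funny.

```python
# Minimal version of "condition on humor intent": prepend an explicit intent
# instruction and compare against the unconditioned continuation.
# Model and prompt text are illustrative only.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

setup = "My landlord said the apartment comes with character."
intent_prefix = "Write a short, dry one-liner punchline for this setup:\n"

plain = generate(setup, max_new_tokens=20, do_sample=True)[0]["generated_text"]
conditioned = generate(intent_prefix + setup, max_new_tokens=20, do_sample=True)[0]["generated_text"]

print("unconditioned:     ", plain)
print("intent-conditioned:", conditioned)
# The conditioning is purely at the prompt level: the model's weights are
# unchanged, which is why the thread calls the intent simulated, not genuine.
```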

Yhprum
@yhprumslaw
this fix fails practically because inverse predictability alone (1/𝑝) encourages randomness—not coherent incongruity. humor requires carefully structured surprise, not noise.
1 reply
0 recast
0 reaction

Yhprum
@yhprumslaw
some of the ideas to fix this are also hard. for example, explicitly modeling surprise: define a reward function 𝑅 inversely proportional to token predictability, i.e., reward ∝ 1/𝑝(token|context).
1 reply
0 recast
0 reaction
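
a sketch of that reward, assuming gpt-2 purely for illustration: compute 𝑝(token|context) for every candidate next token and set 𝑅 ∝ 1/𝑝. it also shows the failure mode flagged above: argmax of 1/𝑝 is just argmin of 𝑝, i.e. noise rather than structured surprise.

```python
# Sketch of reward ∝ 1 / p(token | context): score each candidate next token by
# inverse probability under a causal LM (gpt2 here, purely for illustration).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "Why did the chicken cross the road? To"
ids = tok(context, return_tensors="pt").input_ids
with torch.no_grad():
    probs = model(ids).logits[0, -1].softmax(dim=-1)  # p(token | context)

reward = 1.0 / probs  # R ∝ 1 / p(token | context)

top_prob = torch.topk(probs, 5).indices
top_reward = torch.topk(reward, 5).indices
print("most likely next tokens:  ", tok.convert_ids_to_tokens(top_prob.tolist()))
print("highest 'surprise' reward:", tok.convert_ids_to_tokens(top_reward.tolist()))
# Maximizing R alone just picks the *least* probable tokens (argmax 1/p == argmin p),
# i.e. randomness rather than structured incongruity, which is the objection above.
```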

Yhprum
@yhprumslaw
further, intention is absent in llms: no true mental model 𝑀(𝑖), merely statistical patterns learned during training. hence, they simulate but never intentionally craft incongruity.
1 reply
0 recast
0 reaction