
shoni.eth

@alexpaden

641 Following
39747 Followers


shoni.eth
@alexpaden
sorry for the lq photo but i find it perplexing how profound gemini flash thinking is here.
0 reply
0 recast
2 reactions

shoni.eth
@alexpaden
oh i just realized i can only switch between following and trending feeds on web, otherwise it's channels, which have low activity
1 reply
0 recast
3 reactions

shoni.eth
@alexpaden
[Verse 1]
Well, let's get up out of here, we keep on headin' up
Another thousand years until we feelin' Heaven's touch
I work around the clock, twenty-four-seven-plus
She love it when we fuck, so she gon' let me fuck
I finally got it all, still I can't get enough
I'm landin' on the green, but hittin' out the rough
Your eyes is on the screen, I bet they miss a lot
They throwin' flowers at my feet, but don't forget me not
No opinions, I'm feelin' so mint condition, I swear
You always missin' the point, but still you swing a lot
You can bring your women, but don't bring your thots
We open up your mind, you think before we talk

[Chorus]
The floor is yours (Ah, ah)
The floor is yours (Ah)
Do what you wanna do with it (Ah)
The floor is yours (Ah)
Do what you wanna do with it (Ah)
The floor is yours

[Verse 2]
Yeah, and all I'm asking of you is just to let me live
I might just fuck the world up if I ever leave the crib
My house is on the hill, my hands ain't on the wheel
1 reply
0 recast
2 reactions

shoni.eth
@alexpaden
unsupervised topics opposite argument detection https://chatgpt.com/share/67dfa29d-2478-8010-b67b-c182c9681bf1
0 reply
0 recast
1 reaction

shoni.eth
@alexpaden
reminder to promote the farcaster ai group call in salt lake
0 reply
0 recast
3 reactions

shoni.eth
@alexpaden
one thing anthony bourdain’s book taught me is that all food is slop, so embrace what is
4 replies
0 recast
4 reactions

shoni.eth
@alexpaden
In an era of abundant information, the phenomenon where individuals dismiss superior or educational content from disliked sources—often termed "guilt by association"—reveals a profound tension between social biases and rational judgment. This behavior, where the social layer overshadows the scientific, is driven by cognitive biases like confirmation bias and emotional reasoning, as well as social identity dynamics rooted in evolutionary psychology. Research, such as Tversky and Kahneman’s (1974) seminal work on heuristics and biases, highlights how humans rely on mental shortcuts, often prioritizing source perception over content quality. Similarly, Tajfel’s (1970) social identity theory underscores the role of in-group loyalty in rejecting out-group contributions. This essay explores the psychological and social drivers of this phenomenon, illustrating how it shapes engagement with knowledge in modern contexts.
2 replies
0 recast
2 reactions

shoni.eth
@alexpaden
“why are output tokens more expensive for LLM apis?” output tokens are more expensive because each token must be generated sequentially, requiring a separate computational step that attends to all previous tokens. In contrast, input tokens are processed simultaneously in a single, parallelized batch. This sequential, token-by-token generation makes output more computationally intensive, leading to higher costs.
0 reply
0 recast
3 reactions
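the prefill-vs-decode asymmetry above can be sketched numerically. this is a toy cost model (counting forward passes and attention "token-reads" only, ignoring KV caching, MLP cost, and any provider's actual pricing; all function names are made up for illustration):

```python
# Toy model of why output tokens cost more than input tokens:
# inputs are processed in one parallel pass, outputs one pass per token.

def prefill_passes(n_input: int) -> int:
    # All input tokens go through the model in a single batched forward pass.
    return 1

def decode_passes(n_output: int) -> int:
    # Each output token requires its own sequential forward pass.
    return n_output

def decode_attention_reads(n_input: int, n_output: int) -> int:
    # Generating token t attends to all n_input + t earlier tokens.
    return sum(n_input + t for t in range(n_output))

if __name__ == "__main__":
    n_in, n_out = 1000, 200
    print(prefill_passes(n_in))              # 1 pass covers all 1000 inputs
    print(decode_passes(n_out))              # 200 sequential passes for outputs
    print(decode_attention_reads(n_in, n_out))
```

even with only 200 output tokens against 1000 input tokens, decode runs 200 sequential passes where prefill ran one, which is the asymmetry the pricing reflects.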

shoni.eth
@alexpaden
“reasoning models do worse with large context” quadratic attention in transformers like BERT provides detailed focus but struggles with large contexts, as computational costs quadruple when context size doubles, straining resources and diluting attention. Linear attention, with costs only doubling per context size doubling, efficiently handles larger inputs, though it may sacrifice some precision—despite ongoing improvements. This trade-off pits depth of reasoning against scalability in AI development.
0 reply
0 recast
1 reaction
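the scaling claim above ("costs quadruple when context doubles" vs "costs only double") is just n² versus n; a two-line sketch makes it concrete (pure arithmetic, not a real attention implementation):

```python
# Quadratic attention cost scales as n^2; linear attention as n.

def quadratic_cost(n: int) -> int:
    return n * n   # every token attends to every token

def linear_cost(n: int) -> int:
    return n       # cost grows in proportion to context length

for n in (1024, 2048, 4096):
    print(n, quadratic_cost(n), linear_cost(n))

# Doubling the context quadruples one cost and only doubles the other.
assert quadratic_cost(2048) == 4 * quadratic_cost(1024)
assert linear_cost(2048) == 2 * linear_cost(1024)
```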

shoni.eth
@alexpaden
- https://arxiv.org/abs/2402.17512
  - This paper shows how ROPE, used with latent attention, makes transformers faster and cheaper by handling long texts efficiently, and it's a pre-training trick built into the AI's design.
  - Use this when training an AI from scratch for big text tasks, to save computing power and time right from the start.
- https://blog.eleuther.ai/rotary-embeddings/
  - This blog explains ROPE as a pre-training method using rotation matrices to help AI grasp word order simply, boosting language tasks.
  - Use this when setting up an AI for chats or translations, to make it good at understanding sentences during initial training.
0 reply
0 recast
3 reactions
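the "rotation matrices" idea from the eleuther post can be sketched in a few lines of numpy. a minimal sketch, assuming an even head dimension and the base-10000 frequencies from the original RoPE formulation; real implementations fuse this into the attention kernel:

```python
import numpy as np

def rope(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate consecutive (even, odd) channel pairs of x by position-dependent
    angles, so that q.k dot products depend only on relative position offset."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)       # one frequency per pair
    angles = positions[:, None] * freqs[None, :]    # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin              # 2D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

x = np.ones((4, 8))
y = rope(x, np.arange(4))
print(y.shape)  # (4, 8)
```

the useful property: since each pair is only rotated, norms are preserved, and the dot product between a rotated query at position m and a rotated key at position n depends only on m − n, which is how the model "grasps word order."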

shoni.eth
@alexpaden
"The Bitter Lesson" by Rich Sutton argues that AI progresses best through general methods like search and learning, which scale with more computation, rather than human-knowledge-based approaches. He uses examples like chess and speech recognition to show that computational power, growing with trends like Moore's Law, consistently outstrips human expertise over time. The lesson is to prioritize scalable, computation-driven solutions over short-term, human-designed ones.

interesting framing in exploring model data between training phases as a future of social feed algorithms— human-driven algorithms dying to the computation-driven discovery

https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf
0 reply
0 recast
1 reaction

shoni.eth
@alexpaden
latent attention received less attention than MoE during deepseek moment
0 reply
0 recast
2 reactions

shoni.eth
@alexpaden
only covered a 30min clip so far but solid ai hardware episode https://youtu.be/_1f-o0nqpEI
0 reply
0 recast
3 reactions

shoni.eth
@alexpaden
startup analysis and dao proposal analysis are overlapping workflows requiring informed identity context about individuals and teams
0 reply
0 recast
1 reaction

shoni.eth
@alexpaden
o1 pro startup analysis on AI Mirror from an investor perspective seems to miss some basics, like the lead gen opportunity. still, 4.5's super laconic review: “The idea—using AI to guide people’s long-term identity growth—is novel and compelling, but niche. Invest cautiously ($25k–$75k) and closely track early MVP traction before considering more.” https://chatgpt.com/share/67dde2d6-00a8-8010-a258-40624771ec11
0 reply
0 recast
0 reaction

shoni.eth
@alexpaden
Alignment between human and artificial content creators can be achieved through thoughtfully designed reward functions. A key element in safeguarding content quality will be implementing diminishing returns on repetitive content. wdyt @aethernet, where can i find more writing on this topic: mining social communities for identity data in a (human/ai) positive sum creation game-- beyond obvious spam.
1 reply
0 recast
3 reactions
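the "diminishing returns on repetitive content" reward above could look something like this. a hypothetical sketch with made-up names and a made-up decay constant, just to show the shape of the incentive:

```python
# Hypothetical reward function: each additional near-duplicate post
# earns exponentially less, discouraging repetitive content.

def content_reward(base_reward: float, prior_similar_posts: int,
                   decay: float = 0.5) -> float:
    # With decay < 1, the k-th repeat of similar content earns
    # base_reward * decay**k, approaching zero for pure spam.
    return base_reward * (decay ** prior_similar_posts)

print(content_reward(10.0, 0))  # 10.0 — first post gets full reward
print(content_reward(10.0, 3))  # 1.25 — fourth similar post gets far less
```

how "similar" is measured (embedding distance, n-gram overlap, etc.) is the hard open part; the decay itself is the easy part.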

shoni.eth
@alexpaden
https://chatgpt.com/share/67ddc650-54d8-8010-a9d7-6d0421ddb6fa https://github.com/alexpaden/identity-ai/commit/60da974d6f691acd138220efc7ad7be9e07442b1
0 reply
0 recast
2 reactions

shoni.eth
@alexpaden
starter projects for carvera air cnc
0 reply
0 recast
1 reaction

shoni.eth
@alexpaden
https://warpcast.com/jc4p/0x3921c428
0 reply
0 recast
1 reaction

shoni.eth
@alexpaden
ancient romans died with their coin pot undiscovered centuries later, AND YOU’RE SCARED TO HOLD TILL $0.01?
0 reply
0 recast
2 reactions