
JT

@jts

365 Following
184 Followers


JT
@jts
YC published their request for startups list. What would you want to see built in crypto?
0 replies
0 recasts
0 reactions

JT
@jts
“In every work of genius we recognize our own rejected thoughts: they come back to us with a certain alienated majesty. Great works of art have no more affecting lesson for us than this.” - Ralph Waldo Emerson
1 reply
0 recasts
3 reactions

JT
@jts
I would love to see micro prediction markets integrated with Twitter/Threads/FC posts to combat misinformation. I wonder if this is now possible thanks to Frames/Blinks.
0 replies
0 recasts
1 reaction

JT
@jts
There are now two AI camps:
Camp 1: Believes AGI can be achieved through scaling and "un-hobbling" LLMs to unlock first-principle-style reasoning.
Camp 2: Believes AGI requires a different (non-LLM) architecture and the infatuation with LLMs is actually a distraction/setback.
Where do you stand?
0 replies
0 recasts
1 reaction

JT
@jts
Meta confirmed Llama 3 is coming within the month! https://techcrunch.com/2024/04/09/meta-confirms-that-its-llama-3-open-source-llm-is-coming-in-the-next-month/
1 reply
0 recasts
1 reaction

JT
@jts
Apple Inks $50M Deal with Shutterstock for AI Training Data https://deepnewz.com/tech/apple-inks-50m-deal-shutterstock-ai-training-data
0 replies
0 recasts
1 reaction

JT
@jts
AI companies are running into a wall when it comes to gathering high-quality training data. OpenAI and Meta are now doing things that fall into the hazy gray area of AI copyright law. https://www.theverge.com/2024/4/6/24122915/openai-youtube-transcripts-gpt-4-training-data-google
0 replies
0 recasts
3 reactions

JT
@jts
Playing Catan: *Bene Gesserit voice* GIVE ME YOUR WHEAT
0 replies
0 recasts
1 reaction

JT
@jts
Alright, I got some degen. How do I use these puppies?
0 replies
0 recasts
0 reactions

JT
@jts
Seeing Big Head in 3 Body Problem shook me
0 replies
0 recasts
0 reactions

JT
@jts
"As Farcaster grows, we think there’s an opportunity to build a decentralized content moderation network where users can contribute labeling data to casts" We were thinking the same thing. Might have to ship something.
0 replies
0 recasts
0 reactions
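One minimal shape such a cast-labeling network could take, sketched here purely for illustration — every name and field below (Label, aggregate, the verdict strings, the sample cast hash) is invented, not part of any Farcaster API — is users attaching verdicts to a cast hash and clients aggregating them by majority vote:

```python
# Hypothetical sketch: user-contributed moderation labels on casts,
# aggregated by simple majority vote. All field names are made up.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Label:
    cast_hash: str   # the cast being labeled
    labeler: str     # who submitted the label
    verdict: str     # e.g. "ok", "spam", "misinfo"

def aggregate(labels, cast_hash):
    """Majority-vote verdict for one cast, with the agreement ratio."""
    votes = Counter(l.verdict for l in labels if l.cast_hash == cast_hash)
    verdict, count = votes.most_common(1)[0]
    return verdict, count / sum(votes.values())

labels = [
    Label("0xabc", "alice", "spam"),
    Label("0xabc", "bob", "spam"),
    Label("0xabc", "carol", "ok"),
]
print(aggregate(labels, "0xabc"))  # majority verdict with agreement ratio
```

A real network would also need sybil resistance and labeler reputation weighting, which this toy majority vote ignores.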

JT
@jts
Anthropic just raised a staggering $2.75B from Amazon. https://www.cnbc.com/2024/03/27/amazon-spends-2point7b-on-startup-anthropic-in-largest-venture-investment.html
0 replies
1 recast
7 reactions

JT
@jts
Decentralized AI szn incoming 👀 https://www.theverge.com/2024/3/23/24109511/stability-ai-ceo-emad-mostaque-resignation-decentralized-ai
0 replies
0 recasts
1 reaction

JT
@jts
Decentralizing the collection of AI training data could not only deliver smarter, safer models but also create economic prosperity for millions. Imagine a global marketplace, powered by tokens, where anyone can monetize their unique skill sets, perspectives, and feedback. Decentralized intelligence is the way.
0 replies
0 recasts
0 reactions

JT
@jts
Yann LeCun recently mentioned a startup in Senegal that's fine-tuning Llama to provide users access to medical information/guidance to address the healthcare shortage. Really underscores the importance of open-source AI.
0 replies
0 recasts
0 reactions

JT
@jts
If I ever win the lottery, I won't tell anyone, but there will be signs
0 replies
0 recasts
0 reactions

JT
@jts
LLM benchmarks are useful from an academic POV, but could be more practical imo. A model might be ranked higher than another, but perform significantly worse on the tasks you care about. It would be useful if there were a way to set up your own benchmark, using a personal workflow, to test multiple models.
0 replies
0 recasts
0 reactions
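A personal benchmark harness like this can be sketched in a few lines. Everything below is hypothetical — the stub "models" are plain callables standing in for real API clients, and exact-match scoring is just one possible grader:

```python
# Minimal sketch of a personal LLM benchmark: score each model on your
# own (prompt, expected answer) pairs and rank by accuracy.

def run_benchmark(models, tasks):
    """models: dict of name -> callable(prompt) -> answer (stubs here).
    tasks:  list of (prompt, expected_answer) pairs from your workflow.
    Returns (name, accuracy) pairs, best model first."""
    scores = {}
    for name, ask in models.items():
        correct = sum(1 for prompt, expected in tasks
                      if ask(prompt).strip() == expected)
        scores[name] = correct / len(tasks)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Stub "models" standing in for real API calls
models = {
    "model_a": lambda p: "4" if "2+2" in p else "unsure",
    "model_b": lambda p: "unsure",
}
tasks = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]

print(run_benchmark(models, tasks))  # → [('model_a', 0.5), ('model_b', 0.0)]
```

Swapping the lambdas for real API wrappers and the exact-match check for a task-specific grader (regex, rubric, or LLM-as-judge) turns this into a leaderboard over the tasks you actually care about.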

JT
@jts
Anthropic's new Claude 3 model appears better than GPT-4.
0 replies
0 recasts
1 reaction

JT
@jts
New paper that offers a thorough analysis of 444 LLM datasets. It covers five different angles:
1. Pre-training
2. Fine-Tuning
3. Preference
4. Evaluation
5. Traditional NLP
https://arxiv.org/abs/2402.18041
0 replies
0 recasts
0 reactions

JT
@jts
Great discussion during today's Crypto x AI panel. Decentralizing training data is becoming a hot topic and continues to be a massive opportunity for Web3. It was interesting how the panel was split on where the sweet spot lies for crypto. One speaker argued for pre-training while another chose fine-tuning.
0 replies
0 recasts
0 reactions