
Constantinus

@constantinus

737 Following
203 Followers


Constantinus
@constantinus
Fix the email verification error when buying warps.
0 reply
0 recast
0 reaction

Constantinus
@constantinus
Which AI agents inside Warpcast do you know of? Please share links.
0 reply
0 recast
0 reaction

Constantinus
@constantinus
It's interesting to watch the trend of onchain AI agents being valued in the tens and hundreds of millions of dollars, while realizing that many of them have no technology at all beyond APIs, function calling, and RAG.
0 reply
0 recast
0 reaction

Constantinus
@constantinus
Is there a chat room for developers where you can ask for advice and exchange information?
0 reply
0 recast
0 reaction

Dan Romero
@dwr.eth
Frames v2 will allow developers to:
- make their frames "installable" for easy access
- send in-app notifications to users who opt in
- build fully custom app UX like a Telegram Mini App (no more clunky images)
https://github.com/farcasterxyz/protocol/discussions/205
7 replies
25 recasts
115 reactions

Constantinus
@constantinus
Orchestration will help with the last point.
0 reply
0 recast
0 reaction

Constantinus
@constantinus
TIPO (Text to Image with text Presampling for Prompt Optimization) is a technique that improves the quality and usability of text-to-image models. TIPO uses an LLM to preprocess text queries, making them more precise and informative. It accepts both natural-language prompts and the Danbooru tag format.
The basic idea behind the method is that more detailed and specific queries lead to more accurate image generation, while unspecific queries produce a wider range of results at lower accuracy. TIPO generates multiple detailed variants of a single simple query, thereby expanding the space of possible results and increasing the probability of obtaining the desired image.
Two TIPO models are presented, both built on LLaMA 400M and trained on the Danbooru2023, GBC10M and Coyo-HD-11M datasets with a total of 30 billion tokens.
https://github.com/KohakuBlueleaf/KGen
0 reply
0 recast
0 reaction
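
A minimal sketch of the prompt-expansion idea described in the cast above, assuming a generic Hugging Face causal LM; the checkpoint name is a placeholder, not an official model from the KGen repo:

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "your-org/prompt-expander-400m"  # hypothetical checkpoint, not the real TIPO weights

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def expand_prompt(short_prompt: str, n_variants: int = 4) -> list[str]:
    # Turn one terse prompt into several detailed variants via sampling,
    # which is the core TIPO idea: widen the space of candidate prompts.
    inputs = tokenizer(short_prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.9,
        max_new_tokens=128,
        num_return_sequences=n_variants,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Each expanded variant would then be passed to a text-to-image model.
for variant in expand_prompt("a cat in a garden"):
    print(variant)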

Constantinus
@constantinus
SmolLM2: the second generation of compact LLMs from Hugging Face.
Hugging Face introduced SmolLM2, a new series of small language models optimized for resource-constrained devices and designed for English text generation, summarization and function-calling tasks. The SmolLM2 models were trained on a mix of the FineWeb-Edu, DCLM and Stack datasets. Post-training evaluation showed the largest model, SmolLM2-1.7B, outperforming Meta Llama 3.2 1B and Qwen2.5-1.5B.
The models are available in three sizes: 135M, 360M and 1.7B parameters. Each has an Instruct version, and the 1.7B and 360M also have official GGUF quantized versions.
SmolLM2-1.7B https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B
SmolLM2-360M https://huggingface.co/HuggingFaceTB/SmolLM2-360M
SmolLM2-135M https://huggingface.co/HuggingFaceTB/SmolLM2-135M
0 reply
0 recast
0 reaction
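
A minimal usage sketch for the 1.7B model above with transformers; the "-Instruct" suffix in the checkpoint name is assumed from the Instruct versions mentioned in the cast:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # Instruct name assumed from the links above

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Summarize SmolLM2 in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))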

𝚐𝔪𝟾𝚡𝚡𝟾
@gm8xx8
Powered by Qwen Code 2.5 & WebLLM https://huggingface.co/spaces/cfahlgren1/qwen-2.5-code-interpreter
0 reply
1 recast
3 reactions

tldr (tim reilly)
@tldr
“i wasn’t actually mad in that thread!”
1 reply
2 recasts
16 reactions

Dan Romero
@dwr.eth
Looking for feedback on this iteration for channels.
The single biggest remaining complaint is people feeling like the pre-existing, large, topic-based channels like /food or /founders should be more accessible. It will also reduce work for channel mods who don't want to have to approve every single person who wants to cast in their channel, but are OK giving up some control of who can cast (and they can always invite someone to be a member to guarantee it).
Curious:
1. Would you turn this on or off for your channel?
2. Any concerns?
https://warpcast.notion.site/Public-mode-11f6a6c0c10180869699c725fa9e02e3
45 replies
211 recasts
951 reactions

Constantinus
@constantinus
A project on GitHub that lets you run LLMs on AMD graphics accelerators using a Docker container. The image is designed to work with Hugging Face models, primarily the Llama family.
To run it, you need an AMD GPU with ROCm support (version 5.4.2 or higher) and Docker installed. To adapt the inference logic to your needs, make the appropriate changes to the run_inference.py file and rebuild the Docker image. The project provides an Aptfile listing the required ROCm packages (rocm-dev, rocm-libs, rocm-cmake, miopen-hip and rocblas) to be installed in the Docker container.
https://github.com/slashml/amd_inference
0 reply
0 recast
1 reaction
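
An illustrative sketch of the kind of inference logic a run_inference.py like the one above could contain (the actual file in slashml/amd_inference may differ); on a ROCm build of PyTorch, the AMD GPU is still addressed through the "cuda" device string:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # any Llama-family Hugging Face model

# ROCm builds of PyTorch expose AMD GPUs via the "cuda" device string.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16).to(device)

prompt = "Explain ROCm in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))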

vrypan |--o--|
@vrypan.eth
If you've been following the Ordering FIP, you may have noticed somewhere in the comments that I'm trying to propose an alternative solution that does not require sequencers with special privileges. I've done many revisions since the last time I shared it here, but my problem is I don't have someone to debate it and find its weak spots. It may or may not be a good approach, but I'll never know unless others take a look and point out its weaknesses. So, if you're into the protocol stuff, please take a look, and let me know its weaknesses: https://gist.github.com/vrypan/1ae6a60ecb3741ab031b5b06c974acab
2 replies
5 recasts
16 reactions

ns
@nickysap
From Inside Farcaster Dev Day
5 replies
8 recasts
29 reactions

jesse.base.eth 🔵
@jessepollak
hey everyone - some news: in addition to leading the @base team, i’ll be stepping up to lead @coinbasewallet and joining the @coinbase exec team. i’m really excited to take on this new mandate and to accelerate our mission of bringing a billion people and a million builders onchain.
@base and @coinbasewallet share the same north star: make it dead simple for the world to come onchain, and connect everyone who does to the incredible products being built across the entire onchain economy. now, we’ll be able to work together more closely to make this happen.
one important note: @base will continue to uphold its core values of being for everyone, a bridge not an island, and decentralized and open source. @coinbasewallet will continue to work across the entire onchain economy, and we’ll start the work of embodying the other @base values in even more ways.
it’s a new day one. time to put our heads back down. keep building and stay based.
210 replies
365 recasts
2254 reactions

Dan Romero
@dwr.eth
“What are you doing to help small accounts on Farcaster?”
We have a lot more work to do, but here’s what we’ve done in the last few months to improve:
1. Moved initial feeds away from a follow-based model to an algorithmic, interest-based model
2. A daily set of boosted accounts with <10K followers
3. USDC rewards normalized by followers
4. A bunch of experiments and tweaks to improve the home feed to show interesting casts from people you don’t follow yet
More to come!
39 replies
129 recasts
754 reactions

Erik
@eriks
today is a good day to build on @base, create on base, use base, and be based 🔵↑
66 replies
112 recasts
483 reactions

raz
@raz
We're almost ready to continue the /onchain competition.
Guild UI updates: ✅
Guild stability at scale: ✅
/radmemes competitions: ✅
First onchain metagame release: 🤩
Guild token & liquidity incentives: 🤩
8 replies
20 recasts
48 reactions

Nick Smith
@iamnick.eth
language is powerful
the next wave of users won’t buy their first crypto, they’ll earn it
making ✧1000 vs 0.001 ETH is a subtle but important difference
1 reply
2 recasts
13 reactions

Vitalik Buterin
@vitalik.eth
Linking over an explanation from the other app: https://x.com/VitalikButerin/status/1817408883897593911
4 replies
133 recasts
735 reactions