
Choong Ng

@choong

134 Following
81 Followers


Choong Ng
@choong
Motivated reasoning. Expect agents to use crypto the same ways as ordinary businesses.
0 reply
0 recast
0 reaction

Choong Ng
@choong
Yes
0 reply
0 recast
0 reaction

ash
@aes
testing my luck here, lemme see if the farcaster community can help. I have a setup with 4 A6000s. Every time I run a training job that uses all 4 GPUs my machine shuts down. When I use 1-3 it works fine. I had an electrician come out and the socket and power supply are fine. What could be the problem?
2 replies
2 recasts
1 reaction

Choong Ng
@choong
Try setting the lowest allowable power limit via nvidia-smi and running a 4 GPU job known to trigger the shutdown?
1 reply
0 recast
1 reaction
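
Sketching out that suggestion, under some assumptions not in the thread: a Linux box with the stock NVIDIA driver tools, and 200 W as a placeholder cap (the supported range shows up in the query output).

    import subprocess

    # Read-only query: shows current, default, and min/max power limits per GPU.
    subprocess.run(["nvidia-smi", "-q", "-d", "POWER"], check=True)

    # Cap all four GPUs at a conservative value (200 W is a placeholder; needs root).
    subprocess.run(["sudo", "nvidia-smi", "-i", "0,1,2,3", "-pl", "200"], check=True)

If the machine survives the 4-GPU job with the cap in place, that points at the PSU or circuit tripping under peak combined draw rather than a software problem.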

Choong Ng
@choong
On the open source side there are vendor-provided demos and tutorials, but nothing I've tried seems really complete.
0 reply
0 recast
0 reaction

Choong Ng
@choong
Models, tools, and all the pieces around that are moving quickly. Do you have a project in mind?
1 reply
0 recast
0 reaction

pugson
@pugson
i should be able to send a link from safari using the share sheet to ChatGPT and have it summarize it for me automatically
2 replies
1 recast
4 reactions
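
Roughly what a shortcut like that would call under the hood; a hedged sketch only, with a hypothetical URL from the share sheet, the OpenAI Python client, and no real HTML cleanup:

    import requests
    from openai import OpenAI

    url = "https://example.com/article"  # hypothetical link handed over by the share sheet
    page = requests.get(url, timeout=30).text[:20000]  # crude: raw HTML, truncated

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": "Summarize this page in a few bullets:\n" + page}],
    )
    print(reply.choices[0].message.content)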

Choong Ng
@choong
I think the Apple hardware has some unique potential but you'll need a lot of the right workload to justify the engineering budget. I haven't seen benchmarks but I'd imagine vector databases and that sort of thing would perform really well.
0 reply
0 recast
1 reaction

Choong Ng
@choong
NVIDIA and Linux are the more travelled path; it will be much easier to get public research code etc. working, rather than having to port code that in many cases assumes NVIDIA hardware or directly depends on CUDA.
2 replies
0 recast
0 reaction
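
A tiny illustration of the porting issue (assuming PyTorch, not anything from the cast itself): research code that hard-codes .cuda() has to be rewritten along these lines before it runs on non-NVIDIA hardware.

    import torch

    # Pick whatever accelerator is present instead of assuming CUDA.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():  # Apple silicon
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    x = torch.randn(4, 4, device=device)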

Choong Ng
@choong
Lambda Labs
0 reply
0 recast
0 reaction

Choong Ng
@choong
Use a Mistral fine-tune via Ollama on your laptop to get a better flavor for where things are going. LLMs are not really good at storing facts, but combined with external search I think they're going to be useful in the near term.
1 reply
0 recast
1 reaction
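
A minimal sketch of that suggestion, assuming Ollama is installed and "ollama pull mistral" has already been run; the prompt is just an example:

    import requests

    # Ollama serves a local HTTP API on port 11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",
            "prompt": "In two sentences, what is retrieval-augmented generation?",
            "stream": False,
        },
        timeout=120,
    )
    print(resp.json()["response"])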

Choong Ng
@choong
The fast computer, no software problem.
0 reply
0 recast
1 reaction

kevin j 🤙
@entropybender
who here has looked into memory management research? wondering if there are better methods than RAG, especially for large contexts (ex: an agent that knows the entire docs for every sponsor project at a hackathon). interesting thing could be easily distinguishing when to use fine-tuning vs RAG
1 reply
1 recast
2 reactions
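
On the RAG half of the question, a bare-bones retrieval step (assuming sentence-transformers and a few made-up sponsor-doc chunks; a real setup would add proper chunking, a vector store, and reranking):

    from sentence_transformers import SentenceTransformer
    import numpy as np

    chunks = [
        "Sponsor A: API rate limit is 100 requests/minute.",   # hypothetical docs
        "Sponsor B: the SDK ships Python and TypeScript clients.",
        "Sponsor C: project submissions close Sunday at 9am.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(chunks, normalize_embeddings=True)
    query_vec = model.encode(["What is Sponsor A's rate limit?"], normalize_embeddings=True)[0]

    # With normalized embeddings, cosine similarity is just a dot product.
    best = int(np.argmax(doc_vecs @ query_vec))
    print(chunks[best])  # the chunk to put in the LLM prompt

The rough rule of thumb is fine-tuning for style and behavior, retrieval for facts that change often.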

Choong Ng
@choong
I am curious how memory transformers work out, though I haven't done any exploration there.
0 reply
0 recast
0 reaction

Choong Ng
@choong
I have bad news for you...
0 reply
0 recast
1 reaction

Choong Ng
@choong
"AI safety" is mostly a distraction from the real dangers of human bad actors. People and people organized as corporations are more than capable of doing harm at scale with whatever technology is available.
2 replies
0 recast
8 reactions

Choong Ng
@choong
Brand and deep wells of proprietary data are the two obvious moats that can last for a good while. It will be interesting to see how this situation develops.
0 reply
0 recast
1 reaction

Choong Ng
@choong
OpenAI's commercial advantage rested on building a monopoly on machine intelligence. That was never going to happen and I don't think the current chaos meaningfully changes the issue other than just weakening them as an org.
1 reply
0 recast
1 reaction

Choong Ng
@choong
Original with plugins, it seems to work fairly well.
0 reply
0 recast
0 reaction

Duoc Ngo
@duoc95
hey mates, how do you use AI in your daily tasks? which AIs are you guys using for now?
1 reply
1 recast
1 reaction