
PyTorch

@nief331012

297 Following
226 Followers


Batch inference is the most basic optimization you can do to improve GPU utilization, yet it is often overlooked and misunderstood. #AI researcher @finbarrtimbers will walk us through why batching works during our next PyTorch Expert Exchange on Wed. at 10am PT https://t.co/WTDdPCDLi8 https://t.co/Yhn90NcA9E
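A toy cost model helps build the intuition the talk above covers. This is a sketch under an assumed (hypothetical) two-term cost: each forward pass pays a fixed memory-bound cost to stream the model weights, plus a small incremental compute cost per sample, so throughput grows with batch size until compute dominates. The millisecond constants below are made up for illustration, not measured.

```python
# Hypothetical cost model for why batching raises GPU throughput:
# every forward pass pays a fixed cost to move weights from memory,
# while each extra sample in the batch adds only a small compute cost.

WEIGHT_LOAD_MS = 10.0         # assumed fixed memory-bound cost per pass
COMPUTE_MS_PER_SAMPLE = 0.5   # assumed incremental compute per sample

def throughput(batch_size: int) -> float:
    """Samples processed per second for a given batch size."""
    latency_ms = WEIGHT_LOAD_MS + COMPUTE_MS_PER_SAMPLE * batch_size
    return batch_size / (latency_ms / 1000.0)

for bs in (1, 8, 64):
    print(f"batch={bs:3d}  throughput={throughput(bs):8.1f} samples/s")
```

Under this model a batch of 64 amortizes the same weight-load cost over 64 samples, which is why larger batches keep the GPU's compute units busy instead of waiting on memory.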

The worldwide PyTorch community grows stronger every day πŸ”₯πŸ’ͺπŸŒπŸš€ Last week, Executive Director of @PyTorch Foundation @matthew_d_white shared updates on PyTorch at Open Source Summit Japan πŸ‡―πŸ‡΅ #OpenSource #AI #OSSummit #AIdev https://t.co/1cFn8QymIB

RT @marksaroufim: torch.load is finally making the BC breaking change and enabling weights_only=True by default https://t.co/5CgD6xzIV2
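For context on the change retweeted above: `torch.load` uses Python pickling, which can execute arbitrary code from an untrusted checkpoint; `weights_only=True` restricts unpickling to tensors and plain containers. A minimal sketch, assuming a recent PyTorch where the flag is available — passing it explicitly today matches the new default:

```python
# Sketch of the weights_only behavior: with weights_only=True,
# torch.load refuses arbitrary Python objects and only unpickles
# tensors and primitive containers -- safer for untrusted files.
import os
import tempfile

import torch

state = {"w": torch.ones(2, 2), "step": 3}
path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save(state, path)

# Opting in explicitly is forward-compatible with the new default.
loaded = torch.load(path, weights_only=True)
print(loaded["w"].sum().item())  # -> 4.0
```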

Deep dive on the CUTLASS Ping-Pong GEMM kernel, with relevant FP8 inference kernel benchmarking 🀿 Read the blog post: https://t.co/oAqJNqtvcZ https://t.co/WakRRhvqQB

Pair the vLLM engine with TorchServe to create a full-fledged #LLM serving solution for production πŸ’₯ Find out how in our blog: https://t.co/RNL8pSwhKc https://t.co/tJAisXRd1e

Learn the inner workings of Triton, the hardware-agnostic language for GPU programming that powers TorchInductor: https://t.co/A6JTVjdRXW https://t.co/OxnMnnt6m7

PyTorch 2.5.1 has been released with fixes for RPM-based distributions, arm64 distributions, torch.compile crashes, MPS crashes, and several attention-related regressions observed in 2.5.0. For more info, please refer to the release notes: https://t.co/MsJXUrE3k5 https://t.co/FI42oIdTsL

PyTorch teams at @Arm and @Meta joined forces to optimize #AI performance through the ExecuTorch framework, which is now available in Beta πŸ”₯ See how you can get started today: https://t.co/UUrv4i7nPt https://t.co/LuVo2YNE3s

Develop intuition for what exactly is going on inside your GPU πŸ’‘ Join us for our next #PyTorch Expert Exchange Webinar on November 13th at 10am PT — How does batching work on modern GPUs? — with AI researcher @finbarrtimbers. Watch the livestream at: https://t.co/CY2r9P8zMz https://t.co/Va6UodCXov

Introducing ExecuTorch Beta: faster on-device LLM support with stable APIs and broader partner coverage. https://t.co/hvAlfr8UBk #OnDeviceAI #Edge #PyTorch #LLMs #ODLLM https://t.co/yLnKu7itUY

We are happy to announce the stable 1.0 release of TorchRec and FBGEMM. TorchRec is the PyTorch-native recommendation systems library, powered by FBGEMM’s (Facebook GEneral Matrix Multiplication) efficient, low-level kernels. Check out the release here: https://t.co/E9zcqTqhyS https://t.co/iExmX1Ca2Z

Don’t miss @PyTorch Foundation Executive Director @matthew_d_white at TEDAI San Francisco this week as part of the panel: Industry Experts in Conversation with the #AI for Good Hackathon Winners @TEDAI2024 #TEDAI2024 #Hackathon #AIForGood https://t.co/k6ZZKXwhME

Wondering what's new in the recent #PyTorch 2.5 release? Do you have questions? Join us for a live Q&A with PyTorch Core Maintainer Alban Desmaison of @Meta. Bring your questions! Monday, October 21st at 11 AM PT https://t.co/M4XTBl2LJI https://t.co/8GLIFNRPJ8

Join us for our live #PyTorch Expert Exchange webinar, starting soon, on DistServe: disaggregating prefill and decoding for goodput-optimized LLM inference, with @haozhangml, Asst. Prof. at @HDSIUCSD & @ucsd_cse https://t.co/cXjcXHGUpL

Learn about faster CPU performance for #PyTorch on Windows from @Intel πŸš€ in our latest blog: https://t.co/EY4GgLb6lO https://t.co/e87Pcf8fDH

Join us for our next #PyTorch Expert Exchange Webinar on Wednesday, October 16th at 4 PM PT — DistServe: disaggregating prefill and decoding for goodput-optimized LLM inference, with @haozhangml, Asst. Prof. at @HDSIUCSD & @ucsd_cse. Tune in at: https://t.co/LZEZ1BORod https://t.co/gXQvEL0pZ5
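The core idea behind the DistServe talk above is that LLM serving has two very different phases: prefill (one compute-heavy pass over the whole prompt that builds the KV cache) and decode (many cheap, latency-sensitive single-token steps that extend it). DistServe runs them on separate GPU pools so batched prefills don't stall decoding. A toy sketch of the two phases — the "model" here is a dummy function and all names are hypothetical, not the DistServe implementation:

```python
# Toy sketch of prefill vs. decode (hypothetical stand-in model):
# prefill ingests the whole prompt once to build a KV cache, then
# decode extends the cache one token at a time.

def prefill(prompt_tokens):
    """Process the full prompt in one pass; return the KV 'cache'."""
    return list(prompt_tokens)          # stand-in for real KV entries

def decode_step(kv_cache):
    """Generate one token from the cache, then append it."""
    next_token = sum(kv_cache) % 50257  # dummy 'model'; vocab size assumed
    kv_cache.append(next_token)
    return next_token

# In a disaggregated deployment these two phases would run on
# *different* workers, so long prefills don't inflate decode latency.
cache = prefill([101, 2009, 2003])
generated = [decode_step(cache) for _ in range(3)]
print(generated)
```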

Live #PyTorch Expert Exchange Webinar starts soon. Join Guangxuan Xiao for a talk and Q&A on Efficient Streaming Language Models with Attention Sinks. Participate at: https://t.co/ATUNHpjIFE

🌟 Highlights from PyTorch Conference 2024 πŸ“Ή Watch the video: https://t.co/y2NoQu8BIn Amazing talks, powerful collaborations, and a welcoming community πŸ™Œ #PyTorchConf What was your favorite part? Share below!

We are pleased to announce the first-ever Chair and Vice Chair of the #PyTorch Foundation’s Technical Advisory Council (TAC) πŸŽ‰ Congrats to Chair - @lantiga & Vice Chair - Jiong Gong. Learn more in our blog: https://t.co/NnQt5WnbJJ https://t.co/srJh49kbMA

Join us next Friday, October 11th at 10 AM PT for our next LIVE PyTorch Expert Exchange Webinar on Efficient Streaming Language Models with Attention Sinks w/ @Guangxuan_Xiao, @MITEECS πŸŽ™ Tune in at: https://t.co/6Kipa4JBZD #LLMs https://t.co/skxfRtIobs
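The attention-sinks idea behind the streaming-LM webinars above: models attend heavily to the first few tokens, so a KV cache that keeps those "sink" tokens plus a sliding window of recent tokens stays bounded without collapsing generation quality. A minimal sketch of just the cache-eviction policy — function name and the sink/window sizes are illustrative assumptions, not the paper's code:

```python
# Toy sketch of an attention-sink eviction policy: retain the first
# few "sink" positions plus a sliding window of the most recent
# positions, evicting everything in between.

def evict(kv_positions, num_sinks=4, window=8):
    """Return the KV-cache positions retained after eviction."""
    if len(kv_positions) <= num_sinks + window:
        return list(kv_positions)
    return list(kv_positions[:num_sinks]) + list(kv_positions[-window:])

positions = list(range(20))   # 20 cached token positions
kept = evict(positions)
print(kept)                   # sinks 0-3 plus the last 8 positions
```

The cache size is thus capped at `num_sinks + window` entries no matter how long the stream runs, which is what makes infinite-length streaming feasible.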