๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
OpenAI DevDay 2024 Updates

o1 Updates:
> The keynote provided an overview of o1 with examples of its applications.
> The rate limit for o1 was doubled to 10,000 RPM, aligning it with GPT-4's limit.

Realtime API:
🔗: https://openai.com/index/introducing-the-realtime-api/
> Introduced a new real-time API over WebSockets, enabling voice input and output with AI models. The API supports text, audio, and function calls in JSON format.
> Demonstrations included AI-powered voice interactions, such as travel-agent and language-learning apps, enhancing conversational experiences.
> The API is currently in public beta, with separate pricing for text and audio input/output.

Model Customization:
🔗: https://openai.com/index/introducing-vision-to-the-fine-tuning-api/
> Announced fine-tuning support for vision models, enabling image-based fine-tuning.
> Suggested use cases include product recommendations, medical imaging, and traffic-sign detection.
1 reply
1 recast
9 reactions
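The Realtime API exchanges JSON events over the WebSocket connection. A minimal sketch of building a session-configuration event, assuming the `session.update` event name and field layout from the public beta docs; the instructions string and voice choice are illustrative:

```python
import json

def build_session_update(instructions: str, voice: str = "alloy") -> str:
    """Build the JSON for a session.update event enabling text + audio.

    Assumes the Realtime API beta event shape; field names here are
    taken from the announcement docs, not verified against a live session.
    """
    event = {
        "type": "session.update",
        "session": {
            "modalities": ["text", "audio"],
            "instructions": instructions,
            "voice": voice,
            "input_audio_format": "pcm16",
            "output_audio_format": "pcm16",
        },
    }
    return json.dumps(event)

# Hypothetical travel-agent session, echoing the DevDay demo.
payload = build_session_update("You are a friendly travel agent.")
print(payload)
```

In practice this string would be sent as the first message after connecting to the beta WebSocket endpoint (`wss://api.openai.com/v1/realtime`).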

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Pricing & Caching:
🔗: https://openai.com/index/api-prompt-caching/
> The cost per token has been reduced by 99% over the past two years.
> Introduced automatic prompt caching, which applies a 50% discount to tokens previously processed by the model.

Model Distillation:
🔗: https://openai.com/index/api-model-distillation/
> Introduced tools for model distillation, allowing smaller models to learn from the outputs of larger ones.
> Launched stored completions, enabling interactions to be stored permanently for fine-tuning and distillation.
> Added evaluation tools to help developers refine model performance.
1 reply
0 recast
5 reactions
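The distillation workflow above amounts to collecting a large model's completions and replaying them as training targets for a smaller model. A sketch of turning stored teacher completions into a fine-tuning file, assuming OpenAI's chat-format JSONL for fine-tuning data; the prompts and system message are made up:

```python
import json

def to_distillation_jsonl(pairs, system_prompt):
    """Convert (user_prompt, teacher_output) pairs into fine-tuning JSONL.

    Each line is one chat-format training record; the teacher model's
    output becomes the assistant turn the student learns to imitate.
    """
    lines = []
    for prompt, teacher_output in pairs:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": teacher_output},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Hypothetical example: distilling a sentiment classifier.
jsonl = to_distillation_jsonl(
    [("Classify: 'great product'", "positive")],
    "You are a sentiment classifier.",
)
print(jsonl)
```

The resulting file would be uploaded for a fine-tuning job on the smaller student model.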

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Insights on Model Distillation:
- Doesn't always require human-generated data, but careful curation of the distillation dataset is essential.
- Works best with thousands of examples rather than millions.
- An iterative approach is recommended: start small with hundreds of examples before scaling up.
- Fine-tuning and distillation can make switching between language-model vendors harder, potentially locking users into a specific platform.
- Future applications might mix many small distilled models with a few larger ones.

Structured Outputs:
> Introduced a JSON-schema-based structured output mode that uses "constrained decoding" to guarantee responses match predefined schemas, improving reliability.
> Upcoming sessions focus on structured outputs for reliable applications and on building capable small models.
> Emphasis on new tools to support complex AI applications with structured outputs and efficient model scaling.
1 reply
0 recast
3 reactions
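Structured outputs are requested by attaching a JSON schema to the chat completion call. A sketch of such a request body, assuming the `json_schema` response_format shape from the Structured Outputs docs; the `weather_report` schema and the model snapshot name are illustrative:

```python
import json

# Illustrative schema: the names (weather_report, city, temperature_c)
# are made up for this example.
schema = {
    "name": "weather_report",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "temperature_c": {"type": "number"},
        },
        "required": ["city", "temperature_c"],
        "additionalProperties": False,
    },
}

# Request body as it would be POSTed to the chat completions endpoint;
# with strict=True, constrained decoding forces output to match the schema.
body = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "response_format": {"type": "json_schema", "json_schema": schema},
}
print(json.dumps(body))
```

`strict: true` is what triggers constrained decoding, so the model's reply is guaranteed to parse against the schema rather than merely being encouraged to.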

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
OpenAI Research and Development:
> Upcoming features include support for function calls, system prompts, and structured outputs by the end of the year.
> Stressed the importance of AI model safety and alignment, and the challenges of building reliable agentic use cases.

Related links:
🔗: https://github.com/openai/openai-realtime-console
🔗: https://github.com/openai/whisper/pull/2361
🔗: https://openai.com/devday/directory/
0 reply
0 recast
2 reactions