Lost Midas
@lostmidas
As a product designer, I've integrated LLMs into my workflow to amplify productivity. They're like having a team of virtual assistants for tasks like ideating features, writing technical specifications, & refining go-to-market strategies. Here’s how I make it work.

One of my key tools is OpenRouter, which lets me leverage multiple LLMs, each with its own strengths (& weaknesses):
- ChatGPT: ideal for long-form, detailed brainstorming
- Grok: great for short, focused, to-the-point responses
- Claude: perfect for technical or complex questions

Each model serves a unique role in my workflow.

The real power of this setup lies in tailored system prompts. I use OpenRouter to create dedicated chat environments for specific tasks, like:
- Brainstorming product features & user flows
- Writing technical or user-facing documentation
- Ideating go-to-market strategies

Segmentation is key to focused outputs.

One trick: I maintain a "system prompt chat" specifically for crafting & testing system prompts. By iteratively refining these prompts, I ensure that my task-specific chats deliver accurate, high-quality results.
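In practice, a dedicated chat environment is just a model ID paired with a pinned system prompt. A minimal sketch of that setup, assuming OpenRouter's OpenAI-compatible chat format (model IDs & prompt wording are illustrative, not an exact setup; check openrouter.ai for current model names):

```python
# Task-specific "chat environments": each task gets its own model + system prompt.
# Model IDs and prompt text are illustrative placeholders.

SYSTEM_PROMPTS = {
    "brainstorm": "You are a senior product designer. Think broadly; propose divergent ideas.",
    "docs": "You write precise, unambiguous technical documentation.",
    "gtm": "You are a go-to-market strategist. Be short and to the point.",
}

MODELS = {
    "brainstorm": "openai/gpt-4o",          # long-form, detailed brainstorming
    "docs": "anthropic/claude-3.5-sonnet",  # technical or complex questions
    "gtm": "x-ai/grok-2",                   # short, focused responses
}

def build_request(task: str, user_prompt: str) -> dict:
    """Assemble an OpenAI-style chat payload for one dedicated task chat.
    POST it to the OpenRouter chat completions endpoint with your API key."""
    return {
        "model": MODELS[task],
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPTS[task]},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_request("docs", "Draft the spec for our new onboarding flow.")
```

Keeping the system prompt fixed per task is what makes each chat behave like a specialist rather than a generalist.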

Using LLMs effectively is an iterative process. I rarely accept the first response at face value. Instead, here’s how I refine:
- Provide sufficient context upfront
- Ask follow-up questions to clarify or expand
- Experiment through trial & error, tweaking the prompt until the response aligns with my goals

Context is everything. Most poor outputs stem from insufficient starting information. For example, if I’m writing a feature specification, I'll include:
- The problem we’re solving
- Goals of the feature
- User personas
- Any constraints or edge cases

This improves the output dramatically. That said, overloading LLMs with too much info can lead to hallucinations. The sweet spot is to balance detail with focus: provide essential context while keeping the task clear & concise.
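That context checklist can be templated so every spec prompt front-loads the same sections. A simplified sketch (the helper name & wording are hypothetical):

```python
# Build a context-rich feature-spec prompt from the four context sections above.
# spec_prompt is an illustrative helper, not a library function.

def spec_prompt(problem: str, goals: list[str],
                personas: list[str], constraints: list[str]) -> str:
    """Assemble essential context into clearly labeled sections."""
    def bullets(items: list[str]) -> str:
        return "\n".join(f"- {item}" for item in items)

    return "\n\n".join([
        "Write a feature specification.",
        f"Problem we're solving:\n{problem}",
        f"Goals of the feature:\n{bullets(goals)}",
        f"User personas:\n{bullets(personas)}",
        f"Constraints & edge cases:\n{bullets(constraints)}",
    ])

prompt = spec_prompt(
    problem="New users drop off before finishing onboarding.",
    goals=["Reduce time to first value", "Halve onboarding drop-off"],
    personas=["First-time user on mobile", "Returning trial user"],
    constraints=["No mandatory account creation", "Must work offline"],
)
```

Templating keeps the context essential & consistent, which is exactly the detail-vs-focus balance described above.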

I’ve also learned that structured prompts dramatically improve the quality of responses. Two frameworks I often use (among others):
- TAG: Task, Action, Goal
- RTF: Role, Task, Format

These frameworks force clarity & ensure precise, actionable outputs.

Here’s how I might use the TAG framework for improving user onboarding:
- T (Task): Identify pain points in the current onboarding process for new users
- A (Action): Act as a first-time user unfamiliar with the product
- G (Goal): Highlight areas of confusion & suggest ways to make the process faster & more intuitive

By structuring prompts this way, I can pinpoint friction points & get actionable recommendations for enhancing the user experience.

For planning a usability test, I might use the RTF framework like this:
- R (Role): You’re a UX researcher preparing a usability test for a new feature
- T (Task): Create usability test scenarios that reveal potential friction points for first-time users
- F (Format): Organize responses by scenario objective, user actions to test, & key metrics to evaluate success

This approach provides a structured & actionable plan for uncovering UX improvements.