GPT comes in handy when I’m stuck on implementation details - some bit of C++ syntax, or intricacies of the Textual package - but dropping down from top-level thinking can be jarring. I copy-paste the snippet into the chat, rubberduck my problem, or often just write “fix this”, and that gets the horses moving again.
When others run into LLMs going off on tangents, is it because they’re relying too heavily on the model’s contextual understanding? Prompts like “Create a C++ framework for UI components” cover a vast scope while providing only ambiguous contextual cues.
In essence, LLMs are probabilistic text generators that follow from the material they’re given. Often a clear context, such as a small code snippet, and a tight scope - like “fix this” - suffice.
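To make that concrete, here’s the kind of narrow, self-contained snippet I mean (a made-up example, not from any real project): one small C++ function with a single subtle bug, pasted verbatim with nothing more than “fix this”.

```cpp
#include <string>
#include <vector>

// Hypothetical snippet of the sort I'd paste with a "fix this" prompt:
// the range-for binds each string by value instead of by reference,
// so the trimming below never actually modifies the vector's contents.
void trim_trailing_newlines(std::vector<std::string>& lines) {
    for (auto line : lines) {  // should be `auto& line`
        while (!line.empty() && line.back() == '\n') {
            line.pop_back();
        }
    }
}
```

The model doesn’t need to know anything about the surrounding program; the snippet itself is the whole context, and the fix stays inside it.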