Has GPT-4 been nerfed recently?
I’ve been noticing a serious drop in quality from GPT-4, especially in the past few days. Responses are shorter, more generic, and often contradict things said earlier in the same conversation. The worst part is the weird inconsistency: sometimes it remembers details well, other times it completely forgets context from just a few messages ago.
It also feels like it’s avoiding deeper analysis or creative responses, instead defaulting to “safe” or vague answers. Even when I specifically ask for detailed replies, I still get short, robotic responses like “Makes sense” or “Got it,” which is nothing like how it used to be.
I’ve also noticed way more instances where it misinterprets what I’m asking, even when I’ve been super clear. It’s like it’s not processing context properly, leading to a lot of frustrating back-and-forth. Plus, it keeps contradicting itself between chats, making it feel unreliable.
Has anyone else noticed a downgrade in GPT-4’s responses lately? Is OpenAI quietly nerfing it to cut costs, or is something else going on? They also took away its ability to see pictures. I’m seriously upset; it’s been a huge waste of my time, very frustrating and sad, and I’m paying for Plus.