Is anyone surprised? By definition, LLMs can't be 100% correct, and hallucination poses a significant challenge to generating accurate and reliable responses. "ChatGPT Answers Programming Questions Incorrectly 52% of the Time: Study." To make matters worse, programmers in the study would often overlook the misinformation. https://gizmodo.com/chatgpt-answers-wrong-programming-openai-52-study-1851499417
@nixCraft Finally some numbers to back up what I've been feeling all along. I genuinely find very little use for LLMs in my work as a software dev; they often spit out complete bullshit, and/or send my own unaltered code back at me with "i made these changes".