I’m sorry, but I’m frustrated by the blatant misuse of AI by my students and colleagues alike. It’s so obvious when they don’t understand what they’ve written.
It’s not just using an LLM to assist. It’s generating the entire source with an LLM, running it once to check whether it seems to work (whether it “vibes” right), and then publishing it without even trying to read through and understand the code.
Edit: just to clarify, the odds are that the generated code performs poorly, doesn’t handle even the simplest edge cases, and has security problems.