this post was submitted on 12 Apr 2025
1243 points (98.5% liked)
Programmer Humor
you are viewing a single comment's thread
It can, but when things get even slightly more complex, being a fast typist is usually more efficient and results in better code.
I guess it really depends on your aspirations for code quality and on complexity (yes, it's good at generating boilerplate). For a one-time-use script I don't care much about, one that's quickly written from a prompt, I'll use it.
Working on a big codebase, it doesn't even occur to me to ask an AI; you just can't feed it enough context for it to generate really meaningful code...
I actually don't write code professionally anymore, so I'm going on what my friend says: according to him, he uses ChatGPT every day to write code and it's a big help. Once he told it to refactor some code and it used a really novel approach he wouldn't have thought of. He showed it to another dev, who said the same thing: huh, that's a weird way to do it, but it works. In general, though, you can't just tell an AI "create an accounting system" or whatever and expect coherent working code without thoroughly vetting it.
I'll often use it too. But when the situation is complex and needs a lot of context/knowledge of the codebase (which, at least for me, is often the case), it still seems worse/slower than just coding it yourself; it doesn't grasp the details. That said, I like how quickly I can come up with quick-and-dirty scripts (in Rust, for the lulz and the speed/power).
That's not a hard limit; for example, Google's models can handle a 2-million-token context window.
https://ai.google.dev/gemini-api/docs/long-context
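To make the "can't feed enough context" question concrete, here's a minimal sketch that estimates whether naively pasting a whole source tree into a prompt would fit in a given context window. It assumes the common (and rough) ~4-characters-per-token rule of thumb; real tokenizers vary, and the file-extension filter is just illustrative:

```python
import os

CHARS_PER_TOKEN = 4          # rough rule of thumb, not an exact tokenizer
CONTEXT_WINDOW = 2_000_000   # the 2M-token window mentioned above

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return len(text) // CHARS_PER_TOKEN

def codebase_fits(root: str, window: int = CONTEXT_WINDOW) -> bool:
    """Walk a source tree and check whether a naive 'paste everything'
    prompt would fit in the model's context window."""
    total_chars = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            # Only count source files (illustrative extension list)
            if name.endswith((".py", ".rs", ".c", ".h")):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN <= window
```

Even with a 2M-token window, a large codebase (hundreds of MB of source) blows past it, which is why tooling typically retrieves only the relevant files rather than pasting everything.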
Ugh, I tried the Gemini model and I'm not too happy with the code it came up with; there are a lot of intricacies and concepts the model doesn't grasp well enough, IMO. That said, I'll keep reevaluating this continuously; converting large chunks of code often works OK...
Well, it wasn't a comment on the quality of the model, just that the context limitation has already been largely overcome by one company, and others will probably follow (and improve on it further) over time, especially as "AI coding" gets more marketable.
That said, was it the new Gemini 2.5 Pro you tried, or the old one? I haven't tried the new model myself, but I've heard good things about it.