According to CEOs of top AI companies, a large share of code will soon be written by AI. Anthropic CEO Dario Amodei, for example, has suggested 90% of code in 2025 and 100% of code in 2026.
This might well be the case, at least in a pair-programming manner: the code is generated, but a human stays in the loop. The problem I have seen, with this project as an example, is that LLMs are terrible with new technologies and frameworks. That makes sense, since they simply don't have enough training data for them.
With the hype around LLMs replacing or augmenting many engineers, we will most likely stick mainly to old frameworks where LLMs can generate good code quality. But that is obviously a huge security issue, right? Imagine that 90% of new code ships with outdated versions carrying known security issues.
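One way to keep this risk in check is to audit pinned dependencies against security advisories. In practice you would use a real scanner (tools like pip-audit exist for Python), but as a minimal sketch of the idea, here is a check against a hypothetical, hand-written advisory table; the package names and "first fixed" versions below are made up for illustration:

```python
# Hypothetical sketch: flag pinned dependencies older than the first
# patched release listed in a made-up advisory table. A real workflow
# would pull advisories from a vulnerability database instead.
ADVISORIES = {  # package -> first version containing the fix (assumed data)
    "flask": (2, 2, 5),
    "requests": (2, 31, 0),
}

def parse_version(v: str) -> tuple[int, ...]:
    """Parse a simple dotted version string like '2.0.1' into a tuple."""
    return tuple(int(part) for part in v.split("."))

def flag_vulnerable(requirements: list[str]) -> list[str]:
    """Return names of packages pinned below their patched version."""
    flagged = []
    for line in requirements:
        name, _, version = line.partition("==")
        fixed = ADVISORIES.get(name.lower())
        if fixed and parse_version(version) < fixed:
            flagged.append(name)
    return flagged

pins = ["flask==2.0.1", "requests==2.31.0"]
print(flag_vulnerable(pins))  # flask is below 2.2.5, requests is up to date
```

The point is not this toy checker itself but the habit: if LLM-generated code leans on older framework versions, an automated advisory check in CI catches the known issues before they ship.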
So I don't advise against using these tools, but I do believe we have to keep such things in mind.