It has gone way beyond that. Where I work, we have access to GitHub Copilot's experimental SWE agent. It's ridiculously smart at looking at your current codebase and implementing a solution. The other day, I used it to build a page in our web app in 3 hours with prompts and minimal code changes of my own. If I had done it myself, it would have taken me at least a couple of days. But the SWE agent looked at the tech stack, patterns, structure, etc. of our web app and implemented the page based on that. It asked if it should add unit tests for the new files and update the existing ones. Out of curiosity, I said yes. It kept iterating and running the tests until it had 100% coverage. To say I was impressed would be an understatement. To make things even more interesting, it said it noticed that we use Storybook testing, so it went ahead and added a couple of Storybook tests as well.
As much as I hate the concept, it works. However:
It only works for generalized programming (e.g. "write a Python script that parses CSV files"). For any specialized field this would NOT work (e.g. "write a DPDK program that identifies RoCEv2 packets and rewrites the IP address").
It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use Python's csv library instead of string.split; see the sketch below). This isn't a problem now, since even programmers fresh out of college generally know what they are doing.
If companies try to use this to avoid hiring or training skilled programmers, they will have a very bad time in the future when the skilled talent pool runs dry and nobody knows how to tell correctly written code from incorrectly written code.
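To make the second point concrete, here's a rough sketch (with made-up sample data) of the kind of thing the supervising human needs to be able to catch: a naive comma split silently mangles quoted fields, while the csv module handles them.

    import csv
    import io

    # Made-up sample: the "notes" field contains a comma inside quotes.
    sample = 'id,name,notes\n1,Alice,"likes csv, hates regex"\n'

    # Naive approach an agent might pick if not told otherwise:
    # splitting on commas breaks the quoted field into two columns.
    naive = [line.split(",") for line in sample.strip().splitlines()]
    print(naive[1])   # ['1', 'Alice', '"likes csv', ' hates regex"'] -> 4 columns, wrong

    # Python's csv module respects the quoting.
    proper = list(csv.reader(io.StringIO(sample)))
    print(proper[1])  # ['1', 'Alice', 'likes csv, hates regex'] -> 3 columns, right

If you can't explain why the second version is right, you can't tell the agent which one to use, which is exactly the point.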