Discussion about this post

Konnor Andersen:

I think this category will be larger than many realize. Say human coders make a security mistake 1% of the time and produce 1,000 units of code, while coding agents, being better than humans, make a mistake only 0.1% of the time but produce 1,000x the output. Then the number of security mistakes increases 100x! I don't think regulators and compliance teams will let the coding-agent company also be responsible for the security and compliance of that code, so I'm not sure Anthropic or OpenAI will dominate quite as much as an initial reading suggests. I could be wrong.
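The comment's arithmetic can be checked with a quick sketch. All the numbers here are the comment's illustrative assumptions (a 1% human error rate, 1,000 units of output, a 10x better agent with 1,000x the output), not real data:

```python
# Back-of-the-envelope check of the comment's claim.
# All figures are the comment's hypothetical assumptions.
human_error_rate = 0.01              # humans: security mistake 1% of the time
human_output = 1_000                 # units of code produced by a human

agent_error_rate = 0.001             # agents: 10x fewer mistakes per unit
agent_output = human_output * 1_000  # but 1,000x the output volume

human_mistakes = human_error_rate * human_output  # expected mistakes: 10
agent_mistakes = agent_error_rate * agent_output  # expected mistakes: 1,000

ratio = agent_mistakes / human_mistakes
print(ratio)  # 100.0
```

The 10x improvement in per-unit quality is swamped by the 1,000x increase in volume, so total expected mistakes still grow by a factor of 100.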

