A treaty that prohibits autonomous fire decisions but allows remotely controlled and “semi-autonomous” weapons presents a more complex set of challenges. If a “semi-autonomous weapon system” can autonomously acquire, track, identify, group, and prioritize targets, and can control their engagement once a “go” signal is given, then conversion to full lethal autonomy could be as simple as throwing a (software) switch. Given continued trends in technology, adding such capabilities to remotely controlled armed vehicles already equipped with sophisticated sensors and general-purpose computers might likewise reduce to a matter of installing new software. And given the potentially high military value of some kinds of fully autonomous weapons, especially those designed to attack major weapon systems (perhaps in swarms), there would be a significant risk that fully autonomous options would be secretly prepared for systems officially declared to be under human control.
However, militarily potent fully autonomous weapon systems would likely require extensive development and testing while operating under fully autonomous control (though perhaps with human supervision). The large-scale activities involved in such programs would be difficult to conceal, especially if they were made clear violations of accepted norms and of a binding treaty.