What's especially worrisome about autonomous weapons?
The problem of autonomous weapons is not directly related to the AI Safety problem, but both fit into the "be careful what you do with AI" category.
In the short term, autonomous weapons would enable worse forms of totalitarianism, because automated security forces will never rebel. This removes the moderating influence of human personnel: convincing machines to do something horrible is easier than convincing humans. Despots need security forces to remain in power, and human security forces betraying a despot is a common way for despots to lose power; that would not happen with robots.
Another consideration is that computer security is hard! Autonomous weapons could be hacked, initially by humans but eventually by an AGI. This would further reduce humanity's chances of surviving the transition to AGI, although access to autonomous weapons is probably not necessary for that transition to go poorly.