How quickly could an AI go from harmless to existentially dangerous?

If the AI system were deceptively aligned, or had been operating in stealth mode while putting the pieces in place for a takeover, the transition could quite possibly happen within hours. We might get more warning if the systems involved are weaker, if the AGI does not consider itself to be threatened by us at all, or if a complex ecosystem of AI systems is built up over time and we lose control only gradually.

Paul Christiano has written a story of alignment failure that depicts a relatively fast transition.