Russian disinformation activities imitate divisive U.S. political discourse within a polarized social media ecosystem. As part of a multipronged response, U.S. citizens have been urged to heighten their personal vigilance and to flag foreign-made disinformation by scrutinizing its content. However, by applying Taylor's concept of “imitation (in)security” to a set of Kremlin-linked Internet Research Agency (IRA) Facebook and Instagram advertisements, this article explains why content-centered approaches to combating disinformation need to be reimagined. Building upon imitation (in)security, we propose that the strength of the IRA disinformation campaign was not its ability to foist falsehoods upon unsuspecting Americans but, rather, its uncanny imitation of prevalent themes, images, and arguments within American civic life. Our analysis of IRA-generated advertisements targeting U.S. military veterans demonstrates how IRA “trolls” imitated American communication patterns to amplify existing positions within a deluge of messages marked by polysemy. Our analysis suggests that readers should be less concerned by such Russian-made imitations than much of the breathless post-2016 election coverage implied, for the traction of such disinformation hinges on domestic crises and injustices that long predate Russian interference. Pointing to foreign-made social media content stokes a sense of threat and crisis—the essence of national insecurity and a main objective of the IRA's efforts—yet our actual security weaknesses are homemade.