Conspiracies are consequential and social, yet online conspiracy groups consisting of individuals (and bots) seeking to explain events or a system have been neglected in sociology. We extract conspiracy talk about the COVID-19 pandemic on Twitter and use the biterm topic model (BTM) to provide a descriptive baseline for the discursive and social structure of online conspiracy groups. We find that individuals enter these communities through a gateway conspiracy theory before proceeding to extreme theories, and that humans adopt more diverse conspiracy theories than do bots. Event-history analyses show that individuals tweet new conspiracy theories, and tweet inconsistent theories simultaneously, when they face a threat posed by a rising COVID-19 case rate and receive attention from others via retweets. By contrast, bots are less responsive to rising case rates but are more consistent, as they mainly tweet about how COVID-19 was deliberately created by sinister agents. These findings suggest that human beings are bricoleurs who use conspiracy theories to make sense of COVID-19, whereas bots are designed to create moral panic. In short, conspiracy talk by individuals is defensive in nature, whereas bots are on the offensive.
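The abstract names the biterm topic model (BTM) but does not detail the pipeline. As background, the step that distinguishes BTM from document-level models such as LDA is that topics are inferred over *biterms*, unordered word pairs co-occurring in a short text, pooled across the whole corpus. A minimal stdlib sketch of that extraction step, with toy tweets and a window size that are purely illustrative assumptions (not the authors' actual data or settings):

```python
from itertools import combinations

def biterms(tokens, window=15):
    """Return the set of unordered word pairs (biterms) in a short text.

    BTM models topics over biterms pooled across the corpus, which
    mitigates the sparsity of per-document word counts in short texts
    such as tweets. The window size here is a hypothetical choice.
    """
    pairs = set()
    for i, j in combinations(range(len(tokens)), 2):
        if j - i <= window and tokens[i] != tokens[j]:
            # Sort so ("lab", "covid") and ("covid", "lab") collapse
            # into one unordered biterm.
            pairs.add(tuple(sorted((tokens[i], tokens[j]))))
    return pairs

# Hypothetical toy tweets standing in for preprocessed conspiracy talk.
tweets = [
    "covid lab leak theory",
    "vaccine microchip conspiracy",
]
corpus_biterms = [biterms(t.split()) for t in tweets]
```

In full BTM, topic assignments are then sampled per biterm (e.g. via collapsed Gibbs sampling) rather than per document; this sketch covers only the extraction stage.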