Crowdsourced innovation platforms that enable organizations to outsource ideation to parties external to the firm are proliferating. In many cases, the platforms use open contests that allow the free exchange of ideas with the goal of improving the ideation process. In open contests, participants (“solvers”) observe the ideas of others as well as the feedback received from the contest sponsor (“seeker”). The open nature of such contests generates incentives for free-riding and copying by opportunistic solvers. This creates a risk that the platform will unravel as good solvers strategically withdraw, expecting their ideas to be copied. To investigate agent behavior in such a setting, we collect micro-data on design contests, submissions, and participants from the inception of Crowdspring, an online crowdsourced open ideation platform. These data include the original image files submitted to the contests, which enables us to compare how similar one image is to another using image comparison algorithms from the computer vision literature. We document that solvers who enter contests later tend to imitate better-rated designs submitted prior to their entry, thereby generating significant risk to early entrants that their ideas will be appropriated by later entrants without recompense. As a countervailing force, we document that seekers tend to reward original early designs and avoid selecting as winners designs that appear to plagiarize or free-ride. Further, in repeated interactions, solvers appear to adjust their behavior in response to this reward/punishment policy, suggesting that agent naivety is not the key driver of imitation. These patterns suggest that market behavior on such platforms may have a self-policing component that disincentivizes excessive imitation, rewards originality, and prevents unraveling.
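The abstract does not specify which image comparison algorithm is used; one common family of such algorithms is perceptual hashing. The sketch below is a minimal, hypothetical average-hash illustration (not the paper's actual method): each grayscale image is downsampled to an 8×8 grid, each cell is thresholded against the overall mean brightness to produce a 64-bit fingerprint, and similarity is one minus the normalized Hamming distance between fingerprints.

```python
def average_hash(pixels, hash_size=8):
    """Compute a perceptual average hash of a grayscale image.

    `pixels` is a 2D list of brightness values (0-255). The image is
    downsampled to hash_size x hash_size by block-averaging, then each
    cell is thresholded against the mean of all cells, yielding a list
    of hash_size**2 bits.
    """
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            # Average the block of source pixels mapped to cell (i, j).
            r0, r1 = i * h // hash_size, (i + 1) * h // hash_size
            c0, c1 = j * w // hash_size, (j + 1) * w // hash_size
            block = [pixels[r][c]
                     for r in range(r0, max(r1, r0 + 1))
                     for c in range(c0, max(c1, c0 + 1))]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]


def similarity(hash_a, hash_b):
    """1 - normalized Hamming distance; 1.0 means identical hashes."""
    matches = sum(a == b for a, b in zip(hash_a, hash_b))
    return matches / len(hash_a)
```

An exact copy of a design hashes identically (similarity 1.0), while an unrelated image yields a lower score; a researcher could flag later submissions whose similarity to an earlier, well-rated design exceeds some threshold as candidate imitations.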