I study a dynamic model of trial-and-error search in which agents do not have complete knowledge of how choices are mapped into outcomes. Agents learn about the mapping by observing the choices of earlier agents and the outcomes that are realized. The key novelty is that the mapping is represented as the realized path of a Brownian motion. For this environment, I characterize optimal behavior in each period as well as the trajectory of experimentation and learning over time. Applied to new product development, the model reproduces features of the data associated with the well-known Product Life Cycle. (JEL D81, D83, D92, L26)
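One way to picture the environment is a small simulation in which the unknown mapping from choices to outcomes is the realized path of a Brownian motion on a discrete grid, and each agent observes the full history of earlier choices and outcomes before trying an action. This is only an illustrative sketch: the function names (`brownian_path`, `trial_and_error`) and the myopic local-search rule used here are assumptions for exposition, not the optimal behavior the paper characterizes.

```python
import random

def brownian_path(n_points, step_sd=1.0, seed=0):
    # Realized path of a Brownian motion on a discrete grid of actions:
    # each outcome is the previous one plus an independent Gaussian step.
    rng = random.Random(seed)
    path = [0.0]
    for _ in range(n_points - 1):
        path.append(path[-1] + rng.gauss(0.0, step_sd))
    return path

def trial_and_error(path, n_agents=50, seed=1):
    # Each agent sees all earlier (choice, outcome) pairs and, as a crude
    # stand-in for optimal search, experiments one step to either side of
    # the best choice observed so far.
    rng = random.Random(seed)
    history = [(0, path[0])]  # the first agent starts at an arbitrary action
    for _ in range(n_agents - 1):
        best_choice, _ = max(history, key=lambda h: h[1])
        trial = min(len(path) - 1, max(0, best_choice + rng.choice([-1, 1])))
        history.append((trial, path[trial]))
    return history
```

Running `trial_and_error(brownian_path(100))` yields a sequence of choices that clusters near the best outcome found to date, capturing in miniature how learning from predecessors shapes experimentation over time.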