Paper in Journal of Vision
  • Cavanagh, P., Holcombe, A. O., & Chou, W-L. (2008). Mobile computation: Spatiotemporal integration of the properties of objects in motion. Journal of Vision, 8(12), 1–23.

    ABSTRACT
    We demonstrate that, as an object moves, color and motion signals from successive, widely spaced locations are integrated, but letter and digit shapes are not. The features that integrate as an object moves match those that integrate when the eyes move but the object is stationary (spatiotopic integration). We suggest that this integration is mediated by large receptive fields gated by attention and that it occurs for surface features (motion and color), which can be summed without precise alignment, but not for shape features (letters or digits), which require such alignment. Rapidly alternating pairs of colors and motions were presented at several locations around a circle centered at fixation. The same two stimuli alternated at each location, with the phase of the alternation reversing from one location to the next. When observers attended to only one location, the stimuli alternated both in retinal coordinates and in the attended stream: feature identification was poor. When the observer's attention shifted around the circle in synchrony with the alternation, the stimuli still alternated at each location in retinal coordinates, but now attention always selected the same color and motion, with the stimulus appearing as a single unchanging object stepping across the locations. The maximum presentation rate at which the color and motion could be reported was twice that for stationary attention, suggesting (as control experiments confirmed) object-based integration of these features. In contrast, the identification of a letter or digit alternating with a mask showed no advantage for moving attention, even though moving attention accessed (within the limits of precision for attentional selection) only the target and never the mask.

    The masking apparently leaves partial information that cannot be integrated across locations, and we speculate that for spatially defined patterns like letters, integration across large shifts in location may be limited by problems in aligning successive samples. Our results also suggest that as attention moves, the selection of any given location (dwell time) can be as short as 50 ms, far shorter than the typical dwell time for stationary attention. Moving attention can therefore sample a brief instant of a rapidly changing stream if it passes quickly through, giving access to events that are otherwise not seen.