Looking at static pictures of people running, versus pictures of people standing still, "evokes a delayed response in an area that overlaps with motion-sensitive cortex (hMT+)". Past studies have found a similar response for images depicting a falling cup versus a cup resting on a table.
The paper discusses top-down influence from the temporal lobe as a possible cause of the response. How might this kind of brain activity influence our ability to recognize objects in scenes? Is this evidence that a distributed cortical representation of a moving object is being activated?
And should the field of AI be trying to replicate a similar top-down influence in next-generation object recognition algorithms?
The abstract, from the Journal of Cognitive Neuroscience, is available here.