BSC of 18 s movie time segments after hyperalignment based on category perception experiment data was markedly worse than BSC after hyperalignment based on movie data (17.6% ± 1.3% versus 65.8% ± 2.7%
for Princeton subjects; 28.3% ± 2.8% versus 74.9% ± 4.1% for Dartmouth subjects; p < 0.001 in both cases; Figure 4). Thus, hyperalignment of data using a set of stimuli that is less diverse than the movie is effective, but the resultant common space has validity that is limited to a small subspace of the representational space in VT cortex. We conducted further analyses to investigate the properties of responses to the movie that afford general validity across a wide range of stimuli. We tested BSC of single time points in the movie and in the face and object perception experiment, in which we carefully matched the probability of correct classification for the two experiments. Single TRs in the movie experiment could be classified with accuracies that were more than twice those for single TRs in the category perception experiment (74.5% ± 2.5% versus 32.5% ± 1.8%; chance = 14%; Figure S4A). This result suggests that
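The single-TR comparison above amounts to between-subject classification of individual response patterns, with chance matched at one in seven (≈14%) for both experiments. The exact classifier is not specified in this excerpt; a minimal correlation-based nearest-neighbor sketch, with all array shapes and the toy data assumed for illustration, would look like:

```python
import numpy as np

def classify_single_trs(test_patterns, train_patterns):
    """Correlation-based nearest-neighbor classification of single TRs.

    Both arrays are (n_items, n_features) response patterns in the common
    model space; row i of each corresponds to the same time point or
    category. Returns the fraction of test rows whose most-correlated
    training row is the matching one."""
    def zscore_rows(x):
        # z-score each pattern so a scaled dot product is Pearson correlation
        x = x - x.mean(axis=1, keepdims=True)
        return x / x.std(axis=1, keepdims=True)
    a = zscore_rows(test_patterns)
    b = zscore_rows(train_patterns)
    sim = a @ b.T / a.shape[1]          # (n_test, n_train) correlation matrix
    predicted = sim.argmax(axis=1)      # nearest training pattern per test TR
    return float((predicted == np.arange(len(a))).mean())

# toy example: 7 alternatives per classification => chance = 1/7 ≈ 14%
rng = np.random.default_rng(0)
shared = rng.normal(size=(7, 50))                       # hypothetical signal
acc = classify_single_trs(shared + 0.5 * rng.normal(size=(7, 50)),
                          shared + 0.5 * rng.normal(size=(7, 50)))
```

This is only a sketch of the general technique, not the authors' implementation.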
VT responses evoked by the cluttered, complex, and dynamic images in the movie are more distinctive than are responses evoked by still images of single faces or objects. We also tested whether the general validity of the model space reflects responses to stimuli
that are in both the movie and the category perception experiments or reflects stimulus properties that are not specific to these stimuli. We recomputed the common model after removing all movie time points in which a monkey, a dog, an insect, or a bird appeared. We also removed time points for the 30 s that followed such episodes to factor out effects of delayed hemodynamic responses. BSC of the face and object categories and of animal species, including distinctions among monkeys, dogs, insects, and birds, was not affected by removing these time points from the movie data (65.0% ± 1.9% versus 64.8% ± 2.3% for faces and objects; 67.1% ± 3.0% versus 67.6% ± 3.1% for animal species; Figure S4B). This result suggests that the movie-based hyperalignment parameters that afford generalization to these stimuli are not stimulus specific but, rather, reflect stimulus properties that are more abstract and of more general utility for object representations. The dimensions that define the common model space are selected as those that most efficiently account for variance in patterns of response to the movie.
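The censoring step above (dropping flagged time points plus the 30 s that follow each, to absorb the delayed hemodynamic response) can be sketched as a boolean mask over TRs. The TR length of 2.5 s is an assumption for illustration, not stated in this excerpt:

```python
import numpy as np

def censor_time_points(flagged, n_trs, tr_sec=2.5, lag_sec=30.0):
    """Return a boolean mask of TRs to keep.

    `flagged` lists TR indices in which a target category appeared;
    each flagged TR and the `lag_sec` window after it are removed to
    factor out delayed hemodynamic responses. `tr_sec` (assumed here)
    converts the lag from seconds to TRs."""
    lag_trs = int(np.ceil(lag_sec / tr_sec))     # 30 s / 2.5 s = 12 TRs
    keep = np.ones(n_trs, dtype=bool)
    for t in flagged:
        # drop the flagged TR itself plus the following lag window
        keep[t : min(t + lag_trs + 1, n_trs)] = False
    return keep

# e.g., a monkey appears at TRs 10 and 11: TRs 10-23 are censored
mask = censor_time_points([10, 11], n_trs=40)
```

The surviving time points (`data[mask]`) would then feed the recomputed common model; the mask itself is the whole trick, so the sketch stops there.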