At the time of choice, the AI might signal the negative value of the cue (i.e., a punishment prediction), which could drive avoidance behavior. This is in line with theories proposing that brain areas involved in somatic affective representations are causally responsible for making a choice (Jones et al., 2010; Naqvi and Bechara, 2009; Craig, 2003). The flattened punishment-learning curves following preferential DS atrophy in presymptomatic HD patients were specifically captured by a higher choice randomness. Unlike reinforcement magnitude and learning rate, this parameter affects the choice process, not the learning process (see the sketch below).
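To make the distinction concrete, the following minimal sketch assumes a standard Q-learning model with a softmax choice rule; the parameter names and values are illustrative, not those of the model fitted in this study. The temperature (choice randomness) enters only the choice step, whereas the learning rate and reinforcement magnitude enter only the value-update step.

```python
import numpy as np

def softmax_choice(q_values, temperature, rng):
    """Choice step: a higher temperature yields flatter, more random choices."""
    logits = np.asarray(q_values) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(q_values), p=probs)

def q_update(q, chosen, outcome, learning_rate, magnitude):
    """Learning step: reinforcement magnitude scales the outcome, and the
    learning rate scales the prediction-error correction."""
    prediction_error = magnitude * outcome - q[chosen]
    q[chosen] += learning_rate * prediction_error
    return q

# Illustrative avoidance-learning loop with two punishment cues.
rng = np.random.default_rng(0)
q = np.zeros(2)
for _ in range(100):
    choice = softmax_choice(q, temperature=1.0, rng=rng)
    outcome = -1.0 if choice == 0 else 0.0  # hypothetical: cue 0 is the worse option
    q = q_update(q, choice, outcome, learning_rate=0.2, magnitude=1.0)
```

Under such a parameterization, raising the temperature flattens the learning curve without changing how values are updated, which is the dissociation referred to above.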

This dissociation is consistent with our fMRI finding that the DS was active at punishment cue display (during the choice period), but not at outcome display (during the learning period). It accords well with the idea that the DS is the “actor” part of the striatum, the “critic” part being more ventral (O’Doherty et al., 2004; Atallah et al., 2007). Indeed, the transition from presymptomatic to symptomatic HD, which was characterized by degeneration extending to the VS, was captured by a lower reinforcement magnitude in the gain condition. Thus the VS, which is closely linked to the VMPFC, would play a role similar to that of the insula, but for learning positive instead of negative values. This is in line with studies implicating the VS and VMPFC in encoding both reward predictions at cue display and reward prediction errors at outcome display (Rutledge et al., 2010; Palminteri et al., 2009a; Hare et al., 2008).

However, interpreting the specific role of the DS in choosing between aversive cues remains speculative. The link with choice randomness might suggest that the DS is involved in comparing negative value estimates, in integrating the precision of these estimates, or in adjusting the balance between exploration and exploitation. Another possibility is that the DS is specifically involved in avoidance behavior, i.e., in inhibiting the selection of the worst option and facilitating the selection of alternatives. This interpretation is supported by the observation that input connections to the caudate head come from dorsal prefrontal structures, which have been implicated in inhibitory and executive processes (Draganski et al., 2008; Haber, 2003; Postuma and Dagher, 2006).

In conclusion, we found evidence that the AI and DS are causally implicated in punishment-based avoidance learning, but for different reasons. The AI might participate by signaling punishment magnitude, in accordance with its involvement in negative affective reactions, whereas the DS might participate by implementing avoidance choices, in accordance with its involvement in executive processes. These findings suggest the existence of a distinct punishment system underpinning avoidance learning, just as the reward system underpins approach learning.
