Fig. 1
Topological sensory alignment in the superior colliculus and its role in social development. We hypothesize that the visual and somatotopic topographical maps bind together within an intermediate layer of the SC for visuo-tactile integration
2 Methods and Models
We develop a computational simulation of the maturing superior colliculus (SC) connected to simulated facial tissue that replicates some of the bio-mechanical properties of the fetal face. We model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) of the SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer allows newborns to detect faces and to mimic them (see Fig. 1).
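A minimal sketch of this three-layer arrangement is given below. The layer sizes, weight ranges, and rectified combination are illustrative assumptions for exposition, not the dimensions or response function used in our model.

```python
import numpy as np

# Hypothetical layer sizes; the actual map resolutions are model parameters.
N_VISUAL, N_TACTILE, N_MULTI = 100, 100, 64

rng = np.random.default_rng(0)

# The superficial visual layer (eye-centered) and the deep somatotopic layer
# (face-centered) each project onto a shared intermediate multimodal layer.
W_vis_to_multi = rng.uniform(0.0, 0.1, (N_MULTI, N_VISUAL))
W_tac_to_multi = rng.uniform(0.0, 0.1, (N_MULTI, N_TACTILE))

def multimodal_response(visual_activity, tactile_activity):
    """Combine the two unisensory drives in the intermediate layer."""
    drive = W_vis_to_multi @ visual_activity + W_tac_to_multi @ tactile_activity
    return np.maximum(drive, 0.0)  # simple rectification as a stand-in nonlinearity
```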
Neural populations are modeled with integrate-and-fire neurons that capture the spatio-temporal dynamics of the two sensory modalities. Detecting structured patterns is essential for preserving the topology of each modality within its map. The neural populations learn similarly to a Kohonen self-organizing map, except that we also model the maturation period of the SC: an activity-dependent mechanism based on novelty detection constructs the topology of the neural map, preserving the topology of the existing neurons while adding new neurons that refine it.
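The two ingredients can be sketched as follows: a leaky integrate-and-fire population update, and a Kohonen-style update that grows a new neuron when the input is too novel for the current map. The time constant, threshold, learning rate, and novelty criterion are placeholder values; the maturation schedule itself is part of our model rather than this sketch.

```python
import numpy as np

def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=0.0,
             v_thresh=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire population.

    v: membrane potentials (array); input_current: synaptic drive.
    Returns the updated potentials and a boolean spike mask.
    """
    v = v + (dt / tau) * (v_rest - v + input_current)
    spikes = v >= v_thresh
    v[spikes] = v_reset  # reset neurons that fired
    return v, spikes

def grow_or_adapt(weights, x, novelty_threshold=0.5, lr=0.1):
    """Kohonen-like update with novelty-driven growth (illustrative).

    weights: (n_neurons, dim) map prototypes; x: (dim,) input pattern.
    If the best-matching unit is too far from the input, a new neuron
    tuned to the input is appended, leaving the existing neurons (and
    hence the established topology) untouched; otherwise the winner is
    nudged toward the input as in a standard self-organizing map.
    """
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = np.argmin(dists)
    if dists[bmu] > novelty_threshold:
        weights = np.vstack([weights, x])  # novelty: refine the map
    else:
        weights[bmu] += lr * (x - weights[bmu])
    return weights
```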
3 Results
After completing the learning stage within each map through Hebbian reinforcement learning, we show that each topology respects the retinotopic topography of the eye and the somatotopic topography of the face, as observed in the SC. We then merge the two unisensory layers into a common intermediate layer. This multimodal layer develops synaptic links that align the visual and tactile sensory information with each other, forming a mixed spatial representation based on both the eye-centered and the face-centered reference frames.
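A Hebbian update of the kind driving this alignment can be sketched as below. The outer-product rule and row normalization are the textbook form; the pre- and postsynaptic activities stand in for the simulated visual or tactile map responses and the multimodal layer response to the same stimulus.

```python
import numpy as np

def hebbian_align(W, pre_activity, post_activity, lr=0.01):
    """Strengthen links between coactive pre- and postsynaptic units.

    W: (n_post, n_pre) synaptic matrix from a unisensory map to the
    multimodal layer. Repeated coactivation of eye-centered and
    face-centered responses to the same stimulus pulls the two
    reference frames into register in the shared layer.
    """
    W += lr * np.outer(post_activity, pre_activity)
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12  # keep weights bounded
    return W
```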
