Multichannel Communication Using Chaos in a Recurrent Neural Network
where $x_i(t) = \pm 1$ $(i = 1, \ldots, N)$ is the firing state of the neuron with index $i$ at time $t$, and $w_{ij}$ is the connection weight from neuron $x_j$ to neuron $x_i$, with $w_{ii} = 0$. The fan-in number $r$ $(0 < r < N)$ of neuron $x_i$, called the connectivity, is the most important system parameter in our work. $G_i(r)$ is a spatial configuration set of connectivity $r$ for neuron $x_i$; the number of such configurations is $\binom{N-1}{r}$.

With full connectivity $r = N - 1$, determining $w_{ij}$ by a kind of orthogonalized learning method enables us to embed a group of $N$-dimensional state patterns as cyclic memory attractors. In our work, the attractor patterns consist of $K$ patterns per cycle $\times$ $L$ cycles, and each pattern has $N$ neurons. Here we take $K = 15$, $L = 2$ (see Fig. 1) or $K = 10$, $L = 3$ (see Fig. 4), with $N = 400$; the firing states of the $N = 20 \times 20 = 400$ neurons are represented by black and white pixels, as shown in Fig. 1. Under long-time updating, an initial pattern converges to one of the embedded cyclic attractors.

When we reduce the connectivity $r$ by blocking signal transfer from other neurons, the attractors gradually become unstable, and the network state departs from the embedded patterns and wanders chaotically in the state space [8, 9]. In our computer experiments, we set the connectivity to $r = 6$ and, in the chaotic state, apply two external inputs (Fig. 2),
$$\sum_j w_{ij} x_j(t) + \alpha_i \cos\!\left(2\pi t / S_{A,B}\right), \quad i \in C,$$
to the two $3 \times 3 = 9$ sets of firing neurons located at the two corners $(A, B)$, where the two periods $S_A$ and $S_B$ are 137 and 181, respectively. The correlation between a sending neuron and a target (receiving) neuron is calculated as shown in Fig. 3, along with the correlations between the sending neuron and the other neurons.
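The dynamics described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the paper's orthogonalized learning method is replaced here by a simple sequential Hebb rule that maps each pattern to the next one in its cycle, the input amplitude `alpha` is an assumed value (not given in this excerpt), and the sparse fan-in mask is drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, L = 400, 15, 2   # neurons, patterns per cycle, cycles (the Fig. 1 setup)
r = 6                  # reduced connectivity (fan-in per neuron)

# Random +/-1 patterns stand in for the embedded cyclic attractor patterns.
patterns = rng.choice([-1, 1], size=(L, K, N))

# Hypothetical stand-in for the paper's orthogonalized learning:
# a sequential Hebb rule embedding each cycle pattern -> next pattern.
W = np.zeros((N, N))
for cyc in patterns:
    for mu in range(K):
        W += np.outer(cyc[(mu + 1) % K], cyc[mu]) / N
np.fill_diagonal(W, 0.0)

# Reduce connectivity: each neuron keeps only r randomly chosen inputs.
mask = np.zeros((N, N), dtype=bool)
for i in range(N):
    others = [j for j in range(N) if j != i]
    mask[i, rng.choice(others, size=r, replace=False)] = True
W_sparse = np.where(mask, W, 0.0)

# Two 3x3 input sites on opposite corners of the 20x20 grid,
# driven with the periods S_A = 137 and S_B = 181 from the text.
grid = np.arange(N).reshape(20, 20)
site_A, site_B = grid[:3, :3].ravel(), grid[-3:, -3:].ravel()
S_A, S_B = 137, 181
alpha = 1.0  # assumed input amplitude

def step(x, t):
    """One synchronous update: internal signal plus sinusoidal inputs."""
    u = W_sparse @ x
    u[site_A] += alpha * np.cos(2 * np.pi * t / S_A)
    u[site_B] += alpha * np.cos(2 * np.pi * t / S_B)
    return np.where(u >= 0, 1, -1)

x = rng.choice([-1, 1], size=N)
for t in range(500):
    x = step(x, t)
```

With full connectivity the sequential Hebb weights drive the state around an embedded cycle; with the fan-in cut to $r = 6$, the state instead wanders irregularly while the two corner sites receive their periodic forcing.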