Experimental findings have shown that retinal signal processing is not a linear light-to-spike conversion but comprises nonlinear computations that realize complex visual functions. Even before light transduction at the photoreceptor (PR) layer, however, the retinal image is disturbed by sudden saccadic eye movements. It was previously shown that certain retinal ganglion cells (GCs) alleviate this problem by segregating object motion from global motion. Even for a stationary scene, these saccades cause a different image to impinge on the retina for short periods, complicating retinal computation. Previous reports showed that a class of GCs can encode visual stimulation by the latencies of the first spikes rather than by complex features of the spike train, so that rapid neural coding can be realized efficiently within those short periods. Although the "segregation" and "rapid neural coding" mechanisms are observed at the GC layer, they have been shown to arise from retinal interneurons such as horizontal cells (HCs), bipolar cells (BCs), and amacrine cells (ACs) [1,2]. Specifically, wide-field amacrine cells (wfACs) are believed to play an important role in the "segregation" mechanism by providing timed inhibition that blanks out excitatory signals produced by global movements. The latency between the On and Off BC pathways is found to be critical for the "rapid neural coding" mechanism. Although separate models can realize these mechanisms at the level of GC responses, most classes of retinal neurons were omitted from those models [1,2]. Here, we present a spatio-temporal model of the retinal circuit that combines temporal cellular response functions with anatomical inter-cellular connections. The model was constructed on a two-dimensional grid, and each pixel of the grid comprised eight cell elements representing the PR, HC, On and Off BCs, On and Off ACs, On-Off wfAC, and On-Off GC.
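The grid organization described above can be sketched as follows; this is a minimal illustrative structure, not the authors' implementation, and the class name, array shapes, and the sign convention for PR hyperpolarization are assumptions.

```python
import numpy as np

# Hypothetical sketch: a 2D retinal grid where each pixel holds one
# state variable for each of the eight cell elements named in the text.
CELL_TYPES = ["PR", "HC", "BC_on", "BC_off", "AC_on", "AC_off", "wfAC", "GC"]

class RetinaGrid:
    def __init__(self, height, width):
        # One membrane-state array per cell type, aligned pixel-by-pixel.
        self.state = {c: np.zeros((height, width)) for c in CELL_TYPES}

    def apply_stimulus(self, image):
        # Light drives the PR layer first; vertebrate PRs hyperpolarize
        # to light, so brighter pixels push the PR state more negative.
        # Downstream layers would be updated by the circuit dynamics
        # (not shown in this sketch).
        self.state["PR"] = -np.asarray(image, dtype=float)

grid = RetinaGrid(64, 64)
grid.apply_stimulus(np.ones((64, 64)))
```

Keeping each cell class as its own array makes the inter-cellular connections (e.g. HC feedback onto PRs, wfAC inhibition onto GCs) expressible as element-wise or spatially filtered operations between layers.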
For each cell element, the response function was governed by push-pull differential equations in which chemical/electrical synaptic inputs as well as intrinsic membrane dynamics were implemented. Additionally, the same retinal circuitry was re-designed using CMOS elements, and the model response was tested with a SPICE simulator. In the present study, not only the center-surround antagonistic receptive fields of BCs and GCs but also complex visual functions, namely "object motion segregation from background motion" (Figure 1, left panel) and "rapid neural coding" (Figure 1, right panel), are realized by both a computational and a hardware model.
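A push-pull response function of this kind can be sketched as a single membrane equation driven by opposing excitatory ("push") and inhibitory ("pull") conductances, here integrated with forward Euler. All parameter values and reversal potentials below are illustrative assumptions, not taken from the model in the text.

```python
# Hypothetical push-pull membrane dynamics for one cell element:
# dV/dt = ( -(V - E_leak) + g_exc*(E_exc - V) + g_inh*(E_inh - V) ) / tau
def step(V, g_exc, g_inh, dt=0.1, tau=10.0,
         E_exc=1.0, E_inh=-1.0, E_leak=0.0):
    dV = (-(V - E_leak)
          + g_exc * (E_exc - V)    # "push": excitatory synaptic input
          + g_inh * (E_inh - V)) / tau  # "pull": inhibitory input
    return V + dt * dV             # forward Euler update

V = 0.0
for _ in range(200):
    # Sustained excitation slightly opposed by inhibition: V relaxes
    # toward the conductance-weighted steady state (0.5 - 0.1)/1.6 = 0.25.
    V = step(V, g_exc=0.5, g_inh=0.1)
```

Because excitation and inhibition enter as separate conductance terms, timed inhibition (e.g. from wfACs) can transiently cancel an excitatory drive, which is the ingredient the segregation mechanism relies on.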
Figure 1. 2D retina model. Left panel: segregation mechanism. GCs can differentiate object motion from background motion. Right panel: rapid neural coding. The presented image can be reproduced from the latency-encoded output.