How vision guides action Flashcards

1
Q

How would you build a robot for making apple pie? What kind of ‘computational brain’ would it need?

A

· Robots are used to:
o Test predictions
o Describe processes – produce a description that is then used as a qualitative model

2
Q

Pancakes made by dual-arm robots from an internet recipe (Beetz et al., 2011)

A

specific tasks that need to be done

see notes

3
Q

Pancakes made by dual-arm robots from an internet recipe (Beetz et al., 2011) research

A

Tenorth and Beetz (2013)

Tenorth and Beetz (2017)

Holz et al. (2014)

4
Q

Tenorth and Beetz (2013)

A

Autonomous service robots will have to understand vaguely described tasks, such as “set the table” or “clean up”. Performing such tasks as intended requires robots to fully, precisely, and appropriately parameterize their low-level control programs. We propose knowledge processing as a computational resource for enabling robots to bridge the gap between vague task descriptions and the detailed information needed to actually perform those tasks in the intended way. In this article, we introduce the KnowRob knowledge processing system that is specifically designed to provide autonomous robots with the knowledge needed for performing everyday manipulation tasks. The system allows the realization of “virtual knowledge bases”: collections of knowledge pieces that are not explicitly represented but computed on demand from the robot’s internal data structures, its perception system, or external sources of information. This article gives an overview of the different kinds of knowledge, the different inference mechanisms, and interfaces for acquiring knowledge from external sources, such as the robot’s perception system, observations of human activities, Web sites on the Internet, as well as Web-based knowledge bases for information exchange between robots. We evaluate the system’s scalability and present different integrated experiments that show its versatility and comprehensiveness.
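
The “virtual knowledge base” idea – knowledge computed on demand from the robot’s internal data rather than stored as explicit facts – can be sketched in a few lines of Python. All names below are illustrative; the real KnowRob system is built on Prolog and exposes a far richer query interface.

```python
# Minimal sketch of a "virtual knowledge base": answers are not stored
# explicitly but computed on demand from the robot's internal data.
# All names here are illustrative, not the real KnowRob API.

class VirtualKB:
    def __init__(self):
        self._providers = {}  # predicate name -> callable computing answers

    def register(self, predicate, provider):
        """Attach a function that computes answers for a predicate on demand."""
        self._providers[predicate] = provider

    def query(self, predicate, *args):
        """Compute the answer lazily instead of looking it up in a fact table."""
        return self._providers[predicate](*args)

# Example: object poses come from a (simulated) perception system,
# not from a static fact base.
perceived = {"cup": (0.4, 0.1, 0.8), "plate": (0.5, -0.2, 0.8)}

kb = VirtualKB()
kb.register("pose_of", lambda obj: perceived[obj])
kb.register("on_table", lambda obj: perceived[obj][2] > 0.75)

print(kb.query("pose_of", "cup"))     # (0.4, 0.1, 0.8)
print(kb.query("on_table", "plate"))  # True
```

The design point is that `on_table` is never stored: it is recomputed from whatever the perception system currently reports, so the “knowledge” stays consistent with the robot’s sensors.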

5
Q

Tenorth and Beetz (2017)

A

In order to robustly perform tasks based on abstract instructions, robots need sophisticated knowledge processing methods. These methods have to supply the difference between the (often shallow and symbolic) information in the instructions and the (detailed, grounded and often real-valued) information needed for execution. For filling these information gaps, a robot first has to identify them in the instructions, reason about suitable information sources, and combine pieces of information from different sources and of different structure into a coherent knowledge base. To this end we propose the KnowRob knowledge processing system for robots. In this article, we discuss why the requirements of a robot knowledge processing system differ from what is commonly investigated in AI research, and propose to re-consider a KR system as a semantically annotated view on information and algorithms that are often already available as part of the robot’s control system. We then introduce representational structures and a common vocabulary for representing knowledge about robot actions, events, objects, environments, and the robot’s hardware as well as inference procedures that operate on this common representation. The KnowRob system has been released as open-source software and is being used on several robots performing complex object manipulation tasks. We evaluate it through prototypical queries that demonstrate the expressive power and its impact on the robot’s performance.

6
Q

Holz et al. (2014)

A

Grasping individual objects from an unordered pile in a box has been investigated in stationary scenarios so far. In this work, we present a complete system including active object perception and grasp planning for bin picking with a mobile robot. At the core of our approach is an efficient representation of objects as compounds of simple shape and contour primitives. This representation is used for both robust object perception and efficient grasp planning. For being able to manipulate previously unknown objects, we learn object models from single scans in an offline phase. During operation, objects are detected in the scene using a particularly robust probabilistic graph matching. To cope with severe occlusions we employ active perception considering not only previously unseen volume but also outcomes of primitive and object detection. The combination of shape and contour primitives makes our object perception approach particularly robust even in the presence of noise, occlusions, and missing information. For grasp planning, we efficiently pre-compute possible grasps directly on the learned object models. During operation, grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.

7
Q

From perception-guided pancake making by robots to robotic prosthetics: example of bioinspired engineering

A

• Learning concepts from nature and applying them to the design of artificial systems
◦ “taking autonomous robot control from pick and place tasks to everyday object manipulation is a big step that requires robots to understand much better what they are doing, much more capable perception capabilities, as well as sophisticated force-adaptive control mechanisms (manipulations with a number of fingers) that even involve the operation of tools such as the spatula” (Beetz et al., 2011)
◦ “it’s hard. You have done a hundred or thousand of hours of minimal exercises, just trying to do the different grips, bends, rotations … it takes a lot of time” (Johnny Matheny, 2016, first patient with modular and mind-controlled prosthetic limb implanted directly into skeleton)
- shows how much the brain does that we are not aware of

8
Q

perception-action cycle

A

· Move in specific ways that then bring more information to you
- Memory, sensory and action systems don’t work independently

see flashcards

9
Q

perception-action cycle research

A

Escobar et al. (2020)

Perri et al. (2015)

10
Q

Escobar et al. (2020)

A

This work presents the HSS-Cognitive project, which is a Healthcare Smart System that can be applied in measuring the efficiency of any therapy where neuronal interaction gives a trace whether the therapy is efficient or not, using mathematical tools. The artificial intelligence of the project underlies in the understanding of brain signals or Electroencephalogram (EEG) by means of the determination of the Power Spectral Density (PSD) over all the EEG bands in order to estimate how efficient was a therapy. Our project HSS-Cognitive was applied, recording the EEG signals from two patients treated for 8 min in a dolphin tank, measuring their activity in five experiments and for 6 min measuring their activity in a pool without dolphin in four experiments. After applying our TEA (Therapeutic Efficiency Assessment) metric for patient 1, we found that this patient had gone from having relaxation states regardless of the dolphin to attention states when the dolphin was presented. For patient 2, we found that he had maintained attention states regardless of the dolphin, that is, the DAT (Dolphin Assisted Therapy) did not have a significant effect in this patient, perhaps because he had a surgery last year in order to remove a tumor, having impact on the DAT effectiveness. However, patient 2 presented the best efficiency when doing physical therapy led by a therapist in a pool without dolphins around him. According to our findings, we concluded that our Brain-Inspired Healthcare Smart System can be considered a reliable tool for measuring the efficiency of a dolphin-assisted therapy and not only for therapist or medical doctors but also for researchers in neurosciences.
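
The core computation the abstract describes, a power spectral density estimated over the standard EEG bands, can be sketched with a plain-NumPy periodogram. The band edges, sampling rate, and synthetic signal below are assumptions for illustration; the paper’s TEA metric itself is not reproduced here.

```python
# Sketch of PSD-per-EEG-band computation (illustrative band edges and
# sampling rate; not the paper's TEA metric).
import numpy as np

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=FS):
    """Integrate a periodogram PSD estimate over each EEG band."""
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * eeg.size)  # periodogram
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

# Synthetic "EEG": a 10 Hz (alpha-band) oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 8, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

powers = band_powers(eeg)
print(max(powers, key=powers.get))  # alpha dominates for this signal
```

Comparing such band-power profiles across conditions (dolphin present vs. absent) is the kind of contrast the TEA metric formalizes.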

11
Q

Perri et al. (2015)

A

The event-related potential (ERP) literature described two error-related brain activities: the error-related negativity (Ne/ERN) and the error positivity (Pe), peaking immediately after the erroneous response. ERP studies on error processing adopted a response-locked approach, thus, the question about the activities preceding the error is still open. In the present study, we tested the hypothesis that the activities preceding the false alarms (FA) are different from those occurring in the correct (responded or inhibited) trials. To this aim, we studied a sample of 36 Go/No-go performers, adopting a stimulus-locked segmentation also including the pre-motor brain activities. Present results showed that neither pre-stimulus nor perceptual activities explain why we commit FA. In contrast, we observed condition-related differences in two pre-response components: the fronto-central N2 and the prefrontal positivity (pP), respectively peaking at 250 ms and 310 ms after the stimulus onset. The N2 amplitude of FA was identical to that recorded in No-go trials, and larger than Hits. Because the new findings challenge the previous interpretations on the N2, a new perspective is discussed. On the other hand, the pP in the FA trials was larger than No-go and smaller than Go, suggesting an erroneous processing at the stimulus-response mapping level: because this stage triggers the response execution, we concluded that the neural processes underlying the pP were mainly responsible for the subsequent error commission. Finally, sLORETA source analyses of the post-error potentials extended previous findings indicating, for the first time in the ERP literature, the right anterior insula as Pe generator

12
Q

Tea-making: searching, locating, monitoring and grasping objects (Land et al., 1999)

A

· Looked at body and eye movements and manipulations by the hand
- Eye doesn’t always follow movement and vice versa – movement independent of vision

see notes

13
Q

eye fixations during tea-making

A

· Vision is an active process: seeing and looking (gaze fixation), sampling of information in time and space
· In primates and humans, eyes are highly mobile
o eye movements can be slow (compensating shifts of gaze, foveal tracking) or fast saccades (relocating gaze, fixating new target)
· Land et al. (1999): 1/3 of fixations linked to subsequent actions (first fixations to new objects), 2/3 of fixations after an action
o fixations for locating, directing, guiding and checking
o eye movements are often predictive
· Some fixations for locating were not followed by immediate actions (look-ahead fixations)
o suggests that some form of transsaccadic memory exists, as information is not lost when another saccade is made
- eyes remember where they have been and can move back to a particular location

14
Q

eye fixations during tea-making research

A

Johansson and Flanagan (2009)

Hessels et al. (2018)

Henderson (2017)

15
Q

Johansson and Flanagan (2009)

A

o Object manipulation tasks comprise sequentially organized action phases that are generally delineated by distinct mechanical contact events representing task subgoals. To achieve these subgoals, the brain selects and implements action-phase controllers that use sensory predictions and afferent signals to tailor motor output in anticipation of requirements imposed by objects’ physical properties.
o Crucial control operations are centred on events that mark transitions between action phases. At these events, the CNS both receives and makes predictions about sensory information from multiple sources. Mismatches between predicted and actual sensory outcomes can be used to quickly and flexibly launch corrective actions as required.
o Signals from tactile afferents provide rich information about both the timing and the physical nature of contact events. In addition, they encode information related to object properties, including the shape and texture of contacted surfaces and the frictional conditions between these surfaces and the skin.
o A central question is how tactile afferent information is encoded and processed by the brain for the rapid detection and analysis of contact events. Recent evidence suggests that the relative timing of spikes in ensembles of tactile afferents provides such information fast enough to account for the speed with which tactile signals are used in object manipulation tasks.
o Contact events in manipulation can also be represented in the visual and auditory modalities and this enables the brain to simultaneously evaluate sensory predictions in different modalities. Multimodal representations of subgoal events also provide an opportunity for the brain to learn and uphold sensorimotor correlations that can be exploited by action-phase controllers.
o A current challenge is to learn how the brain implements the control operations that support object manipulations, such as processes involved in detecting sensory mismatches, triggering corrective actions, and creating, recruiting and linking different action-phase controllers during task progression. The signal processing in somatosensory pathways for dynamic context-specific decoding of tactile afferent messages needs to be better understood, as does the role of the descending control of these pathways.

16
Q

Hessels et al. (2018)

A

Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field, by surveying 124 eye-movement researchers. These eye-movement researchers held a variety of definitions of fixations and saccades, of which the breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g. whether the eye moves slow or fast; (ii) the functional component: what purposes does the eye movement (or lack thereof) serve; (iii) the coordinate system used: relative to what does the eye move; (iv) the computational definition: how is the event represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
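
Component (iv), the computational definition, is the easiest to make concrete. Below is a minimal dispersion-threshold (I-DT-style) fixation detector, one of many possible definitions Hessels et al. note researchers disagree on; the thresholds and toy gaze trace are arbitrary choices, not a standard.

```python
# Minimal dispersion-threshold (I-DT-style) fixation detector, to show
# what a "computational definition" of a fixation can look like.
# Thresholds and the toy gaze trace are arbitrary, not a standard.

def detect_fixations(samples, max_dispersion=1.0, min_samples=5):
    """Return (start, end) index pairs of fixations in a list of (x, y) samples.

    A fixation is a run of at least `min_samples` samples whose
    bounding-box dispersion (width + height) stays under `max_dispersion`.
    """
    def dispersion(window):
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i, n = [], 0, len(samples)
    while i <= n - min_samples:
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the eye stays within the dispersion limit.
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# Toy trace: fixation near (0, 0), a saccade, fixation near (10, 10).
gaze = [(0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.0, 0.0), (0.1, 0.0),
        (3.0, 3.0), (7.0, 7.0),
        (10.0, 10.0), (10.1, 10.0), (10.0, 10.1), (10.1, 10.1), (10.0, 10.0)]
print(detect_fixations(gaze))  # → [(0, 4), (7, 11)]
```

Changing `max_dispersion` or `min_samples` changes which events count as fixations, which is exactly why the authors urge researchers to state their computational definition explicitly.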

17
Q

Henderson (2017)

A

The recent study of overt attention during complex scene viewing has emphasized explaining gaze behavior in terms of image properties and image salience independently of the viewer’s intentions and understanding of the scene. In this Opinion article, I outline an alternative approach proposing that gaze control in natural scenes can be characterized as the result of knowledge-driven prediction. This view provides a theoretical context for integrating and unifying many of the disparate phenomena observed in active scene viewing, offers the potential for integrating the behavioral study of gaze with the neurobiological study of eye movements, and provides a theoretical framework for bridging gaze control and other related areas of perception and cognition at both computational and neurobiological levels of analysis.

18
Q

Playing cricket: how batsmen hit the ball (Land and McLeod, 2000)

A

· Eyes don’t track the ball all the time
- Move gaze to where predict ball will bounce

see notes

19
Q

Playing cricket: how batsmen hit the ball (Land and McLeod, 2000) research

A

Hayhoe and Ballard (2005)

Hasanzadeh et al. (2018)

20
Q

Hayhoe and Ballard (2005)

A

The classic experiments of Yarbus over 50 years ago revealed that saccadic eye movements reflect cognitive processes. But it is only recently that three separate advances have greatly expanded our understanding of the intricate role of eye movements in cognitive function. The first is the demonstration of the pervasive role of the task in guiding where and when to fixate. The second has been the recognition of the role of internal reward in guiding eye and body movements, revealed especially in neurophysiological studies. The third important advance has been the theoretical developments in the fields of reinforcement learning and graphic simulation. All of these advances are proving crucial for understanding how behavioral programs control the selection of visual information.

21
Q

Hasanzadeh et al. (2018)

A

The risk of major occupational accidents involving tripping hazards is commonly underestimated, with a large number of studies having been conducted to better understand variables that affect situation awareness: the ability to detect, perceive, and comprehend constantly evolving surroundings. An important property that affects situation awareness is the limited capacity of the attentional system. To maintain situation awareness while exposed to tripping hazards, a worker needs to obtain feedforward information about hazards, detect immediate tripping hazards, and visually scan surroundings for any potential environmental hazards. Despite the importance of situation awareness, its relationship with attention remains unknown in the construction industry. To fill this theoretical knowledge gap, this study examines differences in attentional allocation between workers with low and high situation awareness levels while exposed to tripping hazards in a real construction site. Participants were exposed to tripping hazards on a real jobsite while walking along a path in the presence of other workers. Situation awareness was measured using the situation awareness rating technique, and subjects’ eye movements were tracked as direct measures of attention via a wearable mobile eye tracker. Investigating the attentional distribution of subjects by examining fixation-count heat maps and scan paths revealed that as workers with higher situation awareness walked, they periodically looked down and scanned ahead to remain fully aware of the environment and its associated hazards. Furthermore, this study quantitatively compared the differences between the eye-tracking metrics of workers with different situation awareness levels (low versus high) using permutation simulation. The results of the statistical analysis indicate that subjects did not allocate their attention equally to all hazardous areas of interest, and these differences in attentional distribution were modulated by the workers’ level of situation awareness. This study advances theory by presenting one of the first attempts to use mobile eye-tracking technology to examine the role of cognitive processes (i.e., attention) in human error (i.e., failure to identify a hazard) and occupational accidents.
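
The permutation simulation used to compare the two groups can be sketched as a label-shuffling test on a generic eye-tracking metric. The dwell-time numbers below are invented for illustration, not taken from the study.

```python
# Sketch of a two-sample permutation test: shuffle group labels many
# times and ask how often the shuffled mean difference is as extreme
# as the observed one. The metric values are invented for illustration.
import random

def permutation_p(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided p-value for the difference in group means under label shuffling."""
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical dwell times (s) on hazard areas for two groups of workers.
low_sa = [1.2, 0.9, 1.1, 1.0, 0.8]
high_sa = [1.9, 2.1, 1.7, 2.0, 2.2]
print(permutation_p(low_sa, high_sa))  # small p: the groups differ
```

A permutation test makes no normality assumption, which suits small samples and skewed eye-tracking metrics.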

22
Q

Events involved in an object related action as the building block of task- or goal-directed action sequences (Land, 2009)

A

Not only does the visual system locate and recognise objects, it also continuously guides actions in order to produce adaptive behavioural responses

see flashcards

23
Q

Events involved in an object related action as the building block of task- or goal-directed action sequences (Land, 2009) research

A

Foulsham et al. (2011)

Lavoie et al. (2018)

24
Q

Foulsham et al. (2011)

A

o How do people distribute their visual attention in the natural environment? We and our colleagues have usually addressed this question by showing pictures, photographs or videos of natural scenes under controlled conditions and recording participants’ eye movements as they view them. In the present study, we investigated whether people distribute their gaze in the same way when they are immersed and moving in the world compared to when they view video clips taken from the perspective of a walker. Participants wore a mobile eye tracker while walking to buy a coffee, a trip that required a short walk outdoors through the university campus. They subsequently watched first-person videos of the walk in the lab. Our results focused on where people directed their eyes and their head, what objects were gazed at and when attention-grabbing items were selected. Eye movements were more centralised in the real world, and locations around the horizon were selected with head movements. Other pedestrians, the path, and objects in the distance were looked at often in both the lab and the real world. However, there were some subtle differences in how and when these items were selected. For example, pedestrians close to the walker were fixated more often when viewed on video than in the real world. These results provide a crucial test of the relationship between real behaviour and eye movements measured in the lab.
o Gaze of walkers in real environment compared to people watching scene on video.
o Fixations biased to centre; walkers often engage in head-centred looking.
o Walkers look more often at the path than observers in the lab.
o People in the scene are fixated early, and rarely close-up in real walking.

25
Q

Lavoie et al. (2018)

A

This study explores the role that vision plays in sequential object interactions. We used a head-mounted eye tracker and upper-limb motion capture to quantify visual behavior while participants performed two standardized functional tasks. By simultaneously recording eye and motion tracking, we precisely segmented participants’ visual data using the movement data, yielding a consistent and highly functionally resolved data set of real-world object-interaction tasks. Our results show that participants spend nearly the full duration of a trial fixating on objects relevant to the task, little time fixating on their own hand when reaching toward an object, and slightly more time—although still very little—fixating on the object in their hand when transporting it. A consistent spatial and temporal pattern of fixations was found across participants. In brief, participants fixate an object to be picked up at least half a second before their hand arrives at the object and stay fixated on the object until they begin to transport it, at which point they shift their fixation directly to the drop-off location of the object, where they stay fixated until the object is successfully released. This pattern provides additional evidence of a common system for the integration of vision and object interaction in humans, and is consistent with theoretical frameworks hypothesizing the distribution of attention to future action targets as part of eye and hand-movement preparation. Our results thus aid the understanding of visual attention allocation during planning of object interactions both inside and outside the field of view.

26
Q

prey catching behaviour in toads: Simplest hypothesis: a sensorimotor pathway for each action (Ewert, 1987; Carew, 2000)

A

• Each behavioural segment is mediated by a separate releasing mechanism (RM)
• Motivation can modulate each RM – lowers or raises threshold
- Toads have 4 distinct actions

see notes
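
The releasing-mechanism hypothesis lends itself to a small sketch: each action has its own RM with a threshold, and motivation lowers (or raises) that threshold. Thresholds, stimulus values and action names below are all illustrative.

```python
# Sketch of "one releasing mechanism (RM) per action": an RM fires when
# its stimulus value crosses a threshold, and motivation (e.g. hunger)
# modulates that threshold. All numbers are illustrative.

class ReleasingMechanism:
    def __init__(self, name, threshold):
        self.name = name
        self.base_threshold = threshold

    def released(self, stimulus_strength, motivation=0.0):
        """Higher motivation lowers the effective threshold for release."""
        return stimulus_strength >= self.base_threshold - motivation

# Four distinct prey-catching actions, each mediated by a separate RM.
actions = [ReleasingMechanism("orient", 0.2),
           ReleasingMechanism("approach", 0.4),
           ReleasingMechanism("fixate", 0.6),
           ReleasingMechanism("snap", 0.8)]

stimulus = 0.55  # a moderately worm-like stimulus
hungry = [rm.name for rm in actions if rm.released(stimulus, motivation=0.3)]
sated = [rm.name for rm in actions if rm.released(stimulus, motivation=0.0)]
print(hungry)  # more of the sequence is released when motivated
print(sated)
```

The same stimulus releases more of the behavioural sequence in the motivated case, which is the modulation-by-motivation idea in the card.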

27
Q

prey catching behaviour in toads: Simplest hypothesis: a sensorimotor pathway for each action (Ewert, 1987; Carew, 2000) research

A

Giese and Poggio (2003)

Pessoa et al. (2019)

Manzano et al. (2017)

28
Q

Giese and Poggio (2003)

A

o Humans can recognize biological movements, such as walking, accurately and robustly. This review uses a neurophysiologically plausible and quantitative model as a tool for organizing and making sense of the available experimental data, despite its growing size and complexity.
o Most experimental results can be accounted for by simple neural mechanisms, under the two key assumptions that recognition is based on a hierarchical feedforward cortical architecture and learned prototypical patterns. Such prototypes might be stored in specific neurons in the visual system.
o The model shows that recognition of biological movements can be achieved with plausible neural mechanisms in a way that is quantitatively consistent with the experimental data on pattern selectivity, view dependence and robustness of recognition.
o The model comprises two parallel pathways, one corresponding to the dorsal pathway (specialized for the analysis of motion information) and one to the ventral pathway (specialized for the analysis of form information). In each pathway, neural feature detectors extract form or optic-flow features with increasing complexity along the hierarchy. The position and size invariance of the feature detectors also increases along the hierarchy. Experimental data and quantitative simulations indicate that the ventral and dorsal pathways are both needed for the recognition of normal biological movement stimuli, whereas the recognition of point-light stimuli seems mainly to depend on the dorsal pathway.
o The proposed architecture predicts the existence of neurons that can learn to respond selectively to new biological movement patterns. It also predicts that arbitrary complex movement patterns should be learnable, as long as they provide suitable stimulation of the mid- and low-level feature detectors of the two pathways.
o The model predicts the existence of neurons in the dorsal pathway that become selectively activated by complex optic-flow patterns that arise for biological movement patterns.
o It is demonstrated that attention and top–down influences are not required to solve the basic tasks of motion recognition. These factors may be necessary for more sophisticated motion recognition tasks. The model cannot account for such influences of attention and of different tasks.
o A number of open questions and predictions of the model are considered. The use of a quantitative model allows us to generate specific predictions and to show that a neurophysiologically consistent, learning-based, feedforward model can reproduce many key experimental results. Open questions include how information from the two pathways is integrated, and which neural mechanisms underlie sequence selectivity in both pathways.

29
Q

Pessoa et al. (2019)

A

o Integration is a basic feature of the vertebrate brain needed to adapt to a changing world.
o This property is not restricted to a few isolated brain centers, but resides in neuronal networks working together in a context-dependent manner.
o In different vertebrates, we identify shared large-scale connectional systems.
o There is a high degree of crosstalk and association between these systems at different levels, giving support to the notion that cognition cannot be separated from emotion and motivation.
Cognition is considered a hallmark of the primate brain that requires a high degree of signal integration, such as achieved in the prefrontal cortex. Moreover, it is often assumed that cognitive capabilities imply “superior” computational mechanisms compared to those involved in emotion or motivation. In contrast to these ideas, we review data on the neural architecture across vertebrates that support the concept that association and integration are basic features of the vertebrate brain, which are needed to successfully adapt to a changing world. This property is not restricted to a few isolated brain centers, but rather resides in neuronal networks working collectively in a context-dependent manner. In different vertebrates, we identify shared large-scale connectional systems involving the midbrain, hypothalamus, thalamus, basal ganglia, and amygdala. The high degree of crosstalk and association between these systems at different levels supports the notion that cognition, emotion, and motivation cannot be separated – all of them involve a high degree of signal integration.

30
Q

Manzano et al. (2017)

A

Despite the long-standing interest in the evolution of the brain, relatively little is known about variation in brain anatomy in frogs. Yet, frogs are ecologically diverse and, as such, variation in brain anatomy linked to differences in lifestyle or locomotor behavior can be expected. Here we present a comparative morphological study focusing on the macro- and micro-anatomy of the six regions of the brain and its choroid plexus: the olfactory bulbs, the telencephalon, the diencephalon, the mesencephalon, the rhombencephalon, and the cerebellum. We also report on the comparative anatomy of the plexus brachialis responsible for the innervation of the forelimbs. It is commonly thought that amphibians have a simplified brain organization, associated with their supposedly limited behavioral complexity and reduced motor skills. We compare frogs with different ecologies that also use their limbs in different contexts and for other functions. Our results show that brain morphology is more complex and more variable than typically assumed. Moreover, variation in brain morphology among species appears related to locomotor behavior as suggested by our quantitative analyses. Thus we propose that brain morphology may be related to the locomotor mode, at least in the frogs included in our analysis.

31
Q

Toads respond to simple artificial stimuli (Ewert, 1980, 1983; Simmons and Young, 1999)

A

· Toad sits in a glass vessel and is presented with one of 3 shapes (worm, antiworm, square)
o Goes to something if it looks like a worm – turns towards it
· Prey model (P) is moved around it
· Response: no. of turns to follow the model in 1 min
· Glass vessel prevents feedback from interacting with stimulus
· D – effective displacement (angle)
- A square works when small, but as it gets bigger toads lose interest – they may treat it as a predator if it gets too big

see notes

32
Q

Toads respond to simple artificial stimuli (Ewert, 1980, 1983; Simmons and Young, 1999) research

A

Ryan and Hector (1992)

Enquist et al. (2016)

33
Q

Ryan and Hector (1992)

A

A review of the literature reveals that, if females prefer traits that deviate from the population mean, they usually prefer traits of greater quantity. In cases in which the sensory bases of these preferences are identified, females prefer traits of greater quantity because these traits elicit greater sensory stimulation. However, two caveats apply. First, the studies surveyed might not represent an unbiased sample of mate choice, because researchers usually study systems characterized by exaggerated traits. Second, a preference for traits of greater quantity does not suggest that preference for average traits is unimportant; it might be more usual than preference for exaggerated traits. Phylogenetic comparisons sometimes allow one to distinguish among competing hypotheses for the evolution of female mating preferences. Two hypotheses, Fisher’s theory of “runaway” sexual selection and the “good genes” hypothesis, predict that traits and preferences coevolve, whereas the “sensory exploitation” hypothesis predicts that males evolve traits to exploit preexisting female biases. Some studies of frogs and fish support the sensory exploitation hypothesis, although this does not exclude the role of other factors in establishing the preexisting bias or in the subsequent elaboration of the preference. It is suggested that studies of mate choice will benefit by a more integrative approach, especially one that combines knowledge of sensory mechanisms with appropriate phylogenetic comparisons.

34
Q

Enquist et al. (2016)

A

Behaving efficiently (optimally or near-optimally) is central to animals’ adaptation to their environment. Much evolutionary biology assumes, implicitly or explicitly, that optimal behavioural strategies are genetically inherited, yet the behaviour of many animals depends crucially on learning. The question of how learning contributes to optimal behaviour is largely open. Here we propose an associative learning model that can learn optimal behaviour in a wide variety of ecologically relevant circumstances. The model learns through chaining, a term introduced by Skinner to indicate learning of behaviour sequences by linking together shorter sequences or single behaviours. Our model formalizes the concept of conditioned reinforcement (the learning process that underlies chaining) and is closely related to optimization algorithms from machine learning. Our analysis dispels the common belief that associative learning is too limited to produce ‘intelligent’ behaviour such as tool use, social learning, self-control or expectations of the future. Furthermore, the model readily accounts for both instinctual and learned aspects of behaviour, clarifying how genetic evolution and individual learning complement each other, and bridging a long-standing divide between ethology and psychology. We conclude that associative learning, supported by genetic predispositions and including the oft-neglected phenomenon of conditioned reinforcement, may suffice to explain the ontogeny of optimal behaviour in most, if not all, non-human animals. Our results establish associative learning as a more powerful optimizing mechanism than acknowledged by current opinion.

35
Q

Configural stimulus properties elicit prey-catching response (Carew, 2000)

A

· Other stimulus variations do not affect the response
· Movement direction does not affect the response
· Velocity is crucial for the worm stimulus, but not important for the antiworm (dashed line)
Faster stimuli = stronger responses

see notes
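The configural dependence above can be sketched as a toy function. This is a hypothetical illustration (the numbers and the linear velocity term are assumptions, not Ewert's actual model): response strength grows with velocity for a worm-like stimulus (elongated along the movement direction) but stays weak and flat for an antiworm (elongated perpendicular to it).

```python
def prey_response(orientation: str, velocity: float) -> float:
    """Toy model of toad prey-catching responsiveness (arbitrary units).

    orientation: 'worm' (elongated along movement direction) or
                 'antiworm' (elongated perpendicular to movement).
    velocity: stimulus speed (arbitrary units).
    """
    if orientation == "worm":
        # Worm configuration: faster stimuli elicit stronger responses.
        return 1.0 + 0.5 * velocity
    elif orientation == "antiworm":
        # Antiworm configuration: weak response, largely velocity-independent.
        return 0.2
    raise ValueError(f"unknown orientation: {orientation!r}")

# Velocity matters for the worm configuration only.
assert prey_response("worm", 10.0) > prey_response("worm", 2.0)
assert prey_response("antiworm", 10.0) == prey_response("antiworm", 2.0)
```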

36
Q

Configural stimulus properties elicit prey-catching response (Carew, 2000) research

A

Gahtan et al. (2005)

Harley et al. (2009)

37
Q

Gahtan et al. (2005)

A

Many vertebrates are efficient hunters and recognize their prey by innate neural mechanisms. During prey capture, the internal representation of the prey’s location must be constantly updated and made available to premotor neurons that convey the information to spinal motor circuits. We studied the neural substrate of this specialized visuomotor system using high-speed video recordings of larval zebrafish and laser ablations of candidate brain structures. Seven-day-old zebrafish oriented toward, chased, and consumed paramecia with high accuracy. Lesions of the retinotectal neuropil primarily abolished orienting movements toward the prey. Wild-type fish tested in darkness, as well as blind mutants, were impaired similarly to tectum-ablated animals, suggesting that prey capture is mainly visually mediated. To trace the pathway further, we examined the role of two pairs of identified reticulospinal neurons, MeLc and MeLr, located in the nucleus of the medial longitudinal fasciculus of the tegmentum. These two neurons extend dendrites into the ipsilateral tectum and project axons into the spinal cord. Ablating MeLc and MeLr bilaterally impaired prey capture but spared several other behaviors. Ablating different sets of reticulospinal neurons did not impair prey capture, suggesting a selective function of MeLr and MeLc in this behavior. Ablating MeLc and MeLr neurons unilaterally in conjunction with the contralateral tectum also mostly abolished prey capture, but ablating them together with the ipsilateral tectum had a much smaller effect. These results suggest that MeLc and MeLr function in series with the tectum, as part of a circuit that coordinates prey capture movements.

38
Q

Harley et al. (2009)

A

Within natural environments, animals must be able to respond to a wide range of obstacles in their path. Such responses require sensory information to facilitate appropriate and effective motor behaviors. The objective of this study was to characterize sensors involved in the complex control of obstacle negotiation behaviors in the cockroach Blaberus discoidalis. Previous studies suggest that antennae are involved in obstacle detection and negotiation behaviors. During climbing attempts, cockroaches swing their front leg that then either successfully reaches the top of the block or misses. The success of these climbing attempts was dependent on their distance from the obstacle. Cockroaches with shortened antennae were closer to the obstacle prior to climbing than controls, suggesting that distance was related to antennal length. Removing the antennal flagellum resulted in delays in obstacle detection and changes in climbing strategy from targeted limb movements to less directed attempts. A more complex scenario – a shelf that the cockroach could either climb over or tunnel under – allowed us to further examine the role of sensory involvement in path selection. Ultimately, antennae contacting the top of the shelf led to climbing whereas contact on the underside led to tunneling. However, in the light, cockroaches were biased toward tunnelling; a bias which was absent in the dark. Selective covering of visual structures suggested that this context was determined by the ocelli.

39
Q

Visual pathways in the brain of the toad

A

· Typical vertebrate lens eye with a retina containing ganglion cells that are sensitive to edges (have centre-surround receptive fields)
· Ganglion cells project retinotopically to the optic tectum
· Optic tectum contains layers with T5 cells which respond to moving stimuli
· Some ganglion cells project to the thalamic-pretectal (TP) area – projections to the same or contralateral side
o TP – neurons that respond to movement
· Tectum has layers
One layer contains neurons that respond to moving stimuli – T5 cells

see notes

40
Q

Visual pathways in the brain of the toad research

A

Ekstrom et al. (2018)

Lock and Collett (1979)

41
Q

Ekstrom et al. (2018)

A

In toad hopping, the hindlimbs generate the propulsive force for take-off while the forelimbs resist the impact forces associated with landing. Preparing to perform a safe landing, in which impact forces are managed appropriately, likely involves the integration of multiple types of sensory feedback. In toads, vestibular and/or proprioceptive feedback is critical for coordinated landing; however, the role of vision remains unclear. To clarify this, we compare pre-landing forelimb muscle activation patterns before and after removing vision. Specifically, we recorded EMG activity from two antagonistic forelimb muscles, the anconeus and coracoradialis, which demonstrate distance-dependent onset timing and recruitment intensity, respectively. Toads were first recorded hopping normally and then again after their optic nerves were severed to remove visual feedback. When blind, toads exhibited hop kinematics and pre-landing muscle activity similar to when sighted. However, distance-dependent relationships for muscle activity patterns were more variable, if present at all. This study demonstrates that blind toads are still able to perform coordinated landings, reinforcing the importance of proprioceptive and/or vestibular feedback during hopping. But the increased variability in distance-dependent activity patterns indicates that vision is more responsible for fine-tuning the motor control strategy for landing.

42
Q

Lock and Collett (1979)

A

o 1. The path taken by a toad to reach its prey has been examined both when a chasm or barrier impedes its approach and when nothing is in its way. In both situations the toad plans its route before starting out, and consequently its path must reveal something of its perception of the three dimensional arrangement of objects in its environment.
o 2. In the absence of obstacles the toad directs its approach to the position of the prey just before the toad starts to move. Prey velocity has no influence on approach direction (Fig. 1) and the toad does not correct its course to allow for movements of the prey that occur while the toad is walking (Fig. 2). The distance the toad walks in a single bout depends on the initial separation between toad and prey and it does not alter should the prey vanish or move during the toad’s approach (Figs. 3 and 4). Both the distance and the direction of an approach are thus preprogrammed and are not corrected by visual feedback until the toad pauses at the end of a movement.
o 3. If a chasm is placed between the toad and its prey, the toad either leaps across the chasm, steps down into it, or turns away. Its choice of behaviour pattern is dictated by both the depth and the width of the chasm, indicating that it measures both parameters (Figs. 6 and 7). When the chasm is deep but not too wide, the toad leaps across, when the chasm is shallow it steps down into it, and when the chasm is both wide and deep it turns away.
o 4. If there is a paling fence between toad and prey, the toad either detours round the fence or attempts to reach the prey directly (Fig. 8). The toad’s choice depends on the distance between fence and prey (Fig. 11). If this is more than 10 to 15 cm the toad tends to detour; if it is less the toad approaches directly. The switch-over point is unaffected by the distance between the starting point of the toad and the fence (over the range 10 to 30 cm), implying that the toad can measure the distance between two objects regardless of its distance from them. The toad thus displays depth constancy.
o 5. Detours are aimed accurately at gaps in the fence (Figs. 8 and 9) and if there is no gap through which the toad can pass, it attempts to reach the worm directly. Toads will make for gaps which are formed by two overlapping barriers placed at different depths (Fig. 12). Such gaps can only be detected if toads can measure the distance between two barriers.
6. A toad’s decision to make a detour thus depends on its appreciation of the relative positions of several objects in its environment. This suggests that the toad constructs an internal representation of its three dimensional world and that its depth vision is not only used in the direct control of motor programmes.

43
Q

Visual map in the tectum (Ewert, 1974)

A

· The spatial information in the image projected onto the photoreceptors of the eye is preserved throughout the visual pathway by orderly arrangements of neurons in the retina and in the subsequent layers (retinotopy or retinal mapping).
Co-located tectal neurons code visual information that comes from adjacent areas in the visual field

see notes
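The retinotopic principle above can be sketched as a toy mapping. This is a hypothetical illustration (the ±90° field and the 10×10 tectal grid are assumed numbers, not measured anatomy): neighbouring points in the visual field land on neighbouring cells of the tectal sheet, so co-located tectal neurons code adjacent visual-field areas.

```python
def retina_to_tectum(azimuth_deg: float, elevation_deg: float,
                     grid: int = 10) -> tuple:
    """Toy retinotopic map: visual-field angle -> tectal grid cell.

    Assumes a +/-90 degree visual field mapped linearly onto a
    grid x grid sheet of tectal neurons (hypothetical numbers).
    Returns (row, col) indices into the tectal sheet.
    """
    col = int((azimuth_deg + 90) / 180 * (grid - 1))
    row = int((elevation_deg + 90) / 180 * (grid - 1))
    return row, col

# Nearby points in the visual field project to nearby tectal cells,
# preserving the spatial order of the retinal image.
a = retina_to_tectum(0.0, 0.0)
b = retina_to_tectum(10.0, 0.0)
assert abs(a[1] - b[1]) <= 1 and a[0] == b[0]
```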

44
Q

Visual map in the tectum (Ewert, 1974) research

A

Kostyk and Grobstein (1987)

Henriques (2019)

45
Q

Kostyk and Grobstein (1987)

A

o A complete transverse hemisection of the neuraxis just caudal to the optic tectum in the frog, Rana pipiens, results in a failure to orient toward stimuli in one visual hemifield [Kostyk and Grobstein (1986) Neuroscience 21, 41–55]. This finding indicates that each tectal lobe gives rise to a crossed descending pathway adequate to cause turns in a direction contralateral to that tectal lobe, and suggests that each may also give rise to an uncrossed descending pathway adequate to cause turns in the ipsilateral direction. To determine whether there is in fact such an uncrossed pathway, we have studied the orienting behavior of frogs after lesions which interrupt crossed pathways.
o Two groups of animals were studied. In one group we made midline lesions of the ansulate commissure, through which run the major crossed descending projections from both tectal lobes. In the other group, we combined a complete transverse hemisection with removal of the tectal lobe on the same side of the brain, leaving intact only an uncrossed pathway from one tectal lobe. A persistence of orienting turns was observed in both groups of animals. In both, the direction of the turns was that expected on the assumption that an uncrossed pathway would cause ipsilateral turns. We conclude that such a pathway exists.
o While both groups of animals turned in the expected directions, they did so for stimuli at unexpected locations. Increasingly eccentric stimulus locations to one side of the mid-sagittal plane were associated with increasing amplitude turns to the other. The observation suggests that tectal regions mapping areas of visual space to one side of the mid-sagittal plane are capable of triggering turns not only in that direction but in the opposite direction as well. In the case of ansulate commissure section, mirrored orienting responses were observed for tactile stimuli as well.
These and other behavioral anomalies described in the preceding papers [Kostyk and Grobstein (1986) Neuroscience 21, 41–55 and 57–82] suggest that between the topographic retinotectal projection and the premotor circuitry for orienting there may exist an intermediate processing step, one in which stimulus location is represented in a generalized spatial coordinate frame.

46
Q

Henriques (2019)

A

o To survive, animals need to sustain behavioural responses towards specific environmental stimuli to achieve an overall goal. One example is the hunting behaviour of zebrafish larvae, which is characterised by a set of discrete visuomotor events that begin with prey detection, followed by target-directed swims and end with prey capture. Several studies have begun elucidating the neuronal circuits that govern prey detection and initiation of hunting routines, which are largely dependent on the midbrain optic tectum (OT). However, it is not known how the brain is able to sustain a behavioural routine directed towards a specific target, especially in complex environments containing distractors. In this study, I have discovered that the nucleus isthmus (NI), a midbrain nucleus implicated in visual attention in other vertebrates, is required for sustained tracking of prey during hunting routines in zebrafish larvae. NI neurons co-express cholinergic and glutamatergic markers and possess two types of axonal projection morphology. The first type targets the ipsilateral OT and AF7, a retinorecipient pretectal region involved in hunting. The second type projects bilaterally to the deep OT layers. Laser ablation of the NI followed by tracking of naturalistic hunting behaviour, revealed that while hunting initiation rates and motor kinematics were unaltered, ablated animals showed an elevated probability of aborting hunting routines midway. Moreover, 2-photon calcium imaging of tethered larvae during a closed-loop virtual reality hunting assay, showed that NI neurons are specifically active during hunting. These results suggest that the NI supports the maintenance of action sequences towards specific prey targets during hunting, most likely by modulating pretectal and tectal activity. This in turn supports its presence at the centre of an evolutionarily conserved circuit to control selective attention to ethologically relevant stimuli in the presence of competing distractors.

47
Q

Superior colliculus in the mammalian midbrain

A

· Homologous to optic tectum (or tectum) in the vertebrate midbrain
· The tectal system directs egocentric behavioural responses
Neurons receive retinotopic visual input

48
Q

Superior colliculus in the mammalian midbrain research

A

Shen et al. (2011)

Sommer and Wurtz (2001)

49
Q

Shen et al. (2011)

A

We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements.

50
Q

Sommer and Wurtz (2001)

A

Many neurons within prefrontal cortex exhibit a tonic discharge between visual stimulation and motor response. This delay activity may contribute to movement, memory, and vision. We studied delay activity sent from the frontal eye field (FEF) in prefrontal cortex to the superior colliculus (SC). We evaluated whether this efferent delay activity was related to movement, memory, or vision, to establish its possible functions. Using antidromic stimulation, we identified 66 FEF neurons projecting to the SC and we recorded from them while monkeys performed a Go/Nogo task. Early in every trial, a monkey was instructed as to whether it would have to make a saccade (Go) or not (Nogo) to a target location, which permitted identification of delay activity related to movement. In half of the trials (memory trials), the target disappeared, which permitted identification of delay activity related to memory. In the remaining trials (visual trials), the target remained visible, which permitted identification of delay activity related to vision. We found that 77% (51/66) of the FEF output neurons had delay activity. In 53% (27/51) of these neurons, delay activity was modulated by Go/Nogo instructions. The modulation preceded saccades made into only part of the visual field, indicating that the modulation was movement-related. In some neurons, delay activity was modulated by Go/Nogo instructions in both memory and visual trials and seemed to represent where to move in general. In other neurons, delay activity was modulated by Go/Nogo instructions only in memory trials, which suggested that it was a correlate of working memory, or only in visual trials, which suggested that it was a correlate of visual attention. In 47% (24/51) of FEF output neurons, delay activity was unaffected by Go/Nogo instructions, which indicated that the activity was related to the visual stimulus. 
In some of these neurons, delay activity occurred in both memory and visual trials and seemed to represent a coordinate in visual space. In others, delay activity occurred only in memory trials and seemed to represent transient visual memory. In the remainder, delay activity occurred only in visual trials and seemed to be a tonic visual response. In conclusion, the FEF sends diverse delay activity signals related to movement, memory, and vision to the SC, where the signals may be used for saccade generation. Downstream transmission of various delay activity signals may be an important, general way in which the prefrontal cortex contributes to the control of movement.

51
Q

Main sensorimotor projections involved in toad prey-catching behaviour

A

· Sensory pathway – retina to thalamic-pretectal area (TP) and tectum (feature coding and identification of prey items)
o Integration centres
o Integration to motor pathways
· Motor pathway – tectum and TP to hindbrain (execution of behavioural response if prey is identified)
Pathways overlap in the central brain areas

see notes

52
Q

Main sensorimotor projections involved in toad prey-catching behaviour research

A

Weerasuriya and Ewert (1981)

Wallace et al. (2020)

53
Q

Weerasuriya and Ewert (1981)

A

o 1. In accordance with recent recording experiments in paralyzed toads Bufo bufo (L.) neurons have been identified between layers 6 and 8 of the optic tectum that exhibit selective responses to the configuration of moving prey dummies.
o 2. Injection of HRP into the two extrinsic tongue muscles - which are the effectors of the toad’s snapping response - showed that motoneurons innervating the protractor (m. genioglossus) and the retractor (m. hyoglossus) have distinct topographical distributions within the hypoglossal nucleus of the medulla oblongata.
o 3. Following injection of HRP in the vicinity of the hypoglossal nucleus, retrogradely labelled fibers have been identified in (1) the dorso-lateral tegmenturn, (2) the fasciculus tegmentalis, (3) ansulate commissure of the ventral tegmentum, and (4) layer 7 of the optic tectum. Retrogradely labelled cells were identified in (1) the subtectal region, (2) the nucleus antero-ventralis tegmenti mesencephali, and (3) layer 6 of the optic tectum. Labelled cells were also identified in the caudal part of the area ventrolateralis thalami, occasionally in the lateral part of the posterocentral nucleus, and in the postero-lateral nucleus of the caudal thalamus.
4. The results are discussed with regard to the control of prey-catching behavior, and it is suggested that the toad’s optic tectum contains a substrate for sensorimotor interfacing.

54
Q

Wallace et al. (2020)

A

We have discovered a lamina of visually responsive units in the medulla oblongata of the frog. It spans the entire medial aspect of the rostrocaudal length of the medulla and extends dorsoventrally from the cell-dense dorsal zone into the cell-sparse ventral zone. Most visual units within this lamina have large receptive fields, with the majority extending bilaterally in the frontal visual field. Most of these neurons are binocular, have no apparent directional preference, respond equally well to stimuli of a variety of shapes and sizes, and exhibit strong habituation. More medial locations in the visual lamina represent ipsilateral visual space while more lateral locations within the lamina represent contralateral visual space. Many units in the caudal aspect of the visual lamina are bimodal, responding to both visual and somatosensory stimuli. HRP tracing reveals inputs to the lamina from many primary and secondary visual areas in the midbrain and diencephalon. There is no area-by-area segregation of the projections to the visual lamina. For example, most parts of the tectum project across the visual lamina. The only spatial order in the visual lamina is that at more medial sites there tends to be more input from contralateral tectum; and at more lateral sites there tends to be more input from ipsilateral tectum. There is bilateral input to the visual lamina from tectum, tegmentum, posterior nucleus of the thalamus, posterior tuberculum, and ventromedial thalamic nucleus. There is ipsilateral input to the visual lamina from torus semicircularis, pretectum, nucleus of Bellonci, and ventrolateral thalamic nucleus. There is contralateral input to the visual lamina from basal optic complex. Collectively, these results show the presence of visual influences in regions of the medulla that likely represent an important step in sensorimotor transformation.

55
Q

“No retinal feature detectors”: responses of retinal ganglion cells do not match behaviour

A

· If ganglion cells were important for recognizing shape, their responses should match the behavioural responses
Cells did not respond well to the worm configuration

see notes

56
Q

“No retinal feature detectors”: responses of retinal ganglion cells do not match behaviour research

A

Hood and Gordon (1981) - see notes?

57
Q

Hood and Gordon (1981)

A

Explains why neither of 2 alternative definitions of “feature detector” accurately describes the properties of the retinal ganglion cells of the frog. Because of a 2nd misconception, also perpetuated in textbooks, these cells are often described as functionally equivalent to the cortical cells of the monkey. The origins of this mistake are suggested. Actually, the frog retina is probably performing an analysis that is quite similar to that performed by the mammalian retina.

58
Q

Feature detectors in TP? (Simmons and Young, 1999)

A

· TH3 cells in the thalamic-pretectal area respond to moving visual stimuli
· T5 cells in the tectum belong to several cell classes, some of which respond to moving stimuli
· T5(2) neurons exhibit stimulus-invariant responses
· Link to strong responses for large squares
Relationship between avoiding large squares and not responding

59
Q

Feature detectors in TP? (Simmons and Young, 1999) research

A

Springer and Mednick (1985)

Robles et al. (2014)

60
Q

Springer and Mednick (1985)

A

The contribution of retinal ganglion cells situated in different retinal quadrants to the innervation of eight nontectal, retinorecipient targets was examined in goldfish. In some fish, cobaltous‐lysine was used to selectively fill severed intraretinal ganglion cell axons and the number of filled axons within each nucleus was determined. In other fish, either the dorsal or ventral or nasal or temporal retina was ablated and the remaining axons from the intact retina were filled with cobalt. The density of the cobalt‐filled axons within the retinorecipient targets was quantified with a microdensitometer. All of the eight targets received different degrees of innervation when the contributions from dorsal and ventral retina were compared. The suprachiasmatic nucleus received axons from ventral, but not from dorsal, retinal ganglion cells (RGCs), while the nucleus opticus dorsolateralis, nucleus opticus commissurae posterior, and nucleus opticus pretectalis dorsalis received more axons from ventral than from dorsal RGCs. The tuberal region, nucleus corticalis, and the accessory optic nucleus received axons from dorsal, but not from ventral, RGCs. The nucleus opticus pretectalis ventralis received more axons from dorsal then from ventral RGCs. Only one target, nucleus corticalis, appeared to receive more axons from nasal than from temporal RGCs. In general, those nuclei that were closest to the dorsal optic tract were innervated exclusively or predominantly by ventral RGC axons, whereas those nuclei that were closest to the ventral optic tract were innervated exclusively or predominantly by dorsal RGC axons. These data indicate that in this particular vertebrate, the dorsal and ventral retinal regions are not homogeneous with respect to their projections to nontectal nuclei. The possible role that the nontectal nuclei play in determining the course of optic axons is discussed.

61
Q

Robles et al. (2014)

A

o Background
§ Visual information is transmitted to the vertebrate brain exclusively via the axons of retinal ganglion cells (RGCs). The functional diversity of RGCs generates multiple representations of the visual environment that are transmitted to several brain areas. However, in no vertebrate species has a complete wiring diagram of RGC axonal projections been constructed. We employed sparse genetic labeling and in vivo imaging of the larval zebrafish to generate a cellular-resolution map of projections from the retina to the brain.
o Results
§ Our data define 20 stereotyped axonal projection patterns, the majority of which innervate multiple brain areas. Morphometric analysis of pre- and postsynaptic RGC structure revealed more than 50 structural RGC types with unique combinations of dendritic and axonal morphologies, exceeding current estimates of RGC diversity in vertebrates. These single-cell projection mapping data indicate that specific projection patterns are nonuniformly specified in the retina to generate retinotopically biased visual maps throughout the brain. The retinal projectome also successfully predicted a functional subdivision of the pretectum.
o Conclusions
Our data indicate that RGC projection patterns are precisely coordinated to generate brain-area-specific visual representations originating from RGCs with distinct dendritic morphologies and topographic distributions.

62
Q

T5(2) cells receive inhibitory input from T3 cells (Carew, 2000)

A

· (A) Experiments with 2 electrodes: one recording in T5(2) and the other stimulating TP neurons (T3 cells)
· When T3 cells were not stimulated, T5(2) responded to the stimulus
o When T3 cells were excited, the T5(2) cells became inactive
· (B) Excitatory input received by T3 cells from tectal cells
Toads would either move towards or away from the stimulus

see notes
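The gating described in this card can be sketched as a toy subtractive model. This is a hypothetical illustration (the rectification and the `inhibition_gain` value are assumptions, not measured physiology): T5(2) output is tectal excitation minus inhibition driven by the thalamic-pretectal cells, so exciting those cells silences T5(2), and removing their input disinhibits it.

```python
def t52_response(tectal_excitation: float, tp_activity: float,
                 inhibition_gain: float = 2.0) -> float:
    """Toy model of T5(2) output: tectal excitation gated by
    inhibition from thalamic-pretectal (TP) cells.
    inhibition_gain is a hypothetical parameter; output is
    rectified because firing rates cannot go below zero.
    """
    return max(0.0, tectal_excitation - inhibition_gain * tp_activity)

# TP cells silent: T5(2) responds to the moving stimulus.
assert t52_response(1.0, 0.0) > 0
# TP cells excited (e.g. by electrical stimulation): T5(2) goes inactive.
assert t52_response(1.0, 0.8) == 0.0
# Removing TP input (lesion) can only increase the T5(2) response.
assert t52_response(1.0, 0.0) >= t52_response(1.0, 0.5)
```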

63
Q

Direct brain stimulations elicit orienting or avoidance movements and postures (Ewert, 1974)

A

· White dots – stimulation of the tectum
· Black dots – stimulation of the thalamus
· Stimulation of these parts of the brain caused the toad to turn towards or away from the corresponding part of the visual field
· If the thalamic-pretectal region is removed, avoidance responses are lost
· If the tectum is removed, orienting movements are no longer observed (either towards or away from a stimulus)
This suggests that there are pathways that connect the tectum and TP

64
Q

Direct brain stimulations elicit orienting or avoidance movements and postures (Ewert, 1974) research

A

Giffin (2020)

Yoshida (2016)

65
Q

Giffin (2020)

A

Visual acuity, the clarity of vision, is an important aspect of the visual system. Understanding the visual acuity of various species provides insight into how animals perceive and interact with their environment, and it helps to create a cohesive story of how vision and visual behaviors evolved. However, little is known about the visual acuities of anurans. Thus, we used the optomotor response to behaviorally quantify the acuity of two small, tropical frog species: the green-and-black poison dart frog, Dendrobates auratus, and the tùngara frog, Engystomops pustulosus. These frogs differ slightly in their natural histories, leading us to hypothesize that the poison frogs had higher visual acuities than the tùngara. To determine their visual acuity, we exposed the frogs to rotating black and white stripes of decreasing widths. They would exhibit the optomotor response by reflexively turning when viewing the stripes. We then calculated visual acuity as minimum separable angle (MSA) by determining the threshold of discrimination, or the narrowest stripe widths the frogs could perceive, based on whether they exhibited a positive optomotor response to that stripe width. We calculated the MSA of poison frogs as 4.033° and the MSA of tùngara frogs as 4.839°. These MSAs are weaker than previous visual acuities reported in two other larger species of frogs. This finding is consistent with visual acuity trends, as there is a positive relationship between body size and acuity. Our findings about the perceptual abilities of these small, tropical frogs warrants future research into visually dependent behaviors such as hunting and communication. Furthermore, this study contributes to the field of visual ecology by providing visual acuities for more anuran species, another piece in the puzzle of the evolution of the visual system and the behaviors that rely upon it.

66
Q

Yoshida (2016)

A

Visual object-recognition plays a crucial role in animals that utilize visual information. In this study, we address the prey-predator recognition problem by optimizing artificial convolutional neural networks, based on neuroethological studies on toads. After the optimization of the overall network by supervised learning, the network showed a reasonable performance, even though various types of image noise existed. Also, we modulated the network after the optimization process based on the computational theory of classical conditioning and the reinforcement learning algorithm for the adaptation to environmental changes. This adaptation was implemented by separated modules that implement the “innate” term and “acquired” term of outputs. The modulated network exhibited behaviors that were similar to those of real toads. The neural basis of the amphibian visual information processing and the behavioral modulation mechanism have been substantially studied by biologists. Recent advances in parallel distributed processing technologies may enable us to develop fully autonomous, adaptive artificial agents with high-dimensional input spaces through end-to-end training methodology.

67
Q

Tectal and thalamic TP cells are interconnected

A

Experiments with two electrodes: one recording from T5(2) cells and the other stimulating thalamic-pretectal TP neurons (TH3 cells). While the TH3 cells were not stimulated, the T5(2) cells responded to the visual stimulus; when the TH3 cells were excited, the T5(2) cells became inactive.
TH3 cells receive excitatory input from tectal cells: in the reversed setting (recording from TH3 cells), stimulation of some tectal cells excited the TH3 cells.

see notes

68
Q

Tectal and thalamic TP cells are interconnected research

A

Cohen and Castro-Alamancos (2007)

Yang et al. (2005)

69
Q

Cohen and Castro-Alamancos (2007)

A

Sensory stimuli acquire significance through learning. A neutral sensory stimulus can become a fearful conditioned stimulus (CS) through conditioning. Here we report that the sensory pathways used to detect the CS depend on the conditioning paradigm. Animals trained to detect an electrical somatosensory stimulus delivered to the whisker pad in an active avoidance task were able to detect this CS and perform the task when a reversible or irreversible lesion was placed in either the somatosensory thalamus or the superior colliculus contralateral to the CS. However, simultaneous lesions of the somatosensory thalamus and superior colliculus contralateral to the CS blocked performance in the active avoidance task. In contrast, a lesion only of the somatosensory thalamus contralateral to the same CS, but not of the superior colliculus, blocked performance in a pavlovian fear conditioning task. In conclusion, during pavlovian fear conditioning, which is a situation in which the aversive outcome is not contingent on the behavior of the animal, the sensory thalamus is a critical relay for the detection of the CS. During active avoidance conditioning, a situation in which the aversive outcome is contingent on the behavior of the animal (i.e., the animal can avoid the aversive event), the sensory thalamus and the superior colliculus function as alternative routes for CS detection. Thus, even from early stages of sensory processing, the neural signals representing a CS are highly distributed in parallel and redundant sensory circuits, each of which can accomplish CS detection effectively depending on the conditioned behavior.

70
Q

Yang et al. (2005)

A

The present study is the first attempt to make comparisons of the visual response properties between tectal and thalamic neurons with spatially overlapping receptive fields by using extracellular recording and computer mapping techniques. The results show that in neuronal pairs about 70% of thalamic cells have an excitatory receptive field alone, whereas 85% of tectal cells possess an excitatory receptive field surrounded by an inhibitory receptive field. In 70% of pairs the tectal cells are selective for a direction of motion different from that which the thalamic cells prefer. Most thalamic cells prefer high speeds (80–160 degrees/s), whereas tectal cells prefer intermediate (40 degrees/s) or low (10–20 degrees/s) speeds. Photergic and scotergic cells exist in the thalamus but not in the tectum. These results provide evidence that tectal and thalamic cells extract different visual information from the same region of the visual field. The functional significance of these differences is discussed.

71
Q

Proof of causality: lesion studies

A
· Lesions in the TP area change T5(2) activity and change the behavioural responses
· T5(2) responses become higher after the lesion (disinhibition)
· The 2 areas are interconnected
· Control (intact animal) preference:
Worm > square > anti-worm

see notes
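The control ordering (worm > square > anti-worm) and the disinhibition after a TP lesion can be sketched as a subtractive toy model; every number below is an illustrative assumption, not a measured value:

```python
# Assumed tectal drive and pretectal (TP) inhibition for each stimulus class
TECTAL_DRIVE = {"worm": 1.0, "square": 0.8, "anti-worm": 0.7}
TP_INHIBITION = {"worm": 0.1, "square": 0.4, "anti-worm": 0.6}

def t5_2_response(stimulus, tp_lesioned=False):
    """T5(2) output = tectal excitation minus TP inhibition (rectified)."""
    inhibition = 0.0 if tp_lesioned else TP_INHIBITION[stimulus]
    return max(0.0, TECTAL_DRIVE[stimulus] - inhibition)
```

With the TP intact the model reproduces the control preference; lesioning the TP removes the inhibitory term, so every stimulus drives T5(2) more strongly and the gap between prey and non-prey responses shrinks.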

72
Q

Proof of causality: lesion studies research

A

Carasig et al. (2006)

73
Q

Carasig et al. (2006)

A

We present a case of spontaneously occurring irrepressible saccades in an experimental Rhesus monkey. Though eye jerks are sometimes associated with cerebellar disease, central demyelination or brainstem lesions, there is little consensus on their neurological mechanisms. From neurological and anatomical investigation we report that these irrepressible saccades were caused by a discrete cerebrovascular accident that involved the rostral superior colliculus along with its commissure, and with minor invasion of periaqueductal gray and adjacent mesencephalic reticular formation. Other suspected structures, like the raphe interpositus, substantia nigra and the cerebellum, were unaffected.

74
Q

.

A

High degree of connectivity.
The idea of a simple releaser stimulus projecting onto a single cell that then activates a motor pathway is not supported by the evidence.

75
Q

A pick-and-place robot with the toad’s prey-recognition system (Fingerling et al., 1993)

A

· Alternative solutions can be found by considering how and why brains and nervous systems have evolved in different ways in nature
· The robot must pick the correct object and place it in the right area

76
Q

A pick-and-place robot with the toad’s prey-recognition system (Fingerling et al., 1993) research

A

The present review points out that visuomotor functions in anurans are modifiable and provides neurophysiological data which suggest modulatory forebrain functions. The retino-tecto/tegmento-bulbar/spinal serial processing streams are sufficient for stimulus–response mediation in prey-catching behaviour. Without its modulatory connections to forebrain structures, however, these processing streams cannot manage perceptual tasks, directed attention, learning performances, and motor skills. (1) Visual prey/non-prey discrimination is based on the interaction of this processing stream with the pretectal thalamus involving the neurotransmitter neuropeptide-Y. (2) Experiments applying the dopamine agonist apomorphine in combination with 2DG mapping and single neurone recording suggest that prey-catching strategies in terms of hunting prey and waiting for prey depend on dose dependent dopaminergic adjustments in the neural macronetwork in which retinal, pretecto-tectal, basal ganglionic, limbic, and mesolimbic structures participate. (3) Visual response properties of striatal efferent neurones support the concept that ventral striatum is involved in directed attention. (4) Various modulatory loops involving the ventral medial pallium modify prey-recognition in the course of visual or visual-olfactory learning (associative learning) or are responsible for stimulus-specific habituation (non-associative learning). (5) The circuits suggested to underlie modulatory forebrain functions are accentuated in standard schemes of the neural macronetwork. These provide concepts suitable for future decisive experiments.