Aston University researchers receive stimulus funds to study fundamental processes of human vision


A team of researchers led by Professor Mark Georgeson and Dr Timothy Meese at Aston University in Birmingham, UK, has been awarded over £1 million in funding from the BBSRC (Biotechnology and Biological Sciences Research Council) and the EPSRC (Engineering and Physical Sciences Research Council) to investigate some of the fundamental processes involved in human vision.

The BBSRC project will see the Aston research team conduct psychophysical experiments to study the binocular perception and performance of human observers in order to develop a more complete theory and understanding of binocular spatial vision through successive refinements of a computational model.

Professor Georgeson explains: 'We have two eyes but see one world. Although this may seem obvious (there is only one world), there are conditions where we do see double. For example, if you hold your index finger up a few inches in front of your nose and then fix your gaze on a more distant object straight ahead, your finger will be clearly seen as two, side by side. Then, as you switch your gaze to the finger, the two will become one. This is binocular fusion, or 'single vision', and it is achieved by the brain (the visual cortex, containing millions of brain cells) piecing together the information from both eyes. As a result, your ability to detect very faint things, or to see fine details, is better with two eyes than one. Our research aims to study single vision and double vision in carefully controlled experiments, and to build a general explanation in the form of a computer model that identifies both the main mechanisms of binocular fusion and how they serve to identify basic visual features, such as lines and edges, that may be the building blocks of perception.'
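The binocular advantage Professor Georgeson mentions - faint targets are easier to detect with two eyes than one - is often illustrated with a simple 'quadratic summation' rule, in which the combined response grows as the root-sum-of-squares of the two monocular contrast signals. The sketch below is a generic illustration of that textbook idea only, and is not the computational model being developed in this project.

    import numpy as np

    def quadratic_summation(c_left, c_right):
        """Combine contrast signals from the two eyes by quadratic summation.

        A commonly cited illustrative rule for binocular summation, NOT the
        specific model being built by the Aston team.
        """
        return np.sqrt(c_left ** 2 + c_right ** 2)

    # Equal input to both eyes gives a combined signal about 1.4x (sqrt 2)
    # stronger than one eye alone, consistent with the improved detection
    # of faint targets described above.
    print(quadratic_summation(1.0, 1.0))  # ~1.414 (binocular)
    print(quadratic_summation(1.0, 0.0))  # 1.0    (monocular)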

Aston University specialises in rigorous research that makes an impact on individuals, organisations and society, and this research project could lead to potential healthcare applications in optometry and vision correction.

Prof Georgeson continued: 'Presbyopia, the loss of the eye's focussing adjustment from around the age of 40 onwards, is a universal problem. One intriguing solution, known as 'monovision', is to give each eye a different contact lens - one for far, the other for near. After a period of adjustment, many users find this works well, but it is not clear why. Optometrists often assume that the brain is locally selecting the sharper image by suppressing the more blurred one. By studying more precisely the conditions under which blurred edges are combined (across the two eyes and within one eye), our experiments - and the theoretical model that emerges - should shed new light on the basis of 'monovision', perhaps leading to improvements in diagnosis, treatment and prescription.'

The second research grant is from the EPSRC. Dr Tim Meese explains: 'When we open our eyes, we see, without effort. Our visual experience begins with the mechanics of focussing the image on the back of the eye; but to make sense of the image - to perceive - our brains must identify the various parts of the image and understand their relations. Just like a silicon-based computer, the brain performs millions of computations quickly and effectively, without our ever sensing them. But what are the computations needed to recognise, say, your mother; to segment an object from its background; or even to appreciate that one part of an image belongs with another? The starting point for this analysis is the distribution of light levels across the retinal image, which we can think of as a set of pixels. Interesting parts of the image (e.g. object boundaries) occur at regions of change: where neighbouring pixels have very different values. These regions are identified by neurons in the primary visual cortex by computing differences between adjacent pixel values to build a neural image of local contrasts: the 'contrast image'. These contrast-defined local image features are then combined across retinal space at later stages of visual processing to represent elongated contours (for example the branches of a tree) and textured surfaces (for example a ploughed field) in what is sometimes known as a 'feature map'.

'In this project we will utilise a new type of stimulus and modelling framework that we have developed to investigate the computational rules that control the point-by-point integration of information in the 'contrast image.'

'The main users of this research will be vision scientists, neurophysiologists and engineers working on spatial vision, the neural basis of vision, visual cognition and image processing algorithms.'
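As a rough illustration of the 'contrast image' computation Dr Meese describes - taking differences between adjacent pixel values to highlight regions of change - the sketch below builds a simple local-contrast map from an array of pixel intensities. It is a minimal, assumed illustration of the general idea, not the new stimulus or modelling framework developed by the Aston group.

    import numpy as np

    def contrast_image(pixels):
        """Build a crude 'contrast image' from a 2-D array of pixel intensities.

        Interesting regions are those where neighbouring pixel values differ
        strongly, so we take differences between horizontally and vertically
        adjacent pixels and combine them into one local-contrast value per
        location. A minimal illustration only, not the Aston group's model.
        """
        dx = np.diff(pixels, axis=1)[:-1, :]   # horizontal neighbour differences
        dy = np.diff(pixels, axis=0)[:, :-1]   # vertical neighbour differences
        return np.hypot(dx, dy)                # local contrast magnitude

    # A toy image: a dark square on a bright background. The contrast image
    # is near zero inside uniform regions and large along the square's edges.
    img = np.ones((8, 8))
    img[2:6, 2:6] = 0.0
    print(contrast_image(img).round(2))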
