Vision Quest: how does vision work?

Until recently, the mechanics of how vision works had not been an important question for me. I’ve always been exceedingly nearsighted, but glasses fixed that. In my sixties, as for many others, cataract surgery became necessary and also corrected my nearsightedness. Then, with the onset of macular degeneration, a common condition of aging, the treatment became eye injections every month or six weeks. No matter the amount of Lidocaine used, I have to say that eye injections hurt! I got used to them, though, because they were saving my sight. What I did not anticipate two years ago were retinal hemorrhages. They caused legal blindness that treatment can slow but not cure. The deterioration appears to have stabilized, so I’ve not lost all sight. In decent light, I can see objects, movement and color up to a few hundred feet ahead.

These issues have made me think about how vision works. My elementary understanding is that it’s largely neurological. Visual receptors in the retina receive input from the act of looking at something and pass the data along the optic nerve, rather the way a digital camera converts an analog image into digital code. The data is received at the rear of the brain, in the visual cortex, where it is decoded into an image of the thing looked at. So far so good. This is where it gets complicated. A decoded image has little meaning if there isn’t a frame of reference that fits it. Think of a toddler learning the names of objects: dog, cat, cow. We think we’re teaching language when we point and say the right word for the right object, and so we are. But we’re also teaching the visual cortex to associate the characteristics of a certain shape with an identity that can be recalled whenever any new dog, cat or cow is encountered, no matter where or under what conditions. With a lot of deer in our area, I’ve seen toddlers pointing at a fawn and saying “doggie.” A good guess, but a mistaken one, made without ever having encountered that image before.
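Purely as an illustration of that analogy, and not a claim about how the brain actually computes, here is a toy sketch in Python: it holds a few made-up “remembered” shapes and labels a new, never-before-seen one with whatever remembered shape it most resembles. The feature numbers and category names are invented for the example.

    import math

    # Made-up "remembered references" learned earlier: rough feature
    # sketches (overall size, leg length, ear size) for familiar animals.
    remembered = {
        "dog": (0.4, 0.3, 0.3),
        "cat": (0.2, 0.2, 0.2),
        "cow": (1.0, 0.6, 0.4),
    }

    def recognize(observed):
        """Label the observed features with the closest remembered category."""
        return min(remembered, key=lambda name: math.dist(observed, remembered[name]))

    # A fawn: smallish, leggy, big ears. Never seen before, so it is
    # matched to the nearest thing memory holds.
    print(recognize((0.5, 0.5, 0.5)))  # prints "dog"

Run on a “fawn” it has never seen, the sketch confidently answers “dog,” the same good but mistaken guess the toddler makes.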


Back to my issue.  My retinal receptors now send incomplete, distorted data to the optic nerve.  The brain’s magic is that it can decode bad data into recognizable images if it has enough remembered references to work from.  It means that everyday encounters with familiar places and things I can recall from before my eyesight loss can be composed into recognizable, if fuzzy and out-of-focus, pictures.  It’s why, white cane in hand, I can confidently walk almost anywhere within a mile or so of home.  It’s also why the tools of ordinary daily life are discernible, most of the time. Occasionally my memory tempts me toward overconfidence and I make stupid mistakes, none serious so far. Little things check the temptation: too often I knock over a full glass, lose something in plain sight, or fail to recognize a common object for what it is.  It’s all but impossible to look at someone’s face, even a good friend’s, and clearly recognize it. There are some get-around tactics: focus on one thing at a time; associate size, shape and movement with particular individuals; and, the hardest, intentionally and consistently put things in the same place: something I consistently don’t do.


We’ve learned in recent travels that there are limits to my comfort level.  In strange environments where I have no prior frame of reference, even simple objects fail to have meaning. My brain has nothing remembered to associate with the new incomplete, distorted input data. I need someone to tell me: this is where you are, this is what it is, this is what it does, this is how to understand it.  It’s a more sophisticated version of the child’s dog, cat, cow lessons.  With sufficient verbal and visual clues, the brain is able to construct a reasonable facsimile of what those with full sight see.  Even though it lacks detail, it stores the memory for future use should I ever return. My discomfort becomes more acute in crowded places full of things moving in various directions, with sudden, unpredictable turns.  There a stationary frame of reference has to be set aside in favor of physical safety.  Airports and crowded streets in strange cities are prime examples.  They create a degree of high anxiety and confusion, abated by my wife, Dianna, acting as guide and protector, a bittersweet reversal of roles.  We’ve adapted well to our new roles and have only once been uncomfortably separated.  She does often disappear into the labyrinth of the grocery store, but I have the cart and know she’ll eventually return.


Committing place characteristics to memory is my key to physical adaptation.  It seems to come without effort, which is to say I don’t have to concentrate on thinking “Oh, I have to remember this.”  It just happens.  On the familiar sidewalks of my daily wandering I know that if I’m at this place there is a bump, a step or a missing paving stone.  Once, when alone at a sandwich shop, I turned the wrong way to go home.  After a block or two I wondered where the heck I was.  Not to worry: knowing and recalling the streets in that area, I eventually found my way.  The walk home was a bit longer than usual, but I made it. I also found through that experience that iPhone walking directions are not helpful: “head northeast on Maple and turn left on Pine” fails to tell me which way is northeast, whether I’m on Maple, or how I’ll know when I get to Pine.  Maybe the Apple people can work on that.
Well, enough of all this rambling. Important worldly events are afoot, and more needs to be said about religious faith.  So back to work.
