On Feb. 22 in the Journal of Neural Engineering, Stanford University researchers Daniel Palanker, Alexander Vankov and Phil Huie from the Department of Ophthalmology and the Hansen Experimental Physics Laboratory, along with Stephen Baccus from the Department of Neurobiology, published a design for an optoelectronic retinal prosthesis system that can stimulate the retina with resolution corresponding to a visual acuity of 20/80: sharp enough to orient yourself toward objects, recognize faces, read large fonts, watch TV and, perhaps most important, lead an independent life.
The researchers hope their device may someday bring artificial vision to those blind due to retinal degeneration. They are testing their system in rats, but human trials are at least three years away.
“This is basic research,” said Palanker, a physicist whose primary appointment is in the Ophthalmology Department. “It’s the essence of Bio-X,” he said, referring to Stanford’s interdisciplinary initiative to speed biomedical research from benchtop to bedside.
The project is funded in part by the U.S. Air Force and VISX Corp., which licensed the technology through Stanford’s Office of Technology Licensing. Harvey Fishman, who is not an author of the current paper but directs the Stanford Ophthalmic Tissue Engineering Laboratory, pioneered the project.
Degenerative retinal diseases result in the death of photoreceptors: rod-shaped cells at the retina’s periphery responsible for night vision, and cone-shaped cells at its center responsible for color vision. Worldwide, 1.5 million people suffer from retinitis pigmentosa (RP), the leading cause of inherited blindness. In the Western world, age-related macular degeneration (AMD) is the major cause of vision loss in people over age 65, and the issue is becoming more critical as the population ages. Each year, 700,000 people are diagnosed with AMD, with 10 percent becoming legally blind, defined as 20/400 vision. Many AMD patients retain some degree of peripheral vision.
“Currently, there is no effective treatment for most patients with AMD and RP,” the researchers say in their paper. “However, if one could bypass the photoreceptors and directly stimulate the inner retina with visual signals, one might be able to restore some degree of sight.”
To that end, the researchers plan to directly stimulate the layer underneath the dead photoreceptors using a system that looks like a cousin of the high-tech visor blind engineer Lt. Geordi La Forge wore in Star Trek: The Next Generation. It consists of a tiny video camera mounted on transparent “virtual reality” style goggles. There’s also a wallet-sized computer processor, a solar-powered battery implanted in the iris and a light-sensing chip implanted in the retina.
The chip is the size of half a grain of rice (3 millimeters) and allows users to perceive 10 degrees of visual field at a time. It’s a flat rectangle of plastic (eventually a silicon version will be developed) with one corner snipped off to create asymmetry so surgeons can orient it properly during implantation. One design includes an orchard of pillars: One side of each pillar is a light-sensing pixel and the other side is a cell-stimulating electrode. Pillar density dictates image resolution, or visual acuity. The strip of orchard across the top third of the chip is densely planted. The strip in the middle is moderately dense, and the strip at the bottom is sparser still. Dense electrodes lead to better image resolution but may inhibit the desirable migration of retinal cells into voids near electrodes, so the different electrode densities of the current chip design allow the researchers to explore these parameters and come up with a chip that performs optimally. Another design, pore electrodes, involves an array of cavities with a stimulating electrode located inside each of them.
How does the system work when viewing, say, a flower? First, light from the flower enters the video camera. (Keep in mind that camera technology is already pretty good at adjusting contrast and other types of image enhancement.) The video camera then sends the image of the flower to the wallet-sized computer for complex processing. The processor then wirelessly sends its image of the flower to an infrared LED-LCD screen mounted on the goggles. The transparent goggles reflect an infrared image into the eye and onto the retinal chip. Just as a person with normal vision cannot see the infrared signal coming out of a TV remote control, this infrared flower image is also invisible to normal photoreceptors. But for those sporting retinal implants, the infrared image of the flower electrically stimulates the implant’s array of photodiodes. The result? They may not have to settle for merely smelling the roses.
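In software terms, the pipeline looks roughly like the sketch below. It is purely illustrative: the function names and the toy contrast stretch are assumptions made for the example, not the actual processing performed by the Stanford system.

```python
import numpy as np

def process_frame(raw_frame: np.ndarray) -> np.ndarray:
    """Wallet-sized processor stage: contrast adjustment and other
    image enhancement happen here, in hardware outside the eye."""
    lo, hi = float(raw_frame.min()), float(raw_frame.max())
    return (raw_frame - lo) / max(hi - lo, 1e-9)  # simple contrast stretch

def project_infrared(frame: np.ndarray) -> np.ndarray:
    """Goggle stage: the processed frame is shown on an infrared
    LED-LCD screen and reflected into the eye; normal photoreceptors
    cannot see it, just as they cannot see a TV remote's signal."""
    return frame  # optically a projection, computationally a pass-through

def stimulate_photodiodes(ir_image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Implant stage: each pixel of the infrared image drives one
    photodiode, which in turn stimulates nearby retinal cells."""
    return ir_image > threshold  # True where an electrode would fire

camera_frame = np.random.rand(32, 32)  # stand-in for one video frame of the flower
electrode_activity = stimulate_photodiodes(project_infrared(process_frame(camera_frame)))
```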
Complex processing: The eyes have it
The eye is a complex machine. It has more than 100 million photoreceptors. “If we compare it to modern digital cameras, for example, it will be 100 megapixels,” Palanker said during an interview in the Hansen Experimental Physics Laboratory. “We buy cameras usually of three megapixels, maybe four.”
And if electronic cameras do a good job of image processing, the eye does a spectacular job, compressing information before sending it to the brain through the 1 million axons that make up the optic nerve. “We have a built-in processor in the eye,” Palanker said. “Before it goes into the brain, the image is significantly processed.”
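The numbers Palanker cites make that compression concrete; a quick back-of-the-envelope check:

```python
photoreceptors = 100_000_000   # roughly 100 million light-sensing cells
optic_nerve_axons = 1_000_000  # roughly 1 million fibers leaving the eye

# The retina's built-in processing squeezes the image by about 100:1
# before it ever reaches the brain.
print(f"~{photoreceptors // optic_nerve_axons}:1 compression")
```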
The bottom layer of photoreceptors is where rhodopsin, a protein pigment that converts light into an electrical signal, resides. But as far as signal processing is concerned, the rubber meets the road where the signal enters the inner nuclear layer, which is populated with bipolar, amacrine and horizontal cells. These three cellular workhorses process the signals and transfer them to the ganglion cell layer, the “output cascade” of nerves that delivers signal pulses to the brain.
It’s best to place an implant at the earliest accessible level of image processing, Palanker said. “The earliest [accessible level] in degenerated retina is in the nuclear layer, and the more you go along the chain of image processing, the more complex the signals become.”
The Stanford researchers try to take advantage of most of the processing power remaining in the retina after degeneration by placing their implant beneath the retina, on the side facing the outside of the eyeball (“subretinal” placement). Several other groups in the United States, Germany and Japan instead place their implants on the retina’s inner surface, the side facing the interior of the eye (“epiretinal” placement).
Signal processing allows the eye to detect direction of motion, perceive colors, enhance contrast and adjust to different levels of brightness. “Our eye is an amazingly adjustable machine,” Palanker said. It operates in brightness levels that span eight orders of magnitude, meaning it can detect both dim objects and those 100 million times brighter, “from moonless night to bright day,” he said.
It may seem counterintuitive that as it gets processed by the visual system, the signal travels from the back of the eye toward the eye’s interior, rather than from the inner surface of the retina and out the back of the eye. But metabolically active photoreceptors need a lot of support. They are connected to a highly pigmented layer called retinal pigment epithelium (RPE) that grows atop a highly vascularized layer of tissue (choroid) carrying a heavy flow of blood. If the blood supply and the RPE were inside the eye, they would obscure light from the photosensitive cells. Explained Palanker: “That is why it’s built upside down, because those cells on top, the bipolars and ganglions, do not require as many nutrients and as much metabolic support as do photoreceptors.”
A crucial aspect of visual perception is eye motion. Palanker said the Stanford system provides a powerful advantage over more basic devices now being tested in humans by a U.S. company: besides making the most of the eye’s natural image-processing strengths through subretinal placement of the implant, the system tracks the rapid, intermittent eye movements required for natural image perception. Vankov, a physicist, designed the projection and tracking system.
“In reality, when you think you are fixating to a certain point, your eyes are not steady,” Palanker said. “You are microscanning it all the time. So if you would be projecting an image not through the eye, but just deliver it from the camera to the implant, bypassing the moving eye, this will not be natural perception because you will completely eliminate this link.”
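A minimal sketch of that idea follows, assuming a hypothetical gaze tracker that reports how far the eye has moved; the actual projection-and-tracking system Vankov designed is optical hardware, not this toy code.

```python
import numpy as np

def shift_image(image: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Re-project the frame so it lands on the implant the way a
    stationary scene would land on a naturally moving retina."""
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

def project_frame(scene: np.ndarray, gaze_dx: int, gaze_dy: int) -> np.ndarray:
    # Piping the camera image straight to the implant would bypass the
    # eye's constant "microscanning." Shifting the projected frame by
    # the tracked gaze offset restores that small image motion, which
    # natural perception depends on.
    return shift_image(scene, gaze_dx, gaze_dy)

scene = np.random.rand(64, 64)                        # processed camera frame
frame = project_frame(scene, gaze_dx=2, gaze_dy=-1)   # follow a microsaccade
```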
Alon Asher, a graduate student in computer science at Tel Aviv University, spent a semester working with Palanker on the software that links image processing to motion detection. He now continues his work on the project from Israel. Assistant Professor of Neurobiology Stephen Baccus, a co-author of the paper who is an expert in retinal signal processing, advises the group about the details of image processing.
In the Stanford system, image amplification and other processing occur in the hardware, outside the eye. If amplification occurred inside the implant’s pixels, as it does in one German design, there’d be no way short of surgery to make adjustments.
The Stanford system also makes new use of an old trick. By co-aligning real and enhanced images, it allows patients to utilize any remaining peripheral vision while making the most of the implant. Virtual reality systems that allow co-alignment of real and simulated views are already in use by pilots and surgeons, Palanker said. “This co-alignment of additional information with the normal view allows surgeons to see in the microscope the operating site, while the other eye is getting a projection of, say, a CT or MRI image of the same patient. So they can relate the information that they don’t see in the operating site to anatomic findings and know exactly where the tumor or other problem is.”
The amazing grace of physics
The new design answers major questions about what’s feasible for bionic devices. Biology imposes limitations, such as the need for a system that will not heat cells by more than 1 degree Celsius and for electrochemical interfaces that are not corrosive.
Current retinal implants provide very low resolution, just a few pixels, but several thousand pixels would be required to restore functional sight. The Stanford design employs a pixel density of up to 2,500 pixels per square millimeter, corresponding to a visual acuity of 20/80, which could provide functional vision for reading books and using the computer.
Physical limitations regarding electrical stimulation most likely make it impossible for implants to impart a visual acuity of 20/10 (the sharpness required to see the bottom line on an eye chart), 20/20 (the so-called standard of good vision) or even 20/40 (the level to which vision must be correctable to be eligible for a California driver’s license).
A major limiting factor in achieving high resolution is the proximity of electrodes to target cells. A pixel density of 2,500 pixels per square millimeter corresponds to a pixel size of only 20 micrometers, yet for effective stimulation the target cell should be no more than 10 micrometers from the electrode. It is practically impossible to place thousands of electrodes that close to cells, Palanker said. With subretinal implants, but not epiretinal ones, the Stanford researchers discovered a phenomenon, retinal migration, that they now rely on to draw retinal cells to within 7 to 10 microns of the electrodes. Within three days, cells migrate to fill the spaces between pillars and pores.
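The geometry behind those figures is simple arithmetic, checked below:

```python
import math

pixels_per_mm2 = 2500                          # density from the Stanford design
pixel_area_um2 = 1_000_000 / pixels_per_mm2    # 1 mm^2 = 1,000,000 square micrometers
pixel_pitch_um = math.sqrt(pixel_area_um2)     # side length of one square pixel

print(pixel_pitch_um)  # 20.0 micrometers per pixel

# Half the pixel pitch equals the ~10-micrometer stimulation limit the
# researchers cite. Rather than trying to place thousands of electrodes
# that precisely, they let the retinal cells migrate to the electrodes.
```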
“If the mountain doesn’t come to Muhammad, Muhammad goes to the mountain,” Palanker said. “We cannot place electrodes that close to cells. We actually invite cells to come to the electrode site, and they do it happily and very quickly.”
Currently the researchers are testing two designs in parallel because they aren’t yet sure which will be best. One design uses electrodes that protrude up from the chip like pillars. The pillars allow retinal cells greater access to nutrients and let researchers affect specific cell layers by controlling the height of the pillars. But pillars expose more cells to current, potentially heating tissue and increasing the chance for “cross-talk,” where many electrodes affect one cell. The second design has electrodes recessed into pores, which localizes currents and makes stimulation selective, perhaps allowing researchers to stimulate single cells.
Huie, a cell biologist and histologist, implants the chips in rats using a unique tool he and others developed. So far his short-term rat studies show no rejection of the implants. The next step will be longer tests in rats, as well as tests in larger animals for which models of retinal dystrophy exist. The researchers are currently shipping chips to Joseph Rizzo, a professor of ophthalmology at Harvard Medical School, for implantation into pigs.
Professor Mark Blumenkranz, chair of the Ophthalmology Department, advises the authors about surgical issues, and Professor Michael Marmor in that department, an expert in retinal physiology, provides advice about retinal electrophysiology. Graduate students Ke Wang in applied physics and Neville Mehenti in chemical engineering are currently working with Fishman of the Stanford Ophthalmic Tissue Engineering Laboratory on carbon nanotube electrodes and on chemical stimulation of the retinal cells. Medical student Ian Chan continues to develop lithographic fabrication technology for the implants. Alex Butterwick, a graduate student in applied physics, is studying the mechanisms of cellular damage and the safe limits of electrical stimulation.