pcmag.comRestoring Vision With Bionic Eyes: No Longer Science Fiction - PCMag UK " /> Skip to main content PCMag UK News & Analysis By S.C. Stuart 9 Jul 2019, 3:29 p.m. Dr. Michael Beyeler wants to interface with your brain through bionic vision. Find out why he compares his work to a 'less creepy' version of the brain implants on 'Black Mirror.' We review products independently, but we may earn affiliate commissions from buying links on this page. Terms of use. (Yuichiro Chino / Getty) Bionic vision might sound like science fiction, but Dr. Michael Beyeler is working on just that. Originally from Switzerland, Dr. Beyeler is wrapping up his postdoctoral fellow at the University of Washington before moving to the University of California Santa Barbara this fall to head up the newly formed Bionic Vision Lab in the Departments of Computer Science and Psychological & Brain Sciences. We spoke with him about this "deep fascination with the brain" and how he hopes his work will eventually be able to restore vision to the blind. Here are edited and condensed excerpts from our conversation. Dr. Beyeler, give us an overview of the 'neural engineering' field that will lead to bionic sight in the future. [MB] Neuroengineering is an emerging interdisciplinary field aiming to engineer devices that can interface with the brain. Kind of like the brain implants from Black Mirror, but much less creepy. [Laughs] The human brain has roughly 100 billion nerve cells, or neurons, and trillions of connections between them, organised into different brain areas each supporting a particular task; for example, processing visual or auditory information, making decisions, or getting from A to B. You can imagine that understanding how these neural circuits give rise to perception and action requires bringing together skills from a variety of disciplines, such as neuroscience, engineering, computer science, and statistics. Explain how these BMIs work in your field. I've tested them for mood elevation, but not connected to visual states. Right. "Brain-computer interfaces" can be used both for treating neurological and mental disorders as well as for understanding brain function, and now engineers have developed ways to manipulate these neural circuits with electrical currents, light, ultrasound, and magnetic fields. Remarkably, we can make a finger, arm, or even a leg move just by activating the right neurons in the motor cortex. Similarly, we can activate neurons in the visual cortex to make people see flashes of light. The former allows us to treat neurological conditions such as Parkinson's disease and epilepsy, whereas the latter should eventually allow us to restore vision to the blind. Amazing. And what kinds of devices are currently in the field?The idea of a visual prosthesis, or bionic eye, is no longer science fiction. You might have heard of the Argus II, a device developed by a company called Second Sight, which is available across the US, Europe, and in some Asian countries. It's for people who have lost their sight due to a retinal degenerative disease such as retinitis pigmentosa and macular degeneration. How many people today have these retinal prostheses?I believe there are now more than 300 Argus II users around the world and the manufacturer, Second Sight, has also just started implanting ORION, a device that skips the eye entirely and directly interfaces with the visual cortex. Apart from that, we are also anxiously awaiting the first results of PRIMA, a new subretinal device developed at Stanford University and commercialised by a French company called Pixium Vision. So this is a growing field?Definitely. In fact, some 30 more devices are in development across the globe. Overall there should be a wide variety of sight restoration technologies available within the next decade. (A woman wearing Argus II glasses tests Dr. Beyeler's system) For clarity, explain how the current systems work.When individuals, due to different diseases, no longer have their photoreceptors—the light-gathering cells in the back of the eye—the idea is to replace these cells with a microelectrode array that mimics their functionality. Argus II users also wear a pair of glasses with a small camera embedded, so the visual input of the camera can be translated into a series of electrical pulses that the implant delivers to the neural circuits in the eye. For most patients, Argus II provides "finger-counting" levels of vision—people can differentiate light from dark backgrounds and see motion, but their vision is blurry and often hard to interpret. Unfortunately, with current technology it turns out to be really hard to mimic the neural code in the eye and the visual cortex to fool the brain into thinking that it saw something meaningful. This is where I come in. My goal is basically to understand how to go from camera input to electrical stimulation and come up with a code that the visual system can interpret. This requires both a deep understanding of the underlying neuroscience as well as the technical skills to engineer a viable real-time solution. And how do you do this?By using tools from computer science, neuroscience, and cognitive psychology. For example, we come up with mathematical equations that describe how individual neurons respond to electrical stimulation. We also perform simple psychophysical experiments, such as asking Argus II users to draw what they see when we stimulate different electrodes. We then use insights from these experiments to develop software packages that predict what people should see for any given electrical stimulation pattern, which can be used by the device manufacturer to make the artificial vision, provided by these devices, more interpretable for the user. Are you focusing on bionic (artificial) rather than biomimicry (natural) vision?Yes, because instead of focusing on "natural" vision, we might be better off thinking about how to create "practical" and "useful" artificial vision. We have a real opportunity here to tap into the existing neural circuitry of the blind and augment their visual senses much like Google Glass or the Microsoft HoloLens. For example, make things appear brighter the closer they get, use computer vision to mark safe paths and combine it with GPS to give visual directions, warn users of impending dangers in their immediate surroundings, or even extend the range of "visible" light with the use of an infrared sensor. Once the quality of the generated artificial vision reaches a certain threshold, there are a lot of exciting avenues to pursue. On a practical level, how are you using technology in your research?Since we don't develop our own implants, we often collaborate with different device manufacturers. Recently we have been making extensive use of Argus II, which comes with its own quite sophisticated software development kit. Second Sight have been very forthcoming with us, both by providing access to patients as well as by enduring our nagging requests to make minor software modifications so that we can field-test our crazy theories. In the end, these collaborations should be a win-win for both parties, ideally trading data for insight. What other tools, and software, do you use in your work?The field is currently dominated by different device manufacturers, who (understandably) can be very protective of their intellectual property. However, the Swiss in me regards it as important to provide a neutral academic voice promoting tools and resources that are available to all. We therefore focus heavily on open-science practices. You've developed some open-source projects, right?Yes, in this spirit, we were the first to make our simulation engine, pulse2percept, available as an open-source Python package. The goal of pulse2percept is to predict what a patient should see for any given input stimulus. Interestingly, this approach has already gained the attention of Second Sight and Pixium Vision, who expressed interest in using our software to predict what their patients are seeing. In the future, my goal is to adapt this software to other devices as they become available. What brought you to the US, from Switzerland, in the first place?I started out as an electrical engineer, in Zurich, because I've always been interested in how things work, but became more and more interested in the brain itself, realising my skills as an electrical engineer were directly transferable to understanding how the brain works. I could take signal processing, network theory, and information theory and—through biomedical engineering and neuroscience—work towards brain-inspired neural networks and robotics—and use all these concepts to do something really good. That's how I ended up at the University of California Irvine to continue my studies and do my PhD. I was only planning to come to the US for 9 months or so. And you're still here a decade later.[Laughs] Right. Back to your research today: who are your main supporters?My recent work would not have been possible without generous support from the Washington Research Foundation in combination with the Gordon & Betty Moore Foundation and the Alfred P. Sloan Foundation. On top of that, I've been very fortunate to receive a K99 Pathway to Independence Award from the National Eye Institute at NIH. This a prestigious five-year grant that is meant to ease my transition into starting my own research group as an Assistant Professor. You're about to join the Departments of Computer Science and Psychological & Brain Sciences at the University of California Santa Barbara and build a Bionic Vision Lab, which is such a cool name.Yes, I am super stoked about this opportunity. There are lots of clinical research groups studying the effects of blinding degenerative diseases, and several biomedical groups engineering new devices. But nobody is really focusing on novel methods and algorithms to improve the code with which these devices interact with the human visual system itself. Our group will be an interdisciplinary effort that tries to combine insights from neuroscience with computer science and engineering to build smarter brain-computer interfaces and dream up new ways to maximise the practicality of artificial vision. Finally, how did you become interested in this field in the first place?To me this is the ultimate scientific quest, and it has the potential to cure blindness. In the end, it all comes back to a deep fascination with the brain—this mysterious hunk of meat that uses less power than a light bulb to give rise to our conscious perception of the world. How on earth does the brain do it? It is extraordinary, when you really think about it.It's perhaps one of the last big remaining scientific mysteries. And what better way to test our understanding of the brain than to build a device that can safely and meaningfully interact with it? I mean, the technology to tap into this complex circuitry is coming, there is no way around that, and it will allow us to manipulate our perception, our decisions, our actions. We better start thinking about how to use these powers for good. Next Article More Inside PCMag.com About the Author S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting). Follow S.C. on Twitter @SCStuart2020 See Full Bio Please enable JavaScript to view the comments. Ad \n\n\n\n\nOriginally from Switzerland, Dr. Beyeler is wrapping up his postdoctoral fellow at the University of Washington before moving to the University of California Santa Barbara this fall to head up the newly formed Bionic Vision Lab in the Departments of Computer Science and Psychological & Brain Sciences.\n\nWe spoke with him about this \"deep fascination with the brain\" and how he hopes his work will eventually be able to restore vision to the blind. Here are edited and condensed excerpts from our conversation.\n\nDr. Beyeler, give us an overview of the 'neural engineering' field that will lead to bionic sight in the future. \n[MB] Neuroengineering is an emerging interdisciplinary field aiming to engineer devices that can interface with the brain. Kind of like the brain implants from Black Mirror, but much less creepy. [Laughs]\n\nThe human brain has roughly 100 billion nerve cells, or neurons, and trillions of connections between them, organised into different brain areas each supporting a particular task; for example, processing visual or auditory information, making decisions, or getting from A to B. You can imagine that understanding how these neural circuits give rise to perception and action requires bringing together skills from a variety of disciplines, such as neuroscience, engineering, computer science, and statistics.\n\nExplain how these BMIs work in your field. I've tested them for mood elevation, but not connected to visual states. \nRight. \"Brain-computer interfaces\" can be used both for treating neurological and mental disorders as well as for understanding brain function, and now engineers have developed ways to manipulate these neural circuits with electrical currents, light, ultrasound, and magnetic fields. Remarkably, we can make a finger, arm, or even a leg move just by activating the right neurons in the motor cortex. Similarly, we can activate neurons in the visual cortex to make people see flashes of light. The former allows us to treat neurological conditions such as Parkinson's disease and epilepsy, whereas the latter should eventually allow us to restore vision to the blind.\n\n\n\nAmazing. And what kinds of devices are currently in the field?\nThe idea of a visual prosthesis, or bionic eye, is no longer science fiction. You might have heard of the Argus II, a device developed by a company called Second Sight, which is available across the US, Europe, and in some Asian countries. It's for people who have lost their sight due to a retinal degenerative disease such as retinitis pigmentosa and macular degeneration.\n\nHow many people today have these retinal prostheses?\nI believe there are now more than 300 Argus II users around the world and the manufacturer, Second Sight, has also just started implanting ORION, a device that skips the eye entirely and directly interfaces with the visual cortex. Apart from that, we are also anxiously awaiting the first results of PRIMA, a new subretinal device developed at Stanford University and commercialised by a French company called Pixium Vision.\n\nSo this is a growing field?\nDefinitely. In fact, some 30 more devices are in development across the globe. Overall there should be a wide variety of sight restoration technologies available within the next decade.\n\n\n\n(A woman wearing Argus II glasses tests Dr. Beyeler's system)\n\n\n\nFor clarity, explain how the current systems work.\nWhen individuals, due to different diseases, no longer have their photoreceptors\u2014the light-gathering cells in the back of the eye\u2014the idea is to replace these cells with a microelectrode array that mimics their functionality. Argus II users also wear a pair of glasses with a small camera embedded, so the visual input of the camera can be translated into a series of electrical pulses that the implant delivers to the neural circuits in the eye. For most patients, Argus II provides \"finger-counting\" levels of vision\u2014people can differentiate light from dark backgrounds and see motion, but their vision is blurry and often hard to interpret. Unfortunately, with current technology it turns out to be really hard to mimic the neural code in the eye and the visual cortex to fool the brain into thinking that it saw something meaningful. This is where I come in.\n\nMy goal is basically to understand how to go from camera input to electrical stimulation and come up with a code that the visual system can interpret. This requires both a deep understanding of the underlying neuroscience as well as the technical skills to engineer a viable real-time solution.\n\nAnd how do you do this?\nBy using tools from computer science, neuroscience, and cognitive psychology. For example, we come up with mathematical equations that describe how individual neurons respond to electrical stimulation. We also perform simple psychophysical experiments, such as asking Argus II users to draw what they see when we stimulate different electrodes. We then use insights from these experiments to develop software packages that predict what people should see for any given electrical stimulation pattern, which can be used by the device manufacturer to make the artificial vision, provided by these devices, more interpretable for the user.\n\nAre you focusing on bionic (artificial) rather than biomimicry (natural) vision?\nYes, because instead of focusing on \"natural\" vision, we might be better off thinking about how to create \"practical\" and \"useful\" artificial vision. We have a real opportunity here to tap into the existing neural circuitry of the blind and augment their visual senses much like Google Glass or the Microsoft HoloLens. For example, make things appear brighter the closer they get, use computer vision to mark safe paths and combine it with GPS to give visual directions, warn users of impending dangers in their immediate surroundings, or even extend the range of \"visible\" light with the use of an infrared sensor. Once the quality of the generated artificial vision reaches a certain threshold, there are a lot of exciting avenues to pursue.\n\nOn a practical level, how are you using technology in your research?\nSince we don't develop our own implants, we often collaborate with different device manufacturers. Recently we have been making extensive use of Argus II, which comes with its own quite sophisticated software development kit. Second Sight have been very forthcoming with us, both by providing access to patients as well as by enduring our nagging requests to make minor software modifications so that we can field-test our crazy theories. In the end, these collaborations should be a win-win for both parties, ideally trading data for insight.\n\n\n\nWhat other tools, and software, do you use in your work?\nThe field is currently dominated by different device manufacturers, who (understandably) can be very protective of their intellectual property. However, the Swiss in me regards it as important to provide a neutral academic voice promoting tools and resources that are available to all. We therefore focus heavily on open-science practices.\n\nYou've developed some open-source projects, right?\nYes, in this spirit, we were the first to make our simulation engine, pulse2percept, available as an open-source Python package. The goal of pulse2percept is to predict what a patient should see for any given input stimulus. Interestingly, this approach has already gained the attention of Second Sight and Pixium Vision, who expressed interest in using our software to predict what their patients are seeing. In the future, my goal is to adapt this software to other devices as they become available.\n\nWhat brought you to the US, from Switzerland, in the first place?\nI started out as an electrical engineer, in Zurich, because I've always been interested in how things work, but became more and more interested in the brain itself, realising my skills as an electrical engineer were directly transferable to understanding how the brain works. I could take signal processing, network theory, and information theory and\u2014through biomedical engineering and neuroscience\u2014work towards brain-inspired neural networks and robotics\u2014and use all these concepts to do something really good. That's how I ended up at the University of California Irvine to continue my studies and do my PhD. I was only planning to come to the US for 9 months or so.\n\nAnd you're still here a decade later.\n[Laughs] Right.\n\nBack to your research today: who are your main supporters?\nMy recent work would not have been possible without generous support from the Washington Research Foundation in combination with the Gordon & Betty Moore Foundation and the Alfred P. Sloan Foundation. On top of that, I've been very fortunate to receive a K99 Pathway to Independence Award from the National Eye Institute at NIH. This a prestigious five-year grant that is meant to ease my transition into starting my own research group as an Assistant Professor.\n\n\n\nYou're about to join the Departments of Computer Science and Psychological & Brain Sciences at the University of California Santa Barbara and build a Bionic Vision Lab, which is such a cool name.\nYes, I am super stoked about this opportunity. There are lots of clinical research groups studying the effects of blinding degenerative diseases, and several biomedical groups engineering new devices. But nobody is really focusing on novel methods and algorithms to improve the code with which these devices interact with the human visual system itself. Our group will be an interdisciplinary effort that tries to combine insights from neuroscience with computer science and engineering to build smarter brain-computer interfaces and dream up new ways to maximise the practicality of artificial vision.\n\nFinally, how did you become interested in this field in the first place?\nTo me this is the ultimate scientific quest, and it has the potential to cure blindness. In the end, it all comes back to a deep fascination with the brain\u2014this mysterious hunk of meat that uses less power than a light bulb to give rise to our conscious perception of the world. How on earth does the brain do it?\n\nIt is extraordinary, when you really think about it.\nIt's perhaps one of the last big remaining scientific mysteries. And what better way to test our understanding of the brain than to build a device that can safely and meaningfully interact with it? I mean, the technology to tap into this complex circuitry is coming, there is no way around that, and it will allow us to manipulate our perception, our decisions, our actions. We better start thinking about how to use these powers for good.\n\n\n\n\n\n", "image": [{"url": "https://sm.pcmag.com/pcmag_uk/news/r/restoring-/restoring-vision-with-bionic-eyes-no-longer-science-fiction_h8n2.jpg", "width": 1920, "caption": "Restoring Vision With Bionic Eyes: No Longer Science Fiction", "@type": "ImageObject", "height": 1080}], "datePublished": "2019-07-09 14:29:00+00:00", "publisher": {"url": "https://uk.pcmag.com", "logo": {"url": "('https://uk.pcmag.com/s/',)pcmag/pcmag_logo_micro.png", "width": 245, "@type": "ImageObject", "height": 60}, "@type": "Organization", "name": "PCMag UK"}, "about": {"@type": "Thing", "name": "Microsoft HoloLens Development Edition"}, "author": {"description": "S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting).\n\nFollow S.C. on Twitter @SCStuart2020", "@type": "Person", "image": "https://assets.pcmag.com/media/images/602847-sc-stuart.png?thumb=y&width=95&height=95", "name": "S.C. Stuart"}, "headline": "Restoring Vision With Bionic Eyes: No Longer Science Fiction", "@type": "NewsArticle", "mainEntityOfPage": {"@id": "https://uk.pcmag.com/news-analysis/121598/restoring-vision-with-bionic-eyes-no-longer-science-fiction", "@type": "WebPage"}, "@context": "https://schema.org", "dateModified": "2019-07-10 08:51:17+00:00"} Restoring Vision With Bionic Eyes: No Longer Science Fiction - PCMag UK " /> Skip to main content PCMag UK News & Analysis By S.C. Stuart 9 Jul 2019, 3:29 p.m. Dr. Michael Beyeler wants to interface with your brain through bionic vision. Find out why he compares his work to a 'less creepy' version of the brain implants on 'Black Mirror.' We review products independently, but we may earn affiliate commissions from buying links on this page. Terms of use. (Yuichiro Chino / Getty) Bionic vision might sound like science fiction, but Dr. Michael Beyeler is working on just that. Originally from Switzerland, Dr. Beyeler is wrapping up his postdoctoral fellow at the University of Washington before moving to the University of California Santa Barbara this fall to head up the newly formed Bionic Vision Lab in the Departments of Computer Science and Psychological & Brain Sciences. We spoke with him about this "deep fascination with the brain" and how he hopes his work will eventually be able to restore vision to the blind. Here are edited and condensed excerpts from our conversation. Dr. Beyeler, give us an overview of the 'neural engineering' field that will lead to bionic sight in the future. [MB] Neuroengineering is an emerging interdisciplinary field aiming to engineer devices that can interface with the brain. Kind of like the brain implants from Black Mirror, but much less creepy. [Laughs] The human brain has roughly 100 billion nerve cells, or neurons, and trillions of connections between them, organised into different brain areas each supporting a particular task; for example, processing visual or auditory information, making decisions, or getting from A to B. You can imagine that understanding how these neural circuits give rise to perception and action requires bringing together skills from a variety of disciplines, such as neuroscience, engineering, computer science, and statistics. Explain how these BMIs work in your field. I've tested them for mood elevation, but not connected to visual states. Right. "Brain-computer interfaces" can be used both for treating neurological and mental disorders as well as for understanding brain function, and now engineers have developed ways to manipulate these neural circuits with electrical currents, light, ultrasound, and magnetic fields. Remarkably, we can make a finger, arm, or even a leg move just by activating the right neurons in the motor cortex. Similarly, we can activate neurons in the visual cortex to make people see flashes of light. The former allows us to treat neurological conditions such as Parkinson's disease and epilepsy, whereas the latter should eventually allow us to restore vision to the blind. Amazing. And what kinds of devices are currently in the field?The idea of a visual prosthesis, or bionic eye, is no longer science fiction. You might have heard of the Argus II, a device developed by a company called Second Sight, which is available across the US, Europe, and in some Asian countries. It's for people who have lost their sight due to a retinal degenerative disease such as retinitis pigmentosa and macular degeneration. How many people today have these retinal prostheses?I believe there are now more than 300 Argus II users around the world and the manufacturer, Second Sight, has also just started implanting ORION, a device that skips the eye entirely and directly interfaces with the visual cortex. Apart from that, we are also anxiously awaiting the first results of PRIMA, a new subretinal device developed at Stanford University and commercialised by a French company called Pixium Vision. So this is a growing field?Definitely. In fact, some 30 more devices are in development across the globe. Overall there should be a wide variety of sight restoration technologies available within the next decade. (A woman wearing Argus II glasses tests Dr. Beyeler's system) For clarity, explain how the current systems work.When individuals, due to different diseases, no longer have their photoreceptors—the light-gathering cells in the back of the eye—the idea is to replace these cells with a microelectrode array that mimics their functionality. Argus II users also wear a pair of glasses with a small camera embedded, so the visual input of the camera can be translated into a series of electrical pulses that the implant delivers to the neural circuits in the eye. For most patients, Argus II provides "finger-counting" levels of vision—people can differentiate light from dark backgrounds and see motion, but their vision is blurry and often hard to interpret. Unfortunately, with current technology it turns out to be really hard to mimic the neural code in the eye and the visual cortex to fool the brain into thinking that it saw something meaningful. This is where I come in. My goal is basically to understand how to go from camera input to electrical stimulation and come up with a code that the visual system can interpret. This requires both a deep understanding of the underlying neuroscience as well as the technical skills to engineer a viable real-time solution. And how do you do this?By using tools from computer science, neuroscience, and cognitive psychology. For example, we come up with mathematical equations that describe how individual neurons respond to electrical stimulation. We also perform simple psychophysical experiments, such as asking Argus II users to draw what they see when we stimulate different electrodes. We then use insights from these experiments to develop software packages that predict what people should see for any given electrical stimulation pattern, which can be used by the device manufacturer to make the artificial vision, provided by these devices, more interpretable for the user. Are you focusing on bionic (artificial) rather than biomimicry (natural) vision?Yes, because instead of focusing on "natural" vision, we might be better off thinking about how to create "practical" and "useful" artificial vision. We have a real opportunity here to tap into the existing neural circuitry of the blind and augment their visual senses much like Google Glass or the Microsoft HoloLens. For example, make things appear brighter the closer they get, use computer vision to mark safe paths and combine it with GPS to give visual directions, warn users of impending dangers in their immediate surroundings, or even extend the range of "visible" light with the use of an infrared sensor. Once the quality of the generated artificial vision reaches a certain threshold, there are a lot of exciting avenues to pursue. On a practical level, how are you using technology in your research?Since we don't develop our own implants, we often collaborate with different device manufacturers. Recently we have been making extensive use of Argus II, which comes with its own quite sophisticated software development kit. Second Sight have been very forthcoming with us, both by providing access to patients as well as by enduring our nagging requests to make minor software modifications so that we can field-test our crazy theories. In the end, these collaborations should be a win-win for both parties, ideally trading data for insight. What other tools, and software, do you use in your work?The field is currently dominated by different device manufacturers, who (understandably) can be very protective of their intellectual property. However, the Swiss in me regards it as important to provide a neutral academic voice promoting tools and resources that are available to all. We therefore focus heavily on open-science practices. You've developed some open-source projects, right?Yes, in this spirit, we were the first to make our simulation engine, pulse2percept, available as an open-source Python package. The goal of pulse2percept is to predict what a patient should see for any given input stimulus. Interestingly, this approach has already gained the attention of Second Sight and Pixium Vision, who expressed interest in using our software to predict what their patients are seeing. In the future, my goal is to adapt this software to other devices as they become available. What brought you to the US, from Switzerland, in the first place?I started out as an electrical engineer, in Zurich, because I've always been interested in how things work, but became more and more interested in the brain itself, realising my skills as an electrical engineer were directly transferable to understanding how the brain works. I could take signal processing, network theory, and information theory and—through biomedical engineering and neuroscience—work towards brain-inspired neural networks and robotics—and use all these concepts to do something really good. That's how I ended up at the University of California Irvine to continue my studies and do my PhD. I was only planning to come to the US for 9 months or so. And you're still here a decade later.[Laughs] Right. Back to your research today: who are your main supporters?My recent work would not have been possible without generous support from the Washington Research Foundation in combination with the Gordon & Betty Moore Foundation and the Alfred P. Sloan Foundation. On top of that, I've been very fortunate to receive a K99 Pathway to Independence Award from the National Eye Institute at NIH. This a prestigious five-year grant that is meant to ease my transition into starting my own research group as an Assistant Professor. You're about to join the Departments of Computer Science and Psychological & Brain Sciences at the University of California Santa Barbara and build a Bionic Vision Lab, which is such a cool name.Yes, I am super stoked about this opportunity. There are lots of clinical research groups studying the effects of blinding degenerative diseases, and several biomedical groups engineering new devices. But nobody is really focusing on novel methods and algorithms to improve the code with which these devices interact with the human visual system itself. Our group will be an interdisciplinary effort that tries to combine insights from neuroscience with computer science and engineering to build smarter brain-computer interfaces and dream up new ways to maximise the practicality of artificial vision. Finally, how did you become interested in this field in the first place?To me this is the ultimate scientific quest, and it has the potential to cure blindness. In the end, it all comes back to a deep fascination with the brain—this mysterious hunk of meat that uses less power than a light bulb to give rise to our conscious perception of the world. How on earth does the brain do it? It is extraordinary, when you really think about it.It's perhaps one of the last big remaining scientific mysteries. And what better way to test our understanding of the brain than to build a device that can safely and meaningfully interact with it? I mean, the technology to tap into this complex circuitry is coming, there is no way around that, and it will allow us to manipulate our perception, our decisions, our actions. We better start thinking about how to use these powers for good. Next Article More Inside PCMag.com About the Author S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting). Follow S.C. on Twitter @SCStuart2020 See Full Bio Please enable JavaScript to view the comments. Ad \n\n\n\n\nOriginally from Switzerland, Dr. Beyeler is wrapping up his postdoctoral fellow at the University of Washington before moving to the University of California Santa Barbara this fall to head up the newly formed Bionic Vision Lab in the Departments of Computer Science and Psychological & Brain Sciences.\n\nWe spoke with him about this \"deep fascination with the brain\" and how he hopes his work will eventually be able to restore vision to the blind. Here are edited and condensed excerpts from our conversation.\n\nDr. Beyeler, give us an overview of the 'neural engineering' field that will lead to bionic sight in the future. \n[MB] Neuroengineering is an emerging interdisciplinary field aiming to engineer devices that can interface with the brain. Kind of like the brain implants from Black Mirror, but much less creepy. [Laughs]\n\nThe human brain has roughly 100 billion nerve cells, or neurons, and trillions of connections between them, organised into different brain areas each supporting a particular task; for example, processing visual or auditory information, making decisions, or getting from A to B. You can imagine that understanding how these neural circuits give rise to perception and action requires bringing together skills from a variety of disciplines, such as neuroscience, engineering, computer science, and statistics.\n\nExplain how these BMIs work in your field. I've tested them for mood elevation, but not connected to visual states. \nRight. \"Brain-computer interfaces\" can be used both for treating neurological and mental disorders as well as for understanding brain function, and now engineers have developed ways to manipulate these neural circuits with electrical currents, light, ultrasound, and magnetic fields. Remarkably, we can make a finger, arm, or even a leg move just by activating the right neurons in the motor cortex. Similarly, we can activate neurons in the visual cortex to make people see flashes of light. The former allows us to treat neurological conditions such as Parkinson's disease and epilepsy, whereas the latter should eventually allow us to restore vision to the blind.\n\n\n\nAmazing. And what kinds of devices are currently in the field?\nThe idea of a visual prosthesis, or bionic eye, is no longer science fiction. You might have heard of the Argus II, a device developed by a company called Second Sight, which is available across the US, Europe, and in some Asian countries. It's for people who have lost their sight due to a retinal degenerative disease such as retinitis pigmentosa and macular degeneration.\n\nHow many people today have these retinal prostheses?\nI believe there are now more than 300 Argus II users around the world and the manufacturer, Second Sight, has also just started implanting ORION, a device that skips the eye entirely and directly interfaces with the visual cortex. Apart from that, we are also anxiously awaiting the first results of PRIMA, a new subretinal device developed at Stanford University and commercialised by a French company called Pixium Vision.\n\nSo this is a growing field?\nDefinitely. In fact, some 30 more devices are in development across the globe. Overall there should be a wide variety of sight restoration technologies available within the next decade.\n\n\n\n(A woman wearing Argus II glasses tests Dr. Beyeler's system)\n\n\n\nFor clarity, explain how the current systems work.\nWhen individuals, due to different diseases, no longer have their photoreceptors\u2014the light-gathering cells in the back of the eye\u2014the idea is to replace these cells with a microelectrode array that mimics their functionality. Argus II users also wear a pair of glasses with a small camera embedded, so the visual input of the camera can be translated into a series of electrical pulses that the implant delivers to the neural circuits in the eye. For most patients, Argus II provides \"finger-counting\" levels of vision\u2014people can differentiate light from dark backgrounds and see motion, but their vision is blurry and often hard to interpret. Unfortunately, with current technology it turns out to be really hard to mimic the neural code in the eye and the visual cortex to fool the brain into thinking that it saw something meaningful. This is where I come in.\n\nMy goal is basically to understand how to go from camera input to electrical stimulation and come up with a code that the visual system can interpret. This requires both a deep understanding of the underlying neuroscience as well as the technical skills to engineer a viable real-time solution.\n\nAnd how do you do this?\nBy using tools from computer science, neuroscience, and cognitive psychology. For example, we come up with mathematical equations that describe how individual neurons respond to electrical stimulation. We also perform simple psychophysical experiments, such as asking Argus II users to draw what they see when we stimulate different electrodes. We then use insights from these experiments to develop software packages that predict what people should see for any given electrical stimulation pattern, which can be used by the device manufacturer to make the artificial vision, provided by these devices, more interpretable for the user.\n\nAre you focusing on bionic (artificial) rather than biomimicry (natural) vision?\nYes, because instead of focusing on \"natural\" vision, we might be better off thinking about how to create \"practical\" and \"useful\" artificial vision. We have a real opportunity here to tap into the existing neural circuitry of the blind and augment their visual senses much like Google Glass or the Microsoft HoloLens. For example, make things appear brighter the closer they get, use computer vision to mark safe paths and combine it with GPS to give visual directions, warn users of impending dangers in their immediate surroundings, or even extend the range of \"visible\" light with the use of an infrared sensor. Once the quality of the generated artificial vision reaches a certain threshold, there are a lot of exciting avenues to pursue.\n\nOn a practical level, how are you using technology in your research?\nSince we don't develop our own implants, we often collaborate with different device manufacturers. Recently we have been making extensive use of Argus II, which comes with its own quite sophisticated software development kit. Second Sight have been very forthcoming with us, both by providing access to patients as well as by enduring our nagging requests to make minor software modifications so that we can field-test our crazy theories. In the end, these collaborations should be a win-win for both parties, ideally trading data for insight.\n\n\n\nWhat other tools, and software, do you use in your work?\nThe field is currently dominated by different device manufacturers, who (understandably) can be very protective of their intellectual property. However, the Swiss in me regards it as important to provide a neutral academic voice promoting tools and resources that are available to all. We therefore focus heavily on open-science practices.\n\nYou've developed some open-source projects, right?\nYes, in this spirit, we were the first to make our simulation engine, pulse2percept, available as an open-source Python package. The goal of pulse2percept is to predict what a patient should see for any given input stimulus. Interestingly, this approach has already gained the attention of Second Sight and Pixium Vision, who expressed interest in using our software to predict what their patients are seeing. In the future, my goal is to adapt this software to other devices as they become available.\n\nWhat brought you to the US, from Switzerland, in the first place?\nI started out as an electrical engineer, in Zurich, because I've always been interested in how things work, but became more and more interested in the brain itself, realising my skills as an electrical engineer were directly transferable to understanding how the brain works. I could take signal processing, network theory, and information theory and\u2014through biomedical engineering and neuroscience\u2014work towards brain-inspired neural networks and robotics\u2014and use all these concepts to do something really good. That's how I ended up at the University of California Irvine to continue my studies and do my PhD. I was only planning to come to the US for 9 months or so.\n\nAnd you're still here a decade later.\n[Laughs] Right.\n\nBack to your research today: who are your main supporters?\nMy recent work would not have been possible without generous support from the Washington Research Foundation in combination with the Gordon & Betty Moore Foundation and the Alfred P. Sloan Foundation. On top of that, I've been very fortunate to receive a K99 Pathway to Independence Award from the National Eye Institute at NIH. This a prestigious five-year grant that is meant to ease my transition into starting my own research group as an Assistant Professor.\n\n\n\nYou're about to join the Departments of Computer Science and Psychological & Brain Sciences at the University of California Santa Barbara and build a Bionic Vision Lab, which is such a cool name.\nYes, I am super stoked about this opportunity. There are lots of clinical research groups studying the effects of blinding degenerative diseases, and several biomedical groups engineering new devices. But nobody is really focusing on novel methods and algorithms to improve the code with which these devices interact with the human visual system itself. Our group will be an interdisciplinary effort that tries to combine insights from neuroscience with computer science and engineering to build smarter brain-computer interfaces and dream up new ways to maximise the practicality of artificial vision.\n\nFinally, how did you become interested in this field in the first place?\nTo me this is the ultimate scientific quest, and it has the potential to cure blindness. In the end, it all comes back to a deep fascination with the brain\u2014this mysterious hunk of meat that uses less power than a light bulb to give rise to our conscious perception of the world. How on earth does the brain do it?\n\nIt is extraordinary, when you really think about it.\nIt's perhaps one of the last big remaining scientific mysteries. And what better way to test our understanding of the brain than to build a device that can safely and meaningfully interact with it? I mean, the technology to tap into this complex circuitry is coming, there is no way around that, and it will allow us to manipulate our perception, our decisions, our actions. We better start thinking about how to use these powers for good.\n\n\n\n\n\n", "image": [{"url": "https://sm.pcmag.com/pcmag_uk/news/r/restoring-/restoring-vision-with-bionic-eyes-no-longer-science-fiction_h8n2.jpg", "width": 1920, "caption": "Restoring Vision With Bionic Eyes: No Longer Science Fiction", "@type": "ImageObject", "height": 1080}], "datePublished": "2019-07-09 14:29:00+00:00", "publisher": {"url": "https://uk.pcmag.com", "logo": {"url": "('https://uk.pcmag.com/s/',)pcmag/pcmag_logo_micro.png", "width": 245, "@type": "ImageObject", "height": 60}, "@type": "Organization", "name": "PCMag UK"}, "about": {"@type": "Thing", "name": "Microsoft HoloLens Development Edition"}, "author": {"description": "S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting).\n\nFollow S.C. on Twitter @SCStuart2020", "@type": "Person", "image": "https://assets.pcmag.com/media/images/602847-sc-stuart.png?thumb=y&width=95&height=95", "name": "S.C. Stuart"}, "headline": "Restoring Vision With Bionic Eyes: No Longer Science Fiction", "@type": "NewsArticle", "mainEntityOfPage": {"@id": "https://uk.pcmag.com/news-analysis/121598/restoring-vision-with-bionic-eyes-no-longer-science-fiction", "@type": "WebPage"}, "@context": "https://schema.org", "dateModified": "2019-07-10 08:51:17+00:00"}

weiterlesen: RSS Quelle öffnen