Wolfgang Fink received his undergraduate degree in physics and his master’s degree in experimental physics, theoretical physics, physical chemistry, and astronomy and astrophysics from Goettingen University in northern Germany.
He obtained his doctorate in theoretical physics from the University of Tubingen in southern Germany and did postdoctoral work at the University of Tubingen and, beginning in 1998, spent three years at the California Institute of Technology in the divisions of physics, mathematics and astronomy. He became a visiting associate in physics before transitioning to the Jet Propulsion Laboratory, a NASA facility operated and managed by Caltech, as a senior researcher.
He left the Jet Propulsion Laboratory in October 2009 to transition to the UA.
He is currently a visiting associate in physics at Caltech and a visiting research associate professor in ophthalmology and neurological surgery at the University of Southern California. He is also the founder and director of the Visual and Autonomous Exploration Systems Research Laboratory, established at Caltech in 2003 and currently being transferred to the UA.
He is also a concert pianist and a commercially licensed helicopter pilot, and he will become the first Edward and Maria Keonjian distinguished professor in the UA College of Engineering. Fink sat down with the Daily Wildcat’s Michelle Cohen to discuss his research combining optical sciences with robotics engineering.
Why UA?
It’s a fantastic opportunity because my lab at Caltech already has a longstanding collaboration with the UA. The UA is excellent — its optical sciences department is second to none — and it has a medical school, which Caltech doesn’t have. My research is threefold. I do a lot of research in biomedical engineering, especially dealing with eyes and vision. I’m also in autonomous space exploration, and the third component is computer optimization. The three fit together.
In what ways do these fields fit together?
For example, we have developed a robot that can mimic the behavior of a blind person wearing an artificial vision implant. So basically, in lieu of testing all these algorithms on a blind subject, we can test them on a robotic platform first. We learn everything on the robot, and then we can apply that to the actual blind patient with the implant.
The same robotic platform also serves as a prototype or test platform for autonomous planetary exploration. That’s how this robotic platform has dual uses, and how these fields fit together. We use that same robotic platform for multiple purposes. Also, the image processing done for an artificial vision prosthesis is very similar to what’s done on a planetary rover going over a planetary surface. That’s how these fields are beneficial to both sides.
Which field did you start in?
Thanks to my Ph.D. thesis I was introduced to the field of ophthalmology, so I applied physics to ophthalmology in the medical field. That’s how I got interested in this interdisciplinary field.
When I came to Caltech and JPL (Jet Propulsion Laboratory), I also got exposed to space exploration and became interested in that as well. Soon after, I realized there is a link between those fields. What I’m ultimately doing is trying to create systems that can synthetically reason — not artificial intelligence, because that is a rule-based approach. If (a system) encounters a situation, it follows a ton of prescribed rules. However, if (a system) encounters something for which you don’t have a rule, (it) won’t be able to respond. That’s where we come in and try to create systems to ultimately attain something beyond the rule-based system.
For example, at the beginning of Star Wars “The Empire Strikes Back,” there are those probes the empire sent out to impact on a planetary surface. Out comes this robot that hovers over the surface. Those droids can investigate the planet by themselves, think for themselves and maneuver once they find something interesting. They then communicate it back to the mother ship.
One of the goals we are working toward is to create a space explorer that operates on its own and is not Earth-controlled. It goes out there to find exciting stuff, then reports back. That has the advantage of being able to react to a situation right there in real time, without having to communicate back to Earth and wait for orders.
What is autonomy?
True autonomy would be if you send out a spacecraft to explore a planet, and the spacecraft goes and explores and all of a sudden tells you there’s nothing interesting. It will go somewhere else and tell you where it’s going. It basically has a mind of its own. That’s what we do not have at this point. That would be great. That’s the high-level goal.
Are there any risks with an autonomous machine like in the movie Terminator?
We’re very far away from anything risky. If you ultimately have a truly autonomous system, that could be dangerous. If a system can fly and has tools, then it could do things on its own. It would be harder to control, but if you have it confined to, let’s say, a computer, then it would not be dangerous. But it’s definitely something one has to discuss.
If you think in terms of Terminator or The Matrix, I think it’s not necessarily too far-fetched. But “I, Robot,” for that matter, is a very interesting discussion because it discusses ethics with machines. So the question is: at what point do we have to draw the line? Is it still a machine or something more? At what point in time do you have a machine that is no longer a machine and has become an intelligent being? That’s an interesting question, but of course it’s all speculative.
It’s definitely important to think about as researchers. We, of course, carry responsibility for what we create, so that discussion definitely needs to happen alongside the research.
Can you tell me about current research you’re working on?
For the autonomous exploration of planetary bodies, such as Titan and Mars, we created a novel mission architecture called Tier-Scalable Reconnaissance. That’s a new paradigm we developed at Caltech, together with colleagues from the U.S. Geological Survey and the UA, for how to explore other planets. It means you have several tiers: orbiters, airships, and ground units such as rovers and landers. The idea is that this mimics the geological approach of global to regional to local reconnaissance. The orbiter has a global view, the airship has a more regional to local view, and the rover has a very local view. It’s very important to have all of that integrated with each other. The Tier-Scalable Reconnaissance paradigm lends itself to autonomous space exploration missions that will become necessary in the future.