Many people are afraid of the idea of making computers smarter. In reality, however, artificial intelligence in the form of computer vision may be just what we need to understand the world around us. The UA's Interdisciplinary Visual Intelligence Lab (IVILAB) is working on a variety of projects, ranging from biology to astronomy to social psychology, that all involve computer vision.
“Computer vision itself is fascinating, because every moment our brains achieve an amazing thing as they translate patterns of light into great detail about what is in the world and where it is,” said IVILAB Director Kobus Barnard. “This begs the questions — how does it work, and can it be made to work in a computer?”
In other words, is it possible to program computers to “see” 3D images in the same way humans do?
The lab develops software infrastructure that connects data from, as Barnard puts it, "disparate projects," so that meaningful models can accurately explain the data and be applied to real-world problems.
In the past, even the most up-to-date software allowed computers to see only in 2D. The models the IVILAB generates are 3D, which reduces variability in the data and characterizes it more faithfully.
The lab is testing how well its models fit sets of data. “This is important because we need sophisticated models for many problems, but sophisticated models can easily fit random idiosyncrasies of the data, and not be generalizable to other data,” Barnard said.
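The overfitting problem Barnard describes can be seen in a toy experiment. The following sketch is purely illustrative and not the IVILAB's code: it fits a simple and a very flexible polynomial model to noisy data, then checks each one against held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a simple linear trend plus random noise.
x = np.linspace(0, 1, 20)
y = 2 * x + rng.normal(scale=0.2, size=x.size)

# Hold out half the points to test generalization.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train, test) mean squared error."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

simple_train, simple_test = fit_and_score(1)    # simple model
complex_train, complex_test = fit_and_score(9)  # very flexible model

# The flexible model fits the training points almost perfectly,
# but the noise it memorized does not transfer to the held-out data.
print(simple_train, simple_test)
print(complex_train, complex_test)
```

The degree-9 model drives its training error to nearly zero by fitting the noise itself, yet its error on the held-out points is far larger than its training error, which is exactly the failure to generalize that the lab's model testing guards against.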
Yekaterina Kharitonova, a computer science graduate student, is working on an IVILAB project called "Semantically Linked Instructional Content," or SLIC. By making information easier to access and retrieve, she is helping individuals learn. Her work takes presentations built from electronic slides and transforms them into a browsable format where content of interest can be found more efficiently.
In the SLIC portal she is creating, Kharitonova explained, "If [a user is] seeking information about a specific topic, they can first find the relevant slide(s), and start watching the corresponding video only from the segment where the slide of interest was shown."
Jinyan Guan, another computer science graduate student in the IVILAB, aims to help improve the understanding of human functioning by using Bayesian modeling to predict emotions.
Guan is using Bayesian models in order to understand emotional dynamics that occur within a person, as well as between people during social interactions. Data is gathered based on current emotional observations, and the Bayesian models will predict the most likely emotional response that will follow.
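The basic loop Guan describes — update a belief from an observation, then predict what follows — can be sketched in a few lines. The states, probabilities and observation below are entirely hypothetical and stand in for whatever the actual models use:

```python
import numpy as np

# Hypothetical emotional states -- illustrative only, not the IVILAB's model.
states = ["calm", "happy", "stressed"]

# P(next state | current state): each row sums to 1.
transition = np.array([
    [0.7, 0.2, 0.1],  # from calm
    [0.3, 0.6, 0.1],  # from happy
    [0.2, 0.1, 0.7],  # from stressed
])

# P(observed cue | state), e.g. a smile detected during the interaction.
likelihood = np.array([0.3, 0.8, 0.1])

# Start with a uniform prior over the person's current emotional state.
prior = np.full(3, 1 / 3)

# Bayes' rule: combine the prior with the current observation.
posterior = likelihood * prior
posterior /= posterior.sum()

# Push the belief one step forward to predict the most likely next state.
predicted = posterior @ transition
print(states[int(np.argmax(predicted))])  # prints "happy"
```

Each new observation would repeat this update-and-predict step, so the belief about a person's emotional state is continually revised as the interaction unfolds.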
“By working with emotional data, we introduce a new domain for predictive modeling to the machine learning and Bayesian modeling community,” Guan said.
Although Guan has faced many challenges because of the interdisciplinary nature of her work, she believes her research will help advance the understanding of emotional systems.
“[Working in the IVILAB] has been the most rewarding throughout my 10 years at UA,” said Bonnie Kermgard, a senior studying information science technology and behavioral science.
Kermgard joined the lab in 2011 with a strong passion for computer programming. "I now have an understanding of the importance [of] working with IVILAB," she said.
Kermgard was part of many projects in the IVILAB, and she even co-authored a paper titled "Understanding Bayesian rooms using composite 3D object models." The main goal of that work was to digitally reconstruct indoor 3D environments from 2D images.
Computer vision is especially remarkable because it applies to so many subjects in the world. According to Barnard, these programs matter for people at large because computers are beginning to recognize objects in images. This capability allows them to perform many useful functions, including assisting the visually impaired, constructing virtual 3D representations of tourist attractions, carrying out medical imaging and directing self-driving cars.
To learn more, visit the IVILAB website: http://vision.sista.arizona.edu/ivilab/index.html
Follow Renee Conway on Twitter.