Counterpoint: Artificial Intelligence

Can Cameras Learn?

Looking at the future of AI-powered image analysis.

By Professor Amit K. Roy-Chowdhury

Many intelligent systems, such as camera networks, rely on artificial intelligence to analyze images and track individuals. As imaging sensor technology has advanced, these systems have found a rapidly growing range of applications, including law enforcement, retail, facility access, and environmental monitoring.

However, even though devices are becoming cheaper, monitoring a wide area with many cameras is still not feasible because of the human supervision it requires, along with privacy concerns and maintenance costs.

This creates blind spots in which no information can be obtained, and it raises the need for automated methods to extract useful information from the extremely high volume of recorded video. For example, when a camera loses a person from its field of view, it is extremely challenging to re-associate that person, among many others, when he or she reappears at a different place and time. This is known as the person re-identification problem.
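To make the matching at the heart of re-identification concrete, here is a minimal sketch, assuming appearance features have already been extracted for each detected person by some model; the function names and the similarity threshold are illustrative assumptions rather than the author's actual system.

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two appearance feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def re_identify(query, gallery, threshold=0.7):
        # Compare a person seen by one camera (query) against people already
        # seen by other cameras (gallery, a list of feature vectors); return
        # the index of the best match, or None if nothing is similar enough.
        scores = [cosine_similarity(query, g) for g in gallery]
        best = int(np.argmax(scores))
        return best if scores[best] >= threshold else None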

Despite a host of research into the issue, re-identification still faces hard challenges. First, uncontrolled environments, such as city streets or airport terminals, require video footage to be recorded by cameras with large fields of view, which offer only low-resolution images of the targets.

Making the problem more complicated, a target’s appearance often undergoes large variations across nonoverlapping camera views due to significant changes in viewing angle, lighting, background clutter, and occlusion. This makes it harder to discern biometric features, such as face and gait, and the computer’s output becomes less reliable. As a result, these methods often perform poorly, and visual appearance features, such as colors and shapes, remain the first choice in re-identification problems.

The most successful approaches to these issues have involved supervised techniques in which labeled data across pairs of cameras is used to “teach” computers how to model the transformation between the views of two cameras. Essentially, a computer has to be shown images of every type of object it could potentially view, and its algorithms must successfully interpolate from that data — no easy feat.
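As a rough illustration of what “learning the transformation between two camera views” can mean in its simplest form, the sketch below fits a linear mapping between feature spaces from labeled pairs; real systems use far richer models, and the array shapes and names here are assumptions for illustration only.

    import numpy as np

    def learn_camera_mapping(features_a, features_b):
        # features_a, features_b: (n_pairs, d) arrays of appearance features
        # for the same people as seen by camera A and camera B, respectively.
        # Least-squares fit of W so that features_a @ W approximates features_b.
        W, *_ = np.linalg.lstsq(features_a, features_b, rcond=None)
        return W  # (d, d) mapping from camera A's feature space to camera B's

    # A query from camera A can then be projected into camera B's feature
    # space before matching: projected_query = query_a @ W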

The AI systems that power autonomous cars also face the hurdle of excessive reliance on supervision, given the wide variety of unanticipated conditions they may encounter on the road. Some AI experts believe it could be several years before self-driving systems can reliably avoid accidents.

The level of human involvement required to label data hampers scalability, a problem that grows more severe as the size of the camera network and the variety of conditions it may encounter increase. Reducing the level of supervision, and possibly developing unsupervised methods, remains a challenge for computer vision and machine-learning researchers.

One approach involves having humans label a small but highly informative subset of training examples, designed to enable the computer to learn about all possible conditions and create its own “pseudo labels” based on the data input by humans. For example, if a human created a label identifying a “vintage Chevy” as a vintage car, the computer could generalize from that data and identify a vintage Ford as another vintage car.
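A minimal sketch of that pseudo-labeling idea, assuming feature vectors have already been extracted for both the small human-labeled set and the unlabeled pool, is shown below; the similarity threshold and the integer label encoding are assumptions, and real methods are considerably more sophisticated.

    import numpy as np

    def pseudo_label(labeled_features, labels, unlabeled_features, threshold=0.8):
        # labeled_features: (n_labeled, d); labels: (n_labeled,) integer class ids
        # unlabeled_features: (n_unlabeled, d)
        # Normalize so that dot products are cosine similarities.
        l = labeled_features / np.linalg.norm(labeled_features, axis=1, keepdims=True)
        u = unlabeled_features / np.linalg.norm(unlabeled_features, axis=1, keepdims=True)
        sims = u @ l.T                       # (n_unlabeled, n_labeled)
        nearest = sims.argmax(axis=1)        # closest human-labeled example
        confident = sims.max(axis=1) >= threshold
        # Confident examples inherit the nearest human label; the rest stay -1.
        return np.where(confident, labels[nearest], -1)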

Furthermore, the computer vision problems we have identified thus far, such as re-identification and object recognition, concern static platforms like an airport’s fixed camera network.

Mobile cameras, such as those on drones, present even more difficult problems for researchers, due in large part to their use in highly uncontrolled environments. While many in the field of artificial intelligence are working on these questions, the prospect of a drone autonomously navigating a crowded city street and deciding where to go based on the data its camera captures remains a challenge.


Roy-Chowdhury, a Bourns Family Faculty Fellow in the Marlan and Rosemary Bourns College of Engineering, studies the foundational principles of computer vision, image processing, and vision-based statistical learning, with applications in cyber-physical, autonomous, and intelligent systems.

Will We Ever Know the Moral Status of Robots?

Getting to the root of consciousness.

By Professor Eric Schwitzgebel

Someday, we might create robots as cognitively sophisticated as humans, or more so, and capable of real “general intelligence.” Think C-3PO from “Star Wars” or Dolores from “Westworld.” This stands in contrast to robots with limited, specific intelligence, such as the ability to play chess or make medical diagnoses from symptom lists.

It would be appropriate to call such a robot a “living thing,” despite its lack of biological components. After all, C-3PO could die, right? And we would probably intuitively regard such robots as deserving of rights similar to those of human beings, especially if, like C-3PO and Dolores, they look and talk like us.

But will we ever know whether they truly deserve such treatment?

Premise 1:

To truly deserve rights similar to those of human beings, an entity needs to be capable of genuine conscious experiences, such as joy or suffering.

Most people agree that there’s “something it’s like” to be a dog or a mouse — using Thomas Nagel’s famous phrase — while presumably there’s nothing it’s like to be a toy dinosaur. I could program an ordinary laptop to wail when its battery runs low, but unless something like the “light of consciousness” is present, recycling my laptop isn’t like killing an animal or human.

Premise 2:

We will never know whether androids like C-3PO or Dolores are genuinely conscious.

This premise might seem less plausible. Surely, if we met C-3PO or Dolores and gazed into their eyes, we would know they genuinely have a stream of conscious experience behind those eyes? That judgment would be hard to resist, especially if you had adventures with them, befriended them, or had trouble distinguishing them, by external criteria, from humans.

Yet, not all philosophically and scientifically viable theories of consciousness imply our intuitive judgments about such cases would be a reliable guide.

Although our intuitive judgments are great for planning lunch, they are rotten concerning matters beyond the usual run of human experience, such as the behavior of rockets traveling at 99% the speed of light. Cognitively sophisticated robots might be the social-intuitional equivalent of rockets traveling near light speed — a case where our ordinary intuitive judgments can’t be trusted.

Some theorists hold that outward behavior can come apart from the existence of consciousness. Others argue that some biological feature or detail of internal processing is, or might be, essential to consciousness, and that an entity programmed, trained, or selected on the basis of acting similarly to entities like us might lack genuine humanlike conscious experience.

Religious believers in a God-given soul might doubt human-made robots would have one. And some, including many futurists, believe if we are living within a giant “matrix” or computer simulation, our consciousness might depend on facts outside the matrix or simulation, unknowable to us.

As long as there is no scientific consensus on a theory of consciousness, there will be possibilities in which even highly sophisticated robots might lack genuine consciousness despite being broadly humanlike in general intelligence.

You might think that if there is some doubt about the status of robots, the best approach would be to treat anything that might be conscious as conscious. However, there is moral risk in this seemingly conservative position. If robots aren’t conscious, but we treat them as equals, then we will sometimes need to sacrifice human interests for them. In a fire, we might choose to save six robots rather than five humans.

If those robots turn out to be merely complicated Furby dolls, that would be tragic.

The genuinely more cautious approach would be this: Decline to build any robots whose moral status would be in doubt, prioritize research in the science of consciousness, and proceed slowly toward general artificial intelligence, so that we know exactly what we are doing before we do it.

If having rights depends on consciousness, and if we can’t adequately narrow the range of viable theories of consciousness, we might never know whether our future robotic pals deserve equal rights, or whether instead there is only emptiness behind their camera eyes. 


Schwitzgebel’s research explores connections between empirical psychology and philosophy of mind, especially the nature of belief, the inaccuracy of our judgments about our stream of conscious experience, and the tenuous relationship between philosophical ethics and actual moral behavior.