Artificial Intelligence

Those robots may yet change your life, but you'll still have to tell them what to do.

There's a movie due for release any week now, a would-be summer blockbuster entitled AI (Artificial Intelligence, for those unfamiliar with the lingo) relating the trials and yearnings for self-definition of the world's first android, a child model named David; Pinocchio recast for the 21st century. ("His love is real. But he is not.")

Though this maudlin fable is set only a few decades hence, trailers suggest David bears little resemblance to those cybernetic entities who would be his immediate forebears, none of whom look anything like doe-eyed pre-pubescent actor Haley Joel Osment.

Take, for instance, the gaggle of proto-robots sitting, apparently patient and contented, in the robotics lab on the Oak Ridge National Laboratory compound just outside Oak Ridge proper. Ada, Edith, Grace, and Alexandra (the sobriquets are taken from female pioneers in the realm of computer science) resemble nothing so much as industrial-size vacuum cleaners, 4-foot-high blue-on-black cylinders perched on wheels and capped with strangely shaped steel and plastic conformations.

The girls' younger brothers, meanwhile, Constantine and Augustus and Theodosius and Hadrian and Vespasian (named for Roman Emperors, natch) could pass for miniature lawn mower units, boxy little red fellows mounted on what look like thick-set Tonka truck wheels.

While this little family of robots lacks the fictional David's endearing childlike aspect, and is most certainly devoid of his alleged sentience, its members are pretty adept at figuring out and accomplishing some simple tasks once thought far beyond the ken of mechanical entities. They're sorta cute too; on command, little Augustus follows a visiting photographer puppy-like across the laboratory's oft-marred white tile floors.

"Most people have no real understanding of what's possible in artificial intelligence," says Dr. Lynne Parker, a group leader at the lab's computational intelligence division. "They think tomorrow we'll be taken over by machines, but nothing could be further from the truth."

Parker's brood consists of experimental models, employed in research on multi-robot learning and heterogeneous distributed sensing. In simpler terms, they have the capacity to perform certain cooperative tasks by "learning" from the data they receive, rather than relying on human proxy. In a sample exercise, one set of 'bots "tracks" the other as they move randomly across the floor, with the parameter that all of the trackees must be kept within a certain radius of the followers.

Parker explains that the maneuvers require each tracking robot to react in concert with the behavior of its fellows. "It's a bit like a basketball game, and they're playing zone defense," Parker chuckles. "But the zones aren't static ones; it's like they have fewer players than they really need."
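In very rough outline, the exercise Parker describes is a pairing problem: each tracking robot latches onto the nearest target that doesn't yet have a follower, keeping every trackee within the allowed radius. The sketch below is an invented simplification for illustration only; the actual multi-robot learning behaviors are far more sophisticated.

```python
import math

# A greatly simplified sketch of the ORNL tracking exercise: each
# tracker robot (an (x, y) position) greedily claims the nearest
# still-unclaimed target, provided it lies within the given radius.
def assign_trackers(trackers, targets, radius):
    assignments = {}
    remaining = list(targets)
    for tracker in trackers:
        if not remaining:
            break
        nearest = min(remaining, key=lambda tgt: math.dist(tracker, tgt))
        if math.dist(tracker, nearest) <= radius:
            assignments[tracker] = nearest   # this target now has a follower
            remaining.remove(nearest)
    return assignments
```

A real system would re-run something like this continuously as the targets wander, which is why Parker's "zone defense" zones can't stay static.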

Her research holds promise in such unglamorous fields as hazardous waste disposal, far removed from the fanciful visions of wholly interactive android companions that sci-fi novelists and Spielbergian storytellers serve up on a regular basis. And according to several local researchers, AI technology may never reach a level of complexity that will enable a holistic simulation of human characteristics.

"There's no comparison to human intelligence, and I don't think there ever will be," says Dr. Robert Uhrig, an accomplished researcher in the University of Tennessee's nuclear engineering department. "I think there will be robots that do specific tasks very well, but never robots with the full breadth of human intelligence."

But speculating on the possibilities of AI inevitably points backward to the knotty question of how to define it. "It's a question no one agrees on," says Parker. "I've got a half-dozen books, and none of them say the same thing."

"Most of what we call 'AI' involves software," explains Dr. Wesley Hines, a 38-year-old nuclear engineering professor whose research picks up where the retiring Uhrig's leaves off. "What makes it AI is that it in some way mimics a specific function of a human being. And usually, the software will do it better."

The pioneering work in what's now recognized as AI began in 1943, when researchers Warren McCulloch and Walter Pitts proposed a model of artificial neurons derived from the physiology of the human brain. Their work suggested that since any computable function could be computed by such a neural network, those networks should also be capable of augmenting their own functionality by "learning" from the data they assimilate.

It wasn't until 1956 that the science came into its own, however, when Princeton graduate John McCarthy organized a two-month workshop on neural networks and the study of intelligence at Dartmouth College. Perhaps most significantly, the 10 researchers in attendance agreed to adopt McCarthy's nomenclature for the budding field of study: artificial intelligence.

MIT alumnus Dr. Carroll Johnson, one of the key figures in introducing AI technology to Oak Ridge researchers, recalls that inklings of local AI involvement came with the onset of the Cold War, when explorers in the field of computational linguistics sought to create programs that might intercept and translate Russian language transmissions.

A physicist by training, Johnson did his own work in neutron diffraction and crystallography—the study of how atoms are arranged in most non-amorphous substances. When a workshop he was attending at Columbia University introduced him to a group of scientists applying AI techniques to computer graphics, the young physicist was sparked to integrate those ideas into his own field of study.

"I wanted a machine that would evaluate crystals, compare their composition to the existing body of scientific literature, and then actually write the final manuscript, the analysis," Johnson says. "This was blue-sky stuff at the time; the community reaction was 'you can't do that,' and that kind of irritated me. I was even more determined after they snubbed the idea."

In the quarter-century that followed Johnson's initial foray into AI applications, the ORNL researcher organized a multi-departmental artificial intelligence consortium drawing from the lab's mathematics, energy, and instruments divisions. He also made inroads with institutions outside the laboratory and its overseeing Department of Energy, working on projects for the armed forces (in chemical warfare) and for the U.S. Treasury, where he enabled automated troubleshooting systems in the department's printing and engraving operations.

Though his own vision of a comprehensive crystal-analysis program has yet to be fully realized, he says AI technology has yielded a quantum leap in the efficiency of the process. "When I started in the late '50s, it often took six months to 'solve' a crystal structure," says Johnson, who retired from ORNL in 1996. "Now you can do it as quickly as overnight. And you can write some of the manuscript, although people don't like to talk about that; they're loath to consider that a computer program might author a paper they feel should be done by a human researcher."

The work of Johnson and his colleagues paved the way for the mid-'80s ORNL robotics program—the Center for Engineering Science Advanced Research (CESAR)—which took as its focus the realm of so-called "intelligent systems." Today, Parker's division carries the torch, with perhaps 10 scientists and technicians engaged in AI research.

The inner workings of artificial intelligence are not readily digestible for the layman. Hines explains, in the simplest terms possible when speaking to a slow-witted reporter, that AI can be broadly divided into four categories: neural networks, expert systems, "fuzzy logic" systems, and genetic algorithms. He teaches undergraduate classes that deal with each of those approaches at UT.

The AI manifestation most akin to our own physiology, neural networking takes the human brain as its template. The neuron, says Hines, is the "building block of the brain," and the process by which neurons make connections is the process by which humans develop cognitive structures such as reasoning and memory.

"When you're born, your neurons haven't been connected yet," he says. "As you grow, you develop those connections; we learn through experience."

Just as human neurons assimilate the inputs of experience, so artificial neurons assimilate data, and acquire the ability to formulate predictive models based on that data. Hines cites applications such as so-called "data mining" and stock market analysis software as examples of neural-based AI.

"The difference is that while a neural network might have 100 'neurons,' the brain has billions," Hines explains. "The scale is way off there. And with our current technology, the process of interconnection doesn't work too well as the number of artificial neurons increases."
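To make the idea concrete, here is a minimal sketch of a single artificial neuron (a perceptron) learning the logical AND function from example data by adjusting its connection weights. It is a generic illustration of the technique Hines describes, not code from his research.

```python
# A single artificial "neuron": it fires (outputs 1) when the weighted
# sum of its inputs crosses a threshold, and learns by nudging its
# weights whenever its output disagrees with the training example.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0, 0.0]          # two input weights plus a bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + w[2])
            err = target - out
            # strengthen or weaken connections in proportion to the error
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            w[2] += rate * err
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND)

def predict(x1, x2):
    return step(w[0] * x1 + w[1] * x2 + w[2])
```

As Hines notes, the gulf between this and a brain is one of scale: this network has a single "neuron" with three weights, where the brain has billions of neurons.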

Expert systems, by contrast, are rule-based, employing a body of established knowledge to evaluate and make judgments based on the conditions at hand. In instances where experienced personnel retire from positions that rely on the holders' aggregate knowledge, Hines says the know-how of the retiree is often codified into a program that allows a particular system to self-regulate. A waste processing plant, for instance, might operate via software that takes the measure of temperatures or chemical balances and adjusts them accordingly, doing away with the need for a savvy human monitor.
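A toy version of such a rule-based system might look like the following; the thresholds and actions are invented for illustration, standing in for the kind of know-how a retiring operator would supply.

```python
# A toy "expert system": the retiree's rules of thumb, codified so a
# hypothetical waste plant can evaluate its own sensor readings.
# All thresholds below are made up for the example.
RULES = [
    (lambda s: s["temp_c"] > 80, "reduce heater output"),
    (lambda s: s["temp_c"] < 40, "increase heater output"),
    (lambda s: s["ph"] < 6.5, "add base to neutralize"),
    (lambda s: s["ph"] > 8.5, "add acid to neutralize"),
]

def advise(sensors):
    """Return every action whose rule fires for the current readings."""
    actions = [action for condition, action in RULES if condition(sensors)]
    return actions or ["no action needed"]

print(advise({"temp_c": 85, "ph": 7.0}))  # -> ['reduce heater output']
```

The intelligence here lives entirely in the rule base; unlike a neural network, nothing is learned from data, which is why the two approaches are often combined.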

To demonstrate what researchers call "fuzzy logic systems," Hines designed a program (in jest, one assumes) that determines the percentage of a gratuity according to the overall quality of experience at a restaurant. The program breaks the seemingly very subjective notion of "good service" into a number of smaller, more quantifiable components. (For an experience that yields especially bad service and food, Hines produces an estimate for a 5.97 percent tip.)

"People don't always think in black or white, in what we call 'crisp logic' like computers do," Hines explains. "Fuzzy logic deals with the 'maybes'; it deals with uncertainty. You want your dishwasher to make the dishes clean. But how clean is 'clean'? It's all about breaking down 'linguistic hedges' into characteristics with assigned values. You put in crisp inputs, the program applies rules, then aggregates the rules so that you get a crisp output."

Algorithms, says Parker, are simply computer instructions for problem-solving. Genetic algorithms, a class of optimization techniques, says Hines, are fashioned after a Darwinian survival-of-the-fittest model, banking on the notion that a finite number of optimal solutions exist for a given problem, and that given the chance, those solutions will outlast their inferior brethren.

"You're trying to solve a problem, and your program comes up with 100 solutions," says Hines. "It eliminates all but 10 of those solutions, then develops 100 new solutions from those. It's like generations of people; the best 'children,' or the best solutions, are the ones that survive."
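Hines's generations-of-solutions description translates almost directly into code. In this sketch the "problem" is an invented one, finding the peak of a simple curve, but the loop follows his recipe exactly: 100 candidates, keep the 10 fittest, breed 100 slightly mutated children from the survivors.

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2       # invented target: the best solution is x = 3

def evolve(generations=40, seed=0):
    rng = random.Random(seed)
    # start with 100 random candidate solutions
    population = [rng.uniform(-100, 100) for _ in range(100)]
    for _ in range(generations):
        # selection: eliminate all but the 10 best
        parents = sorted(population, key=fitness, reverse=True)[:10]
        # reproduction: 100 "children," each a parent plus a small mutation
        population = [rng.choice(parents) + rng.gauss(0, 1.0)
                      for _ in range(100)]
    return max(population, key=fitness)

best = evolve()
```

After a few dozen generations the surviving children cluster tightly around the optimum, which is the "survival of the fittest" effect Hines is describing.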

These four cardinal approaches are rarely used in isolation, Hines continues; most AI applications incorporate more than one methodology. He cites a spectrum of implementations, from dishwasher controls ("I had the 'Maytag man' visit a seminar last year") to control systems in automobile electronics to credit card fraud detection, wherein AI software inputs credit card data and flags anomalies which appear within a particular user's recent billings.

"They detect a lot of fraudulently applied cards simply by analyzing data," Hines says. "If you've ever gotten a call about your credit card, it was singled out for attention by an AI system."
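At its simplest, that kind of flagging is just a statistical comparison of a new charge against the cardholder's recent billing pattern. The three-standard-deviation cutoff below is a common rule of thumb, not a detail from the article, and real fraud systems weigh far more than the dollar amount.

```python
from statistics import mean, stdev

# Flag any new charge that falls far outside the pattern of the
# cardholder's recent billing history.
def flag_anomalies(history, new_charges, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return [amount for amount in new_charges
            if abs(amount - mu) > threshold * sigma]

recent = [24.0, 31.5, 18.0, 27.0, 22.5, 30.0, 25.0]
print(flag_anomalies(recent, [26.0, 950.0]))  # the $950 charge stands out
```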

In his own research, Hines works with the Tennessee Valley Authority to improve sensors that regulate conditions within the organization's power stations. While such sensors would ordinarily need a comprehensive evaluation every few months to identify those which have strayed from precision's way, Hines' software performs the evaluation itself, then pinpoints the trouble spots for a human operator.

"It's like having a sensor that says 'Your oil is no longer working properly,' rather than automatically changing it every 3,000 miles," Hines explains.

In another noteworthy example of local AI innovation—and of its diversity and cross-pollination of methodologies—engineers at Knoxville's Perceptics, located on Mabry Hood Road off the original Pellissippi Parkway, apply artificial intelligence to a variety of inspection and control systems. These include a license plate reader used to image, sort, and evaluate plates on cars at customs checkpoints and toll booths, and a so-called "container code reader" used by shippers to keep inventory of the generic corrugated metal hulks commonly employed in overseas transport.

Engineer Juan Herrera describes both readers as expert systems, of a sort, though like other technologies, they hardly exist within a methodological vacuum. The LPR software program, for instance, is capable of reading license plates—identifying countries of origin and even geopolitical subdivisions through analysis of aspect ratios and character spacings—and then sending the collected info to a larger database for comparison, the results of which are sent back to an inspector in the customs booth.

Those plates which yield data that falls outside certain standard parameters are flagged and left to the inspector's discretion.

"The LPR generates a 'decision tree' from a set of rules," says Herrera. "A set of data and some rules of thumb are compiled quickly into a comprehensive decision-making process."
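A hand-built miniature of such a decision tree might look like this. Every branch and cutoff below is invented for illustration; Perceptics' actual LPR compiles its tree from a much richer set of rules and measurements.

```python
# A toy decision tree in the spirit of Herrera's description: rules of
# thumb about plate shape and character spacing, compiled into a
# branching classification path. All cutoff values are made up.
def classify_plate(aspect_ratio, char_spacing_mm):
    if aspect_ratio > 4.0:               # long, narrow plates
        return "European-style plate"
    if aspect_ratio > 1.8:               # the common roughly 2:1 shape
        if char_spacing_mm < 8.0:
            return "US/Canadian passenger plate"
        return "US/Canadian commercial plate"
    return "unrecognized - refer to inspector"
```

The final branch mirrors the real system's behavior: anything that falls outside the standard parameters is left to the inspector's discretion.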

The evaluation takes place in less than a quarter-second; the system is now used at 38 U.S. Customs sites along the Canadian and Mexican borders, as well as a few crossings abroad. "When you get pulled over at a border crossing, it's rarely at random," says Perceptics communications manager Marcella Simmerman.

If the movie's android David has a close ancestor among today's generation of artificially intelligent robots and software programs, that ancestor would surely be Kismet, a more-or-less humanoid automaton and the property of the Massachusetts Institute of Technology—long one of the wellsprings of seminal AI research.

Kismet's tacked-on features—eyes, nose, a mouth—possess a limited range of motion, derived from the mechanical apparatus in her "face," which constitutes a very rudimentary miming of human muscular reaction. Kismet's countenance is capable of crinkling into a smile, drooping into a sulk, even recoiling in shock or horror.

And Kismet's circuits are hardwired to manifest those very reactions; her AI programming permits her to interact with her human caretaker in a familiar way, to react with sadness when left alone for long periods of time, to smile when greeted, to evince robotic fright when threatened with a stick or a weapon. In a startling example of "fuzzy logic" applications, Kismet's emotions are even marked by gradations of intensity, manifesting themselves in proportions concomitant to her inputs.

Parker tempers the story of Kismet with skepticism, however. "Will we have a robot in all aspects like a human?" she submits. "We're at the very least decades and decades away from anything like that. How far are we? For certain kinds of tasks, a robot does better than a human—playing chess, for instance. And we've even begun experimenting with imbuing robots with 'motivation.' But some things can't be expressed in an algorithm."

Hines says current technology enables household robots that could conceivably perform a variety of tasks in response to human commands. "We're at the point, we have the voice recognition software such that you could have a house and tell it, 'Start my bath water, 98 degrees and two-thirds full, then start the coffee.'

"The processing power is growing fast, but it's still removed from the realm of human interaction by exponential factors. We're talking about billions of connections, or neurons, separating an AI program from a human brain."

Parker speculates that the possibility of wholly sentient AI may be a quandary for philosophers as much as computer scientists, a metaphysical question that harkens to our notions of consciousness and soul and self-definition. Ultimately, the unfathomable spark that separates what we call "life" from simple mechanical mimicry has yet to be expressed in the language of algorithms and flow charts.

"Humans have self-awareness, whereas AI programs don't," Parker avers. "They have no idea why they're doing something or showing emotion. And there's still little understanding of the neurologic processes which constitute self-awareness. There are a few scientists looking into that, but most of us feel like robots and AI have more than enough things to learn and deal with right now."

© 2001 MetroPulse. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
