
Saturday, December 31, 2011

Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA



Last month, philosopher Patrick Lin delivered this briefing about the ethics of drones at an event hosted by In-Q-Tel, the CIA's venture-capital arm. It's a thorough and unnerving survey of what it might mean for the intelligence service to deploy different kinds of robots. 



Robots are replacing humans on the battlefield--but could they also be used to interrogate and torture suspects? This would avoid a serious ethical conflict between physicians' duty to do no harm, or nonmaleficence, and their questionable role in monitoring the vital signs and health of the interrogated. A robot, on the other hand, wouldn't be bound by the Hippocratic oath, though its use would create new dilemmas of its own.


The ethics of military robots is quickly marching ahead, judging by news coverage and academic research. Yet there's little discussion about robots in the service of national intelligence and espionage, which are omnipresent activities in the background. This is surprising, because most military robots are used for surveillance and reconnaissance, and their most controversial uses trace back to the Central Intelligence Agency (CIA) in targeted strikes against suspected terrorists. Just this month, a CIA drone--an RQ-170 Sentinel--crash-landed intact into the hands of the Iranians, exposing the secret US spy program in the volatile region.


The US intelligence community, to be sure, is very much interested in robot ethics. At the least, they don't want to be ambushed by public criticism or worse, since that could derail programs, waste resources, and erode international support. Many in government and policy also have a genuine concern about "doing the right thing" and the impact of war technologies on society. To those ends, In-Q-Tel--the CIA's technology venture-capital arm (the "Q" is a nod to the technology-gadget genius in the James Bond spy movies)--had invited me to give a briefing to the intelligence community on ethical surprises in their line of work, beyond familiar concerns over possible privacy violations and illegal assassinations. This article is based on that briefing, and while I refer mainly to the US intelligence community, this discussion could apply just as well to intelligence programs abroad.


BACKGROUND


Robotics is a game-changer in national security. We now find military robots in just about every environment: land, sea, air, and even outer space. They have a full range of form factors, from tiny robots that look like insects to aerial drones with wingspans greater than a Boeing 737's. Some are fixed onto battleships, while others patrol borders in Israel and South Korea; these have fully autonomous modes and can make their own targeting and attack decisions. There's interesting work going on now with micro robots, swarm robots, humanoids, chemical bots, and biological-machine integrations. As you'd expect, military robots have fierce names: TALON SWORDS, Crusher, BEAR, Big Dog, Predator, Reaper, Harpy, Raven, Global Hawk, Vulture, Switchblade, and so on. But not all are weapons--for instance, BEAR is designed to retrieve wounded soldiers on an active battlefield.


The usual reason we'd want robots in the service of national security and intelligence is that they can do the jobs known as the three "D"s: dull jobs, such as extended reconnaissance or patrol beyond the limits of human endurance, and standing guard over perimeters; dirty jobs, such as work with hazardous materials, cleanup after nuclear or biochemical attacks, and operations in environments unsuitable for humans, such as underwater and outer space; and dangerous jobs, such as tunneling in terrorist caves, controlling hostile crowds, or clearing improvised explosive devices (IEDs).


But there's a new, fourth "D" worth considering: the ability to act with dispassion. (This is motivated by Prof. Ronald Arkin's work at Georgia Tech, though others, such as Prof. Noel Sharkey at the University of Sheffield in the UK, remain skeptical.) Robots wouldn't act with malice or hatred or other emotions that may lead to war crimes and other abuses, such as rape. They're unaffected by emotion, adrenaline, and hunger. They're immune to sleep deprivation, low morale, fatigue, and the other conditions that cloud our judgment. They can see through the "fog of war" and so reduce unlawful and accidental killings. And they can be objective, unblinking observers who ensure ethical conduct in wartime. So robots can do many of our jobs better than we can, and maybe even act more ethically, at least in the high-stress environment of war.

SCENARIOS


With that background, let's look at some current and future scenarios. These go beyond obvious intelligence, surveillance, and reconnaissance (ISR), strike, and sentry applications, as most robots are being used for today. I'll limit these scenarios to a time horizon of about 10-15 years from now.


Military surveillance applications are well known, but there are also important civilian applications, such as robots that patrol playgrounds for pedophiles (for instance, in South Korea) and major sporting events for suspicious activity (such as the 2002 World Cup in Seoul and the 2008 Beijing Olympics). Current and future biometric capabilities may enable robots to detect faces, drugs, and weapons at a distance and underneath clothing. In the future, robot swarms and "smart dust" (sometimes called nanosensors) may be used in this role.


Robots can also be used for alerting purposes: a humanoid police robot in China gives out information, and a Russian police robot recites laws and issues warnings. So there's potential for roles in education, communication, and on-the-spot community reporting, all of which relate to intelligence gathering.


In delivery applications, SWAT police teams already use robots to interact with hostage-takers and in other dangerous situations. So robots could be used to deliver other items or plant surveillance devices in inaccessible places. Likewise, they can be used for extractions too. As mentioned earlier, the BEAR robot can retrieve wounded soldiers from the battlefield, as well as handle hazardous or heavy materials. In the future, an autonomous car or helicopter might be deployed to extract or transport suspects and assets, to limit US personnel inside hostile or foreign borders.


In detention applications, robots could be used to guard not just buildings but also people. One advantage here would be the elimination of prison abuses such as those we saw at Guantanamo Bay Naval Base in Cuba and Abu Ghraib prison in Iraq. This speaks to the dispassionate way robots can operate. Relatedly--and I'm not advocating any of these scenarios, just speculating on possible uses--robots could resolve the dilemma of using physicians in interrogations and torture. Those activities conflict with physicians' duty to care and the Hippocratic oath to do no harm. Robots can monitor the vital signs of interrogated suspects as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from the malice and prejudices that might take things too far (or much further than they already go).


And robots could act as Trojan horses, or gifts with a hidden surprise. I'll talk more about these scenarios and others as we discuss possible ethical surprises next.


ETHICAL AND POLICY SURPRISES


Limitations


While robots can be seen as replacements for humans, in most situations, humans will still be in the loop, or at least on the loop--either in significant control of the robot, or able to veto a robot's course of action. And robots will likely be interacting with humans. This points to a possible weak link in applications: the human factor.


For instance, unmanned aerial vehicles (UAVs), such as Predator and Global Hawk, may be able to fly the skies for longer than a normal human can endure, but there are still human operators who must stay awake to monitor activities. Some military UAV operators may be overworked and fatigued, which may lead to errors in judgment. Even without fatigue, humans may still make bad decisions, so errors and even mischief are always a possibility and may include friendly-fire deaths and crashes.


Some critics have worried that UAV operators--controlling drones from half a world away--could become detached and less caring about killing, given the distance, and this may lead to more unjustified strikes and collateral damage. But other reports seem to indicate an opposite effect: These controllers have an intimate view of their targets by video streaming, following them for hours and days, and they can also see the aftermath of a strike, which may include strewn body parts of nearby children. So there's a real risk of post-traumatic stress disorder (PTSD) with these operators.


Another source of liability is how we frame our use of robots to the public and international communities. In a recent broadcast interview, one US military officer was responding to a concern that drones are making war easier to wage, given that we can safely strike from longer distances with these drones. He compared our use of drones with the biblical David's use of a sling against Goliath: both are about using missile or long-range weapons and presumably have righteousness on their side. Now, whether or not you're Christian, it's clear that our adversaries might not be. So rhetoric like this might inflame or exacerbate tensions, and this reflects badly on our use of technology.


One more human weak link is that robots will likely have better situational awareness than we do, if they're outfitted with sensors that let them see in the dark, see through walls, network with other computers, and so on. This raises the following problem: could a robot ever refuse a human order, if it knows better? For instance, if a human orders a robot to shoot a target or destroy a safehouse, but the robot identifies the target as a child or a safehouse full of noncombatants, could it refuse that order? Does having the technical ability to collect better intelligence before we conduct a strike obligate us to do everything we can to collect that data? That is, would we be liable for not knowing things that we might have known by deploying intelligence-gathering robots? Similarly, given that UAVs can enable more precise strikes, are we obligated to use them to minimize collateral damage?


On the other hand, robots themselves could be the weak link. While they can replace us in physical tasks like heavy lifting or working with dangerous materials, it doesn't seem likely that they can take over psychological jobs such as gaining the confidence of an agent, which involves humor, mirroring, and other social tricks. So human intelligence, or HUMINT, will still be necessary in the foreseeable future.


Relatedly, we already hear criticisms that the use of technology in war or peacekeeping missions isn't helping to win the hearts and minds of local foreign populations. For instance, sending robot patrols into Baghdad to keep the peace would send the wrong message about our willingness to connect with the residents; we will still need human diplomacy for that. In war, this could backfire against us, as our enemies mark us as dishonorable and cowardly for being unwilling to engage them man to man. This serves to make them more resolute in fighting us, fuels their propaganda and recruitment efforts, and leads to a new crop of determined terrorists.


Also, robots might not be taken seriously by humans interacting with them. We tend to disrespect machines more than humans, abusing them more often, for instance, beating up printers and computers that annoy us. So we could be impatient with robots, as well as distrustful--and this reduces their effectiveness.


Without defenses, robots could be easy targets for capture, yet they may contain critical technologies and classified data that we don't want to fall into the wrong hands. Robotic self-destruct measures could go off at the wrong time and place, injuring people and creating an international crisis. So do we give them defensive capabilities, such as evasive maneuvers, or maybe nonlethal weapons like repellent spray, Taser guns, or rubber bullets? Well, any of these "nonlethal" measures could turn deadly too. In running away, a robot could mow down a small child or an enemy combatant, which would escalate a crisis. And we see news reports all too often about unintended deaths caused by Tasers and other supposedly nonlethal weapons.


 

International humanitarian law (IHL)

What if we designed robots with lethal defenses or offensive capabilities? We already do that with some robots, like the Predator, Reaper, CIWS, and others. And there, we run into familiar concerns that robots might not comply with international humanitarian law, that is, the laws of war. For instance, critics have noted that we shouldn't allow robots to make their own attack decisions (as some do now), because they don't have the technical ability to distinguish combatants from noncombatants, that is, to satisfy the principle of distinction, which is found in various places such as the Geneva Conventions and the underlying just-war tradition. This principle requires that we never target noncombatants. But a robot already has a hard time distinguishing a terrorist pointing a gun at it from, say, a girl pointing an ice cream cone at it. These days, even humans have a hard time with this principle, since a terrorist might look exactly like an Afghani shepherd with an AK-47 who's just protecting his flock of goats.


Another worry is that the use of lethal robots represents a disproportionate use of force, relative to the military objective. This speaks to the collateral damage, or unintended death of nearby innocent civilians, caused by, say, a Hellfire missile launched by a Reaper UAV. What's an acceptable rate of innocents killed for every bad guy killed: 2:1, 10:1, 50:1? That number hasn't been nailed down and continues to be a source of criticism. It's conceivable that there might be a target of such high value that even a 1,000:1 collateral-damage rate, or greater, would be acceptable to us.


Even if we could solve these problems, there may be another one we'd then have to worry about. Let's say we were able to create a robot that targets only combatants and leaves no collateral damage--an armed robot with a perfectly accurate targeting system. Oddly enough, this may violate a rule by the International Committee of the Red Cross (ICRC), which bans weapons that cause more than 25% field mortality and 5% hospital mortality. The ICRC is the only institution named as a controlling authority in IHL, so we comply with its rules. A robot that kills most everything it aims at could have a mortality rate approaching 100%, well over the ICRC's 25% threshold. And this may be possible given the superhuman accuracy of machines, again assuming we can eventually solve the distinction problem. Such a robot would be so fearsome, inhumane, and devastating that it threatens an implicit value of a fair fight, even in war. For instance, poison is also banned for being inhumane and too effective. This notion of a fair fight comes from just-war theory, which is the basis for IHL. Further, this kind of robot would force questions about the ethics of creating machines that kill people on their own.


Other conventions in IHL may be relevant to robotics too. As we develop human enhancements for soldiers, whether pharmaceutical or robotic integrations, it's unclear whether we've just created a biological weapon. The Biological Weapons Convention (BWC) doesn't specify that bioweapons need to be microbial or a pathogen. So, in theory and without explicit clarification, a cyborg with super-strength or super-endurance could count as a biological weapon. Of course, the intent of the BWC was to prohibit indiscriminate weapons of mass destruction (again, related to the issue of humane weapons). But the vague language of the BWC could open the door for this criticism.


Speaking of cyborgs, there are many issues related to these enhanced warfighters, for instance: If a soldier could resist pain through robotics or genetic engineering or drugs, are we still prohibited from torturing that person? Would taking a hammer to a robotic limb count as torture? Soldiers don't sign away all their rights at the recruitment door: what kind of consent, if any, is needed to perform biomedical experiments on soldiers, such as cybernetics research? (This echoes past controversies related to mandatory anthrax vaccinations and, even now, required amphetamine use by some military pilots.) Do enhancements justify treating soldiers differently, either in terms of duties, promotion, or length of service? How does it affect unit cohesion if enhanced soldiers, who may take more risks, work alongside normal soldiers? Back more squarely to robotics: How does it affect unit cohesion if humans work alongside robots that might be equipped with cameras to record their every action?

And back more squarely to the intelligence community, the line between war and espionage is getting fuzzier all the time. Historically, espionage isn't considered to be casus belli, or a good cause for going to war. War is traditionally defined as armed, physical conflict between political communities. But because so many of our assets are digital or information-based, we can attack--and be attacked--by nonkinetic means now, namely by cyberweapons that take down computer systems or steal information. Indeed, earlier this year, the US declared as part of its cyberpolicy that we may retaliate kinetically against a nonkinetic attack. Or as one US Department of Defense official put it, "If you shut down our power grid, maybe we'll put a missile down one of your smokestacks."


As it applies to our focus here: if the line between espionage and war is becoming more blurry, and a robot is used for espionage, under what conditions could that count as an act of war? What if the spy robot, while trying to evade capture, accidentally harmed a foreign national: could that be a flashpoint for armed conflict? (What if the CIA drone in Iran recently had crashed into a school or military base, killing children or soldiers?)


Law & responsibility


Accidents are entirely plausible and have happened elsewhere. In September 2011, an RQ-7 Shadow UAV crashed into a military cargo plane in Afghanistan, forcing an emergency landing. Last summer, test-flight operators of an MQ-8B Fire Scout helicopter UAV lost control of the drone for about half an hour, during which it traveled more than 20 miles toward restricted airspace over Washington, DC. A few years ago in South Africa, a robotic cannon went haywire, killing 9 friendly soldiers and wounding 14 more.


Errors and accidents happen all the time with our technologies, so it would be naïve to think that anything as complex as a robot would be immune to these problems. Further, a robot with a certain degree of autonomy may raise questions of who (or what) is responsible for harm it causes, whether accidental or intentional: the robot itself, its operator, or the programmer? Will manufacturers insist on a release of liability, like the end-user license agreements (EULAs) we accept when we use software--or should we insist that those products be thoroughly tested and proven safe? (Imagine if buying a car required signing a EULA covering the car's mechanical or digital malfunctions.)


We're seeing more robotics in society, from Roombas at home to robotics on factory floors. In Japan, about 1 in 25 workers is a robot, given their labor shortage. So it's plausible that robots in the service of national intelligence may interact with society at large, such as autonomous cars or domestic surveillance robots or rescue robots. If so, they need to comply with society's laws too, such as rules of the road or sharing airspace and waterways.


But, to the extent that robots can replace humans, what about complying with something like a legal obligation to assist others in need, as required by a Good Samaritan law or the basic international laws that require ships to assist other vessels in distress? Would an unmanned surface vehicle, or robotic boat, be obligated to stop and save the crew of a sinking ship? This was a highly contested issue in World War II--the Laconia incident--when submarine commanders refused to save stranded sailors at sea, as required by the governing laws of war at the time. It's not unreasonable to say that this obligation shouldn't apply to a submarine, since surfacing to rescue would give away its position, and stealth is its primary advantage. Could we therefore release unmanned underwater vehicles (UUVs) and unmanned surface vehicles (USVs) from this obligation for similar reasons?


We also need to keep in mind environmental, health, and safety issues. Microbots and disposable robots could be deployed in swarms, but we need to think about the end of that product lifecycle. How do we clean up after them? If we don't, and they're tiny--for instance, nanosensors--then they could be ingested or inhaled by animals or people. (Think about all the natural allergens that affect our health, never mind engineered stuff.) They may contain hazardous materials, like mercury or other chemicals in their batteries, that can leak into the environment. And not just on land: we also need to think about underwater and even space environments, at least with respect to space litter.


For the sake of completeness, I'll also mention privacy concerns, though these are familiar in current discussions. The worry is not just with microbots--which may look like harmless insects and birds--that can peek into your window or crawl into your house, but also with the increasing biometric capabilities that robots could be outfitted with. The ability to detect faces from a distance, or drugs and weapons under clothing or inside a house from the outside, blurs the distinction between surveillance and a search. The difference matters, because a search requires a judicial warrant. As technology allows intelligence-gathering to become more intrusive, we'll certainly hear more from these critics.


Finally, we need to be aware of the temptation to use technology in ways we otherwise wouldn't, especially for activities that are legally questionable--we'll always get called out for that. For instance, this charge has already been made against our use of UAVs to hunt down terrorists. Some call it "targeted killing," while others maintain that it's "assassination." This is still very much an open question, because "assassination" has not been clearly defined in international law or domestic law, e.g., Executive Order 12333. And the problem is exacerbated in asymmetrical warfare, where enemy combatants don't wear uniforms: singling them out by name may be permitted when it otherwise wouldn't be, but others argue that it amounts to declaring targets outlaws without due process, especially if it's not clearly a military action (and the CIA is not formally a military agency).


Beyond this familiar charge, the risk of committing other legally controversial acts still exists. For instance, we could be tempted to use robots for extraditions, torture, actual assassinations, or the transport of guns and drugs, as in some of the scenarios described earlier. Even if not illegal, some things seem very unwise to do, such as the recent fake-vaccination operation in Pakistan to get DNA samples that might help find Osama bin Laden. In this case, perhaps robotic mosquitoes could have been deployed, avoiding the suspicion and backlash that humanitarian workers have suffered as a result.


 


Deception


Had the fake-vaccination program been run in the context of an actual military conflict, it could have been illegal under the Geneva and Hague Conventions, which prohibit perfidy, or treacherous deceit. Posing as a humanitarian or Red Cross worker to gain access behind enemy lines is an example of perfidy: it breaches what little mutual trust we have with our adversaries, and this is counterproductive to arriving at a lasting peace. But even when acting legally, we can still act in bad faith, and we need to be mindful of that risk.


The same concern about perfidy could arise with robot insects and animals, for instance. Animals and insects are typically not considered to be combatants or anything of concern to our enemies, like Red Cross workers. Yet we would be trading on that faith to gain deep access to our enemy. By the way, such a program could also get the attention of animal-rights activists, if it involves experimentation on animals.


More broadly, the public could be worried about whether we should be creating machines that intentionally deceive, manipulate, or coerce people. That's just disconcerting to a lot of folks, and the ethics of it would be challenged. One example: consider that we've been paying off Afghani warlords with Viagra, which is a less obvious bribe than money. Sex is one of the most basic incentives for human beings, so some informants might want a sex robot--and such robots exist today. Without getting into the ethics of sex robots here, note that these robots could also have secret surveillance and strike capabilities--a femme fatale of sorts.


The same deception could work with other robots, not just the pleasure models, as it were. We could think of these as Trojan horses. Imagine that we captured an enemy robot, hacked into it or implanted a surveillance device, and sent it back home: How is this different from masquerading as the enemy in their own uniform, which is another perfidious ruse? Other questionable scenarios include commandeering robotic cars or planes owned by others, and creating robots with back-door chips that allow us to hijack the machine while in someone else's possession.


Broader effects


This point about deception and bad faith is related to a criticism we're already hearing about military robots, which I mentioned earlier: that the US is afraid to send people to fight its battles; we're afraid to meet the enemy face to face, and that makes us cowards and dishonorable. Terrorists would use that resentment to recruit more supporters and terrorists.


But what about on our side: do we need to think about how the use of robotics might affect recruitment in our own intelligence community? If we increasingly rely on robots in national intelligence--as the US Air Force is relying on UAVs--that could hurt or disrupt efforts to bring in good people. After all, a robotic spy doesn't have the same allure as a James Bond.


And if we rely on robots more in the intelligence community, there's a concern about technology dependency and a resulting loss of human skill. Even inventions we love have this effect: we don't remember as well because of the printing press, which immortalizes our stories on paper; we can't do math as well because of calculators; we can't recognize spelling errors as well because of word processors with spell-check; and we don't remember phone numbers because they're stored in our mobile phones. With medical robots, some worry that human surgeons will lose their skill at difficult procedures if we outsource the job to machines. What happens when we don't have access to those robots, whether in a remote location or during a power outage? So it's conceivable that robots in the service of our intelligence community, whatever those scenarios may be, could have similar effects.


Even if the scenarios we've been considering end up being unworkable, the mere plausibility of their existence may put our enemies on alert and drive their conversations deeper underground. It's not crazy for people living in caves and huts to think that we're so technologically advanced that we already have robotic spy-bugs deployed in the field. (Maybe we do, but I'm not privy to that information.) All of this could drive an intelligence arms race--an evolution of hunter and prey, as when spy satellites forced our adversaries to move underground, even for nuclear testing. And what about us? How do we process and analyze all the extra information we're collecting from our drones and digital networks? If we can't handle the data flood, and something in it could have prevented a disaster, then the intelligence community may be blamed, rightly or wrongly.


Related to this is the all-too-real worry about proliferation: that our adversaries will develop or acquire the same technologies and use them against us. This has already been borne out with every military technology we have, from tanks to nuclear bombs to stealth technologies. Already, over 50 nations have or are developing military robots like ours, including China and Iran--and even non-state actors such as the Libyan rebels.


CONCLUSION


The issues above--from inherent limitations, to specific laws or ethical principles, to big-picture effects--give us much to consider, as we must. These are critical not only for self-interest, such as avoiding international controversies, but also as a matter of sound and just policy. For either reason, it's encouraging that the intelligence and defense communities are engaging ethical issues in robotics and other emerging technologies. Integrating ethics may be more cautious and less agile than a "do first, think later" (or worse, "do first, apologize later") approach, but it helps us win the moral high ground--perhaps the most strategic of battlefields.

Testing critical thinking skills

Lafayette's team, back row, from left: Keith Bimbi, Ryan Lucey, Charlie Toepfer, Samantha Lange, Mikayla Schappert, Julia Tomlinson. Front row, from left: Caleigh Fortunato, Gavin Waple, Daniel Banas, Marisa Puzina
Children study a problem and then build a robot to fix it


Lafayette -- Twelve teams of robot makers, made up of fifth- through eighth-graders from various towns in Sussex County, rose to the challenge posed by FIRST (For Inspiration and Recognition of Science and Technology) and Lego at a FIRST Lego League tournament held at Lafayette Township School last Saturday. Each year FIRST announces a new theme for the competition, based on a real-life problem that scientists and researchers are currently trying to solve. The theme for 2011 is the Food Factor Challenge, and teams worked to find ways to minimize disease and pathogens in food using robotics.

According to Pete Domasky of Sparta, one of the Pope John robotics team managers, each team must first investigate the topic to discover a problem it can solve, then design a robot and a playing field and program the robot to complete certain tasks. For example, research conducted by the Pope John team included a field trip to a farm, where the students learned how different types of disease invade agriculture. Ultimately, the Pope John team decided to build a laser that would destroy pathogens and disease on food being transported by conveyor belts.


The Lafayette team, "looked at the problem of contaminated lettuce and all the recent recalls of bagged lettuce," said their coach Nancy Estevez. "Their solution was to build a device that tumbled the lettuce in ozone gas, which killed contaminants. It is like a salad spinner, but is installed inside the refrigerator. The device also included an ecoli indicator, which turns blue if any of the bacteria is present on the lettuce."

This was the first year Lafayette's team, the Kung Food Fighters, participated in the event. "They were rookies," said Estevez, "but they did an excellent job. In the Robot Games Lafayette came in 10th, and in the Project they scored 39 out of a possible 40 points."


Scoring


There are 15 possible tasks that can be accomplished during the Robot Games part of the challenge, but teams can choose how many they want to attempt. Points are awarded for each task completed within a 2.5-minute time frame. The playing field and robot are built using Legos and Lego-based attachments; programming is done via the "brick," which is inserted into the robot and contains the computer module.

“The Food Factor Challenge is not really a competition,” says Domasky. “Two teams, working side-by-side, put their robots through the paces. Both teams are working independently, doing their own thing. But, all of the kids are acquiring building, engineering and strategic thinking skills.”


What the program is all about


“The best way to summarize First Lego League is to say that it is a robotics program for 9- to 14-year-olds, which is designed to get children excited about science and technology -- and teach them valuable employment and life skills. [It] can be used in a classroom setting but is not solely designed for this purpose. Teams, composed of up to 10 children with at least one adult coach, can also be associated with a pre-existing club or organization, be homeschooled or just be a group of friends who wish to do something awesome.


... Teams of up to 10 children, with one adult coach, participate in the Challenge by programming an autonomous robot to score points on a themed playing field (Robot Game), developing a solution to a problem they have identified (Project), all guided by the First Lego League Core Values. Teams may then choose to attend an official tournament.”


Robot videojournalist uses cuteness to get vox pops

Imagine a cardboard version of Pixar's WALL-E character, but with an added über-cute human voice, and you've got a fair picture of Boxie, Alexander Reben's documentary-video-making robot.

Designed to wander the streets shooting video, the diminutive droid trundles up to people and asks them to tell it an interesting story.

Sounds crazy? Surprisingly, not entirely: a good few people did actually cooperate with Boxie – enough to make a short movie – though one malcontent dumped the robot in a trash can and a child tried to kidnap it.

"The idea was to create a robot that was interesting enough for people to engage with it and offer to help it, carrying it around and up and down stairs to show it things," says Reben, a researcher at the Massachusetts Institute of Technology's Media Lab.

To win cooperation from the person in the street, cuteness is Boxie's stock-in-trade. In addition to being a squat, doe-eyed creature, it is also made of cardboard, a material Reben says people perceive as non-threatening, even friendly. When his team tried to build Boxie from white plastic, it looked scarily skull-like.

Based on an off-the-shelf caterpillar-tracked chassis, the robot – presented at the ACM Multimedia Conference in Scottsdale, Arizona, in late November – uses ultrasound sonar to detect walls. That keeps it straight and true while it trundles along sidewalks and corridors, and a body-heat sensor tells it when it's found a person – though a large dog could fool it, Reben concedes.
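To picture how those pieces fit together, here is a minimal Python sketch of a sense-act loop of that general shape. It is purely illustrative: the function names, the control gain, and the distance setpoint are all invented for this example, not taken from Boxie's actual software.

```python
# Hypothetical sketch of a Boxie-style sense-act loop: sonar keeps the
# robot tracking a wall, and a body-heat reading stops it to begin the
# interview script. All names and constants here are invented.

TARGET_WALL_DISTANCE = 0.5  # metres; assumed setpoint

def patrol_step(sonar_distance, heat_detected):
    """Return a (left, right) track-speed pair for one control tick."""
    if heat_detected:
        return (0.0, 0.0)  # stop: a person (or a large dog) was found

    # Simple proportional steering: veer back toward the setpoint so
    # the path along the sidewalk or corridor stays straight and true.
    error = sonar_distance - TARGET_WALL_DISTANCE
    gain = 0.8
    base_speed = 0.3
    return (base_speed - gain * error, base_speed + gain * error)

# One simulated tick: slightly too close to the wall, no person yet.
print(patrol_step(sonar_distance=0.4, heat_detected=False))
```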

It then sets to work with its not-very-hard-nosed interview technique. "Boxie has a script in which it asks people questions and asks them to pick it up and show it around an area like a lab or mall. To move on to the next series of questions, people are asked to press buttons on either side of it," says Reben.

Boxie would set off on its own at the beginning of the day and it would generally spend 6 hours or so collecting video – limited more by the video recording time available than battery power. It would report its condition to the research team regularly, via whatever open Wi-Fi it could find, but not its position: location-sensing tech was dropped to save development time.

"That meant I'd have to go out and search for Boxie at the end of the day. Once I found it in the trash and another time an intern spotted a child trying to put it in its parents' car," he says.

Over a few days Boxie collected about 50 interviews, which the MIT team has edited down to a 5-minute documentary. Overall, Reben and colleague Joe Paradiso reckon robot-mediated story acquisition works: "A coherent movie was easily produced from the video clips captured, proving that their content and organisation were viable for story-making," they say in their conference paper.

Chris Melhuish, director of the Bristol Robotics Laboratory in the UK, says MIT was right to focus on perfecting Boxie's social acceptability. "As robots become everyday objects in our environment, the way they behave will become increasingly important. Future smart machines will need such social intelligence to interact naturally – utilising appropriate gestures, body pose and non-verbal communication, for instance."

However, as any journalist on a vox-pop assignment soon finds out, people can be cranky – and Boxie took its share of abuse from the public. Force sensors in the robot recorded that it had suffered violent shaking – or been thrown to the ground – a number of times. So the researchers have some advice for future builders of robotic reporters: "Try not to be annoying."


Friday, December 30, 2011

At Brookside, inspiring brainiacs — one robot at a time

Brielan Sadler, a fourth/fifth-grade combo teacher at Beaumont's Brookside Elementary School, is having her Robotics Club members build a Mars Rover-type robot.

Published: Friday, November 4, 2011 12:11 AM CDT

Beaumont's Brookside Elementary School has its share of brainiacs. In fact, Janae Keels, 9, is so tech-savvy that when her mother got a new phone and had trouble programming it, the girl jumped in to help. So it's no surprise that Keels was attracted to Brookside's after-school Robotics Club.

"(The club) is pretty fun," she said while helping her three teammates build a robot during a Thursday afternoon meeting. "I like to take things apart and put them back together."

Putting things together, in this case a motorized Mars Rover-like contraption, is what the club is all about.

Brielan Sadler, a fourth/fifth-grade combo teacher at Brookside, runs the club. "When I started this, I just wanted to give kids the opportunity to experience new technology, to see computers as more than just game-playing devices," said Sadler, who has been at Brookside since it opened in 2004.

Although the club was her brainchild, Sadler was influenced by her mother, who is a teacher at Colton's Sycamore Hills Elementary School, a NASA Explorer School. At such a school, students are encouraged to study science, technology, engineering and mathematics using NASA's resources. Sadler said Brookside is seeking the NASA designation.

The Colton teachers "did some robotics there and went through some training at JPL," she said, referring to NASA's Jet Propulsion Laboratory in Pasadena. "And I got to spend a day at JPL, which was really cool."

So she decided to bring some of that coolness to Brookside. As the Gifted and Talented Education (GATE) coordinator, she applied for and won a grant to begin the club. With the money, she purchased eight Mindstorms kits, which cost between $200 and $250 each. Each robot-building kit contains 828 pieces, enough to build an assortment of devices. When the GATE program was discontinued, Sadler opened the club up to all fourth- and fifth-graders. Although the kits are out of date and need upgrading, Sadler said that will have to wait until the money is available.

"We were looking for kids who could work together, who were innovative thinkers and problem-solvers," she said. "We were not looking for techno geniuses."

The first assignment of the year is to build a mini Mars Rover, then use the included computer program to tell the robot what to do.

"It's fun to activate them and see them move in different directions," said Keels, who wants to be a robot builder someday.

The computer program, called LabVIEW, teaches the students the basics of computer programming through the use of icons instead of complicated written code. When the student is ready to program the robot, a sensor is plugged into a computer, and the action starts. First, an icon that tells the robot to start is dragged from a menu to a position on a blank screen. Should the student want the robot to move forward for 10 seconds, a second icon is dragged next to the first. To stop and change direction, other icons are dragged over until a string of commands is in place. The robot reads the sensor wirelessly, and off it goes.

"I like technology and computers," said Josh Cash, 10, who wants to be a marine biologist or an inventor. "It's fun hanging out with my friends and making computer stuff together."

For Kaitlyn Mixon, 10, the club is a first step toward the future. "This is a good way for me to interact and get smart with robots," she said. "I want to be a scientist and build rovers. My parents are really proud of me."

Sadler said parents have been very supportive. Beverly Kelley's daughter, Selah, 10, is in the club.

"I'm seeing them learn teamwork, which is just as important as everything else," Kelley said as she waited for her daughter to finish up. "They're having so much fun they don't realize they're learning. And Miss Sadler is an amazing person. This is really great, especially for the girls."

Sadler said when the club started, mostly boys showed interest. Now, the 28-member club has about half boys and half girls.

"Girls are better problem-solvers," Sadler said. "When a problem comes up, the boys are more likely to just throw up their hands and say, 'It doesn't work.'"

But, despite any problems that may arise, all members would agree with Cash when he calls the club "awesome."
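For readers curious how that string of icons maps onto conventional code, the sequence Sadler describes looks roughly like the Python sketch below. The Rover class and its methods are invented stand-ins for the Mindstorms icons; they are not part of LabVIEW or the Lego toolkit.

```python
# Invented text-based analogue of the icon program described above.
# Each method call stands in for one dragged icon; none of these names
# come from the actual LabVIEW/Mindstorms software.

class Rover:
    def start(self):
        print("motors on")

    def forward(self, seconds):
        print(f"drive forward for {seconds} seconds")

    def stop(self):
        print("stop")

    def turn(self, degrees):
        print(f"turn {degrees} degrees")

rover = Rover()
rover.start()      # icon 1: start
rover.forward(10)  # icon 2: move forward for 10 seconds
rover.stop()       # icon 3: stop
rover.turn(90)     # icon 4: change direction
```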

3D Printers: Almost Mainstream



Richard Smith needed to build a wall-climbing robot for a customer -- so he printed one.


Smith, director of Smith Engineering Gb Ltd., used a CAD program to design a 3D model of the WallRover, a dual-track roving robot with a spinning rotor in the chassis that creates enough suction to hold the device to a wall. He then sent the design file for each component to a 3D printer, which sliced each object into sections less than 1/100th of an inch thick and built it up, one layer at a time, using molten ABS plastic as the "ink."


As a 3D printer begins fabricating an object, each layer gets fused or glued to the previous one and the product gradually gets built up. Under the hood, 3D printers use a variety of different fabrication techniques, several of which are based on ink-jet technology, and can use many different types of "build" materials to print three-dimensional objects. (To learn more about the different types of 3D printers, check out our comparison chart.)
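Conceptually, the slicing step intersects every triangle of the 3D model with a stack of horizontal planes, one per layer. Here is a bare-bones Python sketch of that idea, with an invented function and sample triangle; production slicers handle many more edge cases, such as vertices lying exactly on a plane or open meshes.

```python
# Minimal sketch of slicing: intersect one triangle with a horizontal
# plane at height z to get one segment of that layer's outline.

def slice_triangle(tri, z):
    """Return the segment where the plane at height z cuts triangle
    tri, or None if it misses. tri is three (x, y, z) tuples."""
    points = []
    for (x1, y1, z1), (x2, y2, z2) in [(tri[0], tri[1]),
                                        (tri[1], tri[2]),
                                        (tri[2], tri[0])]:
        if (z1 - z) * (z2 - z) < 0:  # this edge crosses the plane
            t = (z - z1) / (z2 - z1)
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

layer_height = 0.01  # inches; the article cites layers under 1/100 inch
tri = ((0, 0, 0), (1, 0, 0.02), (0, 1, 0.02))
print(slice_triangle(tri, layer_height))  # ((0.5, 0.0), (0.0, 0.5))
```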


Before buying a 3D printer, Smith would send his designs to a service bureau for fabrication, and parts took three or four days to turn around. Had Smith used a service bureau for the WallRover project--which went through 22 design iterations--it would have taken six months to complete, Smith says.


Smith Engineering used an inexpensive 3D printer to build the ABS plastic parts for the WallRover, a wall-climbing robot prototype built for a client.


Instead, Smith was able to get a final design and fully functional prototype to the client within two weeks.


And he did it using a consumer-grade 3D "plastic jet printer" that he built from a kit. The RapMan, from 3D Systems' Bits From Bytes division, cost just $1,500. Smith spent another $180 for plastic filament -- the "ink" consumed by the printer. "It saved five months of development time and somewhere in the neighborhood of $15,000 to $20,000 in models" that were created in-house instead of being sent to a service bureau, he says.


Smaller and cheaper


3D printing isn't new. The manufacturing technique known today as 3D printing, also called additive manufacturing or direct digital manufacturing, has been used for rapid prototyping for decades. But over the last 24 months, prices have dropped to a level that makes it appealing to a wider audience.


The technology is more compact, particularly in the plastic jet-printing category. Cathy Lewis, vice president of global marketing at printer manufacturer 3D Systems Inc., says today's models are "ideal" for personal use.


3D design gets easier


It's relatively easy to use a free tool such as Google SketchUp to create simple objects for 3D printing. But for complex shapes and geometries, designers still reach for professional modeling tools like SolidWorks.


"Visualization software such as Google's SketchUp provides a fast entry route" to 3D computer-aided design (CAD), says Nick Grace, manager of RapidformRCA, which acts as a 3D printing service bureau for students at the Royal College of Art in London and uses many different software design tools and 3D printer technologies.


But, he adds, "the shortcuts made by these tools are not allowed for in the 3D printer's slicing routine." For example, some software may not fully render elements of an object that aren't needed from a particular viewpoint. That causes problems when the file is sent to the 3D printer. "We still regularly get unbuildable surface files or haphazardly constructed and translated data from files that render a perfectly coherent image," he says. In other words, they look fine on screen but won't print correctly.


Professional solid-modeling tools do a better job, but they usually require specialized training and expertise. "The products today are pretty difficult to use," admits Gonzalo Martinez, director of strategic research at Autodesk.


CAD software makers are addressing the 3D content-creation challenge in three ways: by introducing easier-to-use solid modeling tools, by offering libraries of 3D objects that give users a head start on a design, and by using specialized software such as Autodesk's 123D Photofly. This tool can combine a series of photographs of an object, taken from all sides, into a usable 3D model--a process known as photogrammetry.


Professional tool developers are working "to make complex operations more simple," says Martinez. "Things that require training today you will be able to do with little training to create complex geometries."


For example, Autodesk 123D, a free tool for CAD novices, is a much-simplified version of the vendor's professional tools.


Other products, such as Rhino, a $995 program from McNeel, are edging closer to that middle ground between complexity and capability. "It is a high-end surface/mesh modeler, but has accessible controls and an excellent context-sensitive help with video clips," says Grace.


"We are still some way off the point when a novice can draw, model and print without help from a specialist," Grace says, "But that day will come."


But creating a printable 3D object can be tricky. Designs created in a CAD program need to be "water tight," or complete. "All surfaces have to be closed and lie on top of each other or you get holes in your part," says Jon Cobb, vice president of marketing at 3D printer vendor Stratasys.
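A rough way to test for that property: in a closed ("watertight") triangle mesh, every edge must be shared by exactly two triangles. Below is a minimal Python sketch of such a check, assuming faces are given as triples of vertex indices; real mesh-repair tools go much further.

```python
# Rough watertightness test: in a closed triangle mesh every edge
# should appear in exactly two faces. Faces are (i, j, k) index triples.
from collections import Counter

def is_watertight(faces):
    edge_count = Counter()
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            edge_count[frozenset((a, b))] += 1  # edge, ignoring direction
    return all(n == 2 for n in edge_count.values())

# A tetrahedron (closed) passes; remove one face and it fails.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight(tetra))       # True
print(is_watertight(tetra[:-1]))  # False -- a "hole in your part"
```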


The design then needs to be exported to a standard file format 3D printers can use, most often the stereolithography (STL) format, originally developed by 3D Systems, which has become a de facto industry standard.
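The ASCII flavor of STL is simple enough to generate by hand: a named solid containing a list of triangular facets, each with a normal vector and three vertices. A minimal writer, for illustration only; real exporters typically compute the normals from the geometry and emit the more compact binary variant.

```python
# Minimal ASCII STL writer, for illustration.

def write_ascii_stl(path, name, facets):
    """facets: list of (normal, vertices) pairs, where normal is an
    (nx, ny, nz) tuple and vertices is a triple of (x, y, z) points."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for (nx, ny, nz), verts in facets:
            f.write(f"  facet normal {nx} {ny} {nz}\n")
            f.write("    outer loop\n")
            for x, y, z in verts:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One right-triangle facet in the z=0 plane, normal pointing up.
write_ascii_stl("demo.stl", "demo",
                [((0.0, 0.0, 1.0),
                  ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))])
```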


Until recently, the quality of STL files produced by CAD programs wasn't sufficient for 3D printing and required additional cleanup. But, Cobb says, that problem has largely gone away in professional solid modeling tools such as AutoCAD or SolidWorks. (Consumer-oriented design tools are a different story; see sidebar at left.)


Even so, Pete Basiliere, an analyst at Gartner who covers 3D printing, doesn't see consumers using the technology for personal printing of unique, one-off household items. "What's inhibiting consumer use is cost. It's too expensive for most people." Instead, he says, service bureaus may step in to fill those needs.


Another issue is that some objects need to have supports added so they don't collapse or sag before the materials fully harden. If an object needs to be supported during the printing process, the pre-processing driver software that comes with the 3D printer makes that determination and automatically adds any needed structural supports to the design.
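One common heuristic for that determination--not necessarily what any particular vendor's driver uses--is to flag facets that face steeply downward, that is, whose unit normal's z-component falls below the negative cosine of a maximum overhang angle, often around 45 degrees. A minimal sketch:

```python
# Sketch of a common overhang heuristic: flag downward-facing facets
# whose angle past vertical exceeds a threshold (45 degrees here).
# Real pre-processing software also considers what lies beneath a facet.
import math

MAX_OVERHANG_DEG = 45.0  # assumed threshold

def needs_support(unit_normal):
    nz = unit_normal[2]
    # Facing downward beyond the overhang angle => flag for support.
    return nz < -math.cos(math.radians(MAX_OVERHANG_DEG))

print(needs_support((0.0, 0.0, -1.0)))  # flat underside -> True
print(needs_support((1.0, 0.0, 0.0)))   # vertical wall  -> False
```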


Before printing this figure, an artist cleaned up the image, and extended the weapon and cape to the base on which the figure stands to provide additional support. When a figure first comes out of the printer, it is quite fragile, so support material must be removed carefully to avoid breaking off fine details such as fingertips.


The support material is usually different from the build material, and must be removed during a post-processing step that typically involves blowing off, breaking off, dissolving, melting or cutting away the unwanted material.


Price is right


Declining prices, improved quality and easier-to-use software are opening up demand for 3D printers. Commercial models--capable of cranking out industrial manufacturing prototypes--that once cost $100,000 now start at about $15,000, while personal 3D printers for the hobbyist and education market sell for less than $1,500.


"It used to be a six- or seven-figure cost," says Gartner's Basiliere.


Among industrial offerings, higher-end models add features such as the ability to print colors (although most can only print one color at a time), to run jobs faster, to print thinner layers for finer detail and to offer a larger printable area for creating larger objects.


For industrial designers doing prototyping, even an entry-level 3D printer is faster than going to a service bureau, and operating costs can come in as low as one-tenth of service bureau rates. The prices of 3D printers are now low enough to justify in departmental budgets, says Gartner's Basiliere.


Manufacturers, such as automakers, have traditionally used 3D printers in a lab or as part of a separate internal "service bureau," says Terry Wohlers, principal consultant and president of Wohlers Associates Inc., which tracks the 3D printing market. Now they are showing up in corporate offices, where they sit on the network like any other networked printer. "Because they're more affordable, now they're spread all across General Motors and Chrysler," he says.


Other industries use the technology, too. Ben White uses a 3D printer from Z Corp. to produce prototypes of window curtain poles, tracks, blinds and other hardware for Integra Products Ltd. "It's more economical to lease a printer than it is to keep sending products out for fabrication," says White, senior product design engineer. "We're at 10 to 15% of the cost of the service bureau," he says, the turnaround is faster and the models are more accurately rendered to the original design specifications. After six months the company is using the printer to produce 95% of its prototypes.


Hewlett-Packard's DesignJet 3D printer is available only in Europe.


Others report similar savings. By using an HP DesignJet for rapid prototyping, Tintometer Ltd. sped up its product development times by 40% to 60%, says industrial designer Amy Penn. And the company, which manufactures industrial instruments that measure color, also uses the 3D printer to build finished products.


The DesignJet builds testing jigs that calibrate components before they're inserted into the final instrument during the manufacturing process. The parts more precisely meet the original specifications compared to what Tintometer was able to get from a service bureau, and are just as sturdy and a lot cheaper, says Penn. The 3D printer also made it cost effective to print concept parts that sales people can show to customers. "The ROI was about six months," she says.


Penn did not disclose what she paid, but she has the DesignJet 3D color unit, which sells for 16,200 Euros, or about $21,000 U.S. The monochrome version of the DesignJet 3D printer sells for 12,500 Euros.


In terms of shipments, the market for 3D printers remains relatively small. Unit shipments for professional use grew at a compound annual rate of 37% in 2010, according to Wohlers. This includes usage by industrial engineers, architects, engineers in traditional markets such as aerospace, consumer products, electronics, tool makers and other manufacturing concerns. But that 2010 growth amounted to just 6,164 units -- a tiny fraction of the 2D printer market. In 2010 there were over 44 million traditional printers shipped worldwide, according to IDC.


With only 51,000 3D printers sold worldwide since 1988 and 2.7 million solid modeling CAD seats worldwide, Wohlers estimates that there's plenty of room for growth. By 2015, Wohlers expects, shipments of industrial 3D printers will more than double to 15,000 units.


The potential for growth is one reason why Hewlett Packard dipped a toe in the water with the introduction of the DesignJet 3D, which HP sells only in Europe. The printer is a re-branded version of market leader Stratasys' uPrint 3D printer.


Objects made easy


Although they lack the capabilities of professional solid-modeling tools, all of the tools below can generate printable 3D objects -- and they're free.


Google SketchUp


Autodesk 123D


TinkerCAD


3DTin


Hobbyist Market


A growing hobbyist market has also developed for 3D printers; people use the technology to make everything from toys to drawer pulls. Free 3D modeling tools for hobbyists (see sidebar at right) make the creation process easier, while companies such as MakerBot Industries, LLC provide low-cost plastic extrusion, or plastic jet printers.


Manufacturers also offer libraries of preconfigured objects that users can work with. For example, MakerBot offers Thingiverse, a website where users can share objects they've created. Autodesk 123D offers a similar community.


Many personal 3D printers go to educational institutions, rather than homes. "We want to get these into the hands of kids," says MakerBot CEO Bre Pettis. "It gives them access to the raw power of innovation."


Unfortunately, simple 3D design software for home hobbyists isn't suitable for professional use, and professional tools are still quite complicated to use. That leaves a big gap between consumers and industrial designers. "Today you need to be an expert CAD user to create digital content, or you need a fancy scanner to capture 3D geometry of an object you want to print," says Lewis at 3D Systems.


 


The MakerBot 3D printer, which sells for $1,500, makes 3D objects by applying successive layers of molten ABS plastic. Though it is designed for the home/hobby market, professional designers are finding such devices usable for some commercial applications. For example, Smith Engineering used a similar product to build and assemble the parts for a commercial robot prototype.


In 2010, 3D printer vendors shipped 5,978 personal 3D printers -- almost as many as sold into the professional market. But Wohlers doubts that a broad do-it-yourself at-home market will develop for personal 3D printers.


The bigger market, he says, will be the emergence of on-demand manufacturers that use industrial 3D printers or personal 3D printers costing from $500 to $5,000. They will produce unique one-off or small-quantity items tailored to consumers or businesses that don't want to bother with designing and printing items themselves, Wohlers says.


Gartner predicts that the price for professional 3D printers that now sell for $15,000 will decline to about $2,500 by 2020 and will deliver better performance and more features. But ultimately, says Basiliere, "From the manufacturer's perspective it's not the sale price of the printer but the sale of the supplies that matters most." Average consumables costs for 3D printers range from $2.50 to $10 per cubic inch, according to Basiliere.


Small-business manufacturing


The emergence of low-cost 3D printing lowers the bar for some types of manufacturing. "Companies and individuals with design talent and business savvy can start a business and start manufacturing products," Wohlers says.


After seeing what a 3D printer could do, Ed Fries, the former vice president of Microsoft Game Studios, started up FigurePrints, which uses Z Corp.'s ZPrinter machines to create one-of-a-kind models of personal avatars for World of Warcraft and Xbox Live game enthusiasts.


FigurePrints downloads the characters directly from each game site, and lets users pose them before placing an order. An artist then cleans up the object, smoothing away the series of polygons that describe the figure and adding a third dimension to some 2D elements of the image, such as a cape and hair.


Fries chose Z Corp.'s ZPrinter because it is the only 3D printer on the market that supports full-color printing -- that is, it can print an object in multiple colors at once.


He considered more traditional manufacturing techniques, such as a resin-cast process designed for low-volume production. "But you can't hand paint to the resolution we get, which is 600 dpi," he says. The resin process also cost more.


That's a key advantage of the full-color ink-jet printing approach, says Z Corp. CEO John Kawola. "Because we use ink jet heads you can print a bottle with all of the label graphics and text on it."


By using full-color printing, Z Corp.'s ZPrinter can fabricate a product bottle prototype complete with the label and text.


While plastic jet printers heat and extrude ABS plastic through an "extrusion head" that looks like a syringe or glue gun, Z Corp.'s ZPrinter builds a 3D object by spreading a thin layer of powder and then using an ink-jet print head to selectively deposit a liquid that hardens it.
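

Conceptually, that powder-and-binder cycle is simple to sketch in code. The following minimal Python sketch of a binder-jet build loop is purely illustrative: every function name and the toy layer data are hypothetical stand-ins, not Z Corp.'s actual control software.

    # Illustrative sketch of a binder-jet (powder + ink-jet) build loop.
    # Every function here is a hypothetical stand-in, not Z Corp. software.

    def spread_powder_layer(height_mm):
        print(f"roller: spread {height_mm} mm of fresh powder")

    def deposit_binder(x, y, color):
        print(f"ink-jet head: harden voxel ({x}, {y}) in {color}")

    def lower_platform(height_mm):
        print(f"platform: lower by {height_mm} mm")

    def build(layers, layer_height_mm=0.1):
        """Build an object one thin cross-section at a time."""
        for layer in layers:
            spread_powder_layer(layer_height_mm)   # fresh powder bed
            for x, y, color in layer:              # voxels in this slice
                deposit_binder(x, y, color)        # ink-jet hardens powder
            lower_platform(layer_height_mm)        # make room for next slice

    # Two toy cross-sections of a (very small) two-color part:
    build([[(0, 0, "red"), (0, 1, "red")], [(0, 0, "blue")]])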


As the layers build up, the unused powder that surrounds the object serves as a support. Once the item is finished, it goes to a cleaning station where a technician uses compressed air to remove the powder residue. The composite material, which has a polymer component, isn't as strong as ABS plastic, so FigurePrints dips each figure in a glue solution that hardens the material.


Even with the hardener solution, the final product isn't nearly as strong as injection-molded ABS plastic. Initially, some characters, which tended to have overdeveloped upper torsos but thin ankles, snapped off their bases during shipment. So artists take some license with the images, in some cases thickening ankles or extending a cape or weapon down to the base for added support.


"The texture and appearance of the finished product is OK, but isn't to the standard of a plastic injection-molded action figure you would buy at the store," Fries admits.


The colors aren't as bright, and the finished product has a texture that Fries describes as somewhat "chalky." But it works fine for models that ship in a glass display case, and the price is right: It costs about $5 per cubic inch to print a figure, not including pre- and post-processing time.




FigurePrints sells the characters for about $15 per cubic inch -- and users seem willing to pay. "A common request is for wedding cake toppers," Fries says. "Couples meet in the games and want their characters on top of the wedding cake."
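

Those per-cubic-inch figures make the rough unit economics easy to check. Here is a quick back-of-the-envelope calculation in Python; the three-cubic-inch volume is a made-up example, while the two rates come from the article (and the cost figure excludes pre- and post-processing labor).

    # Rough unit economics for a printed figure, using the article's rates.
    # The 3-cubic-inch volume is a hypothetical example.
    PRINT_COST_PER_CUIN = 5.0    # USD per cubic inch to print
    SALE_PRICE_PER_CUIN = 15.0   # USD per cubic inch FigurePrints charges

    volume = 3.0  # cubic inches
    cost = volume * PRINT_COST_PER_CUIN
    price = volume * SALE_PRICE_PER_CUIN
    print(f"print cost ${cost:.2f}, sale price ${price:.2f}, "
          f"gross margin ${price - cost:.2f}")
    # -> print cost $15.00, sale price $45.00, gross margin $30.00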


Smith also likes the idea of using 3D printers for one-off or limited-run manufacturing. "We can do small-scale production -- tens of units -- without spending the money on expensive injection-molding tools," he says. But the printers work slowly, producing up to about four runs a day; FigurePrints gets about two products per day from each of its printers.


"If you're trying to manufacture with these machines, throughput is everything," Wohlers says. Using 3D printers successfully in a manufacturing setting will require better automation of both pre-processing and post-processing steps.


This ABS plastic hand vacuum was printed on a Stratasys uPrint 3D printer using fused deposition modeling, a process that involves heating a plastic filament until it liquefies and sending it through a special syringe-like print head that extrudes it.


Cobb says Stratasys expects to cut total pre- and post-processing time for a typical print job in half, from 5 hours today to about 2.5 hours, within the next three years, and expects the price of its entry-level professional printer to drop from today's $15,000 to between $7,000 and $10,000 in the same timeframe. "In three to five years you will have the same capabilities for under $5,000," he says.


In the personal printer space, says Lewis at 3D Systems, prices will drop even further. "In the next year or two you will see us go past the $1,000 mark. In two years we'll be close to $500," she says.


How much the market will grow as prices continue to drop, and whether a mass market will ever emerge, is an open question. But as easy-to-use 3D design tools get better, and as shared 3D object libraries gain in size and sophistication, businesses and consumers may come up with new applications for the technology that haven't yet been envisioned. "3D printing is where the semiconductor business was in the 1960s," Wohlers says. "We know it is going to be big but we don't know how big."


Japanese company building 13-foot working Gundam tribute robot



>
Hajime Sakamoto, president of the Osaka-based Hajime Research Institute, is working to make Gundam fans’ dreams a reality. His latest project, already underway, is a mobile 13-foot robot with an onboard cockpit for a human pilot.


Development of the 13-foot mobile robo-suit began in 2010, though the Japanese company has been churning out humanoid robots since 2002. The robots are slowly getting larger as well; in 2007, Hajime Robot 25 was three feet tall, and in 2009, Hajime Robot 33 stood seven feet tall, reigning as one of the largest humanoid robots in the world.


HRI aims for the giant robot to be the largest in the world, capable of bipedal walking, though at the moment only one leg has been finished. The company is currently looking for sponsors to jump in and help with the project; NKK Kyousei and various contractors are working to build the giant robot’s parts.


Why would someone want to unleash a giant, possibly destructive robot suit on the world? CNET points out that the Hajime Web site’s philosophy, developed in 2002, is “to cheer people by dream power. We provide a dream to people through robotic technology.”


The 13-foot Gundam tribute may seem amazing enough, but Hajime Sakamoto’s dreams reach far into the future. The HRI president is a devout Gundam fan and aims to build a working version of the now-disassembled, 59-foot giant mecha in Shizuoka, Japan, which was created as part of the anime series’ 30th anniversary. HRI plans to create a 26-foot robot as a step toward the final giant Gundam suit, which the company president hopes to complete in eight years, just in time for the Gundam series’ 40th anniversary.

Computer Science Students Hold Lego Robot Tournament




>
At this time of year, many children will be hoping to get Lego in their Christmas stockings. Students studying computer science at Coventry University in the UK have been having some pre-Christmas fun with an advanced version of the classic child’s toy.


Lego Mindstorms are special kits that include everything you need to build a programmable robot or machine. As part of their degree course, students got the chance to have a play with the kits to learn more about the practical applications of computer science.


“The robots can be fitted with sensors which control motors and react to stimuli including light, color, sound, motion, and touch,” explained Michael Odetayo, principal lecturer in Computer Science at Coventry University, in a press release.


“The robot is controlled by an NXT intelligent micro-computer Lego brick that can be programmed to take inputs from sensors and activate the servo motors – meaning the robot reacts to its environment.”


After building their robots, the students decided to hold a competition to test the robots’ abilities.


The “Robot Tournament” consisted of two activities: a two-lap race around a Grand Prix-style racetrack against a single opponent, and navigation of an “alien” landscape in search of “food” hidden under different-colored balls, while avoiding hazards such as quicksand and dinosaurs.

“The ‘racing’ activity involved mimicking visual processing using light sensors to detect the black race circuit outline. In the second exercise the robots had to negotiate an obstacle course in a ‘lost world,’ with sensors having to detect different colored objects to win food points and avoid danger zones,” Odetayo said.


“Activities like this allow students to consolidate their learning using a practical example of programming – and have some fun, which is what learning should be.”
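

For readers curious what the students’ “racing” programs might look like, here is a minimal line-following loop in Python. It is only an illustrative sketch: read_light_sensor and set_motor_power are hypothetical stand-ins for the NXT brick’s real sensor and servo-motor calls, not the students’ actual code.

    # Minimal line-following loop of the kind used in the robot race.
    # read_light_sensor() and set_motor_power() are hypothetical stand-ins
    # for the NXT brick's real sensor/servo calls.
    import random
    import time

    def read_light_sensor():
        # Stand-in: a real robot would return reflected-light intensity (0-100).
        return random.randint(0, 100)

    def set_motor_power(left, right):
        # Stand-in: a real robot would drive its two servo motors here.
        print(f"motors: left={left} right={right}")

    THRESHOLD = 50  # readings below this mean the sensor sees the black line

    def follow_line(steps=20):
        """Zig-zag along the edge of the black race-circuit line."""
        for _ in range(steps):
            if read_light_sensor() < THRESHOLD:
                set_motor_power(60, 30)  # over the line: curve away from it
            else:
                set_motor_power(30, 60)  # off the line: curve back toward it
            time.sleep(0.05)

    follow_line()

Actual NXT programs are typically written in Lego’s graphical NXT-G environment or in text languages such as NXC, but the control idea -- read a sensor, react with the motors -- is the same.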