Research and Implementation of Relay Race Techniques Based on the NAO Robot: Foreign Literature Translation Materials


Wuhan University of Technology Graduation Project (Thesis)

Foreign Literature Translation

Graduation Project (Thesis) Title: Research and Implementation of Relay Race Techniques Based on the NAO Robot

Foreign Original Text 1:

Control of a humanoid robot by a noninvasive brain-computer interface in humans

Christian J Bell, Pradeep Shenoy, Rawichote Chalodhorn and Rajesh P N Rao

Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA

E-mail: pshenoy@cs.washington.edu

Received 25 March 2008

Accepted for publication 9 April 2008

Published 15 May 2008

Online at stacks.iop.org/JNE/5/214

Abstract

We describe a brain-computer interface for controlling a humanoid robot directly using brain signals obtained non-invasively from the scalp through electroencephalography (EEG). EEG has previously been used for tasks such as controlling a cursor and spelling a word, but it has been regarded as an unlikely candidate for more complex forms of control owing to its low signal-to-noise ratio. Here we show that by leveraging advances in robotics, an interface based on EEG can be used to command a partially autonomous humanoid robot to perform complex tasks such as walking to specific locations and picking up desired objects. Visual feedback from the robot's cameras allows the user to select arbitrary objects in the environment for pick-up and transport to chosen locations. Results from a study involving nine users indicate that a command for the robot can be selected from four possible choices in 5 s with 95% accuracy. Our results demonstrate that an EEG-based brain-computer interface can be used for sophisticated robotic interaction with the environment, involving not only navigation as in previous applications but also manipulation and transport of objects.

Advances in neuroscience and computer technology have made possible a number of recent demonstrations of direct brain control of devices such as a cursor on a computer screen [1-5] and various prosthetic devices [6-8]. Such brain-computer interfaces (BCIs) could potentially lead to sophisticated neural prosthetics and other assistive devices for paralyzed and disabled patients. Some of the more complex demonstrations of control, such as the control of a prosthetic limb, have relied on invasive techniques for recording brain signals [6-8] (although see also [9, 10]). Non-invasive techniques, such as electroencephalography (EEG) recorded from the scalp, have been used in interfaces for cursor control [1-3] and spelling [11, 12], but the low bandwidth offered by such non-invasive signals (20-30 bits min⁻¹) makes their use in more complex systems difficult. One approach to overcoming this is to incorporate increasing autonomy in the agent that executes BCI commands, for example a wheelchair with obstacle avoidance [13-15]. Extending this line of work, we demonstrate here that we can leverage advances in robotics and machine learning and, in particular, make use of a sophisticated humanoid robot which requires only high-level commands from the user. The robot autonomously executes those commands without requiring tedious moment-by-moment supervision. By using a dynamic, image-based BCI to select between alternatives, our system can seamlessly incorporate newly discovered objects and interaction affordances in the environment. This frees the user from having to exercise control at a very low level while allowing non-invasive signals such as EEG to be used as low-bandwidth control signals. Such an approach is consistent with a cognitive approach to neural prosthetics [16].
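To put the bandwidth figure above in context, the standard Wolpaw information-transfer-rate formula can be applied to the selection performance reported later in the paper (four choices, 95% accuracy, 5 s per selection). The following minimal Python sketch is ours, not part of the original paper; it simply evaluates that formula.

```python
import math

def wolpaw_itr(n_choices: int, accuracy: float, seconds_per_selection: float) -> float:
    """Information transfer rate in bits/min (standard Wolpaw formula)."""
    n, p = n_choices, accuracy
    bits = math.log2(n)  # bits per selection at perfect accuracy
    if 0.0 < p < 1.0:
        # Penalty for imperfect accuracy, errors spread over n-1 wrong choices.
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)  # selections per minute

# Figures reported in this paper: 4 choices, 95% accuracy, 5 s per selection.
print(f"{wolpaw_itr(4, 0.95, 5.0):.1f} bits/min")  # -> 19.6 bits/min
```

At roughly 19.6 bits min⁻¹, such a selection rate sits at the edge of the 20-30 bits min⁻¹ range quoted above, which is why the robot's autonomy, rather than raw signal bandwidth, carries the complexity of the task.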


In our interface, the subject's command to the robot is determined based on a visually evoked EEG response known as P3 (or P300) [11, 17, 18], which is produced when an object that the user is attending to suddenly changes (e.g., flashes). Similar techniques have been used previously in speller paradigms [18, 19] and in control of a robotic arm [9] or a wheelchair [14, 15]. In our case, the P3 is used to discern which object the robot should pick up and which location the robot should bring the object to. The robot transmits images of objects discovered by its cameras back to the user for selection. The user is instructed to attend to the image of the object of their choice while the border around each image is flashed in a random order. Machine learning techniques are used to classify the subject's response to a particular image flash as containing a P3 or not, thereby allowing us to infer the user's choice of object. A similar procedure is used to select a destination location. We present results from a study involving nine human subjects that explores the effects of varying the amount of time needed for decoding brain signals, the number of choices available to the user and the customization of the interface to a particular user. Our results show that a command for the robot can be selected from four possible choices in 5 s with 95% accuracy.
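The selection step described above reduces, at its core, to scoring flash-locked EEG epochs with a trained P3 detector and choosing the candidate whose flashes elicit the strongest average response. The sketch below illustrates this idea under stated assumptions only; the array shapes, the random placeholder data, and the linear weight vector `w` are all hypothetical, and the paper does not specify this particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 candidate images, 10 flashes per candidate,
# 32 EEG channels, 200 samples per post-flash epoch.
N_CANDIDATES, N_FLASHES, N_CH, N_SAMP = 4, 10, 32, 200

# epochs[c, f] holds the EEG epoch recorded after the f-th flash of
# candidate c; random placeholder data stands in for real recordings.
epochs = rng.standard_normal((N_CANDIDATES, N_FLASHES, N_CH, N_SAMP))

# Placeholder linear P3 detector over the flattened epoch; in practice the
# weights would be learned offline (e.g., with LDA or a linear SVM) from
# labelled P3 / non-P3 training epochs.
w = rng.standard_normal(N_CH * N_SAMP)

def candidate_scores(epochs: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Mean detector score over all flashes of each candidate."""
    flat = epochs.reshape(epochs.shape[0], epochs.shape[1], -1)  # (c, f, ch*samp)
    return (flat @ w).mean(axis=1)  # average the score over flashes

chosen = int(np.argmax(candidate_scores(epochs, w)))
print(f"Selected candidate image: {chosen}")
```

Averaging over repeated flashes is what makes the scheme robust at EEG's low signal-to-noise ratio: individual epochs are noisy, but the P3 adds coherently across flashes of the attended image while noise averages out.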

1. Materials and methods

1.1. Human subjects

Nine subjects (eight male and one female) were recruited from the student population at the University of Washington. The subjects had no known neurological conditions and had not participated in prior BCI experiments. The study was approved by the University of Washington Human Subjects Division and each user gave informed consent. Subjects received a small monetary compensation for their participation in the experiment.

1.2. User study protocol

The accuracy of the P3-based interface in decoding the user's intent from brain signals was tested in a user study without the robot i
