Modeling Grasp Motor Imagery

Title: Modeling Grasp Motor Imagery
Author: Veres, Matthew
Department: School of Engineering
Program: Engineering
Advisor: Taylor, Graham; Moussa, Medhat
Abstract: Humans have an innate ability to perform complex grasping maneuvers, yet transferring this ability to robots is an extremely daunting task. A primary source of this difficulty is that robots will eventually need to operate in unstructured environments, and will therefore require the capacity to learn and generalize between scenarios. A second issue is that grasping does not follow the classical one-to-one paradigm: a single grasp may be applied to many different objects, and a single object may be grasped in many different ways. In this thesis, we investigate how techniques within the Deep Learning (DL) framework can be leveraged to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. This work explores a paradigm for learning integrated object-action representations, and demonstrates its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset.
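The one-to-many relationship described in the abstract can be made concrete with a small sketch. This is not the thesis's actual model; it only illustrates the idea that a single object conditions a *distribution* over grasp configurations rather than a single answer. Here that distribution is a hand-written mixture of Gaussian grasp "modes" (all object names, dimensions, and numbers are illustrative); a deep generative model plays the same role with learned, high-dimensional representations.

```python
import random

# Illustrative only: object -> list of (mode mean, mode std) over a toy
# 6-D grasp configuration. A "mug" admits two distinct grasp modes,
# mirroring the abstract's point that one object supports many grasps.
GRASP_MODES = {
    "mug":  [([0.2] * 6, 0.01),   # e.g. a handle grasp
             ([0.8] * 6, 0.01)],  # e.g. a rim grasp
    "ball": [([0.5] * 6, 0.02)],  # a single enveloping grasp
}

def sample_grasp(obj, rng):
    """Draw one grasp configuration for `obj` from its mixture of modes."""
    mean, std = rng.choice(GRASP_MODES[obj])
    return [m + rng.gauss(0.0, std) for m in mean]

# Repeated draws for the same object yield distinct grasps (multimodality):
rng = random.Random(0)
grasps = [sample_grasp("mug", rng) for _ in range(20)]
```

Sampling is the key design point: because the mapping is one-to-many, a model that simply regresses a single "best" grasp per object would average the modes together and produce an invalid in-between grasp, which is why the thesis turns to generative approaches.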
Date: 2016-08
Rights: Attribution 2.5 Canada
Terms of Use: All items in the Atrium are protected by copyright with all rights reserved unless otherwise indicated.

Files in this item

File: veres_matthew_201609_masc.pdf
Size: 5.495 MB
Format: PDF
Description: main article

Except where otherwise noted, this item's license is described as Attribution 2.5 Canada.