# Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment

An official quick-reproduction repo for the paper Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment, published at IROS 2019, by Yuhong Deng*, Xiaofeng Guo*, Yixuan Wei*, Kai Lu*, Bin Fang, Di Guo, Huaping Liu, and Fuchun Sun.

This work combines an affordance map with an active exploration policy learned by deep reinforcement learning to increase the manipulation success rate. In particular, we designed a composite robotic manipulator with two parallel fingers and a suction cup. By introducing this strategy into the grasping process, we achieve a clear improvement in robot grasping in cluttered scenes. Code is mainly by Yixuan Wei*.

## A new operation hand design

We designed a new type of robot manipulator with a suction cup and parallel fingers; for details, please refer to the original paper.

## DQN & Affordance map

This repo is mainly for a quick reproduction of the DQN model and training scheme used in the paper. Note that this repo is not actively maintained; it is intended only as a reference for rebuilding the pipeline. The directory affordance_model contains the inference code (in Lua & Torch) for the Affordance map model. To create an affordance map for each image, please download the corresponding model, put it in a suitable location, and update line 22 of infer.lua to point to it. The directory DQN contains our model, which combines a U-Net structure with DQN to output actions over sub-pixel-wise locations; for details, please see its ReadMe.md.
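For orientation, below is a minimal, illustrative sketch (in PyTorch) of how a U-Net-style fully convolutional Q-network can map an input image (e.g. an RGB observation stacked with its affordance map) to a dense map of Q-values, one per pixel location, with the greedy action taken at the argmax. The class name PixelQNet, channel counts, and input layout are assumptions for illustration only and are not taken from the DQN directory; see its ReadMe.md for the actual model.

```python
import torch
import torch.nn as nn


class PixelQNet(nn.Module):
    """Illustrative U-Net-style Q-network: image in, per-pixel Q-values out.
    Layer sizes and channel counts are assumptions, not the repo's actual model."""

    def __init__(self, in_channels=4):  # e.g. RGB + 1-channel affordance map (assumed)
        super().__init__()
        self.enc1 = self._block(in_channels, 32)   # full resolution
        self.enc2 = self._block(32, 64)            # 1/2 resolution
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = self._block(64, 128)     # 1/4 resolution
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = self._block(128, 64)           # concat with skip from enc2
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = self._block(64, 32)            # concat with skip from enc1
        self.q_head = nn.Conv2d(32, 1, kernel_size=1)  # one Q-value per pixel

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.q_head(d1)  # shape: (N, 1, H, W)


if __name__ == "__main__":
    net = PixelQNet(in_channels=4)
    obs = torch.randn(1, 4, 224, 224)           # dummy RGB + affordance input
    q_map = net(obs)                            # dense Q-value map over the image
    flat_idx = q_map.view(1, -1).argmax(dim=1)  # greedy action = highest-Q pixel
    y, x = divmod(flat_idx.item(), q_map.shape[-1])
    print("greedy action location:", (y, x))
```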
