
Toward ergonomic risk prediction via segmentation of indoor object manipulation actions using spatiotemporal convolutional networks

Source
IEEE Robotics and Automation Letters
Date Issued
2019-10-01
Author(s)
Parsa, Behnoosh
Samani, Ekta U.
Hendrix, Rose
Devine, Cameron
Singh, Shashi M.
Devasia, Santosh
Banerjee, Ashis G.
DOI
10.1109/LRA.2019.2925305
Volume
4
Issue
4
Abstract
Automated real-time prediction of the ergonomic risks of manipulating objects is a key unsolved challenge in developing effective human-robot collaboration systems for logistics and manufacturing applications. We present a foundational paradigm to address this challenge by formulating the problem as one of action segmentation from RGB-D camera videos. Spatial features are first learned from the video frames using a deep convolutional model and then fed sequentially to temporal convolutional networks, which semantically segment the frames into a hierarchy of actions classified as ergonomically safe, requiring monitoring, or needing immediate attention. For performance evaluation, in addition to an open-source kitchen dataset, we collected a new dataset comprising 20 individuals picking up and placing objects of varying weights to and from cabinet and table locations at various heights. Results show very high (87%-94%) F1 overlap scores between the ground-truth and predicted frame labels for videos lasting over 2 min and consisting of a large number of actions.
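The F1 overlap score reported above is the standard segmental metric for action segmentation: per-frame label sequences are collapsed into contiguous segments, and a predicted segment counts as a true positive if it overlaps a same-label ground-truth segment with IoU at or above a threshold. The sketch below is a minimal, illustrative implementation of that metric under common conventions (one-to-one matching, a single IoU threshold); function names and the threshold default are assumptions, not taken from the paper.

```python
def segments(labels):
    """Collapse a per-frame label sequence into (label, start, end) runs,
    with end exclusive."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i))
            start = i
    return segs

def f1_overlap(gt, pred, tau=0.5):
    """Segmental F1@tau (illustrative sketch): a predicted segment is a true
    positive if its IoU with an unmatched same-label ground-truth segment is
    >= tau; each ground-truth segment can be matched at most once."""
    gt_segs, pred_segs = segments(gt), segments(pred)
    used = [False] * len(gt_segs)
    tp = 0
    for lbl, s, e in pred_segs:
        best_iou, best_j = 0.0, -1
        for j, (gl, gs, ge) in enumerate(gt_segs):
            if used[j] or gl != lbl:
                continue
            inter = max(0, min(e, ge) - max(s, gs))
            union = max(e, ge) - min(s, gs)
            iou = inter / union
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= tau:
            tp += 1
            used[best_j] = True
    fp = len(pred_segs) - tp      # unmatched predictions
    fn = len(gt_segs) - tp        # unmatched ground-truth segments
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)
```

Because matching is segment-level rather than frame-level, small boundary shifts between predicted and ground-truth segments do not hurt the score, while over- or under-segmentation is penalized through false positives and false negatives.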
Publication link
https://arxiv.org/pdf/1902.05176
URI
https://d8.irins.org/handle/IITG2025/23176
Subjects
Action segmentation | Computer vision for automation | Deep learning in robotics and automation | Ergonomic safety | Human-centered automation