Title: Variational Bayesian Sequence to Sequence Networks for Memory-Efficient Sign Language Translation

Authors: Harris Partaourides, Andreas Voskou, Dimitrios Kosmopoulos, Sotirios Chatzis and Dimitris Metaxas

Conference: ISVC'20

Tags: Deep Learning, Gloss to Text, Sign Language Translation, Weight Compression

Abstract: Memory-efficient continuous Sign Language Translation is a significant challenge for the development of assistive technologies with real-time applicability for the deaf. In this work, we introduce a paradigm for designing recurrent deep networks in which the output of the recurrent layer is derived from appropriate arguments from nonparametric statistics. We propose a novel variational Bayesian sequence-to-sequence network architecture that consists of (a) a full Gaussian posterior distribution for data-driven memory compression and (b) a nonparametric Indian Buffet Process prior for regularization, applied to the non-gate weights of the Gated Recurrent Unit. We dub our approach the Stick-Breaking Recurrent Network and show that it can achieve substantial weight compression without diminishing modeling performance.
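The Indian Buffet Process prior mentioned in the abstract is commonly realized via a stick-breaking construction, where cumulative products of Beta draws give decreasing activation probabilities for latent units. The sketch below is an illustrative assumption of how such a prior could prune columns of a GRU non-gate weight matrix; all names, shapes, and parameters here are hypothetical and are not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_probs(alpha: float, K: int) -> np.ndarray:
    """IBP stick-breaking: pi_k = prod_{j<=k} v_j with v_j ~ Beta(alpha, 1).

    The probabilities pi_k are monotonically decreasing, so higher-index
    units are progressively less likely to be switched on.
    """
    v = rng.beta(alpha, 1.0, size=K)
    return np.cumprod(v)

K = 64                              # hypothetical number of candidate GRU units
pi = stick_breaking_probs(alpha=5.0, K=K)
z = rng.random(K) < pi              # binary utility indicator per unit

# Mask a hypothetical non-gate weight matrix: columns whose indicator is
# off are dropped entirely, which is where the weight compression comes from.
W = rng.normal(size=(32, K))
W_compressed = W[:, z]
print(W_compressed.shape[1], "of", K, "units retained")
```

In a variational treatment, the Beta sticks and binary indicators would be given posterior distributions and trained end-to-end; this sketch only shows the generative masking mechanism.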