
Content provided by Itzik Ben-Shabat. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Itzik Ben-Shabat or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

Yicong Hong - VLN BERT

22:57
 

PAPER TITLE:
"VLN BERT: A Recurrent Vision-and-Language BERT for Navigation"
AUTHORS:
Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould
ABSTRACT:
Accuracy of many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application for the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty of adapting the BERT architecture to the partially observable Markov decision process present in VLN, requiring history-dependent attention and decision making. In this paper we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring expression tasks simultaneously.
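The core idea of the paper, a state token that attends over the instruction and the current visual observation and is carried forward as the agent's recurrent state, can be sketched roughly as follows. This is a toy NumPy illustration under assumed shapes, with single-head, untrained attention; it is not the authors' implementation (see their repository above for the real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # scaled dot-product attention for a single query vector
    scores = keys @ query / np.sqrt(query.shape[-1])
    weights = softmax(scores)
    return weights @ values

def navigation_step(state, lang_tokens, vis_tokens):
    # the state token attends over language and visual tokens;
    # the attention output becomes the next step's state
    context = np.concatenate([lang_tokens, vis_tokens], axis=0)
    return attend(state, context, context)

rng = np.random.default_rng(0)
d = 8
state = rng.standard_normal(d)         # initial state (e.g. from the [CLS] token)
lang = rng.standard_normal((5, d))     # encoded instruction tokens (fixed per episode)
for t in range(3):                     # three navigation steps
    vis = rng.standard_normal((4, d))  # per-step visual observations
    state = navigation_step(state, lang, vis)
print(state.shape)  # (8,)
```

The point of the sketch is the loop: instead of a separate recurrent decoder, the same attention machinery both reads the observations and updates the state, which is how the paper avoids a full encoder-decoder architecture.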
CODE:
💻 https://github.com/YicongHong/Recurrent-VLN-BERT
LINKS AND RESOURCES
👱Yicong's page
RELATED PAPERS:
📚 Attention is All You Need
📚 Towards learning a generic agent for vision-and-language navigation via pre-training
CONTACT:
-----------------
If you would like to be a guest, sponsor or just share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com
This episode was recorded on April 16th, 2021.
SUBSCRIBE AND FOLLOW:
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: https://bit.ly/3eQOgwP
#talkingpapers #CVPR2021 #VLNBERT
#VLN #VisionAndLanguageNavigation #VisionAndLanguage #machinelearning #deeplearning #AI #neuralnetworks #research #computervision #artificialintelligence

