Computer Science > Artificial Intelligence


Title: Context-Aware Policy Reuse

Authors: Siyuan Li, Fangda Gu, Guangxiang Zhu, Chongjie Zhang


Abstract: Transfer learning can greatly speed up reinforcement learning on a new task by leveraging policies from relevant source tasks. Existing policy-reuse methods either select a single best source policy for transfer without considering context, or cannot guarantee learning an optimal policy for the target task. To improve transfer efficiency while guaranteeing optimality, we develop a novel policy reuse method, called Context-Aware Policy reuSe (CAPS), that enables multi-policy transfer. Our method learns when and which source policy is best to reuse, as well as when to terminate its reuse. CAPS provides theoretical guarantees of convergence and optimality for both source policy selection and target task learning. Empirical results on a grid-based navigation domain and the Pygame Learning Environment demonstrate that CAPS significantly outperforms other state-of-the-art policy reuse methods.
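The core idea in the abstract — learning, per state (the "context"), which source policy is best to reuse — can be illustrated with a toy example. The sketch below is not the authors' CAPS algorithm: it replaces CAPS's learned selection and termination machinery with plain tabular Q-iteration over two hand-coded source policies on a small corridor task, and each "option" here executes only a single step before re-selection, so termination is trivial. All names and the environment are invented for illustration.

```python
# Toy illustration of context-dependent source-policy selection.
# NOT the CAPS algorithm: a simplified stand-in that evaluates, per state,
# which of two hand-coded source policies ("always left", "always right")
# is best to reuse, via tabular Q-iteration on a 5-state corridor.

N = 5                      # states 0..4, goal at state 4
GOAL = N - 1
GAMMA = 0.9

def source_left(s):        # source policy 1: always step left
    return -1

def source_right(s):       # source policy 2: always step right
    return +1

OPTIONS = [source_left, source_right]

def step(s, a):
    """Deterministic corridor dynamics: reward 1 on reaching the goal."""
    s2 = max(0, min(GOAL, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def q_iteration(iters=100):
    """Compute Q(state, source policy) by synchronous dynamic programming."""
    Q = [[0.0] * len(OPTIONS) for _ in range(N)]
    for _ in range(iters):
        newQ = [[0.0] * len(OPTIONS) for _ in range(N)]
        for s in range(GOAL):          # goal state is terminal
            for o, pi in enumerate(OPTIONS):
                s2, r, done = step(s, pi(s))
                newQ[s][o] = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q = newQ
    return Q

if __name__ == "__main__":
    Q = q_iteration()
    for s in range(GOAL):
        best = max(range(len(OPTIONS)), key=lambda o: Q[s][o])
        print(f"state {s}: reuse {'right' if best else 'left'}, Q = {Q[s]}")
```

In this corridor every state prefers the right-moving source policy; with a goal placed mid-corridor, the preferred source policy would differ by state — that state-dependence is the "context" in context-aware reuse, which CAPS additionally learns online rather than computing from a known model.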

Artificial Intelligence (cs.AI)

arXiv:1806.03793v4 [cs.AI]

https://doi.org/10.48550/arXiv.1806.03793




