Neuro-Symbolic Learning of Lifted Action Models from Visual Traces

dc.contributor.author: Xi, Kai
dc.contributor.author: Gould, Stephen
dc.contributor.author: Thiébaux, Sylvie
dc.date.accessioned: 2026-03-02T15:40:46Z
dc.date.available: 2026-03-02T15:40:46Z
dc.date.issued: 2024
dc.description.abstract: Model-based planners rely on action models to describe available actions in terms of their preconditions and effects. However, manually encoding such models is challenging, especially in complex domains. Numerous methods have been proposed to learn action models from examples of plan execution traces, but high-level information, such as state labels within traces, is often unavailable and needs to be inferred indirectly from raw observations. In this paper, we aim to learn lifted action models from visual traces --- sequences of image-action pairs depicting discrete successive trace steps. We present ROSAME, a differentiable neuRO-Symbolic Action Model lEarner that infers action models from traces consisting of probabilistic state predictions and actions. By combining ROSAME with a deep learning computer vision model, we create an end-to-end framework that jointly learns state predictions from images and infers symbolic action models. Experimental results demonstrate that our method succeeds in both tasks, using different visual state representations, with the learned action models often matching or even surpassing those created by humans.
dc.description.sponsorship: This work was supported by Australian Research Council grant DP220103815 and by the Artificial and Natural Intelligence Toulouse Institute (ANITI) under the grant agreement ANR-19-PI3A-000.
dc.description.status: Peer-reviewed
dc.format.extent: 10
dc.identifier.isbn: 978-1-57735-889-3
dc.identifier.other: dblp:conf/icaps/XiGT24
dc.identifier.scopus: 85195925127
dc.identifier.uri: https://hdl.handle.net/1885/733806995
dc.relation.ispartof: ICAPS: Proceedings of the Thirty-Fourth International Conference on Automated Planning and Scheduling
dc.rights: © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). Free access via publisher site.
dc.title: Neuro-Symbolic Learning of Lifted Action Models from Visual Traces
dc.type: Conference paper
dspace.entity.type: Publication
local.bibliographicCitation.lastpage: 662
local.bibliographicCitation.startpage: 653
local.contributor.affiliation: Gould, Stephen; School of Computing, ANU College of Systems and Society, The Australian National University
local.contributor.affiliation: Thiébaux, Sylvie; School of Computing, ANU College of Systems and Society, The Australian National University
local.identifier.citationvolume: 34
local.identifier.doi: 10.1609/icaps.v34i1.31528
local.identifier.pure: 7326f5b6-1919-4a28-b84f-15f35553e887
local.type.status: Published