## Unifying probability and logic for learning

Hutter, Marcus; Lloyd, John W.; Ng, Kee Siong; Uther, William T. B.

### Description

Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values, but so far a completely satisfactory integration of logic and probability has been lacking. In particular, the inability to confirm universal hypotheses has plagued most, if not all, systems so far. We address this problem head on. The main technical problem to be discussed is the following: given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure, and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit, and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We show that probabilities satisfying (i)-(vi) exist, and present necessary and sufficient conditions (Gaifman and Cournot). The theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
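The confirmation problem in (vi) can be illustrated with a toy Bayesian mixture (an illustrative sketch, not the paper's actual construction): a purely i.i.d. model with an unknown success rate θ drawn uniformly from [0, 1] assigns prior probability 0 to the universal hypothesis "every instance is positive", so no amount of evidence ever confirms it. Placing explicit prior mass `w` on θ = 1 fixes this, and the posterior of the universal hypothesis then converges to 1 as positive instances accumulate.

```python
from fractions import Fraction

def posterior_universal(w, n):
    """Posterior probability of the universal hypothesis (theta = 1)
    after observing n positive instances in a row.

    Prior: mass w on theta = 1, mass (1 - w) spread uniformly over [0, 1].
    A run of n successes has likelihood theta**n, and the marginal
    likelihood of the uniform part is the integral of theta**n, i.e. 1/(n+1).
    """
    evidence_universal = w                        # theta = 1 predicts every success
    evidence_uniform = (1 - w) * Fraction(1, n + 1)  # integral of theta**n d theta
    return evidence_universal / (evidence_universal + evidence_uniform)

w = Fraction(1, 2)
for n in [0, 10, 100]:
    print(n, posterior_universal(w, n))
# With w = 1/2 the posterior equals (n + 1) / (n + 2): it is 1/2 before any
# data, 11/12 after 10 positive instances, and tends to 1 -- the universal
# hypothesis is confirmed, unlike under the pure uniform prior (w = 0).
```

Exact rational arithmetic via `fractions` keeps the closed form `(n + 1)/(n + 2)` visible; the point is only that a nonzero prior on the universal sentence is what makes confirmation possible.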

| Field | Value |
|---|---|
| dc.contributor.author | Hutter, Marcus |
| dc.contributor.author | Lloyd, John W. |
| dc.contributor.author | Ng, Kee Siong |
| dc.contributor.author | Uther, William T. B. |
| dc.date.accessioned | 2015-08-13T23:58:42Z |
| dc.date.available | 2015-08-13T23:58:42Z |
| dc.identifier.uri | http://hdl.handle.net/1885/14717 |
| dc.description.abstract | Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values, but so far a completely satisfactory integration of logic and probability has been lacking. In particular, the inability to confirm universal hypotheses has plagued most, if not all, systems so far. We address this problem head on. The main technical problem to be discussed is the following: given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure, and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit, and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We show that probabilities satisfying (i)-(vi) exist, and present necessary and sufficient conditions (Gaifman and Cournot). The theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic. |
| dc.publisher | Workshop on Weighted Logics for Artificial Intelligence |
| dc.relation.ispartof | IJCAI-13 Workshop on Weighted Logics for Artificial Intelligence (WL4AI-2013) |
| dc.rights | © The Author(s) |
| dc.subject | higher-order logic |
| dc.subject | probability on sentences |
| dc.subject | Gaifman |
| dc.subject | Cournot |
| dc.subject | Bayes |
| dc.subject | entropy |
| dc.title | Unifying probability and logic for learning |
| dc.type | Conference paper |
| dc.date.issued | 2013-08 |
| local.type.status | Published Version |
| local.contributor.affiliation | Hutter, M., Research School of Computer Science, The Australian National University |
| local.contributor.affiliation | Lloyd, J., Research School of Computer Science, The Australian National University |
| local.contributor.affiliation | Ng, K. S., EMC Greenplum and The Australian National University |
| dc.relation | http://purl.org/au-research/grants/arc/DP0877635 |
| local.bibliographicCitation.startpage | 65 |
| local.bibliographicCitation.lastpage | 72 |
| Collections | ANU Research Publications |
