New papers dividing logical uncertainty into two subproblems

I’m happy to announce two new technical results related to the problem of logical uncertainty, perhaps our most significant results from the past year. In brief, these results split the problem of logical uncertainty into two distinct subproblems, each of which we can now solve in isolation. The solutions for each subproblem are available in two new papers, based on work spearheaded by Scott Garrabrant: “Inductive coherence”1 and “Asymptotic convergence in online learning with unbounded delays.”2 The remaining problem, in light of these results, is to find a unified set of methods that solve both at once.

To give some background on the problem: Modern probability theory models reasoners’ empirical uncertainty, their uncertainty about the state of a physical environment, e.g., “What’s behind this door?” However, it can’t represent reasoners’ logical uncertainty, their uncertainty about statements like “this Turing machine halts” or “the twin prime conjecture has a proof that is less than a gigabyte long.”3 Roughly speaking, if you give a classical probability distribution variables for statements that could be deduced in principle, then the axioms of probability theory force you to put probability either 0 or 1 on those statements, because you’re not allowed to assign positive probability to contradictions. In other words, modern probability theory assumes that all reasoners know all the consequences of all the things they know, even if deducing those consequences is intractable.

We want a generalization of probability theory that allows us to model reasoners that have uncertainty about statements that they have not yet evaluated. Furthermore, we want to understand how to assign “reasonable” probabilities to claims that are too expensive to evaluate. Imagine an agent considering whether to use quicksort or mergesort to sort a particular dataset. They might know that quicksort typically runs faster than mergesort, but that doesn’t necessarily apply to the current dataset. They could in principle figure out which one uses fewer resources on this dataset, by running both of them and comparing, but that would defeat the purpose. Intuitively, they have a fair bit of knowledge that bears on the claim “quicksort runs faster than mergesort on this dataset,” but modern probability theory can’t tell us which information they should use and how.

What does it mean for a reasoner to assign “reasonable probabilities” to claims that they haven’t computed, but could compute in principle? Without probability theory to guide us, we’re reduced to using intuition to identify properties that seem desirable, and then investigating which ones are possible. Intuitively, there are at least two properties we would want logically non-omniscient reasoners to exhibit:

1. They should be able to notice patterns in what is provable about claims, even before they can prove or disprove the claims themselves. For example, consider the claims “this Turing machine outputs an odd number” and “this Turing machine outputs an even number.” A good reasoner thinking about those claims should eventually recognize that they are mutually exclusive, and assign them probabilities that sum to at most 1, even before they can run the relevant Turing machine.

2. They should be able to notice patterns in sentence classes that are true with a certain frequency.
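The mutual-exclusivity constraint from the Turing machine example can be made concrete with a toy sketch. This is not from the papers themselves; the belief-state representation and function name below are hypothetical, purely for illustration: a reasoner’s probability assignments are a dictionary over sentences, and a coherence check flags pairs the reasoner knows to be mutually exclusive but whose probabilities sum to more than 1.

```python
# Toy illustration (hypothetical, not the papers' formalism): detect when
# probabilities assigned to known-mutually-exclusive sentences sum past 1.

def exclusivity_violations(beliefs, exclusive_pairs):
    """Return pairs of mutually exclusive sentences whose assigned
    probabilities sum to more than 1 -- an incoherence a good reasoner
    should eventually notice, even before running the relevant machine."""
    violations = []
    for a, b in exclusive_pairs:
        if beliefs[a] + beliefs[b] > 1.0:
            violations.append((a, b))
    return violations

beliefs = {
    "this Turing machine outputs an odd number": 0.7,
    "this Turing machine outputs an even number": 0.6,  # 0.7 + 0.6 > 1
}
pairs = [("this Turing machine outputs an odd number",
          "this Turing machine outputs an even number")]

print(exclusivity_violations(beliefs, pairs))
```

The check uses only the known logical relationship between the two sentences, never the machine’s actual output, mirroring the point above: the constraint can be enforced before the claims themselves are decided.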