Is there a formal way you’d define this? My first attempt is something like “information that, if it were different, would change my answer”
I’d say that the rule is: “To construct a probability experiment, use the minimum generalization that still allows you to model your uncertainty.”
In the case of the 1,253,725,569th digit of pi, if I try to construct a probability experiment consisting only of checking this particular digit, I fail to model my uncertainty, as I don’t yet know what the value of this digit is.
So instead I use a more general probability experiment of checking any digit of pi that I don’t know. This allows me to account for my uncertainty.
Now, I may worry that I overdid it and have abstracted away some relevant information, so I check:
- Does knowing that the digit in question is specifically the 1,253,725,569th affect my credence?
- Not until I receive some evidence about the value of specifically the 1,253,725,569th digit of pi.
- So until then this information is not relevant.
Unrelatedly, would you agree that there’s not really a meaningful difference between logical and physical uncertainties?
Yes. I’m making this point here:
We can notice that coin tossing is, in fact, similar to not knowing whether some digit of pi is even or odd. There are two outcomes with an equal ratio among the iterations of the probability experiment. I can take the model from coin tossing, apply it to the evenness of some digit of pi unknown to me, and get a correct result. So we can generalize even further and refer to both of them, and to any other probability experiment with the same properties, as:
Sample(2)
There is no particular need to talk about logical and physical uncertainty as different things. It’s just a historical artifact of the confused philosophical approach of possible worlds, and I’m presenting a better way.
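The generalization named above can be sketched in code. This is a minimal illustration, with `Sample` as a hypothetical name taken from the quoted post; nothing in the class is specific to coins or to pi, which is exactly the point:

```python
import random

class Sample:
    """A probability experiment with n equiprobable outcomes."""

    def __init__(self, n, seed=None):
        self.n = n
        self.rng = random.Random(seed)

    def draw(self):
        # One iteration of the experiment: an outcome in {0, ..., n-1}
        return self.rng.randrange(self.n)

# A fair coin toss and "is this unknown digit of pi even?" are both
# instances of the same experiment: two outcomes in an equal ratio.
experiment = Sample(2, seed=0)
outcomes = [experiment.draw() for _ in range(10_000)]
ratio = outcomes.count(0) / len(outcomes)  # close to 0.5
```

Whether the two outcomes are heads/tails or even/odd makes no difference to the model, which is why the same `Sample(2)` covers both.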
Logical uncertainty is where you could find the answer in principle but haven’t done so; physical uncertainty is where you don’t know how to find the answer.
Even this difference is not real. Consider:
A coin is tossed and put into an opaque box, without showing you the result. What is the probability that the result of this particular toss was Heads?
This is physical uncertainty. And yet I do know how to find the answer: all I need to do is remove the opaque box and look. Nevertheless, I can talk about my credence before I have looked at the coin.
The exact same situation holds for not knowing a particular digit of pi. Yes, I do know a way to find the answer: google an algorithm for calculating any digit of pi and give it my digit’s position as input. Nevertheless, I can still talk about my credence before I have performed all these actions.
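For concreteness, here is one way those actions could look. This is a sketch using Machin's formula with integer arithmetic, not the algorithm the author has in mind; it is only practical for modest positions, and real digit-extraction algorithms (such as BBP for hexadecimal digits) are far more efficient:

```python
def arctan_inv(x, prec):
    # arctan(1/x) scaled by 10**prec, via its Taylor series in integer arithmetic
    power = 10**prec // x
    total = power
    x_squared = x * x
    divisor = 1
    while power:
        power //= x_squared
        divisor += 2
        term = power // divisor
        # The series alternates: subtract when divisor is 3, 7, 11, ...
        total += -term if divisor % 4 == 3 else term
    return total

def nth_pi_digit(n):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    prec = n + 10  # guard digits against truncation error
    pi_scaled = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    # Index 0 is the leading 3; n >= 1 gives the n-th decimal place
    return int(str(pi_scaled)[n])
```

Before running `nth_pi_digit(n)` for some unknown position, credence 1/10 per value is the right model; after running it, the uncertainty is simply gone.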
In the case of the 1,253,725,569th digit of pi, if I try to construct a probability experiment consisting only of checking this particular digit, I fail to model my uncertainty, as I don’t yet know what the value of this digit is.
Ok, let me see if I’m understanding this correctly: if the experiment is checking the X-th digit specifically, you know that it must be a specific digit, but you don’t know which, so you can’t make a coherent model. So you generalize up to checking an arbitrary digit, where you know that the results are distributed evenly among {0...9}, so you can use this as your model.
The first part about not having a coherent model sounds a lot like the frequentist idea that you can’t generate a coherent probability for a coin of unknown bias—you know that it’s not 1⁄2 but you can’t decide on any specific value.
Now, I may worry that I overdid it and have abstracted away some relevant information, so I check:
- Does knowing that the digit in question is specifically the 1,253,725,569th affect my credence?
This seems equivalent to my definition of “information that would change your answer if it was different”, so it looks like we converged on similar ideas?
This is physical uncertainty.
I’d argue that it’s physical uncertainty before the coin is flipped, but logical certainty after. After the flip, the coin’s state is unknown the same way the X-th digit of pi is unknown—the answer exists and all you need to do is look for it.
Ok, let me see if I’m understanding this correctly: if the experiment is checking the X-th digit specifically, you know that it must be a specific digit, but you don’t know which, so you can’t make a coherent model. So you generalize up to checking an arbitrary digit, where you know that the results are distributed evenly among {0...9}, so you can use this as your model.
Basically yes. Strictly speaking it’s not just any arbitrary digit, but any digit about whose value you know exactly as much as you know about the value of X.
For any digit you can execute this algorithm:
- Check whether you know more (or less) about it than you know about X.
- Yes: go to the next digit.
- No: add it to the probability experiment.

As a result you get a collection of digits about whose values you know exactly as much as you know about X, and so you can use them to estimate your credence for X.
The first part about not having a coherent model sounds a lot like the frequentist idea that you can’t generate a coherent probability for a coin of unknown bias—you know that it’s not 1⁄2 but you can’t decide on any specific value.
Yes. As I say in the post:
By the same logic tossing a coin is also deterministic, because if we toss the same coin exactly the same way in exactly the same conditions, the outcome is always the same. But that’s not how we reason about it. Just like we’ve generalized the coin tossing probability experiment from multiple individual coin tosses, we can generalize a “checking whether some previously unknown digit of pi is even or odd” probability experiment from multiple individual checks of different unknown digits of pi.
It has always struck me as quite ironic how many Bayesians mock frequentists for not being able to conceptualize the probability of a coin of unknown fairness, and then make the exact same mistake by not being able to conceptualize the probability of a specific digit of pi whose value is unknown.
This seems equivalent to my definition of “information that would change your answer if it was different”, so it looks like we converged on similar ideas?
I think we did!
I’d argue that it’s physical uncertainty before the coin is flipped, but logical certainty after. After the flip, the coin’s state is unknown the same way the X-th digit of pi is unknown—the answer exists and all you need to do is look for it.
That’s not how people usually use these terms. The uncertainty about the state of the coin after the toss is describable within the framework of possible worlds, just like uncertainty about a future coin toss, but uncertainty about a digit of pi isn’t.
Moreover, isn’t it the same before the flip? It’s not that a coin toss is “objectively random”. At the very least, the answer also exists in the future, and all you need to do is wait a bit for it to be revealed.
The core principle is the same: there is in fact some value that the probability experiment function takes in this iteration, but you don’t know which. You can take some actions (look under the box, do some computation, or just wait a couple of seconds) to learn the answer. But you can also reason about the state of your current uncertainty before these actions are taken.
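The principle can be illustrated directly. This is a minimal sketch; the seed and iteration count are arbitrary choices:

```python
import random

def toss_into_box(rng):
    # The toss has already happened: the box contains a definite result.
    return rng.choice("HT")

box = toss_into_box(random.Random())  # a fixed value, but we haven't looked yet

# Before looking, we reason about the general experiment this toss is one
# iteration of, by counting the outcome ratio across many iterations:
rng = random.Random(0)
iterations = [toss_into_box(rng) for _ in range(10_000)]
credence_heads = iterations.count("H") / len(iterations)  # near 0.5

# Looking under the box (reading `box`) collapses the credence to 0 or 1,
# but it doesn't retroactively change what the credence was before looking.
```

Substituting "compute the digit" or "wait a couple of seconds" for "open the box" changes nothing about this structure.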
That’s not how people usually use these terms. The uncertainty about the state of the coin after the toss is describable within the framework of possible worlds, just like uncertainty about a future coin toss, but uncertainty about a digit of pi isn’t.
Oops, that’s my bad for not double-checking the definitions before I wrote that comment. I think the distinction I was getting at was more like known unknowns vs unknown unknowns, which isn’t relevant in platonic-ideal probability experiments like the ones we’re discussing here, but is useful in real-world situations where you can look for more information to improve your model.
Now that I’m cleared up on the definitions, I do agree that there doesn’t really seem to be a difference between physical and logical uncertainty.