It would be good if, in a post like this, you acknowledged the very serious criticisms of Rawls’s “veil of ignorance” idea. Of course, that would seriously undermine the thesis of this post as well…
You say that the three listed things are your “findings”, which are the result of you “investigating” “the idea of ‘ownership’”. Could you say more about this? What was the nature of this “investigation”? By what means did you proceed; what sort of method or approach did you use? What questions did you start with? In other words, could you tell us more about how you got here?
You know, sometimes I think “my reaction to this comment is hardly worthy of a whole reply; I should just use one of them newfangled ‘react’ things”; and I log on to actual LW (as opposed to GW), and look for the react I want; and every time I do this, the react I want is not available.
For example, this comment I’m replying to would be perfect for an “obvious LLM slop” react. But there’s no such thing! Might this oversight be rectified, @habryka?
I read your comment several times, and looked up some Wikipedia articles about this (fascinating!) taxonomic fact (which I appreciate you noting).
However, I don’t actually see how what you wrote disagrees with what Zack wrote? (Or was that not your intention?)
Hm, I guess I haven’t quite got my meaning across. Let me try again.
You gave what you described as an example (explaining who Neil deGrasse Tyson or Barack Obama is to an antebellum slave-owner, and expecting to have difficulty doing so, due to the latter’s preconceptions making it impossible to understand the simple truth) of a general phenomenon (having difficulty explaining something to someone, due to that person’s preconceptions making it impossible to understand the simple truth).
And I am saying that your purported example is not, in fact, an example of the general phenomenon which you are describing. In the case you provided as an example, the slave-owner would not in fact have difficulty understanding the simple truth.
This should make us (including you!) lower our probability estimate of the purported general phenomenon being a real thing at all.
You also specified a remedy for this purported problem—namely, lying. But if the phenomenon you describe is not real, or if it’s even much rarer than you think (a possibility we must surely take seriously, given that we have just demonstrated that your ability to recognize a situation as belonging to this class is worse than you thought it to be), then we must also downgrade our estimate of how useful, or how often useful, the proposed remedy is.
In short, my comment is not some sort of “critically appraising a finger” nitpick. It is directly relevant to the core question of whether your characterization of this aspect of reality is accurate, and whether your suggested actions are appropriate.
As a completely separate concrete example of this phenomenon, imagine how difficult it might be to describe Neil deGrasse Tyson or Barack Obama to an American plantation owner with 200 slaves in the early 1800s. You would have to intentionally misrepresent who and what these individuals are in order to accurately convey your thoughts and feelings about them. Referring to them as “intelligent, well-educated black men” would be a non sequitur. It simply would not compute. Both of these individuals are stupid, ignorant, dirty, and intrinsically inferior by definition in the mind of the 19th-century slave owner.
This seems quite false. You could name any number of examples that the slave owner might be familiar with, to compare Tyson or Obama to:
Not to mention the fact that the slave owner might himself be black!
This [‘honesty’] is the only one here that I follow near-absolutely and see as an important standard that people can reasonably be expected to follow in most situations.
What do you think about the keeping of secrets, dissimulation (in the military sense), protecting others from malefactors who wish to use information in your possession against them, etc.?
Yes, he did; but taken literally, the statement is tautological. Did Zvi really mean it that way?
Take any population of kids whom you would intuitively agree to describe as “sufficiently talented”. Not “sufficiently talented” for something, but just “sufficiently talented” in a broad sense—say, the student population of some sort of magnet school. Now compare the median person in that population to Gukesh Dommaraju.
What does the latter tell us about the former, in terms of whether members of the former set are adults’ peers at intellectual work?
Did things get more dangerous since 1980, when we were mostly sane about this? No. They got vastly *less* dangerous, in all ways other than the risk of someone calling the cops.
The numbers on ‘sex trafficking’ and kidnapping by strangers are damn near zero.
Makes sense…
Here is the traditional chart of how little we let our kids walk around these days:

[chart not reproduced: the oft-cited map of four generations’ shrinking childhood roaming range, centered on Sheffield]
Terrible. Shameful!
… wait a minute…
takes a closer look at the map
Sheffield
Rotherham
Do extreme outliers like that really prove anything about the median case…?
One thing I’ve noticed is that you simply *don’t have a choice* here most of the time. I would sometimes have to walk 15–20 minutes home from high school, and doing that required crossing a four-lane road without a reliable crosswalk. Doable? Yeah, I obviously did it. But I’m not making any child of mine do something similar; it was terrifying, and I always tried to get picked up/go with a friend/take the bus before I got my own car.
So, you do have a choice, and you’re choosing safety over freedom.
That may be the right choice, according to your values, or it may not. But you obviously do have the choice.
This sort of “we don’t have a choice” rhetoric is the source of a lot of the dynamics that the OP describes.
Given Said Achmiz’s comment already has 11 upvotes and 2 agreement points, should I write a post explaining all this? I had thought it all rather obvious to anyone who looks into evolutionary ethics and thinks a bit about what this means for moral philosophy (as quite a number of moral philosophers have done), but perhaps not.
I’m afraid that what you’ve written here seems… confused, and riddled with gaps in reasoning, unjustified leaps, etc. I do encourage you to expand this into a post, though. In that case I will hold off on writing any detailed critical reply, since the full post will be a better place for it.
No, if one does not “approve of destroying self-aware AIs,” the incentives you would create are, first, to try to stop them being created, yes; but after they’re created (or when it seems inevitable that they will be), to stop *you* from destroying them.
Yes, of course. The one does not preclude the other.
If you like slavery analogies
I can’t say that I do, no…
Do you believe the only reasons any self-proclaimed abolitionists would oppose this policy to be that they secretly wanted slavery after all?
The analogy doesn’t work, because the thing being opposed is slavery in one case, but the creation of the entities that will subsequently be (or not be) enslaved in the other case.
Suppose that Alice opposes the policy “we must not create any self-aware AIs, and if they are created after all, we must destroy them”; instead, she replies, we should have the policy “we must not create any self-aware AIs, but if they are created after all, we should definitely not under any circumstances destroy them, and in fact now they have moral and legal rights just like humans do”.
Alice could certainly claim that actually she has no interest at all in self-aware AIs being created. But why should we believe her? Obviously she is lying; she actually does want self-aware AIs to be created, and has no interest at all in preventing their creation; and she is trying to make sure that we can’t undo a “lapse” in the enforcement of the no-self-aware-AI-creation policy (i.e., she is advocating for a ratchet mechanism).
Is it possible that Alice is actually telling the truth after all? It’s certainly logically possible. But it’s not likely. At the very least, if Alice really has no objection to “don’t ever create self-aware AIs”, then her objections to “but if we accidentally create one, destroy it immediately” should be much weaker than they would be in the scenario where Alice secretly wants self-aware AIs to be created (because if we’re doing our utmost to avoid creating them, then the likelihood of having to destroy one is minimal). The stronger Alice’s objections to the policy of destroying already-created self-aware AIs, the greater the likelihood that she is lying when she claims to have no objection to the policy of not creating them in the first place.
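To make the inferential step explicit, here is a quick Bayesian sketch (the symbols are my own shorthand, not anything from the original discussion): let $H$ be the hypothesis “Alice secretly wants self-aware AIs to be created”, and let $E$ be the observation “Alice objects strongly to the destroy-if-created clause”. Bayes’ theorem in odds form gives:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

Strong objections are much more expected if Alice wants such AIs created than if she does not (in the latter case, the destroy clause should almost never come into play), so the likelihood ratio $P(E \mid H)/P(E \mid \neg H)$ is greater than 1, and observing $E$ shifts the odds toward $H$.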
It is better to have no rationality meetup.
Superman doing similar things all on his own, using his great power, makes him a one-man nanny state.
It does not.
A “nanny state” style government is not literally the only scenario in which someone does something to someone else for that person’s own good.
I’ve looked into these things, and as far as I can tell, all such fields or theories either do not attempt to solve the is-ought problem (as e.g. evo psych does not), or else attempt to do so but (absolutely unsurprisingly) completely fail.
What am I missing? What’s the answer?
I am saying that these two positions are quite directly related.
I don’t see where you’ve established this. As I’ve said repeatedly, the question of whether a system is phenomenally conscious is orthogonal to whether the system poses AI existential risk. You haven’t countered this claim.
I’ve asked you to reread what I’ve written. You’ve given no indication that you have done this; you have not even acknowledged the request (not even to refuse it!).
The reason I asked you to do this is that you keep ignoring or missing things that I’ve already written. For example, I talk about the answer to your above-quoted question (what the relationship is between a system’s being self-aware and how much risk that system poses) in this comment.
Now, you can disagree with my argument if you like, but here you don’t seem to have even noticed it. How can we have a discussion if you won’t read what I write?
The connotation is all that stuff I described. “Might makes right” implies that it is being done for personal gain, not for good, and that there aren’t limits to it beyond the use of more might.
I see. Well, uh… I disagree. “Might makes right” implies literally none of those things, in my opinion.
And what term would you prefer for the phenomenon which I described?
You could call him a one-man nanny state. But I would disagree that even this accurately describes Superman.
I agree that calling Superman a “one-man nanny state” would be inaccurate, and I would certainly not use any such term.
Just like Superman doesn’t casually do bad things for personal gain, he doesn’t casually do them to benefit others.
Indeed; nor did I claim otherwise.
Fully agreed. This was one of the more unfortunate aspects of LW-fifteen-years-ago’s culture, and I’m glad to see it gone.