Endorsing successionism might be strongly correlated with expecting the “mind children” to keep humans around, even if in a purely ornamental role and possibly only on human timescales. This might be more of a bailey position, so when pressed on it they might affirm that their endorsement of successionism is compatible with human extinction, but in their hearts they would still hope and expect that it won’t come to that. So I think complaints about human extinction will feel strawmannish to most successionists.
I’m not so sure about that:

Andrew Critch: From my recollection, >5% of AI professionals I’ve talked to about extinction risk have argued human extinction from AI is morally okay, and another ~5% argued it would be a good thing.
Though sure, Critch’s process there isn’t white-boxed, so any number of biases might be in it.