I’m going to argue a complementary story: the basic reason it’s so hard to talk about consciousness comes down to two issues in consciousness research, both of which make productive research on it nearly impossible:
Extraordinarily terrible feedback loops, almost reminiscent of the pre-deep-learning alignment work on LW (I’m looking at you, MIRI; though even that achieved more than consciousness research has to date, and LW is slowly shifting toward a mix of empirical and governance work, a transition far faster and more complete than anything the consciousness field has managed.)
To the extent that we have feedback loops at all, we basically have only one: competing introspections/intuition pumps. The problem is that, for our purposes, humans are extraordinarily bad at reporting their internal states accurately, so self-reports are basically worthless here. In essence, you can’t get any useful evidence at all, and thus consciousness discourse goes nowhere useful.
Agreed. My impression for a while has been that there’s a very weak correlation (if any) between whether an idea points in the right direction and how well it’s received. Since there’s rarely empirical data, one would hope for an indirect correlation, where correctness correlates with argument quality and argument quality correlates with reception, but the second link is almost non-existent in academia.