That is a great point and your examples are fascinating!
I think polarization is still quite possible and is worth avoiding even at a high cost. If AI safety becomes the new climate change, it seems pretty clear that it will create conflict in public opinion and deadlock in politics.
I think the way the issue is framed matters a lot. A “populist” framing (“elites are in it for themselves, they can’t be trusted”) seems to have resonated with a segment of the right lately. Climate change, by contrast, has a sanctimonious frame in American politics that conservatives hate.
Agreed, tone and framing are crucial. The populist framing might work for conservatives, but it will also set off the enemy rhetoric detectors among liberals. So coding it to either side is prone to backfire. Based on that logic, I’m leaning toward thinking the framing needs to carefully avoid, or walk the center line between, the terms and framings of both sides.
It would be just as bad to have it polarized as conservative, right? That said, we’ve got four years of conservatism, so it might be worth thinking seriously about whether that trade would be worth making. I’m not sure a liberal administration would undo restrictions on AI even if they had been conservative-coded...
Interesting. I’m feeling more like saying “the elites want to make AI that will make them rich while putting half the world out of a job”. That’s probably true as far as it goes, and it could be useful.
I’m not sure about that; does Bernie Sanders’ rhetoric set off that detector?