It's a big sign that if there is something here, it's likely to be discovered. We're likely to find out in the next few years if this is the future of general-purpose AI.
Thanks so much! This is precisely the sort of answer I was looking for!
Ok, yes that makes a lot more sense—whilst tarnishing by association increases incentives to point out flaws in your friend, it decreases incentives to point out flaws in your friend’s friend.
And since most of your friends are also your friends’ friends, the aggregate impact is to decrease incentives to point out flaws in your friends as well.
That sounds exactly like what I was saying: the reason insiders don’t criticise other insiders isn’t because it reduces their status by association. It’s that other insiders don’t like it, and they want to stay insiders.
I actually think this mostly goes the other way:
Generally people aren’t judged for associating with someone if they whistleblow that they’re doing something wrong. But anyone who doesn’t whistleblow might still be tarnished by association. So this creates an incentive to be the first to publicly report wrongs.
Now you appear to only be talking about small wrongs, with the idea being that you still want to associate with that person, hence whistleblowing wouldn't save you. But there's already a very strong incentive in such cases not to whistleblow, namely that you want to stay friends. So I'm not sure the additional impact on your reputation makes much difference beyond that.
~1% of the world dies every year. If we accelerate AGI by 1 year, we save 1%. Push it back 1 year, we lose 1%. So, pushing back 1 year is only worth it if we reduce P(doom) by 1%.
That would imply that if you could flip a switch which 90% chance kills everyone, 10% chance grants immortality then (assuming there weren’t any alternative paths to immortality) you would take it. Is that correct?
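To make the arithmetic behind both comments explicit, here is a minimal expected-value sketch. The population figure and the way deaths are counted are illustrative assumptions, not anything stated in the exchange itself:

```python
# Rough expected-value arithmetic behind the "1% per year" argument and the
# switch thought experiment. The population figure is an illustrative assumption.

WORLD_POP = 8e9     # approximate world population
DEATH_RATE = 0.01   # ~1% of the world dies every year

# Deaths avoided by getting (safe) AGI one year sooner, on this model:
saved_by_one_year = WORLD_POP * DEATH_RATE        # ~80 million

# Expected extra deaths from raising P(doom) by one percentage point:
cost_of_one_pdoom_point = 0.01 * WORLD_POP        # also ~80 million

# So a 1-year delay breaks even iff it cuts P(doom) by about 1 percentage point.
print(saved_by_one_year, cost_of_one_pdoom_point)

# The switch: 90% chance everyone dies now, 10% chance nobody ever dies.
# Counting expected deaths among people alive today (everyone eventually dies
# without the switch), flipping looks "better" on this accounting:
deaths_without_switch = WORLD_POP                 # everyone eventually dies
deaths_with_switch = 0.9 * WORLD_POP + 0.1 * 0    # 7.2 billion expected
print(deaths_without_switch, deaths_with_switch)
```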
Is this just a semantic quibble, or are you saying there’s fundamental similarities between them that are relevant?
Yep, I know about earlier work, but I think that one of the top 3 labs taking this seriously is a big sign.
For sure. It might be nothing, or it might be everything.
Gemini Diffusion: watch this space
I actually find it really annoying, because if I scroll too little it snaps back to the previous picture, and if I scroll too much it scrolls completely to the next one. There's no way to get it roughly where I want it and then refine iteratively.
Also, I think there's a psychological thing where it feels nice when the UI reacts smoothly to what you do, like a physical tool would. This breaks your control of the UI, snapping you out of that illusion, which is unpleasant.
I'm not sure this follows. If I have aims I want to achieve, I may resist permanent shutdown even if I do not mind dying, because that limits my ability to achieve my aims.
Caplan’s being melodramatic about circumcision
If you want a more accurate estimate of how often top chess engines pick the theoretical best move, you could compare Leelachess and Stockfish. These are very close to each other Elo-wise but have very different engines and styles of play. So you could look at how often they agree on the best move, assume that both pick their moves from some distribution over the true move ranking, and then use the agreement rate to estimate the parameters of that distribution.
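As a very rough sketch of how that estimate could be run, suppose (purely as a modelling assumption) that both engines independently draw their played move from the same geometric distribution over the true move ranking; the observed agreement rate then pins down the distribution in closed form:

```python
# Toy model: each engine independently plays the true k-th best move with
# probability P(k) = (1 - q) * q**(k - 1) (a geometric distribution).
# Both engines picking the same rank means playing the same move, so
#   P(agree) = sum_k P(k)**2 = (1 - q) / (1 + q),
# and an observed agreement rate `a` gives q = (1 - a) / (1 + a).

def best_move_probability(agreement_rate: float) -> float:
    """Estimated chance of playing the theoretically best move under the toy model."""
    q = (1 - agreement_rate) / (1 + agreement_rate)
    return 1 - q

# e.g. if Leelachess and Stockfish agreed on 70% of moves (a made-up number):
print(best_move_probability(0.70))  # ~0.82
```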
Stockfish is incredibly strong at exploiting small mistakes. I'm going to assume that on average, if you make anything other than one of the top 5 moves at any point in a game, Stockfish will win, no matter what you do after.
An average game is about 40 turns, and there are about 20 valid moves each turn.
So that puts an upper limit on success of 1 in 4^40.
Similarly, if you pick the best move at all times, you'll win, putting a lower limit at 1 in 20^40.
Making some assumptions about how many best moves you need to counteract a poor but not fatal move, you could try to estimate something more accurate in this range.
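For what those bounds come to numerically, under the assumptions above (40 turns, ~20 legal moves per turn, only the top 5 survivable):

```python
# Numerical values of the bounds above: ~40 turns, ~20 legal moves per turn,
# anything outside the top 5 moves loses, the single best move always wins.

TURNS = 40
LEGAL_MOVES = 20
SURVIVABLE_MOVES = 5

upper_bound = (SURVIVABLE_MOVES / LEGAL_MOVES) ** TURNS  # 1 / 4**40, ~8.3e-25
lower_bound = (1 / LEGAL_MOVES) ** TURNS                 # 1 / 20**40, ~9.1e-53

print(f"upper bound: {upper_bound:.2e}")
print(f"lower bound: {lower_bound:.2e}")
```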
Ok, I think that makes a lot of sense. Newton’s 2nd law is the first step of constructing a model which is (ideally) isomorphic to reality once you’ve filled in all the details.
But you could equally well start off constructing your model with a different first step, and if you do it might be that some nice neat packaged concepts in model A do not map cleanly onto anything in model B. The fundamental concepts in physics are fundamental to the model, not to reality.
I agree it’s tautologically true, but I’m saying that we only use it because it maps nicely to reality. When it doesn’t map cleanly to reality we replace it with something else (special relativity in your example) instead of continuously adding epicycles.
There's an infinite number of laws I could generate that would be equally tautologically true (e.g. f = m v dv/dt), but we don't use them because they require more epicycles to work correctly.
I think Newton's second law would be discarded if we consistently saw the following:
1. There was no relation between how hard I push things and how fast they move.
2. Pushing a faster-moving object as hard as I push a slower-moving object, for the same amount of time, speeds it up less than the slower object.
3. When you take two objects, each of which moves the same speed after 5 seconds of pushing as hard as you can, and stick them together, the resulting object moves 4 times as fast after 5 seconds.
The first would get rid of the law completely, the second would make you refine your concept of acceleration, and the third your concept of how acceleration relates to mass.
Now you’re right you could always add epicycles to fix this, but the correct response would be discarding the theory outright.
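To spell out the third case numerically, here's what F = ma actually predicts for the glued-together object versus the hypothetical observation (all the specific numbers are made up for illustration):

```python
# Point 3: what F = ma predicts for two objects stuck together, versus the
# hypothetical "4 times as fast" observation. Numbers are made up for illustration.

F = 10.0   # how hard you can push, in newtons
m = 2.0    # mass of each object, in kg
t = 5.0    # seconds of pushing

v_each = (F / m) * t             # each object alone: 25 m/s after 5 s

# Glue the objects together (mass 2m) and push just as hard (force F):
v_predicted = (F / (2 * m)) * t  # F = ma prediction: 12.5 m/s

v_observed = 4 * v_each          # the hypothetical observation: 100 m/s

print(v_each, v_predicted, v_observed)
# Seeing the 4x result consistently would, as above, force you to revise how
# mass enters the law.
```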
I imagine that one way to reduce the felt financial impact of working 80% is to wait till you get a pay rise (or move to a higher-paying role at a new company) and make the switch at the same time, so you never feel like you're financially worse off than you were before.
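For concreteness, the break-even arithmetic looks something like this (the salary and raise figures are purely illustrative):

```python
# When does dropping to 80% hours leave pay no lower than it was before the raise?
# Simple pre-tax arithmetic; all figures are made up for illustration.

old_salary = 60_000       # full-time salary before the raise
raise_fraction = 0.25     # a 25% raise, or an equivalently better-paying new role
hours_fraction = 0.80     # switching to 80% time

new_salary_at_80_percent = old_salary * (1 + raise_fraction) * hours_fraction
print(new_salary_at_80_percent)  # 60000.0 -- exactly the old full-time pay

# In general you break even whenever (1 + raise) * 0.8 >= 1, i.e. a raise of 25% or more.
```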