The study Mollick linked to doesn’t actually show what he claims. If you read it carefully, it says literally nothing about capabilities improving. What it shows is that models are becoming more efficient in the computational resources they require to reach a given level of performance: “the level of compute needed to achieve a given level of performance has halved roughly every 8 months, with a 95% confidence interval of 5 to 14 months.” But (a) past performance doesn’t always predict future performance, and (b) most of the data in the study are old, none of it from this year.
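
To be concrete about what that quoted figure does and does not say, here is a minimal arithmetic sketch (my own illustration, not the study’s code) of the compute reduction implied by the stated halving times; the two-year horizon is a hypothetical example:

```python
# Illustrative arithmetic only: what a "halving time" for required compute implies.
# The 8-month central estimate and the 5- and 14-month confidence bounds come from
# the quote above; the 24-month horizon is a hypothetical choice for illustration.

def compute_fraction(months: float, halving_time_months: float) -> float:
    """Fraction of the original compute needed to reach the same level of
    performance after `months` have elapsed, assuming a constant halving time."""
    return 0.5 ** (months / halving_time_months)

if __name__ == "__main__":
    horizon = 24  # hypothetical two-year window
    for halving in (5, 8, 14):  # lower CI bound, central estimate, upper CI bound
        frac = compute_fraction(horizon, halving)
        print(f"halving time {halving:>2} months -> "
              f"{frac:.1%} of the original compute after {horizon} months")
```

Run it and you get roughly 3.6%, 12.5%, and 30.5% of the original compute for the three halving times. Note that every one of those numbers is about the cost of reaching a fixed level of performance, not about the ceiling of performance itself.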