One of my chief criticisms of Rennert et al. 2022 is that it calculates the cost imposed by an additional ton of CO2 by summing costs from now to 2300 while almost entirely ignoring technological change, in effect assuming technological stasis. That raises the question of how one ought to model technological change when calculating costs over a long period of time. My initial response was that you can't do it, that any estimate of effects centuries in the future is a wild guess, science fiction, not science. But that started me thinking about how, if I had to do it, I would.
Here is my answer.
Step 1: Create a simplified procedure for deducing the Social Cost of Carbon (SCC) from your preferred climate change scenario using causal relationships, such as the effect of temperature on mortality or on crop yield, calculated with data from a single decade. The causal relationships should take account of variables other than technology, such as income, that can be expected to change over time.
Step 2: Use the procedure to produce a value for SCC using data from the most recent decade for which suitable data are available, say 2010-2020. Call this SCC(2010).
Step 3: Repeat step 2, using data from the previous decade. Call this SCC(2000).
Step 4: Calculate the ratio R1 = SCC(2010)/SCC(2000).
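To make the mechanics concrete, here is a minimal sketch in Python of Steps 1 through 4. Everything in it is a placeholder: the damage coefficients, the warming path, and the function name scc_from_decade are invented for illustration, not taken from any actual SCC model.

```python
# A minimal sketch of Steps 1-4, assuming a toy damage model in which the
# cost of a ton of CO2 comes from a single damage coefficient estimated
# separately on each decade's data. All numbers are made up.

import numpy as np

def scc_from_decade(damage_per_degree, warming_per_ton, discount_rate=0.03):
    """Simplified SCC: discounted sum of damages from the extra warming
    caused by one ton of CO2, using a damage coefficient estimated from
    a single decade's data (Step 1)."""
    years = np.arange(len(warming_per_ton))
    damages = damage_per_degree * warming_per_ton
    return np.sum(damages / (1 + discount_rate) ** years)

# Hypothetical warming path from one extra ton under your chosen scenario
# (degrees C per year out to 2300 -- tiny numbers in reality).
warming_per_ton = np.full(280, 1e-12)

# Hypothetical damage coefficients ($ per degree per year) estimated from
# each decade's data, already adjusted for income and other non-technology
# covariates as described in Step 1.
scc_2010 = scc_from_decade(4.5e13, warming_per_ton)  # Step 2
scc_2000 = scc_from_decade(5.0e13, warming_per_ton)  # Step 3

R1 = scc_2010 / scc_2000                             # Step 4
print(f"R1 = {R1:.2f}")                              # 0.90 with these numbers
```

Note that the warming path and the discounting are the same in both calls, so they cancel in the ratio; R1 isolates the change in the estimated damage relationship between the two decades, which is the point of the exercise.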
This is essentially what Lay et al. 2021 did for the effect of technology on temperature-related mortality.
Suppose R1 = 0.9. That implies that changes over the decade were reducing the social cost of carbon implied by your climate change scenario at about 1%/year, since a 1% annual decline compounds to roughly a 10% decline over a decade (0.99^10 ≈ 0.90).
Step 5: You now abandon your simplified procedure and substitute whatever you consider the best way of calculating SCC. But instead of discounting at your chosen discount rate, you discount at that rate plus 1% (or whatever annual rate of decline your measured R1 implies; for R1 close to 1 that is approximately (1 - R1)/10). You have now allowed, as best you can, for the reduction in cost over time due to technological change.
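A minimal sketch of Step 5, assuming a hypothetical stream of annual damages: the adjustment amounts to converting R1 into an annual rate and adding it to the discount rate.

```python
# A minimal sketch of Step 5: fold the estimated rate of technological
# cost reduction into the discount rate. The damage stream here is a
# made-up placeholder, not output from any real SCC model.

import numpy as np

def adjusted_scc(annual_damages, discount_rate, R1):
    """Discount a stream of annual damages at the discount rate plus
    the annual rate of cost decline implied by the one-decade ratio R1."""
    tech_rate = 1 - R1 ** (1 / 10)           # ~1%/year when R1 = 0.9
    effective_rate = discount_rate + tech_rate
    years = np.arange(len(annual_damages))
    return np.sum(annual_damages / (1 + effective_rate) ** years)

damages = np.full(280, 1.0)                  # hypothetical $1/year out to 2300
print(adjusted_scc(damages, discount_rate=0.02, R1=0.9))
```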
It is not a very good solution to the problem, but it is better than assuming stasis. One problem with it is that it ignores the fact that the farther into the future you go, the more uncertain your estimate of the effect of technological progress becomes. To solve that you need ...
The Improved Version
Step 3a: Repeat step 2 using data from each earlier decade for which the necessary data exist, generating SCC(1990), SCC(1980), ...
Step 4a: Calculate R2 = SCC(2000)/SCC(1990), R3 = SCC(1990)/SCC(1980), R4 = ...
If you are willing to model the effect of technological progress as a constant rate of decrease in costs, use the average of your ratios R1, R2, ... instead of R1 alone to estimate the rate at which cost is being reduced by technological change. Alternatively, if you think that rate is itself changing over time and you have enough ratios, you could try fitting them to a function linear in time and use that.
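As a sketch of Step 4a and the two options just described, assuming the decade-by-decade SCC estimates from Step 3a are already in hand (the values below are invented):

```python
# A minimal sketch of Step 4a and the two ways of using the ratios.
# The SCC values are invented placeholders, newest first.

import numpy as np

scc_by_decade = np.array([100.0, 111.0, 125.0, 138.0])  # SCC(2010), SCC(2000), ...

ratios = scc_by_decade[:-1] / scc_by_decade[1:]          # R1, R2, R3
annual_rates = 1 - ratios ** (1 / 10)                    # per-year rates of decline

# Option 1: constant rate -- average the per-year rates.
constant_rate = annual_rates.mean()

# Option 2: a rate that changes over time -- fit a line through the rates
# as a function of how many decades back each one was measured.
decades_back = np.arange(len(annual_rates))              # 0 = most recent
slope, intercept = np.polyfit(decades_back, annual_rates, 1)

print(f"constant rate: {constant_rate:.3%}/year")
print(f"linear fit: rate = {intercept:.3%} + {slope:.3%} * decades_back")
```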
In either case, use the variation of the ratios (from the average or from the linear fit) to calculate by how much you should increase the uncertainty in cost for the later years. If you estimate the rate as 1% when it is really 2%, that will have little effect on costs in the near future but produce a large overestimate of costs a century or two later.
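A quick sketch of why that matters, using made-up rates: if the true rate of cost reduction is 2%/year but you use 1%/year, the overestimate compounds with the horizon.

```python
# A minimal sketch of how an error in the estimated rate compounds.
# Both rates are hypothetical, chosen to match the example in the text.

true_rate, estimated_rate = 0.02, 0.01

for years in (10, 50, 100, 200):
    true_cost = (1 - true_rate) ** years          # cost relative to today
    estimated_cost = (1 - estimated_rate) ** years
    print(f"{years:>3} years out: overestimate factor {estimated_cost / true_cost:.1f}x")
```

With these numbers the error is negligible a decade out but roughly a factor of three at one century and well over that at two, which is why the uncertainty band needs to widen with the horizon.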
Actually doing all of this would be a large and complicated project. Even turning my verbal sketch into a precise mathematical formulation would be quite a lot of work. I don't propose to do either, but perhaps someone more ambitious could. Once done, the result could be applied to a variety of different approaches for estimating future costs, whether from CO2 or anything else.