I am building an energy system model with Python/Pyomo. It basically creates an optimization problem (an LP) of the following form:
min cost
st constraint[t]
and as the index t grows (referring to timesteps in the picture), more constraints are added to the optimization problem.
E.g., for t=3:
min cost
st constraint[1]
constraint[2]
constraint[3]
The solver I am using is GLPK, which solves the optimization problem with the simplex algorithm. As the picture below shows, the solver takes more time for larger values of t (more timesteps), and the time appears to rise exponentially.
I would like to understand why. It obviously makes sense that a larger problem takes more time to solve, but what I am wondering is why the growth is exponential rather than linear. The reason most likely lies in how the complexity of the problem affects the simplex algorithm.
Is there any paper I could read about the computational complexity of the simplex algorithm, or one that explains why the solution time rises exponentially?
Any hints would be very helpful, thanks!

I agree with the comments above -- you cannot conclude that the CPU time is increasing exponentially just by eyeballing the plot. It might just as easily be quadratic or some other polynomial.
What I think you are really asking is: why does the CPU time increase faster than linearly? The short answer is that each simplex iteration involves operations whose cost is more than linear in the problem size, such as updating or inverting the basis matrix. So, as the number of variables $n$ increases, the work per iteration grows faster than $n$. Even if the number of simplex iterations stayed the same as $n$ increases (which it probably wouldn't), the CPU time would still grow faster than linearly. This is true of most optimization algorithms.
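One practical way to check whether your timings are polynomial rather than exponential is a log-log fit: if solve time behaves like $c \cdot t^p$, the points lie on a straight line with slope $p$ on log-log axes, whereas exponential growth curves upward. A small sketch with synthetic (made-up) timing data, assuming quadratic growth for illustration:

```python
import numpy as np

# Synthetic timings for illustration: pretend solve time grows
# quadratically with the number of timesteps t.
t = np.array([10, 20, 40, 80, 160], dtype=float)
time_s = 0.001 * t**2  # replace with your measured CPU times

# Fit a line to (log t, log time): the slope estimates the exponent p.
# p near 1 -> linear, p near 2 -> quadratic; a poor linear fit on
# log-log axes (residual curving upward) would suggest exponential growth.
slope, intercept = np.polyfit(np.log(t), np.log(time_s), 1)
print(f"estimated exponent p = {slope:.2f}")  # prints p = 2.00 for this data
```

Running the same fit on your measured times would tell you whether "exponential" is really the right description.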