How is it Possible to Optimize the Shichman-Hodges Slope Parameters from the Left and Right using Least Square Linear Regression Approaches


The Shichman-Hodges equations model a curve whose left-hand side has a slope that decreases as a function of the variable $x_i$. Here $x_0=0$, and $y_i$ is a function of $x_i$ for $0\le i\le n$. Ideally the discrete slope $Δ_i=\frac{y_i-y_{i-1}}{x_i-x_{i-1}}$ for $i\ge 1$ decreases linearly on the left side, so the slope itself can be fitted with a linear regression.

Also, on the left side, $y_0=0\text{ Amps}$ at $x_0=0\text{ Volts}$: whatever the channel impedance is at that point, no current can flow unless a voltage is applied, which implies $y_0=0\text{ Amps}$.

For the second, right-side linear slope regression, the slope is ideally constant; that is, $Δ_i=Δ_{const}$ for $i\ge i_{right}$.

However, measurements are rarely ideal, so the question is how to apply a least-squares regression approach to the two slope lines. On the left side, the discrete slope varies (approximately linearly) with the data-point index $i$, and the linear parameters of that slope need to be determined with a least-squares fit. On the right side, the squared error of the slope, $\sum{(Δ_i-Δ_{const})^2}$ for $i_{right}\le i\le i_{final}$, needs to be minimized.

One approach to the problem is to adjust the parameters by hand until the resulting model curve approximates a "best fit" to the actual data. However, it is desired to compare the two approaches: the one whose parameters are determined by hand and the one whose parameters are determined with mathematical formality.

Because this comparison involves summarizing the mathematical derivations and results, the mathematical side of the approach and its derivation are posted and referred to here, while the comparison with the hand-best-fit approach is referenced elsewhere, since that approach has no mathematical basis other than checking that the relative error is within an "acceptable bound".

1. Some Representative Data to Optimize

A table of some real sample data is found below:

| $X_{V\text{ }SD}$ (Volts) | $Y_{I\text{ }SD}$ (Amps) | $I_{SD\text{ hand-modeled}}$ (Amps) | $\frac{ΔY}{ΔX}$ |
| --- | --- | --- | --- |
| 1.00000000 | 0.71261233 | 0.70234478 | 0.71261233 |
| 1.12515652 | 0.77522725 | 0.76905370 | 0.50029284 |
| 1.25814426 | 0.83691174 | 0.83474886 | 0.46383569 |
| 1.41560924 | 0.89661342 | 0.90561485 | 0.37914252 |
| 1.58292711 | 0.96057391 | 0.97267991 | 0.38226956 |
| 1.78104079 | 1.01345909 | 1.04109907 | 0.26694372 |
| 1.99155068 | 1.03700638 | 1.10071766 | 0.11185810 |
| 2.24080634 | 1.08575487 | 1.15384483 | 0.19557653 |
| 2.50565815 | 1.11098194 | 1.18949735 | 0.09524947 |
| 2.81925774 | 1.14553201 | 1.20392942 | 0.11017279 |
| 3.17210603 | 1.17214799 | 1.20527661 | 0.07543153 |
| 3.54703307 | 1.18115664 | 1.20670807 | 0.02402785 |
| 3.99096727 | 1.18115664 | 1.20840311 | 0.00000000 |
| 4.46267939 | 1.20860028 | 1.21020412 | 0.05817875 |
| 5.02121258 | 1.19938219 | 1.21233678 | -0.01650394 |
| 5.61469460 | 1.20860028 | 1.21460271 | 0.01553207 |
| 6.31741047 | 1.19938219 | 1.21728575 | -0.01311768 |
| 7.06409597 | 1.21788907 | 1.22013676 | 0.02478532 |
| 7.94821405 | 1.23668146 | 1.22351241 | 0.02125556 |
| 8.88765240 | 1.22724938 | 1.22709930 | -0.01004023 |
| 10.00000000 | 1.24618614 | 1.23134637 | 0.01702422 |

$$\textbf{Table 1.1. Data Points to Fit with the Shichman-Hodges Equations}$$
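As a cross-check, the $\frac{ΔY}{ΔX}$ column can be reproduced from the first two columns, with the first row's slope taken from the implicit origin point ($0$ Volts, $0$ Amps). A short Python sketch, using only the tabulated values (Python is used here purely for illustration):

```python
# Reproduce the ΔY/ΔX column of Table 1.1 from the X and Y columns.
# The implicit origin point (0 V, 0 A) serves as the predecessor of row 1.
x = [1.00000000, 1.12515652, 1.25814426, 1.41560924, 1.58292711,
     1.78104079, 1.99155068, 2.24080634, 2.50565815, 2.81925774,
     3.17210603, 3.54703307, 3.99096727, 4.46267939, 5.02121258,
     5.61469460, 6.31741047, 7.06409597, 7.94821405, 8.88765240,
     10.00000000]
y = [0.71261233, 0.77522725, 0.83691174, 0.89661342, 0.96057391,
     1.01345909, 1.03700638, 1.08575487, 1.11098194, 1.14553201,
     1.17214799, 1.18115664, 1.18115664, 1.20860028, 1.19938219,
     1.20860028, 1.19938219, 1.21788907, 1.23668146, 1.22724938,
     1.24618614]

def discrete_slopes(x, y):
    # Shift each sequence by one, inserting the origin point at the front.
    xp, yp = [0.0] + x[:-1], [0.0] + y[:-1]
    return [(yi - ypi) / (xi - xpi) for xi, yi, xpi, ypi in zip(x, y, xp, yp)]

slopes = discrete_slopes(x, y)
```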

It can be appreciated that $X_V$ starts out at $1.0$ and that the slope of $Y_I$ with respect to $X_V$ steadily decreases, representing the parabolic portion of the Shichman-Hodges model: in the linear region the slope decreases linearly. The slope then levels out in the saturation region of the model.

Because the slope itself can be represented by two separate lines, this seems a prime candidate for applying least-squares linear regression, except that the fit is not to one line but to two intersecting lines that together minimize the squared error of the fit to the overall slope.

Are there any answers or suggestions about how to approach this?

My homework so far: I have attempted a hand-adjusted model, defining a specific data point as the convergence point between the two lines.

Thank you in advance for any help or suggestions. I did find these articles interesting and potentially very useful too:

Derivation of the formula for Ordinary Least Squares Linear Regression

How to derive the least squares solution for linear regression?

It seems straightforward to accomplish this for a given set of data. However, I am still not certain about the formalities of 1) proving the approach and 2) applying it to see whether it indeed provides a better data fit.

2. Updated Attempts of Mine to Scope How to Resolve This Question

It can be appreciated, based on the standard linear-regression calculation, that the least-squares fit across the entire data range consists of two least-squares fits: one for $i<i_{right}$ and one for $i\ge i_{right}$. Minimizing the entire least-squares error implies minimizing the left-side fit, whose slope and intercept are determined only by the points with $i<i_{right}$, and also minimizing the right-side error for $i \ge i_{right}$. The right-side case may differ, because the fit there is against a constant value, namely $Δ_{const}$. So, to be sure, the least-squares fit against a constant needs to be derived, and the least-squares fit against a sloped line needs to be clearly shown, step by step, yet succinctly.
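The scan over $i_{right}$ described above can be sketched directly: for each candidate split, fit an ordinary least-squares line to the left-side slopes and a constant (their mean, which is the least-squares constant) to the right-side slopes, and keep the split with the smallest total squared error. A minimal Python sketch on synthetic slope data (the data and break location here are illustrative assumptions, not the measured values):

```python
# For each candidate split i_right, fit a line to the left-side slopes and a
# constant to the right-side slopes; keep the split minimizing the total SSE.
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x (closed form).
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(v * v for v in xs)
    sxy = sum(u * v for u, v in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def two_segment_fit(xs, deltas):
    best = None
    for i_right in range(2, len(xs) - 1):          # need >= 2 points per side
        lx, ly = xs[:i_right], deltas[:i_right]
        ry = deltas[i_right:]
        a, b = fit_line(lx, ly)
        c = sum(ry) / len(ry)                      # least-squares constant = mean
        sse = sum((yv - (a + b * xv)) ** 2 for xv, yv in zip(lx, ly))
        sse += sum((yv - c) ** 2 for yv in ry)
        if best is None or sse < best[0]:
            best = (sse, i_right, a, b, c)
    return best

# Synthetic slopes: decreasing linearly, then flat (illustrative only).
xs = [0.5 * k for k in range(12)]
deltas = [max(0.7 - 0.12 * xv, 0.1) for xv in xs]
sse, i_right, a, b, c = two_segment_fit(xs, deltas)
```

On this synthetic data the scan recovers the known break, the left-side slope, and the right-side constant essentially exactly.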

Also, given the amount of computation necessary (all of the least-squares values have to be tabulated and summed for each candidate $i_{right}$ value), some degree of computer computation is probably needed, which is best addressed in a separate question so as not to distract from the focus of this question and its still-needed answer.

Finally, the resulting fit needs to be compared against the "hand-adjustment approach" to see if indeed minimizing the least squares sum is the best approach to the optimization problem.

Updated headings that still need filling in (especially if there is an answer that fully addresses this question) are as follows:

3. Succinct Derivation of the Standard Linear Regression Approach

For details about derivation of linear regression equations, the reference How to derive the least squares solution for linear regression? has details, with a quick summary here.

The sum of squared errors $w$ is defined as follows:

$$w=\sum_{i=1}^{i\le n}\left(y_i-\left(a+b*x_i\right)\right)^2\tag{3.1}$$

(For the left-side fit, the sum runs over the $n$ points with $i<i_{right}$.)

Now there are two variables to solve for, namely $a$ and $b$, and there are two equations, since the derivative of $w$ with respect to each of $a$ and $b$ must be zero at a stationary point, such as a maximum or minimum. The reference proves that this zero point is the minimum error and not the maximum. Hence:

$$\frac{d w}{d a} = \frac {d \sum_{i=1}^{i\le n}\left(y_i-\left(a+b*x_i\right)\right)^2}{d a}=0\tag{3.2}$$ $$\frac{d w}{d a} = -2 \sum_{i=1}^{i\le n}\left(y_i-\left(a+b*x_i\right)\right)=0\tag{3.3}$$

Also, the derivative $\frac{ d w}{d b}=0$ at the minimum and hence: $$\frac{d w}{d b} = \frac {d \sum_{i=1}^{i\le n}\left(y_i-\left(a+b*x_i\right)\right)^2}{d b}=0\tag{3.4}$$

$$\frac{d w}{d b}= -2 \sum_{i=1}^{i\le n}\left(y_i-\left(a+b*x_i\right)\right)*x_i=0\tag{3.5}$$

I provided an answer, with detailed steps proving it, to the article "How to derive the least squares solution for linear regression". That answer yields the following equations (verified against the question and a reference):

$$b=\frac {n \sum_{i=1}^{i\le n} x_i y_i - \left( \sum_{i=1}^{i\le n} x_i \right) \left( \sum_{i=1}^{i\le n} y_i \right)} {n \sum_{i=1}^{i\le n} x_i^2 - \left( \sum_{i=1}^{i\le n} x_i \right)^2} \tag{3.6}$$

$$ a=\frac { \left( \sum_{i=1}^{i\le n} x_i^2 \right) \left( \sum_{i=1}^{i\le n} y_i \right) - \left( \sum_{i=1}^{i\le n} x_i \right) \left( \sum_{i=1}^{i\le n} x_i y_i \right) } { n \sum_{i=1}^{i\le n} x_i^2 - \left( \sum_{i=1}^{i\le n} x_i \right)^2 } \tag{3.7} $$
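Equations 3.6 and 3.7 can be checked numerically with a brief Python sketch; points lying exactly on a known line must be recovered exactly:

```python
# Closed-form OLS estimates from Equations 3.6 and 3.7 for y ≈ a + b*x.
def ols(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(v * v for v in xs)
    sxy = sum(u * v for u, v in zip(xs, ys))
    denom = n * sxx - sx * sx
    b = (n * sxy - sx * sy) / denom          # Equation 3.6
    a = (sxx * sy - sx * sxy) / denom        # Equation 3.7
    return a, b

# Points lying exactly on y = 1 + 2x must give a = 1 and b = 2.
a, b = ols([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```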

4. Applicable Shichman-Hodges Equations Theory

From Shichman-Hodges equations for the P-channel MOSFET (which has $V_{th} < 0$ for the transistor under consideration):

"DC model selector. Level 1 is the Shichman-Hodges model."

$$\text{Cutoff Region, } 0 \le V_{SG} \le |V_{th}|$$

$$I_{DS}=0.0 \longrightarrow I_{SD}=0$$

For the "Cutoff Region", the current entering the Source and exiting the Drain is modeled as zero Amps. From the IRF9510 data sheet, at $V_{SG}=0$ and $V_{DS} > -100\text{ V}$, $I_{SD} \lt 50\ \mu A$, which is negligible (essentially zero) for the purposes of this model.

$$\text{Linear Region, } V_{SD} \le V_{SG} - |V_{th}|$$

$$I_{SD}= \text{ KP } \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{SD} \right)\left( V_{SG}- |V_{th}| - \frac{1}{2}V_{SD}\right) V_{SD}$$

$$\longrightarrow I_{SD}= \frac{\text{ KP }}{2} \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{SD} \right)\left( 2 \left(V_{SG}- |V_{th}| \right) - V_{SD}\right) V_{SD}$$

$\text{LAMBDA}$ is positive for PMOS and NMOS transistors according to page 3 of this link.

$$\text{Saturation Region, } V_{SD} \ge V_{SG} - |V_{th}|$$

$$ I_{SD}= \frac{\text{KP }}{2} \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{SD} \right)\left( V_{SG}- |V_{th}|\right)^2 $$

$$\text{Saturation Voltage }V_{sat}: V_{\text{SD sat}}=V_{SG}-|V_{th}|$$

At the saturation voltage $V_\text{ SD sat }$, the saturation region and linear region meet at one point. This can be revealed by substituting $V_\text{ SD sat }=V_\text{ SG }-|V_\text{ th }| $ into both the Linear Region and the Saturation Region equations. For the Linear Region: $$ I_{SD}= \frac{\text{ KP }}{2} \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{SD} \right)\left( 2 \left(V_{SG}- |V_{th}| \right) - V_{SD}\right) V_{SD} $$ $$ =\frac{\text{ KP }}{2} \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{\text{SD sat }} \right)\left( 2 V_{\text{SD sat }} - V_{\text{SD sat }}\right) V_{\text{SD sat }} $$ $$ I_{\text{SD sat Linear Region}}=\frac{\text{ KP }}{2} \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{\text{SD sat }} \right)\left(V_{\text{SD sat }}\right)^2 $$ For the Saturation Region: $$ I_{\text{SD sat}}= \frac{\text{KP }}{2} \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{\text{SD sat}} \right)\left( V_{SG}- |V_{th}|\right)^2 $$ $$ I_{\text{SD sat Saturation Region}}=\frac{\text{ KP }}{2} \frac{W_{eff}}{L_{eff}} \left(1+\text{LAMBDA } V_{\text{SD sat }} \right)\left(V_{\text{SD sat }}\right)^2 $$
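The continuity of the two regions at $V_{\text{SD sat}}$ can be verified numerically; the parameter values in the Python sketch below are arbitrary illustrative assumptions, not fitted values:

```python
# Verify that the linear-region and saturation-region expressions agree at
# V_SD = V_SD_sat = V_SG - |Vth|.  All parameter values are illustrative only.
KP, W_over_L, LAMBDA = 2.0e-5, 100.0, 0.01   # assumed, not fitted
VSG, Vth_abs = 5.0, 3.0
Vsat = VSG - Vth_abs

def i_linear(Vsd):
    return (KP / 2) * W_over_L * (1 + LAMBDA * Vsd) * (2 * (VSG - Vth_abs) - Vsd) * Vsd

def i_saturation(Vsd):
    return (KP / 2) * W_over_L * (1 + LAMBDA * Vsd) * (VSG - Vth_abs) ** 2

# Both expressions reduce to (KP/2)(W/L)(1 + LAMBDA*Vsat)*Vsat**2 at Vsat.
```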

It is important to realize that for $V_{SD} \ge V_{\text{SD sat}}$ the equation for $I_{SD}$ is that of a line in $V_{SD}$. So statistical linear-regression approaches can be used to find the parameters over the applicable region and then apply them to the full solution.

5. Discussion of How to Approach the Shichman-Hodges Theory

The Shichman-Hodges equations basically consist of a line multiplied by an inverted parabola in the Linear Region and just a line in the Saturation Region. Further, in the Linear Region, the equations themselves guarantee that at $0\text{ Volts}$ there is a flow of $0\text{ Amps}$ for $I_{SD}$ in the transistor. Also, the first data point at $V_{SD}=1\text{ Volt}$ has a defined current, and it is very important that the fit passes through this data point.

If any arrangement of the inverted parabola, with its maximum at $V_{SD}=V_{SD\text{ }SAT}$, times a line satisfies the above requirements, then it can be translated into the parameters of the Shichman-Hodges theory.

One thought that I have is that the overall minimization of the least square sum throughout both the Linear and Saturation Regions can be approached as follows.

In the linear region, let:

$$ I_{SD}=α V_{SD} \left(2*V_{SD\text{ }SAT}-V_{SD}\right) \left(1+λ*(V_{SD}-V_{SD\text{ }1})\right) \tag{5.1}$$

Because of the offset $\left(V_{SD}-V_{SD\text{ }1}\right)$, at the first data point $V_{SD\text{ }1}$ the equation simplifies to:

$$ I_{SD\text{ }1}=α V_{SD\text{ }1} \left(2*V_{SD\text{ }SAT}-V_{SD\text{ }1} \right) \tag{5.2}$$

Then the fit can be varied for values of $V_{SD\text{ }SAT}$.

The saturation region equation can be written as:

$$ I_{SD}=α \left(V_{SD\text{ }SAT}\right)^2 \left(1+λ*(V_{SD}-V_{SD\text{ }1})\right) \tag{5.3}$$

One question is how to find the saturation point $V_{SD\text{ }SAT}$.

A simplistic approach is to restrict $V_{SD\text{ }SAT}$ to one of the data points sampled in Table 1.1, with that point included in both the linear-region data and the saturation-region data.

From Table 1.1, the first non-zero data point corresponds to $V_{SD\text{ }1}$, and that is the offset point for the fit.

After the solution of least-square minimization within these parameters, then the Shichman-Hodges variables can be derived from these initial results.

6. Additional Insights as to the Answer of the Problem

From Equation 5.2, $α_j$ is a function of $V_{SD\text{ }SAT\text{ }j}$. The simplistic approach taken here is to minimize the sum of squared errors under the assumption that $V_{SD\text{ }SAT\text{ }j}=V_{SD\text{ }j}$ for each data point being tested. With this approach, the sum of squared errors needs to be tabulated within the limits: $$ 1<j<i_{max}\text{, where }i_{max}\text{ is the maximum row number of the data points} \tag{6.1}$$

$$ I_{SD\text{ }1}=α_j V_{SD\text{ }1} \left(2*V_{SD\text{ }SAT\text{ }j}-V_{SD\text{ }1} \right) $$ $$ \underset{\text{implies}}{\longrightarrow} α_j=\frac{I_{SD\text{ }1}}{V_{SD\text{ }1} \left(2*V_{SD\text{ }SAT\text{ }j}-V_{SD\text{ }1} \right)} \tag{6.2}$$
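Equation 6.2 pins the model to the first data point; a one-line Python check, using the first row of Table 1.1 and an arbitrary trial $V_{SD\text{ }SAT\text{ }j}$ (an assumption for illustration):

```python
# Alpha_j from Equation 6.2: forces the linear-region model through the
# first data point (V_SD_1, I_SD_1).  Vsat_j here is an arbitrary trial value.
V1, I1 = 1.00000000, 0.71261233    # first row of Table 1.1
Vsat_j = 3.17210603                # trial V_SD_SAT_j (a sampled data point)

alpha_j = I1 / (V1 * (2 * Vsat_j - V1))

# The model then reproduces I1 at V1 by construction (the lambda term vanishes).
model_at_V1 = alpha_j * V1 * (2 * Vsat_j - V1)
```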

For each $V_{SD\text{ }SAT\text{ }j}$ being tested, there is only one parameter that varies, namely $λ$, both in the linear region and also the saturation region.

So the combined error function for both regions needs to be determined, and the derivative of the total error with respect to $λ$ must be zero so as to minimize the error over both the linear and saturation regions. There is only one variable to solve for, so the mathematics is very straightforward. After that, the total squared error for the current, namely the quantity $w$ below, is minimized:

$$w= \sum_{i=1}^{i=i_{linear\text{ }max}}{\left(I_{SD}-I_{SD\text{ modeled}}\right)^2} + \sum_{i=i_{saturation\text{ }min}}^{i=i_{saturation\text{ }max}}{\left(I_{SD}-I_{SD\text{ modeled}}\right)^2} \tag{6.3}$$

$$\frac{d w}{d λ}= \frac{ d }{d λ}\left[ \sum_{i=1}^{i=i_{linear\text{ }max}}{\left(I_{SD}-I_{SD\text{ modeled}}\right)^2} + \sum_{i=i_{saturation\text{ }min}}^{i=i_{saturation\text{ }max}}{\left(I_{SD}-I_{SD\text{ modeled}}\right)^2} \right] = 0 \tag{6.4}$$

$$\frac{d w}{d λ}= $$ $$ \frac{ d }{{d λ}} \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_{SD}-I_{SD\text{ modeled}} \right)^2 + $$ $$ + \frac{ d }{d λ} \sum_{i=i_{saturation\text{ }min}}^{i=i_{saturation\text{ }max}}{\left(I_{SD}-I_{SD\text{ modeled}}\right)^2} = 0 \tag{6.5}$$

$$\frac{d w}{d λ}= \left(-2 \right) \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_{SD}-I_{SD\text{ modeled}} \right) \frac{ d }{{d λ}} I_{SD\text{ modeled}} + $$ $$ + \left( -2 \right) \sum_{i=i_{saturation\text{ }min}}^{i=i_{saturation\text{ }max}} { \left( I_{SD}- I_{SD\text{ modeled}} \right) \frac{ d }{d λ} I_{SD\text{ modeled}} } = 0 \tag{6.6}$$

$$ \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_\text{SD i}-I_{SD\text{ modeled i}} \right) \frac{ d }{{d λ}} I_{SD\text{ modeled i}} + $$ $$ + \sum_{i=i_{saturation\text{ }min}}^{i=i_{saturation\text{ }max}} { \left( I_\text{SD i}- I_{SD\text{ modeled i}} \right) \frac{ d }{d λ} I_{SD\text{ modeled i}} } = 0 \tag{6.7}$$

From Equation 5.1:

$$ \frac{ d }{{d λ}} I_\text{SD modeled linear i}= α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) \tag{6.8}$$ From Equation 5.3: $$ \frac{ d }{{d λ}} I_\text{ SD modeled saturation i}= α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) \tag{6.9}$$

Substituting Equations 6.8 and 6.9 into Equation 6.7 yields:

$$ \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_\text{SD i}-I_{SD\text{ modeled i}} \right) α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) + $$

$$ + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( I_\text{ SD i}- I_\text{ SD modeled i} \right) α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) = 0 \tag{6.10}$$

$$ \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_\text{SD i}-α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j }-V_\text{ SD i }\right) \left(1+λ*(V_\text{ SD i }-V_\text{ SD 1})\right) \right) * $$ $$ * α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) + $$

$$ + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( I_\text{ SD i}- α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left(1+λ*(V_\text{ SD i }-V_\text{ SD 1})\right) \right) * $$ $$ * α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) = 0 \tag{6.11}$$

Then, expanding the factor $\left(1+λ*(V_\text{ SD i }-V_\text{ SD 1})\right)$ and collecting the terms that are constant in $λ$ and the terms proportional to $λ$ (note that each $I_\text{SD i}$ appears only once, in the constant part):

$$ \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_\text{SD i}-α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j }-V_\text{ SD i }\right) \right) α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) + $$ $$ + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( I_\text{ SD i}- α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \right) α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) - $$ $$ - λ \left[ \sum_{i=1}^{i=i_{linear\text{ }max}} \left( α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) \right)^2 + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) \right)^2 \right] = 0 $$ $$\tag{6.12}$$

Moving the $λ$ term to the left-hand side:

$$ λ_\text{ j } \left[ \sum_{i=1}^{i=i_{linear\text{ }max}} \left( α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) \right)^2 + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) \right)^2 \right] = $$ $$ = \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_\text{SD i}-α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j }-V_\text{ SD i }\right) \right) α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( I_\text{ SD i}- α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \right) α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) $$ $$\tag{6.13}$$

Finally,

$$ λ_\text{ j } = \frac{ \displaystyle \sum_{i=1}^{i=i_{linear\text{ }max}} \left( I_\text{ SD i }-α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j }-V_\text{ SD i } \right) \right) α_\text{ j } V_\text{ SD i } \left( 2*V_\text{ SD SAT j } -V_\text{ SD i } \right) \left( V_\text{ SD i }-V_\text{ SD 1} \right) + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( I_\text{ SD i }- α_\text{ j } \left( V_\text{ SD SAT j } \right)^2 \right) α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) } { \displaystyle \sum_{i=1}^{i=i_{linear\text{ }max}} \left( α_\text{ j } V_\text{ SD i } \left(2*V_\text{ SD SAT j } -V_\text{ SD i }\right) \left(V_\text{ SD i }-V_\text{ SD 1}\right) \right)^2 + \sum_{i=i_\text{ saturation min }}^{i=i_\text{ saturation max }} \left( α_\text{ j } \left(V_\text{ SD SAT j }\right)^2 \left( V_\text{ SD i } -V_\text{ SD 1 } \right) \right)^2 } $$

$$\tag{6.14}$$
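The closed form for $λ_j$ obtained from $\frac{dw}{dλ}=0$ can be sanity-checked by generating data exactly from Equations 5.1 and 5.3 with a known $λ$ and confirming that the formula recovers it (it must, because the residual is linear in $λ$). A Python sketch; all numeric values are illustrative assumptions:

```python
# Recover lambda from the closed form: with data generated exactly from the
# model (Equations 5.1 and 5.3), the formula must return the true lambda.
alpha, lam_true, Vsat, V1 = 0.08, 0.02, 3.0, 1.0
V_lin = [1.0, 1.5, 2.0, 2.5]          # linear-region sample points
V_sat_pts = [3.5, 4.5, 6.0, 8.0]      # saturation-region sample points

def model_lin(V, lam):
    return alpha * V * (2 * Vsat - V) * (1 + lam * (V - V1))

def model_sat(V, lam):
    return alpha * Vsat ** 2 * (1 + lam * (V - V1))

I_lin = [model_lin(V, lam_true) for V in V_lin]
I_sat = [model_sat(V, lam_true) for V in V_sat_pts]

# Numerator: residuals at lambda = 0 times the lambda-derivative of the model;
# denominator: the summed squares of those lambda-derivatives.
num = sum((I - model_lin(V, 0.0)) * alpha * V * (2 * Vsat - V) * (V - V1)
          for V, I in zip(V_lin, I_lin))
num += sum((I - model_sat(V, 0.0)) * alpha * Vsat ** 2 * (V - V1)
           for V, I in zip(V_sat_pts, I_sat))
den = sum((alpha * V * (2 * Vsat - V) * (V - V1)) ** 2 for V in V_lin)
den += sum((alpha * Vsat ** 2 * (V - V1)) ** 2 for V in V_sat_pts)
lam_hat = num / den
```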

7. Derivation of Least Squares Fit Versus a Constant $α_j$ and a Varying $λ$

To find the global minimum least-squares fit, Equation 6.3 is minimized for each value of $V_{SD\text{ }SAT\text{ }j}$ by taking the derivative with respect to $λ$ and setting it equal to zero. The least squared error is then computed and tabulated versus index $j$. The index $j$ with the smallest squared error, together with its associated $λ$, is used as the solution to the problem.
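The scan described above can be sketched end to end in Python: for each candidate $V_{SD\text{ }SAT\text{ }j}$ taken from the sample grid, compute $α_j$ from Equation 6.2, the minimizing $λ_j$ in closed form, and the resulting $w_j$, then keep the $j$ with the smallest $w_j$. The sketch below runs on synthetic data with a known saturation point; all values are illustrative assumptions:

```python
# Scan candidate saturation points V_sat_j over the sample grid: for each j,
# alpha_j comes from Equation 6.2, lambda_j from setting dw/dlambda = 0, and
# the winner is the j with the smallest total squared error w_j.
alpha0, lam0, Vsat0 = 0.08, 0.02, 3.0
V = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.5, 6.0, 8.0]   # Vsat0 = V[4] is on the grid
V1 = V[0]

def model(Vx, a, lam, Vs):
    # Linear region for Vx <= Vs, saturation region beyond (Eqs. 5.1 and 5.3).
    if Vx <= Vs:
        return a * Vx * (2 * Vs - Vx) * (1 + lam * (Vx - V1))
    return a * Vs ** 2 * (1 + lam * (Vx - V1))

I = [model(Vx, alpha0, lam0, Vsat0) for Vx in V]

def fit_for_candidate(Vs):
    a = I[0] / (V1 * (2 * Vs - V1))                  # Equation 6.2
    # Model at lambda = 0; its lambda-derivative is base_i * (V_i - V1).
    base = [a * Vx * (2 * Vs - Vx) if Vx <= Vs else a * Vs ** 2 for Vx in V]
    num = sum((Ii - b) * b * (Vx - V1) for Ii, b, Vx in zip(I, base, V))
    den = sum((b * (Vx - V1)) ** 2 for b, Vx in zip(base, V))
    lam = num / den                                  # closed-form lambda_j
    w = sum((Ii - b * (1 + lam * (Vx - V1))) ** 2
            for Ii, b, Vx in zip(I, base, V))
    return w, a, lam

best_j = min(range(1, len(V)), key=lambda j: fit_for_candidate(V[j])[0])
w_best, a_best, lam_best = fit_for_candidate(V[best_j])
```

On this synthetic data the scan recovers the true saturation point and both parameters essentially exactly, since the data was generated from the model itself.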

The Shichman-Hodges parameters are then derived back from $V_{SD\text{ }SAT\text{ }fit}$, $α_{fit}$, and $λ_{fit}$, so that the plots can be compared with simulator plots using the same corresponding model variables. Also, $V_{th}$ is calculated from $V_{SD\text{ }SAT\text{ }fit}$ using the equation $V_{SD\text{ }SAT\text{ }fit}=V_{SG}-|V_{th}|$. For this fit, the curve being fitted has $V_{SG}=5\text{ Volts}$, and the expected range of $V_{th}$ is given in a table on the first page of the data sheet. I remember from my hand-minimization of the error that $V_{th}$ came within that range, which serves as a sanity check on the correctness of the parameters.

Plots and Tables are made:

  1. Showing the taken data points, and the curve fit.
  2. Showing the relative error at each data point.
  3. Showing the simulator piece-wise linear data and the model fit.
  4. Showing from the simulator piece-wise relative error to the fit.

8. Link to Computation and Results of Least Squares Minimization

9. Least Squares Errors and Their Sum Versus Variations of the $i_{right}$ Index

The actual computation code would be in a separate question, and a summary table with a link to that question would be placed here.

10. Computation of the Data Curves Using the Minimal Least-Squares Fit

From my perspective, this is a lot of work needed to resolve the question, with the headings copied into the answer when the answer is final. I am including them now to demonstrate my attempts so far.

Is there a better way?

11. Still Needing to Be Done in a Good Answer

The Shichman-Hodges equations model is widely available in circuit-simulation software (SPICE for example, and LTspice in particular). Not only should plots and tables be provided, but the approach for simulation should also be derived in the answer. I am still looking for the actual simulation-derived parameters, as well as implementation techniques using standard approaches for these parameters, before giving the answer an upvote.

There is 1 answer below.

The model with two lines $y_1 = a x + b$ and $y_2=c x+d$ can be handled by defining a function as

$$ y(x) = \min(y_1(x), y_2(x)) $$

or

$$ y(x,a,b,c,d) = \min(a x + b, c x + d) $$

A Mathematica script that solves this fitting problem follows.

data = {{1., 0.712612}, {1.12516, 0.775227}, {1.25814, 0.836912},{1.41561,0.896613}, 
        {1.58293, 0.960574}, {1.78104, 1.01346}, {1.99155, 1.03701}, {2.24081, 1.08575}, 
        {2.50566, 1.11098}, {2.81926, 1.14553}, {3.17211, 1.17215}, {3.54703, 1.18116}, 
        {3.99097, 1.18116}, {4.46268, 1.2086}, {5.02121, 1.19938}, {5.61469, 1.2086}, 
        {6.31741, 1.19938}, {7.0641, 1.21789}, {7.94821, 1.23668}, {8.88765, 1.22725}, {10., 1.24619}}

f[x_, a_, b_, c_, d_] := Min[a x + b, c x + d]
n = Length[data];
obj = Sum[Abs[f[data[[k, 1]], a, b, c, d] - data[[k, 2]]], {k, 1, n}];
sol = NMinimize[{obj, a data[[1, 1]] + b == data[[1, 2]]}, {a, b, c, d}, Method -> "DifferentialEvolution"]

f0 = f[x, a, b, c, d] /. sol[[2]]
interx = Solve[(a x + b == c x + d) /. sol[[2]], x][[1]];
{xi, yi} = {x, a x + b} /. sol[[2]] /. interx

(* {2.16602, 1.14111} *)

gr0 = ListPlot[data, PlotStyle -> Red];
gr1 = Plot[f0, {x, data[[1, 1]], data[[n, 1]]}, PlotRange -> All];
Show[gr0, gr1, PlotRange -> All]

(Plot: the data points in red with the fitted two-line model.)

Another model to fit is

$$ y(x,a,b,c,d) = e^{-a x}d + b x + c $$

The script, which is almost identical to the previous one, follows.

f[x_, a_, b_, c_, d_] := d Exp[-a x] + b x + c
n = Length[data];
obj = Sum[Abs[f[data[[k, 1]], a, b, c, d] - data[[k, 2]]], {k, 1, n}];
sol = NMinimize[{obj, f[data[[1, 1]], a, b, c, d] == data[[1, 2]]}, {a, b, c, d}, Method -> "DifferentialEvolution"]

f0 = f[x, a, b, c, d] /. sol[[2]]
gr0 = ListPlot[data, PlotStyle -> Red];
gr1 = Plot[f0, {x, data[[1, 1]], data[[n, 1]]}, PlotRange -> All];
Show[gr0, gr1]

(Plot: the data points in red with the fitted exponential-plus-line model.)

Both are nonlinear fitting problems and require optimization tools to solve properly.
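For readers without Mathematica, the min-of-two-lines idea can also be approximated in plain Python by enumerating the breakpoint over the data and fitting each side with ordinary least squares. This is a squared-error variant of the script's absolute-error objective, without the first-point constraint, so the fitted numbers will differ slightly from the Mathematica results; it is a sketch, not a port:

```python
# Least-squares variant of the two-line model: enumerate the split point,
# fit each side by OLS, and report the intersection of the two lines.
data = [(1.0, 0.712612), (1.12516, 0.775227), (1.25814, 0.836912),
        (1.41561, 0.896613), (1.58293, 0.960574), (1.78104, 1.01346),
        (1.99155, 1.03701), (2.24081, 1.08575), (2.50566, 1.11098),
        (2.81926, 1.14553), (3.17211, 1.17215), (3.54703, 1.18116),
        (3.99097, 1.18116), (4.46268, 1.2086), (5.02121, 1.19938),
        (5.61469, 1.2086), (6.31741, 1.19938), (7.0641, 1.21789),
        (7.94821, 1.23668), (8.88765, 1.22725), (10.0, 1.24619)]

def ols(pts):
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b                      # intercept, slope

def sse(pts, a, b):
    return sum((y - (a + b * x)) ** 2 for x, y in pts)

best = None
for k in range(2, len(data) - 2):                    # >= 2 points per side
    (a1, b1), (a2, b2) = ols(data[:k]), ols(data[k:])
    err = sse(data[:k], a1, b1) + sse(data[k:], a2, b2)
    if best is None or err < best[0]:
        best = (err, a1, b1, a2, b2)

err, a1, b1, a2, b2 = best
x_cross = (a2 - a1) / (b1 - b2)                      # intersection of the lines
```

The two-line fit should beat a single-line fit on this data, and the intersection lands in the same general region as the Mathematica result.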