Doubts about the Oslo Algorithm


I have a simple question about the Oslo algorithm, which is used to insert knots into the knot vector of a B-spline. Here:

  • alpha = entries of the refinement matrix
  • Tau = original knot vector
  • t = new knot vector with the added entries
  • d = degree index of the B-spline, running over $0, 1, \dots, p$ (the desired degree)
  • j = index into "t", running from $1$ to $m$ (where $m$ is the number of new control points)

[Image: the recurrence defining the Oslo algorithm]

The image describes the Oslo algorithm for $d>1$. However, in the second term the denominator can go to zero. The knot vectors in question are:

  • tau = [0 0 0 1 1 1]
  • t = [0 0 0 0.25 0.25 0.5 0.75 1 1 1]

From these we can see that the denominator will go to zero. How do I circumvent that in the algorithm?


BEST ANSWER

Let $B_1$ and $B_2$ be degree $d$ B-spline bases with knot vectors $\tau$ and $t$, respectively.

$\tau$ and $t$ are indexed from $1$, and the multiplicity of each knot is allowed to be larger than $d+1$.
(That is, $B_1$ and $B_2$ might actually not be bases because some elements could be zero.)

If each knot in $\tau$ exists in $t$ with the same or higher multiplicity, then $B_1=B_2\cdot M$.
The rows and columns of $M$ are indexed from $1$, and its entries are given by $M_{i,j}=\alpha^d_{i,j}$, where

$$\alpha_{i,j}^d = \begin{cases}
1-\delta_{\tau_j,\,\tau_{j+d+1}} & \text{if } \sum_{k=0}^{d+1} \delta_{t_{i+k},\,\tau_{j+k}} = d+2 \\[0.2cm]
0 & \text{if } 0 < \max\left\{ \sum_{k=0}^{d+1} \left( \delta_{t_{i+k},\,\tau_j} - \delta_{\tau_{j+k},\,\tau_j} \right),\ \sum_{k=0}^{d+1} \left( \delta_{t_{i+k},\,\tau_{j+d+1}} - \delta_{\tau_{j+k},\,\tau_{j+d+1}} \right) \right\} \\[0.2cm]
\begin{cases}
1 & d=0 \\[0.1cm]
\alpha_{i,\,j}^{d-1} \dfrac{t_{i+d} - \tau_j}{\tau_{j+d} - \tau_j} + \alpha_{i,\,j+1}^{d-1} \dfrac{\tau_{j+d+1} - t_{i+d}}{\tau_{j+d+1} - \tau_{j+1}} & t_i \neq t_{i+d} \\[0.2cm]
\alpha_{i+1,\,j}^{d-1} \dfrac{t_{i+1} - \tau_j}{\tau_{j+d} - \tau_j} + \alpha_{i+1,\,j+1}^{d-1} \dfrac{\tau_{j+d+1} - t_{i+1}}{\tau_{j+d+1} - \tau_{j+1}} & t_{i+1} \neq t_{i+d+1}
\end{cases} & \text{if } \tau_j \leq t_i < t_{i+d+1} \leq \tau_{j+d+1} \\[0.2cm]
0 & \text{otherwise}
\end{cases}$$

As written above, divisions by zero will occur. Whenever a denominator is zero, the corresponding term must be replaced by $0$.
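As an illustration, here is a minimal Python sketch of the recursion with that convention (a zero denominator zeroes out the whole term). It uses 0-based indexing rather than the 1-based indexing above, implements only the $d=0$ base case and the two recursive branches (not the two shortcut cases of the full formula), and has only been checked against the OP's knot vectors; `oslo_matrix` and `safe` are hypothetical helper names, not part of any standard library.

```python
def oslo_matrix(d, tau, t):
    """Refinement matrix M with M[i][j] = alpha^d_{i,j} (0-based indexing)."""

    def safe(num, den):
        # Convention from the answer: a term with a zero denominator is
        # replaced by 0.
        return num / den if den != 0 else 0.0

    def alpha(d, i, j):
        if d == 0:
            # Degree-0 base case: indicator that t_i lies in [tau_j, tau_{j+1}).
            return 1.0 if tau[j] <= t[i] < tau[j + 1] else 0.0
        if t[i] != t[i + d]:
            # First recursive branch (valid when t_i != t_{i+d}).
            x = t[i + d]
            return (safe(x - tau[j], tau[j + d] - tau[j]) * alpha(d - 1, i, j)
                    + safe(tau[j + d + 1] - x,
                           tau[j + d + 1] - tau[j + 1]) * alpha(d - 1, i, j + 1))
        # Second recursive branch (valid when t_{i+1} != t_{i+d+1});
        # note the shift i -> i+1. Degenerate cases with knot multiplicity
        # above d+1 (where neither branch applies) are not handled here.
        x = t[i + 1]
        return (safe(x - tau[j], tau[j + d] - tau[j]) * alpha(d - 1, i + 1, j)
                + safe(tau[j + d + 1] - x,
                       tau[j + d + 1] - tau[j + 1]) * alpha(d - 1, i + 1, j + 1))

    n = len(tau) - d - 1   # number of old basis functions
    m = len(t) - d - 1     # number of new basis functions
    return [[alpha(d, i, j) for j in range(n)] for i in range(m)]


# The OP's knot vectors, degree d = 2.
tau = [0, 0, 0, 1, 1, 1]
t = [0, 0, 0, 0.25, 0.25, 0.5, 0.75, 1, 1, 1]
M = oslo_matrix(2, tau, t)
for row in M:
    print(row)
```

For these knot vectors the result is a $7\times 3$ matrix, and each row sums to $1$, as expected from the partition of unity of the B-spline basis.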

Note that the formula in the OP only works when $t_i \neq t_{i+d}$, which is not always sufficient to compute the refinement matrix $M$.

If both $t_i \neq t_{i+d}$ and $t_{i+1} \neq t_{i+d+1}$, then both recursive formulas are equally valid. You should consistently choose the one that calls $\alpha_{i,j}^d$ with even $i$, so as to compute as few distinct $\alpha_{i,j}^d$ as possible. I say even because the index of the first row is odd, so on average there are more odd-indexed rows.

The first two cases are not needed to make the formula work, but they decrease the number of recursive calls. In particular, the recursive formulas will be reached only if $\alpha_{i,j}^d>0$.

The condition in the first case simply checks whether the $i$'th element of $B_2$ is exactly the same as the $j$'th element of $B_1$. If that is the case and the elements are non-zero, the entry must be $1$. If the elements are zero, the entry does not matter and is set to $0$ for the sake of sparsity.
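In code, that first-case condition amounts to comparing the $d+2$ knots that define the two elements (again with 0-based indexing; `direct_copy_entry` is a hypothetical helper name):

```python
def direct_copy_entry(tau, t, i, j, d):
    """First case of the formula: if the knots t_i..t_{i+d+1} coincide with
    tau_j..tau_{j+d+1}, the two elements are identical, so the entry is 1,
    unless the element is zero (knot multiplicity above d+1, i.e.
    tau_j == tau_{j+d+1}), in which case the entry is set to 0 for sparsity.
    Returns None when this case does not apply."""
    if t[i:i + d + 2] == tau[j:j + d + 2]:
        return 0.0 if tau[j] == tau[j + d + 1] else 1.0
    return None
```

For example, with $\tau = t = [0,0,0,1,1,1]$ and $d=2$, the entry $(1,1)$ of $M$ is $1$ by this case, while a knot vector such as $[0,0,0,0,1,1,1,1]$ (multiplicity $4 > d+1$) triggers the zero-element sub-case.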