The following problem arose in the context of string theory. I hope someone here might provide some guidance or a solution...
Our starting point is:
i) an integer lattice $L\subset\mathbb R^n$ generated by a basis $\{v_1,\ldots,v_n\}$, i.e. $$L=\langle v_1, v_2,\ldots, v_n \rangle=\{m_1v_1+\ldots+m_nv_n: m_i\in\mathbb Z\}$$
ii) a vector $h\in\mathbb R^n$ such that $Kh\in L$ for some positive integer $K$, with $K$ the smallest such integer. We also require that $h$ satisfy $\frac K2 h\cdot h\in\mathbb Z$, where $\cdot$ denotes the standard dot product in $\mathbb R^n$.
We then consider the lattice $$\Lambda=\bigcup_{k=0}^{K-1}\{l+kh:\quad l\in L\quad \mbox{and}\quad h\cdot(l+\frac k2h)\in\mathbb Z\}.$$
The question is to find a basis for $\Lambda$.
EDIT: Following Will Jagy's suggestion, I provide here an explicit $2\times 2$ example for clarity.
Let $L$ be the lattice generated by the basis vectors $$v_1= \left[ \begin{array}{c} 2 \\ 0 \end{array} \right],\quad v_2= \left[ \begin{array}{c} 0 \\ 1 \end{array} \right]$$ $$L=\{\left[ \begin{array}{c} 2m_1 \\ m_2 \end{array} \right]: m_1,m_2\in\mathbb Z\}.$$ Also let $K=2$ and $h=\left[ \begin{array}{c} 1 \\1 \end{array}\right]$. Then $2h\in L$ and $\frac K2 h\cdot h=2\in\mathbb Z$, so all the requirements are satisfied.
The question is to find a basis for the lattice
$$\Lambda=\bigcup_{k=0}^{1}\{l+kh:\quad l\in L\quad \mbox{and}\quad h\cdot(l+\frac k2h)\in\mathbb Z\}$$ For this example, we have that
\begin{align} \Lambda&=\{l\in L:\quad h\cdot l\in\mathbb Z\}\ \bigcup\ \{l+h:\quad l\in L\quad \mbox{and}\quad h\cdot(l+\frac 12h)\in\mathbb Z\}\\ &=L\ \bigcup\ (L+h)\\ &=\mathbb{Z}^2 \end{align} where the second equality holds because both constraints are vacuous here: for $l=\left[ \begin{array}{c} 2m_1 \\ m_2 \end{array} \right]\in L$ we have $h\cdot l=2m_1+m_2\in\mathbb Z$ and $h\cdot(l+\frac 12h)=2m_1+m_2+1\in\mathbb Z$. In the last step we can easily see that $L\cup(L+h)=\mathbb{Z}^2$ "by inspection", and therefore a basis is $e_1= \left[ \begin{array}{c} 1 \\ 0 \end{array} \right],\ e_2= \left[ \begin{array}{c} 0 \\ 1 \end{array} \right]$, but how do we do this step in the most general case?
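For concreteness, the "by inspection" claim can also be checked by brute force. Here is a quick Python sketch (the coefficient window $-5\le m_i\le 5$ and the test window are arbitrary choices of mine) that enumerates the points of $\Lambda$ for this example and confirms that every integer point in a central window appears:

```python
from fractions import Fraction
from itertools import product

# Data from the 2x2 example: L is generated by v1 = (2, 0),
# v2 = (0, 1); K = 2; h = (1, 1).
K = 2
h = (Fraction(1), Fraction(1))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Enumerate Lambda = union over k of
#   { l + k*h : l in L, h . (l + (k/2) h) in Z }
# for lattice coefficients m1, m2 in a finite window.
Lambda = set()
for m1, m2 in product(range(-5, 6), repeat=2):
    l = (Fraction(2 * m1), Fraction(m2))
    for k in range(K):
        constraint = dot(h, tuple(li + Fraction(k, 2) * hi
                                  for li, hi in zip(l, h)))
        if constraint.denominator == 1:  # the dot product is an integer
            point = tuple(li + k * hi for li, hi in zip(l, h))
            Lambda.add(tuple(int(x) for x in point))

# Every integer point in a central window shows up, consistent
# with Lambda = Z^2.
window = {(a, b) for a in range(-3, 4) for b in range(-3, 4)}
print(window <= Lambda)  # -> True
```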
UPDATE: The Hermite normal form (HNF) might be the right tool here, but the details still escape me. For the example above, we start with the basis (in column notation) $$\left( \begin{array}{cc} 2 &0\\ 0 &1\\ \end{array} \right),$$ we then adjoin $h=\left[ \begin{array}{c} 1 \\1 \end{array}\right]$ as an extra column to get $$\left( \begin{array}{ccc} 2 &0& 1\\ 0 &1& 1\\ \end{array} \right)$$ and the HNF of this matrix is $$\left( \begin{array}{ccc} 1 &0 &0\\ 0 &1 &0\\ \end{array} \right),$$ which gives the correct answer. However, I would appreciate it if someone could justify why this procedure works, given the "weird" product constraint in the definition of $\Lambda$.
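The HNF step described above can be sketched in pure Python via elementary integer column operations (a minimal sketch, not a production implementation; the function name `column_hnf` is mine, and the routine assumes the input matrix has full row rank):

```python
def column_hnf(M):
    """Column-style Hermite normal form of an integer matrix M
    (given as a list of rows), computed with elementary integer
    column operations. Sketch only; assumes full row rank."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    for r in range(rows):
        p = r  # pivot column for row r
        # Euclidean column operations: clear row r to the right of p.
        for c in range(p + 1, cols):
            while M[r][c] != 0:
                if M[r][p] == 0 or abs(M[r][c]) < abs(M[r][p]):
                    # swap columns p and c so the pivot is smaller
                    for i in range(rows):
                        M[i][p], M[i][c] = M[i][c], M[i][p]
                if M[r][c] == 0:
                    break
                q = M[r][c] // M[r][p]
                for i in range(rows):
                    M[i][c] -= q * M[i][p]
        # make the pivot positive
        if M[r][p] < 0:
            for i in range(rows):
                M[i][p] = -M[i][p]
        # reduce entries of row r to the left of the pivot
        for c in range(p):
            q = M[r][c] // M[r][p]
            for i in range(rows):
                M[i][c] -= q * M[i][p]
    return M

# The example: basis columns v1, v2 with h adjoined.
B = [[2, 0, 1],
     [0, 1, 1]]
print(column_hnf(B))  # -> [[1, 0, 0], [0, 1, 0]]
```

The nonzero columns of the result, $e_1$ and $e_2$, are exactly the basis found by inspection. (A library alternative exists in SymPy's `hermite_normal_form`, though its conventions may differ.)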