Introduction
Inspired by the work of Olivier Oloa [1] and by a question of Vladimir Reshetnikov in a comment, I succeeded in calculating the closed form of the sum
$$s_m = \sum _{k=1}^{\infty } \frac{H_k^m-(\log (k)+\gamma )^m}{k}\tag{1}$$
for $m=3$.
The cases known (to me) are
$$s_1 = -\gamma _1+\frac{\pi ^2}{12}-\frac{\gamma ^2}{2}= 0.728694...\tag{2a}$$
$$s_2 = -2 \gamma \gamma _1-\gamma _2+\frac{5 \zeta (3)}{3}-\frac{2 \gamma ^3}{3}= 1.96897 ...\tag{2b}$$
$$s_3 =-3 \gamma ^2 \gamma _1-3 \gamma \gamma _2-\gamma _3+\frac{43 \pi ^4}{720}-\frac{3 \gamma ^4}{4}=5.82174 ...\tag{2c}$$
Here $\gamma$ is the Euler–Mascheroni constant, and $\gamma_k$ is the Stieltjes constant of order $k$.
Notice that $s_2$ was calculated in [1]. As a check of my method, I calculated all three cases.
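As a quick numerical sanity check, one can compare (2a)–(2c) with plain partial sums of (1), e.g. with Python/mpmath (a sketch of my own, not part of the derivation; any CAS that knows the Stieltjes constants would do). Since the terms of (1) decay only like $\log^{m-1}(k)/k^2$, a plain partial sum gives only a few correct digits.

```python
# Sanity check of (2a)-(2c): closed forms vs. plain partial sums of (1).
# stieltjes(n) is the n-th Stieltjes constant, euler is gamma.
from mpmath import mp, mpf, log, pi, zeta, euler, stieltjes

mp.dps = 25

s_closed = {
    1: -stieltjes(1) + pi**2/12 - euler**2/2,
    2: -2*euler*stieltjes(1) - stieltjes(2) + 5*zeta(3)/3 - 2*euler**3/3,
    3: (-3*euler**2*stieltjes(1) - 3*euler*stieltjes(2) - stieltjes(3)
        + 43*pi**4/720 - 3*euler**4/4),
}

def partial_sum(m, N=50000):
    # direct partial sum of (1); the harmonic number H_k is updated incrementally
    s, H = mpf(0), mpf(0)
    for k in range(1, N + 1):
        H += mpf(1)/k
        s += (H**m - (log(k) + euler)**m)/k
    return s

for m in (1, 2, 3):
    print(m, s_closed[m], partial_sum(m))
```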
Inspecting the available cases $m=1,2,3$, I tentatively propose here a general formula for the closed form of $s_m$:
$$ s_{m}= -m \sum _{j=1}^{m-1} \gamma ^j \gamma _{m-j}+a_m \zeta (m+1)-\gamma _m-\frac{m }{m+1}\gamma ^{m+1}\tag{3}$$
Here the coefficients are
$$a_1=\frac{1}{2},\; a_2=\frac{5}{3},\; a_3=\frac{43}{8}\tag{4}$$
(for $m=3$, $a_3\,\zeta (4)=\frac{43}{8}\cdot \frac{\pi ^4}{90}=\frac{43 \pi ^4}{720}$, in agreement with (2c)).
With only three terms of $a_k$, OEIS [2] of course returns too many matches to identify the sequence. What's more, I don't know whether the true sequence is contained in OEIS at all.
Unfortunately, I could not extend my method to $m=4$.
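As a further consistency check, the conjectured formula (3) with the coefficients (4) does reproduce the numerical values in (2a)–(2c); a minimal mpmath sketch (again my own addition):

```python
# Evaluate the conjectured formula (3) with the coefficients (4) for m = 1, 2, 3
# and compare with the values stated in (2a)-(2c).
from mpmath import mp, mpf, zeta, euler, stieltjes

mp.dps = 25
a = {1: mpf(1)/2, 2: mpf(5)/3, 3: mpf(43)/8}  # coefficients from (4)

def s_conjectured(m):
    cross = sum(euler**j * stieltjes(m - j) for j in range(1, m))
    return -m*cross + a[m]*zeta(m + 1) - stieltjes(m) - mpf(m)/(m + 1)*euler**(m + 1)

for m in (1, 2, 3):
    print(m, s_conjectured(m))  # 0.728694..., 1.96897..., 5.82174...
```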
Questions
Prove the formula (2c) for $s_3$.
Calculate $s_4$, and use it to verify and, if possible, complete the general formula (3).
References
[1] A closed form of the series $\sum_{n=1}^{\infty} \frac{H_n^2-(\gamma + \ln n)^2}{n}$
[2] The On-Line Encyclopedia of Integer Sequences (OEIS), https://oeis.org
Before showing my solution of the specific problem, I would like to make some remarks on useful methods.
Methods
To begin with, I describe briefly some methods to solve the following basic problem:
Given a sequence of numbers $a_k, k=1,2,3,...$ we wish to calculate the infinite sum $s=\sum_{k=1}^\infty a_k$.
To my knowledge there are three important methods for tackling this problem, the first two of which have been employed frequently on this site. The third method, which is the one I have used here, I have not yet seen.
The first method is perhaps the most natural one, as it employs just the definition of an infinite sum.
Defining the partial sum as $s_n = \sum_{k=1}^n a_k$ we have
$$s = \lim_{n\to \infty } \, s_n$$
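As a simple illustration (a toy example, not related to the sum (1)), take $a_k=\frac{1}{k(k+1)}$. The partial sums telescope,
$$s_n=\sum _{k=1}^n\left(\frac{1}{k}-\frac{1}{k+1}\right)=1-\frac{1}{n+1},$$
so that $s=\lim_{n\to \infty } s_n=1$.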
The second method relies on an integral representation of the summand. If the summand can be written as
$$a_k = \int_{0}^1 w(x,k) \, dx$$
for some function $w(x,k)$, then, interchanging sum and integral, we get
$$s = \int_{0}^1 W(x) \, dx$$
where
$$W(x) = \sum_{k=1}^\infty w(x,k)$$
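For the same toy example $a_k=\frac{1}{k(k+1)}$ one can take $w(x,k)=x^{k-1}(1-x)$, since $\int_0^1 x^{k-1}(1-x)\,dx=\frac{1}{k}-\frac{1}{k+1}$. Then
$$W(x)=\sum _{k=1}^{\infty } x^{k-1}(1-x)=1 \quad (0\le x<1),$$
and hence $s=\int_0^1 1\,dx=1$, as before. For harmonic sums, the standard representation $H_k=\int_0^1\frac{1-x^k}{1-x}\,dx$ plays the same role.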
In the third method we form a function $g(x)$ by multiplying the $a_k$ by $x^k$ ("wrapping") and summing up:
$$g(x) =\sum_{k=1}^\infty x^k a_k$$
This hopefully results in a known function, called the generating function (g.f.) of the $a_k$.
Then we have
$$s = \lim_{x\to 1 } \, g(x)$$
We also obtain immediately an expression for the alternating sum defined as
$$s_a = \sum_{k=1}^\infty (-1)^k a_k$$
by
$$s_a = \lim_{x\to -1 } \, g(x)$$
Interchanging limits must be justified, of course. But here we skip this labour as we are just looking for a heuristic tool to find a closed expression. Once we have it, we can verify its validity numerically.
More specifically, the generating function method will be applied to cases where $a_k = u_k - v_k$ and where the sums of $u_k$ and $v_k$, taken separately, may be divergent.
Despite this separate divergence, in many cases the generating functions exist separately (the factor $x^k$ "forces" convergence), and we have
$$g(x) = g_u(x) - g_v(x)$$
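To illustrate this with the toy example once more, write $a_k=\frac{1}{k(k+1)}=u_k-v_k$ with $u_k=\frac{1}{k}$ and $v_k=\frac{1}{k+1}$. Both $\sum u_k$ and $\sum v_k$ diverge, but for $|x|<1$ their generating functions exist:
$$g_u(x)=\sum _{k=1}^{\infty } \frac{x^k}{k}=-\log (1-x),\qquad g_v(x)=\sum _{k=1}^{\infty } \frac{x^k}{k+1}=-\frac{\log (1-x)}{x}-1,$$
so that
$$g(x)=g_u(x)-g_v(x)=1+\frac{1-x}{x}\log (1-x).$$
Letting $x\to 1$ recovers $s=1$, and $x\to -1$ gives the alternating sum $s_a=1-2\log 2$.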