Jekyll, 2021-05-22T05:19:43+00:00, https://mjmorse.com/atom.xml. Matt Morse. Personal website and blog.

Some intuition behind fundamental solutions and Green’s functions (2021-04-16) https://mjmorse.com/blog/greens-function-intuition

<h1 id="summary">Summary</h1> <p>Green’s functions and fundamental solutions are useful tools for solving partial differential equations (PDEs), particularly linear, constant-coefficient ones. In math courses, they’re used mostly as an analytic tool, but they also enable some cool numerical techniques for physical simulations. In these courses (and most textbooks), Green’s functions are usually just defined without much intuition, then used to solve problems and prove theorems. This is fairly uninspiring, but I recently found an <a href="https://math.stackexchange.com/a/1738322">answer on Math StackExchange</a> that more clearly motivates the definition.</p> <h2 id="first-some-linear-algebra">First, some linear algebra</h2> <p>Before even discussing PDEs, let’s discuss a standard method of solving a system of linear equations, \begin{equation} A\mathbf{x} = \mathbf{b} \end{equation} where $$A$$ is an $$m \times n$$ matrix with linearly independent rows and $$\mathbf{x}$$ and $$\mathbf{b}$$ are $$n$$- and $$m$$-dimensional vectors, respectively.
We can take the standard basis vectors $$\mathbf{e}_i$$ in $$\mathbb{R}^m$$ (real $$m$$-vectors whose $$i$$th component is 1 and whose other components are 0) and rewrite $$\mathbf{b}$$ as: \begin{equation} \mathbf{b}=b_1\mathbf{e}_1 + b_2\mathbf{e}_2 + \dots +b_m\mathbf{e}_m, \end{equation} where $$b_i$$ is the $$i$$th component of $$\mathbf{b}$$. Since our equation is linear, we know <a href="https://en.wikipedia.org/wiki/Inverse_element#Matrices">from linear algebra</a> that if we can solve the $$m$$ linear systems $$A \mathbf{x}^{(i)} = \mathbf{e}_i$$ for $$\mathbf{x}^{(i)}$$, then we know the solution for all possible $$\mathbf{b}$$’s, \begin{equation} \mathbf{x} = b_1 \mathbf{x}^{(1)} + b_2 \mathbf{x}^{(2)} + \dots + b_m \mathbf{x}^{(m)}. \end{equation} Another way to say this is that each $$\mathbf{x}^{(i)}$$ is a column of the right inverse of $$A$$. Hopefully, this is not too surprising.</p> <h2 id="to-infinity-and-beyond">To infinity and beyond</h2> <p>Let’s write down the PDE that we want to solve: \begin{equation} Lu(\mathbf{x}) = f(\mathbf{x}),\quad \mathrm{for}\, \mathbf{x} \in \mathbb{R}^n \end{equation} $$L$$ is a linear, constant-coefficient, second-order differential operator (such as $$\frac{d^2}{dx^2}$$, $$\Delta$$, or something scarier), while $$u(\mathbf{x})$$ and $$f(\mathbf{x})$$ are functions. This looks suspiciously like our linear system above, with one main difference: we are solving for a function in an infinite-dimensional space (the space of $$C^2$$ functions on $$\mathbb{R}^n$$) instead of a vector in a finite-dimensional space. This has a couple of consequences:</p> <ol> <li>The inner product is rather different.
While $$\langle \mathbf{a},\mathbf{b} \rangle = \mathbf{a} \cdot \mathbf{b}$$ for finite vectors, for functions $$f$$ and $$g$$, the inner product between them is \begin{equation} \langle f,g \rangle = \int_{\mathbb{R}^n} f(\mathbf{x}) g(\mathbf{x}) d\mathbf{x} \end{equation} This means that we need to deal with integrals rather than finite sums.</li> <li>We don’t have a finite set of basis vectors any more, so how do we decompose functions in an analogous fashion? While we could choose an infinite basis, there is a more convenient alternative called the <em>Dirac delta function</em> $$\delta(\mathbf{x})$$, which is defined heuristically as $$\delta(\mathbf{x}) = \infty$$ if $$\mathbf{x}=0$$ and zero otherwise, while satisfying $$\int_{\mathbb{R}^n} \delta(\mathbf{x})\, d\mathbf{x} = 1.$$ Technically, the delta function is <em>not</em> a function; it can be properly defined as a <a href="https://en.wikipedia.org/wiki/Dirac_delta_function#As_a_measure">measure</a> or <a href="https://en.wikipedia.org/wiki/Dirac_delta_function#As_a_distribution">distribution</a>. But $$\delta(\mathbf{x})$$ has the following useful property: \begin{equation} f(\mathbf{x}) = \int_{\mathbb{R}^n} f(\mathbf{y})\delta(\mathbf{y}-\mathbf{x}) d\mathbf{y}, \end{equation} which is an inner product between $$f$$ and a delta function shifted by $$\mathbf{x}$$.</li> </ol> <p>This is nice because we have a compact representation for arbitrary functions. This is not so nice because now we have to compute infinite integrals involving $$\delta(\mathbf{x})$$.
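To make the sifting property feel less mysterious, here is a small numerical sketch (my own illustration, not from the post): approximate $$\delta$$ by a narrow Gaussian and watch the inner product converge to $$f(x_0)$$ as the width shrinks.

```python
import numpy as np

# Approximate delta(x) by a Gaussian of width eps; as eps -> 0, the
# "sifting" inner product <f, delta(. - x0)> converges to f(x0).
def delta_approx(x, eps):
    return np.exp(-x**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

def sift(f, x0, eps):
    # Truncate the infinite integral to [-10, 10]; the integrand decays fast.
    y = np.linspace(-10.0, 10.0, 200001)
    dy = y[1] - y[0]
    return np.sum(f(y) * delta_approx(y - x0, eps)) * dy

# The approximation improves as the Gaussian narrows.
for eps in [0.5, 0.1, 0.02]:
    print(eps, sift(np.cos, 0.3, eps), np.cos(0.3))
```

For the narrowest width, the recovered value agrees with $$\cos(0.3)$$ to within about $$10^{-3}$$; this is exactly the inner-product decomposition above, with a Gaussian standing in for the (distributional) delta.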
It seems that we made things more complicated, but we can use this form to decompose our PDE into a set of problems in terms of $$\delta(\mathbf{x})$$.</p> <h2 id="comparing-the-two-setups">Comparing the two setups</h2> <p>Now let’s compare the forms of the representation of a discrete vector in terms of a basis set \begin{equation} \mathbf{b}=b_1\mathbf{e}_1 + b_2\mathbf{e}_2 + \dots +b_m\mathbf{e}_m = \sum_{i=1}^m b_i \mathbf{e}_i, \end{equation} and the representation of a function in terms of a delta function \begin{equation} f(\mathbf{x}) = \int_{\mathbb{R}^n} f(\mathbf{y})\delta(\mathbf{y}-\mathbf{x}) d\mathbf{y}. \end{equation} These are looking pretty similar if you squint. The continuous analogue of the summation is the integral. For each value of $$i$$ in the discrete case and each $$\mathbf{y}$$ in the continuous one, $$f(\mathbf{y})$$ is acting like $$b_i$$, since both values are determined by $$f$$ and $$\mathbf{b}$$, respectively. Meanwhile, $$\delta(\mathbf{y}-\mathbf{x})$$ is acting like $$\mathbf{e}_i$$, since both are independent of $$f$$ and $$\mathbf{b}$$ respectively.</p> <p>For the linear system, we solve the $$m$$ linear systems $$A\mathbf{x}^{(i)} = \mathbf{e}_i$$ for a set of vectors $$\mathbf{x}^{(i)}$$. 
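This discrete recipe is easy to try out directly. The sketch below (my own example, using NumPy; a square, invertible $$A$$ is assumed for simplicity, so the right inverse is just $$A^{-1}$$) solves the basis systems and superposes the results:

```python
import numpy as np

rng = np.random.default_rng(0)
# A well-conditioned (almost surely invertible) square matrix, for simplicity.
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
b = rng.standard_normal(3)

# Solve the m systems A x^(i) = e_i; each x^(i) is a column of A^{-1}.
basis_solutions = [np.linalg.solve(A, e) for e in np.eye(3)]

# Superposition: x = sum_i b_i x^(i) solves A x = b for any b.
x = sum(b_i * x_i for b_i, x_i in zip(b, basis_solutions))
print(np.allclose(A @ x, b))  # prints True
```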
In the PDE setting, we want to do something similar: we want to solve the PDE for a set of functions $$F_\mathbf{y}(\mathbf{x})$$, parametrized by $$\mathbf{y}$$, with the right hand side equal to $$\delta(\mathbf{y}-\mathbf{x})$$: \begin{equation} LF_\mathbf{y}(\mathbf{x}) = \delta(\mathbf{y}-\mathbf{x}),\quad \text{for } \mathbf{x},\mathbf{y} \in \mathbb{R}^n \end{equation}</p> <p>If we can actually compute $$F_\mathbf{y}(\mathbf{x})$$, we can similarly reconstruct $$u$$ with an inner product: \begin{equation} u(\mathbf{x}) = \int_{\mathbb{R}^n} F_\mathbf{y}(\mathbf{x})f(\mathbf{y})d\mathbf{y} \end{equation} In other words, we can represent the solution $$u(\mathbf{x})$$ as an inner product between $$F_\mathbf{y}(\mathbf{x})$$ and $$f(\mathbf{y})$$ as functions of $$\mathbf{y}$$. We can once again compare this formula to the discrete case: \begin{equation} \mathbf{x} = b_1 \mathbf{x}^{(1)} + b_2 \mathbf{x}^{(2)} + \dots + b_m \mathbf{x}^{(m)}. \end{equation} Again $$f(\mathbf{y})$$ serves the role of $$b_i$$ in the continuous setting, while $$F_\mathbf{y}(\mathbf{x})$$ is acting like $$\mathbf{x}^{(i)}$$.</p> <p>To see why this representation of $$u$$ works, we can start with our set of equations $$LF_\mathbf{y}(\mathbf{x}) = \delta(\mathbf{y}-\mathbf{x})$$, multiply both sides by $$f(\mathbf{y})$$ and integrate with respect to $$\mathbf{y}$$: \begin{equation} \int_{\mathbb{R}^n} L\left(F_\mathbf{y}(\mathbf{x})\right)f(\mathbf{y}) d\mathbf{y} = \int_{\mathbb{R}^n} \delta(\mathbf{y}-\mathbf{x}) f(\mathbf{y}) d\mathbf{y}.
\end{equation} We can <a href="https://en.wikipedia.org/wiki/Leibniz_integral_rule">bring the integral inside of the differential operator</a> $$L$$ because the integration variable $$\mathbf{y}$$ is independent of $$\mathbf{x}$$: \begin{equation} L \left[\int_{\mathbb{R}^n} F_\mathbf{y}(\mathbf{x})f(\mathbf{y}) d\mathbf{y} \right] = \int_{\mathbb{R}^n} \delta(\mathbf{y}-\mathbf{x}) f(\mathbf{y}) d\mathbf{y}, \end{equation} and since the right hand side is equal to our definition of $$f(\mathbf{x})$$ above, we’re left with \begin{equation} L \left[\int_{\mathbb{R}^n} F_\mathbf{y}(\mathbf{x})f(\mathbf{y}) d\mathbf{y} \right] = f(\mathbf{x}). \end{equation} Comparing this equation with $$Lu=f$$, we see that $$u(\mathbf{x})$$ must equal the integral in the brackets.</p> <p>This function $$F_\mathbf{y}(\mathbf{x})$$ is called the <em>fundamental solution</em> of the differential operator $$L$$. People usually write $$F_\mathbf{y}(\mathbf{x})$$ as $$F(\mathbf{x},\mathbf{y})$$ or, confusingly, $$G(\mathbf{x},\mathbf{y})$$; we’re just solidifying the linear algebra analogy with the $$\mathbf{y}$$ subscript here.</p> <h2 id="what-about-boundary-conditions">What about boundary conditions?</h2> <p>Most PDEs have boundary conditions of some sort, so how does the fundamental solution fit into this setting? Our PDE looks like this: \begin{equation} Lu = f,\quad \text{for } \mathbf{x} \in \Omega \subset \mathbb{R}^n \end{equation} with either \begin{equation} u = g_D,\quad \text{for } \mathbf{x} \in \partial\Omega = \Gamma \end{equation} for Dirichlet problems or \begin{equation} \nabla_\mathbf{x} u(\mathbf{x}) \cdot n(\mathbf{x}) = g_N(\mathbf{x}),\quad \text{for } \mathbf{x} \in \partial\Omega = \Gamma \end{equation} for Neumann problems, where $$\Omega$$ is a closed bounded domain with a $$C^2$$ boundary $$\partial\Omega= \Gamma$$ (details in <a href="https://www.amazon.com/Integral-Equations-Applied-Mathematical-Sciences/dp/3642971482">Kress</a>).
Note that $$\nabla_\mathbf{x}$$ is the gradient operator with respect to the $$\mathbf{x}$$ variable. For the sake of concreteness, in this section, we’ll choose $$L = -\Delta$$, i.e., we’re solving a Poisson problem (or a Laplace problem if $$f=0$$). <a href="https://en.wikipedia.org/wiki/Robin_boundary_condition">Mixed boundary conditions</a> are also possible but I can’t remember the correct reference for the derivation.</p> <p>Without getting into too much symbol pushing, since $$u$$ satisfies $$-\Delta u = f$$, Green’s <a href="https://en.wikipedia.org/wiki/Green%27s_identities#On_manifolds">second</a> and <a href="https://en.wikipedia.org/wiki/Green%27s_identities#Green's_third_identity">third</a> identities tell us that we can write down the solution $$u$$ as a sum of different integrals.</p> <p>\begin{equation} u(\mathbf{x}) = \int_\Omega G_\mathbf{y}(\mathbf{x})f(\mathbf{y})d\mathbf{y} + \int_{\Gamma} G_\mathbf{y}(\mathbf{x}) \left(\nabla_\mathbf{y} u(\mathbf{y})\cdot n(\mathbf{y})\right) d\Gamma_\mathbf{y} - \int_{\Gamma} \left(\nabla_\mathbf{y} G_\mathbf{y}(\mathbf{x})\cdot n(\mathbf{y})\right) u(\mathbf{y}) d\Gamma_\mathbf{y}, \end{equation}</p> <p>The details of the formula are on the first page of <a href="https://web.stanford.edu/class/math220b/handouts/greensfcns.pdf">these lecture notes</a> (apply the <a href="https://en.wikipedia.org/wiki/Green%27s_identities#On_manifolds">second identity</a> twice, interchanging the roles of $$u$$ and $$G$$, then plug in $$-\Delta u=f$$ in $$\Omega$$ and the boundary conditions for the integrals over $$\Gamma$$).
The second integral is where the Neumann boundary condition contributes to the solution and the third integral incorporates the Dirichlet information (by plugging in values of $$\left(\nabla_\mathbf{y} u(\mathbf{y})\cdot n(\mathbf{y})\right)$$ and $$u(\mathbf{y})$$ on the boundary, respectively).</p> <p>The first integral in this formula gives us hope, since it is identical to our fundamental solution representation of the PDE without boundary conditions: maybe we can just use $$F_\mathbf{y}(\mathbf{x})$$? Unfortunately, $$F_\mathbf{y}(\mathbf{x})$$ was derived without a boundary condition, so we need to find a new function.</p> <h4 id="what-is-g_mathbfymathbfx">What is $$G_\mathbf{y}(\mathbf{x})$$?</h4> <p>We need to solve a new family of PDEs for $$G_\mathbf{y}(\mathbf{x})$$, but the PDE looks suspiciously familiar: \begin{equation} -\Delta G_\mathbf{y}(\mathbf{x}) = \delta(\mathbf{y}-\mathbf{x}),\quad \text{for } \mathbf{x},\mathbf{y} \in \Omega \subset \mathbb{R}^n \end{equation} For the boundary conditions, we use \begin{equation} G_\mathbf{y}(\mathbf{x}) = 0,\quad \text{for } \mathbf{y} \in \partial\Omega, \end{equation} for Dirichlet problems, and for Neumann problems, we use \begin{equation} \nabla G_\mathbf{y}(\mathbf{x}) \cdot n(\mathbf{y}) = 0,\quad \text{for } \mathbf{y} \in \partial\Omega. \end{equation}</p> <p>Conveniently, if we plug the corresponding boundary condition for $$G_\mathbf{y}$$ into the integrals above, we’re left with one of the two integrals over the boundary, containing the boundary condition for $$u$$ that we actually have. The Dirichlet condition $$G_\mathbf{y}(\mathbf{x}) =0$$ kills the integral over $$\nabla_\mathbf{y} u(\mathbf{y})\cdot n(\mathbf{y})$$ (which is the Neumann data) and vice versa for the Neumann case. So that’s nice!</p> <p>The question still remains: what is $$G_\mathbf{y}(\mathbf{x})$$?
It turns out that the right thing to do <a href="https://web.stanford.edu/class/math220b/handouts/greensfcns.pdf">(see page 3)</a> is to choose \begin{equation} G_\mathbf{y}(\mathbf{x}) = F_\mathbf{y}(\mathbf{x}) - C_\mathbf{y}(\mathbf{x}), \end{equation} where $$C_\mathbf{y}(\mathbf{x})$$ satisfies yet another PDE: \begin{equation} -\Delta_\mathbf{y} C_\mathbf{y}(\mathbf{x}) = 0,\quad \text{for } \mathbf{x},\mathbf{y} \in \Omega \subset \mathbb{R}^n \end{equation} with \begin{equation} C_\mathbf{y}(\mathbf{x}) = F_\mathbf{y}(\mathbf{x}),\quad \text{for } \mathbf{y} \in \Gamma \end{equation} for Dirichlet problems or \begin{equation} \nabla_\mathbf{y} C_\mathbf{y}(\mathbf{x}) \cdot n(\mathbf{y}) = \nabla_\mathbf{y} F_\mathbf{y}(\mathbf{x}) \cdot n(\mathbf{y}),\quad \text{for } \mathbf{y} \in \Gamma \end{equation} for Neumann ones. This looks more confusing, but it really just means: <strong>let’s use the fundamental solution for the free space PDE and subtract off a correction term to make sure it has the value we want on the boundary.</strong> Since the PDEs involved are linear, all of this works out.</p> <p>The function $$G_\mathbf{y}(\mathbf{x})$$ is called the <em>Green’s function</em> of the differential operator $$L$$. It’s usually written as $$G(\mathbf{x},\mathbf{y})$$, and it’s also sometimes referred to as the <em>kernel</em> of the PDE. Although we specialized the argument above for a Laplace problem, the same reasoning holds for Stokes, Helmholtz, elasticity, and many other PDEs.</p> <h4 id="ok-but-what-actually-are-these-functions-concretely">Ok, but what actually <em>are</em> these functions concretely?</h4> <p>Unfortunately, you need to work out $$F_\mathbf{y}(\mathbf{x})$$ and $$G_\mathbf{y}(\mathbf{x})$$ for each PDE. For many PDEs, the Green’s functions are <a href="https://en.wikipedia.org/wiki/Green%27s_function#Table_of_Green's_functions">usually worked out already</a>.
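As a concrete sanity check, the free-space fundamental solution of $$L = -\frac{d^2}{dx^2}$$ in one dimension is $$F_y(x) = -\frac{1}{2}|x - y|$$ (this 1D example is my addition, from the standard tables, not derived in the post). The sketch below verifies numerically that $$u(x) = \int F_y(x) f(y)\, dy$$ satisfies $$-u'' = f$$:

```python
import numpy as np

# Free-space fundamental solution of -d^2/dx^2 in 1D: F(x, y) = -|x - y| / 2,
# so that -F'' = delta(x - y) in the distributional sense.
def fundamental_solution(x, y):
    return -0.5 * np.abs(x - y)

def f(y):
    return np.exp(-y**2)  # a smooth, rapidly decaying source term

h = 1e-2
grid = np.arange(-8.0, 8.0 + h, h)  # truncation of the infinite domain

def u(x):
    # quadrature approximation of u(x) = \int F(x, y) f(y) dy
    return np.sum(fundamental_solution(x, grid) * f(grid)) * h

def pde_residual(x):
    # -u''(x) - f(x), with u'' from a central second difference
    u_xx = (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2
    return -u_xx - f(x)

print(pde_residual(0.0), pde_residual(0.7))  # both tiny
```

The test points sit on the quadrature grid, so the kink of $$|x - y|$$ lands on a node and the finite-difference check comes out essentially exact.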
In a future post, I’ll list a few with some code snippets.</p> <p>A short disclaimer: technically, everything in this post should have been discussed in the context of distributions, distributional derivatives, etc., since $$\delta(\mathbf{x})$$ isn’t a function. This would be more correct at the expense of needless complexity.</p>

Green's functions are pretty useful, but can seem a bit confusing for newcomers since they seem like an arbitrary definition. Here's some intuition.

Blossoms and Bézier curves (2020-12-20) https://mjmorse.com/blog/blossoms

<p>A degree-$$n$$ Bézier curve is given by the following formula \begin{equation} C(t) = \sum_{i=0}^n a_i B_i^n(t). \end{equation} The coefficients $$a_i$$ are called <em>control points</em>; they are vectors whose dimension matches that of the curve (e.g., points in the plane for a planar curve). The function $$B_i^n(t) = \binom{n}{i}t^i(1-t)^{n-i}$$ is known as the <em>$$i$$th Bernstein basis function of degree $$n$$</em> (with $$\binom{n}{i} = \frac{n!}{i!(n-i)!}$$). As with most polynomials, you can evaluate $$C(t)$$ at a given value of $$t$$ by evaluating the basis functions at $$t$$ and using the above formula.</p> <p>But Bézier curves have a cool property: <em>you can compute $$C(t)$$ directly from the control points without evaluating the basis functions</em>.
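For reference, the direct Bernstein evaluation is only a few lines (my own sketch, with scalar control points for simplicity; real curves use vector-valued $$a_i$$):

```python
from math import comb

# Evaluate C(t) = sum_i a_i * B_i^n(t) straight from the Bernstein form,
# with B_i^n(t) = comb(n, i) * t^i * (1 - t)^(n - i).
def bezier_bernstein(control_points, t):
    n = len(control_points) - 1
    return sum(a * comb(n, i) * t**i * (1 - t)**(n - i)
               for i, a in enumerate(control_points))

# A scalar cubic example: a "bump" that starts and ends at 0.
a = [0.0, 1.0, 1.0, 0.0]
print(bezier_bernstein(a, 0.5))  # prints 0.75
```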
The algorithm to achieve this is called <em>de Casteljau’s algorithm</em> and can be expressed compactly in the following recursive formula: \begin{equation} a_i^r = (1-t)a^{r-1}_i + ta_{i+1}^{r-1},\quad r = 1,\dots, n, \quad i=0,\dots, n-r \end{equation} with $$a_i^0 = a_i$$. Evaluating $$a_0^n$$ for a given $$t$$ will produce the value $$C(t)$$, i.e., $$a_0^n = C(t)$$. Each iteration of $$r$$ performs a linear interpolation between the control points (or the intermediate control points from the previous recursive evaluation). In pseudocode form, we can write this recursively as:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def de_casteljau(r, a, t):
    if r == 0:
        # bottom of the recursion: the original control points a_i^0
        return a
    # prev holds the coefficients a^{r-1} from the previous level
    prev = de_casteljau(r - 1, a, t)
    # one round of linear interpolation produces the coefficients a^r
    a_r = []
    for i in range(len(prev) - 1):  # i = 0, ..., n - r
        a_r.append((1 - t) * prev[i] + t * prev[i + 1])
    return a_r
</code></pre></div></div> <h3 id="why-are-we-talking-about-flowers">Why are we talking about flowers?</h3> <p>This algorithm becomes much more interesting when we ask: what happens if we vary $$t$$ with each recursive step? This means that we now have a vector $$\mathbf{t} = (t_1, t_2, \dots, t_n)$$ and our equation becomes: \begin{equation} a_i^r = (1-t_r)a^{r-1}_i + t_ra_{i+1}^{r-1},\quad r = 1,\dots, n, \quad i=0,\dots, n-r \end{equation} again with $$a_i^0 = a_i$$. This is what we call the <em>blossom</em> of the control points $$a_i$$ above over the values $$\mathbf{t}$$, which we’ll write as $$a_0^n[\mathbf{t}]$$, or $$a_i^r[t_1,\dots, t_r]$$ for the intermediate blossom levels.</p> <p>To evaluate the full blossom, we write $$a_0^n[\mathbf{t}]$$, as with de Casteljau.
The pseudocode looks very similar to de Casteljau:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def blossom(r, a, t):
    if r == 0:
        return a
    prev = blossom(r - 1, a, t)
    # same interpolation as de Casteljau, but with a new t_r at each level
    a_r = []
    for i in range(len(prev) - 1):  # i = 0, ..., n - r
        a_r.append((1 - t[r - 1]) * prev[i] + t[r - 1] * prev[i + 1])
    return a_r
</code></pre></div></div> <p>When $$t_i = t$$ for each $$i$$, we recover de Casteljau’s algorithm, but we’re now free to vary $$t$$ throughout the algorithm.</p> <h3 id="nice-properties-of-blossoms">Nice properties of blossoms</h3> <p>This might seem like a trivial generalization, but blossoms have some interesting uses:</p> <ul> <li>Blossoms are symmetric: $$a_0^n[\mathbf{t}] = a_0^n[\pi(\mathbf{t})]$$, where $$\pi$$ is a permutation (i.e., reordering) of the entries of $$\mathbf{t}$$.</li> <li>Blossoms are multiaffine: $$a_0^n[bt_1 + ct_2, \dots] = ba_0^n[t_1, \dots] + ca_0^n[t_2,\dots]$$ whenever $$b + c = 1$$.</li> <li>We can express the $$i$$th control point as $$a_i^0 = a_0^n[\mathbf{\tau}_i]$$ with the vector $$\tau_i = (0, 0,\dots, 1,1)$$ containing $$n-i$$ zeros and $$i$$ ones.</li> <li>Similarly, we can write down the Bézier form of a subcurve on the domain $$[c,d]$$ in terms of blossoms. We can compute the $$i$$th control point of the subcurve by evaluating $$a_0^n[\mathbf{\eta}_i]$$ with $$\eta_i= (c,c,\dots, d,d)$$ being a vector of $$n-i$$ copies of $$c$$ and $$i$$ copies of $$d$$.</li> <li>We can differentiate Bézier curves trivially with blossoms, using the following expression \begin{equation} \frac{dC}{dt} = n a^n_0[t,\dots, t,1], \end{equation} evaluating the blossom with $$n-1$$ copies of $$t$$ followed by a single 1.</li> <li>We can elevate the degree of a Bézier curve by summing over various blossoms: \begin{equation} a_0^{n+1}[t_1, \dots, t_{n+1}] = \frac{1}{n+1}\sum_{i=1}^{n+1} a_0^n[t_1,\dots,t_{n+1}\mid t_i], \end{equation} where the notation $$t_1,\dots,t_{n+1}\mid t_i$$ means that the entry $$t_i$$ is omitted from the sequence.
This isn’t the most efficient way to evaluate an elevated degree Bézier curve, but it does lead to a compact formula for the elevated curve’s control points $$\tilde{a}_i^0$$: \begin{equation} \tilde{a}_i^0= a_0^{n+1}[0,0, \dots, 1,1] \end{equation} with $$n+1-i$$ zeroes and $$i$$ ones as arguments in the blossom.</li> </ul> <p>Every polynomial in one variable has a unique form as a blossom, but blossoms are mostly used in the context of Bézier curves. For some intuition to see why this might be true, let’s look at the quadratic case. We know that $$a_0^n[t,t] = C(t)$$, and since $$C(t)$$ is a polynomial, we can write it as: \begin{equation} C(t) = c_0 + c_1 t +c_2 t^2 \end{equation} for some coefficients $$c_i$$. If we define $$t_1 = t_2 = t$$, we might be able to convince ourselves that \begin{equation} a_0^n[t_1,t_2] = c_0^\prime + c_1^\prime t_1 + c_2^\prime t_2 + c_3^\prime t_1t_2 \end{equation} for some other coefficients $$c^\prime_i$$. By equating these two equations, we can see that $$c_0 = c_0^\prime$$, $$c_1 = c_1^\prime + c_2^\prime$$ (with $$c_1^\prime = c_2^\prime$$ by symmetry), and $$c_2 = c_3^\prime$$. A cubic example is worked out <a href="https://mrl.cs.nyu.edu/~dzorin/geom04/lectures/lect02.pdf">here</a> on page 4.</p> <h3 id="de-castejau-implementation">de Casteljau implementation</h3> <p>Here’s a simple C++ implementation of de Casteljau’s algorithm, using <a href="https://eigen.tuxfamily.org/">Eigen</a>. It’s fairly simple to implement without many surprises.
The full implementation can be found in <a href="https://github.com/qnzhou/nanospline/blob/a48c5d055705ab7a81302937682f1177005f87b6/include/nanospline/Bezier.h#L231">this commit</a> of <a href="https://github.com/qnzhou/nanospline">nanospline</a>.</p> <div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// _dim and _degree are dimension and degree of the Bezier curve.
using Scalar = double;
using ControlPoints = Eigen::Matrix&lt;Scalar, _dim, _degree+1&gt;;
...
ControlPoints de_casteljau(Scalar t, int num_recursions) const {
    const auto degree = Base::get_degree();
    if (num_recursions &lt; 0 || num_recursions &gt; degree) {
        throw invalid_setting_error(
            "Number of de Casteljau recursions cannot exceed degree");
    }
    if (num_recursions == 0) {
        // get original control points at the bottom of the recursion
        return Base::m_control_points;
    } else {
        ControlPoints ctrl_pts = de_casteljau(t, num_recursions - 1);
        assert(ctrl_pts.rows() &gt;= degree + 1 - num_recursions);
        for (int i = 0; i &lt; degree + 1 - num_recursions; i++) {
            // ctrl_pts.row(i) gets the i-th control point.
            ctrl_pts.row(i) = (1.0 - t) * ctrl_pts.row(i)
                            + t * ctrl_pts.row(i + 1);
        }
        return ctrl_pts;
    }
}
</code></pre></div></div> <h3 id="blossom-implementation">Blossom implementation</h3> <p>For our blossom implementation, we can essentially just add a vector input instead of a single value of $$t$$. For a slightly cleaner implementation, we use two for loops instead of an explicit recursion.
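In Python, the same two-loop, in-place scheme looks like this (a standalone sketch with scalar control points; the function is my own, not nanospline's API). The diagonal argument $$\mathbf{t} = (t, \dots, t)$$ reproduces de Casteljau, and permuting the arguments leaves the value unchanged:

```python
# In-place blossom evaluation: after the loops, the last entry holds
# a_0^n[t_1, ..., t_n]. Mirrors the two-for-loop C++ scheme.
def blossom(control_points, ts):
    pts = list(control_points)
    n = len(pts) - 1  # degree; ts must have exactly n entries
    for r in range(1, n + 1):
        t = ts[r - 1]
        for j in range(n, r - 1, -1):
            pts[j] = (1 - t) * pts[j - 1] + t * pts[j]
    return pts[n]

a = [0.0, 1.0, 1.0, 0.0]
print(blossom(a, [0.5, 0.5, 0.5]))  # diagonal: C(0.5) = 0.75
# symmetry: these two agree up to roundoff
print(blossom(a, [0.1, 0.4, 0.8]), blossom(a, [0.8, 0.1, 0.4]))
```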
The full implementation is available in the <code class="language-plaintext highlighter-rouge">Bezier&lt;...&gt;::evaluate</code> function in <a href="https://github.com/qnzhou/nanospline/blob/350847f5f28673b2e247ed0cc707563a2ab14fd1/include/nanospline/Bezier.h#L375">nanospline</a>.</p> <div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using Scalar = double;
using ControlPoints = Eigen::Matrix&lt;Scalar, _dim, _degree+1&gt;;
using BlossomVector = Eigen::Matrix&lt;Scalar, _degree, 1&gt;;
...
void blossom(const BlossomVector&amp; blossom_vector, int degree,
             ControlPoints&amp; control_pts) const {
    for (int r = 1; r &lt;= degree; r++) {
        for (int j = degree; j &gt;= r; j--) {
            Scalar t = blossom_vector(r - 1);
            // in-place update: after the loops finish, row(degree)
            // holds the blossom value
            control_pts.row(j) = (1. - t) * control_pts.row(j - 1)
                               + t * control_pts.row(j);
        }
    }
}
</code></pre></div></div> <h3 id="for-more-information">For more information</h3> <p>Blossoms are explained in the most detail in Gerald Farin’s <a
href="https://www.amazon.com/Curves-Surfaces-CAGD-Practical-Kaufmann/dp/1558607374/ref=sr_1_1?crid=1MGLP9JWIEDGY&amp;dchild=1&amp;keywords=gerald+farin+cagd&amp;qid=1608087032&amp;sprefix=sunny+bf%2Caps%2C169&amp;sr=8-1"><em>Curves and Surfaces for CAGD: A Practical Guide</em></a>. For more Bézier algorithms and useful visualizations, the <a href="https://pomax.github.io/bezierinfo/">Primer on Bézier curves</a> is a great resource. <a href="https://mrl.cs.nyu.edu/~dzorin/geom04/lectures/lect02.pdf">These lecture notes</a> provide a bit more detail about Bézier and B-Spline blossoms.</p>

The Bézier curves are wonderful this time of year...

A dependency-free VTK writer (2020-09-04) https://mjmorse.com/blog/vtk-writer

<p><a href="https://vtk.org/">VTK</a> is a graphics library for rendering scientific data in various formats with particularly nice support for parallel processing. It’s built on OpenGL and pretty comprehensive, with many different data formats, rendering options, and image processing algorithms.</p> <h2 id="i-just-want-to-use-paraview">I just want to use Paraview…</h2> <p>I generally don’t need most of the features of VTK inside my C++ project. I just want to run a simulation and visualize the output in a dynamic fashion using <a href="https://www.paraview.org/">Paraview</a>, which is a GUI wrapped around VTK.
Unfortunately, it seems that the whole VTK library needs to be linked into the project in order to write VTK files.</p> <p>After compiling and installing the library, the supported way to achieve this is via CMake, as described on the package’s <a href="https://vtk.org/Wiki/VTK/Tutorials/CMakeListsFile">wiki page</a>:</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cmake_minimum_required</span><span class="p">(</span>VERSION 2.6<span class="p">)</span> <span class="nb">project</span><span class="p">(</span>Test<span class="p">)</span> <span class="nb">set</span><span class="p">(</span>VTK_DIR <span class="s2">"PATH/TO/VTK/BUILD/DIRECTORY"</span><span class="p">)</span> <span class="nb">find_package</span><span class="p">(</span>VTK REQUIRED<span class="p">)</span> <span class="nb">include</span><span class="p">(</span><span class="si">${</span><span class="nv">VTK_USE_FILE</span><span class="si">}</span><span class="p">)</span> <span class="nb">add_executable</span><span class="p">(</span>Test Test.cxx<span class="p">)</span> <span class="nb">target_link_libraries</span><span class="p">(</span>Test <span class="si">${</span><span class="nv">VTK_LIBRARIES</span><span class="si">}</span><span class="p">)</span> </code></pre></div></div> <p>But this has a few downsides:</p> <ol> <li>It links all the VTK libraries by default, which seems to be about 110 or so in version 7.1.0. I’m not aware of an option to only use particular library components without naming them explicitly. This causes the linking time of one of my projects to inflate from less than one second to around five seconds, which is mildly annoying but survivable.</li> <li>You are a second-class citizen if you aren’t using CMake. It is simple to link against VTK if you are already using CMake. If not, buckle up: there’s not much official documentation for this case. 
You will be greeted with the advice to “convert your project to CMake.” If you don’t take this advice, you need to discover which of the 110 libraries you need to explicitly link against, depending on which parts of VTK you are using. The best part is that these libraries aren’t immediately obvious to a typical user and they could change with different versions of VTK. There isn’t an obvious approach to handle this possibility (see <a href="https://stackoverflow.com/a/43162402/3479119">this stackoverflow post</a> for a workaround).</li> <li>Adding VTK to a project adds a lot of complexity just to use one function. I shouldn’t have to change build systems or sift through CMake build files just to write a VTK file.</li> </ol> <h2 id="a-standalone-vtk-writer">A standalone VTK writer</h2> <p>To solve these problems, people tend to implement their own basic VTK writers for their <a href="https://github.com/cburstedde/p4est/blob/f73f9431af466e999e7a4d3ce1003444cb3f75f8/src/p4est_vtk.c#L301">personal</a> <a href="https://github.com/dmalhotra/pvfmm/blob/67595dd1a1ebcfb5c8079c960910bd72e637aedf/include/mpi_tree.txx#L2124">needs</a>. The result is usually building up the file contents by hand. <a href="https://cs.nyu.edu/~teseo/">Teseo Schneider</a> mentioned that he had written a basic VTK writer for <a href="https://github.com/polyfem/polyfem/blob/3f58d84cd0ad930b71e8c6db917fe46e8dd4e100/src/mesh/VTUWriter.cpp">polyfem</a>. It seemed pretty modular, so I refactored it a bit to only depend on <code class="language-plaintext highlighter-rouge">std::vector</code>’s to pass around data and added more common primitives. The result is available <a href="https://github.com/mmorse1217/lean-vtk">here</a>. 
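For context, a <code class="language-plaintext highlighter-rouge">.vtu</code> file is just a small XML document, so a hand-rolled writer mostly amounts to string formatting. As a rough sketch (written from memory here, so double-check details against the VTK file format documentation), a single triangle with a per-point scalar field looks something like this:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian"&gt;
  &lt;UnstructuredGrid&gt;
    &lt;Piece NumberOfPoints="3" NumberOfCells="1"&gt;
      &lt;Points&gt;
        &lt;DataArray type="Float64" NumberOfComponents="3" format="ascii"&gt;
          1. 1. -1.   1. -1. 1.   -1. -1. 0.
        &lt;/DataArray&gt;
      &lt;/Points&gt;
      &lt;Cells&gt;
        &lt;DataArray type="Int32" Name="connectivity" format="ascii"&gt;0 1 2&lt;/DataArray&gt;
        &lt;DataArray type="Int32" Name="offsets" format="ascii"&gt;3&lt;/DataArray&gt;
        &lt;DataArray type="UInt8" Name="types" format="ascii"&gt;5&lt;/DataArray&gt;
      &lt;/Cells&gt;
      &lt;PointData&gt;
        &lt;DataArray type="Float64" Name="scalar_field" format="ascii"&gt;0. 1. 2.&lt;/DataArray&gt;
      &lt;/PointData&gt;
    &lt;/Piece&gt;
  &lt;/UnstructuredGrid&gt;
&lt;/VTKFile&gt;
</code></pre></div></div> <p>Here, cell type 5 is VTK’s triangle type, and the <code class="language-plaintext highlighter-rouge">offsets</code> array marks where each cell’s entry in <code class="language-plaintext highlighter-rouge">connectivity</code> ends. 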
Currently, the library supports writing the following data types to <code class="language-plaintext highlighter-rouge">.vtu</code> files:</p> <ul> <li>point clouds</li> <li>triangle and quad volumetric meshes in 2D</li> <li>triangle and quad surface meshes in 3D</li> <li>hex and tet volumetric meshes in 3D</li> </ul> <p>Each of these primitives can be saved with scalar and 3D vector data at each point. This covers most of the cases that I have needed from VTK in the past few years, so it should be a good starting point.</p> <p>It’s fairly straightforward to use:</p> <div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">vector</span><span class="o">&lt;</span><span class="kt">double</span><span class="o">&gt;</span> <span class="n">points</span> <span class="o">=</span> <span class="p">{</span> <span class="mf">1.</span><span class="p">,</span> <span class="mf">1.</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.</span><span class="p">,</span> <span class="mf">1.</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.</span><span class="p">,</span> <span class="mf">1.</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.</span><span class="p">,</span> <span class="mf">0.</span> <span class="p">};</span> <span class="n">vector</span><span class="o">&lt;</span><span class="kt">int</span><span class="o">&gt;</span> <span class="n">elements</span> <span class="o">=</span> <span class="p">{</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span> <span class="p">};</span> <span class="n">vector</span><span class="o">&lt;</span><span class="kt">double</span><span class="o">&gt;</span> <span class="n">scalar_field</span> <span class="o">=</span> <span class="p">{</span> <span class="mf">0.</span><span 
class="p">,</span> <span class="mf">1.</span><span class="p">,</span> <span class="mf">2.</span> <span class="p">};</span> <span class="n">vector</span><span class="o">&lt;</span><span class="kt">double</span><span class="o">&gt;</span> <span class="n">vector_field</span> <span class="o">=</span> <span class="n">points</span><span class="p">;</span> <span class="c1">// just a silly test</span> <span class="k">const</span> <span class="kt">int</span> <span class="n">dim</span> <span class="o">=</span> <span class="mi">3</span><span class="p">;</span> <span class="k">const</span> <span class="kt">int</span> <span class="n">cell_size</span> <span class="o">=</span> <span class="mi">3</span><span class="p">;</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">filename</span> <span class="o">=</span> <span class="s">"single_tri.vtu"</span><span class="p">;</span> <span class="n">VTUWriter</span> <span class="n">writer</span><span class="p">;</span> <span class="n">writer</span><span class="p">.</span><span class="n">add_scalar_field</span><span class="p">(</span><span class="s">"scalar_field"</span><span class="p">,</span> <span class="n">scalar_field</span><span class="p">);</span> <span class="n">writer</span><span class="p">.</span><span class="n">add_vector_field</span><span class="p">(</span><span class="s">"vector_field"</span><span class="p">,</span> <span class="n">vector_field</span><span class="p">,</span> <span class="n">dim</span><span class="p">);</span> <span class="n">writer</span><span class="p">.</span><span class="n">write_surface_mesh</span><span class="p">(</span><span class="n">filename</span><span class="p">,</span> <span class="n">dim</span><span class="p">,</span> <span class="n">cell_size</span><span class="p">,</span> <span class="n">points</span><span class="p">,</span> <span 
class="n">elements</span><span class="p">);</span> </code></pre></div></div> <p>But most importantly, it’s easy to add to a project: simply copy <code class="language-plaintext highlighter-rouge">include/lean_vtk.hpp</code> and <code class="language-plaintext highlighter-rouge">src/lean_vtk.cpp</code> into the project and add appropriate includes to source files.</p> <p>I may add support for reading VTK files into <code class="language-plaintext highlighter-rouge">std::vector</code>’s, but this isn’t a priority for my personal use cases at the moment.</p>For saving 2D and 3D data without installing VTK. A minimal CMake project template (2020-06-15) https://mjmorse.com/blog/cmake-template<p>I grew tired of slowly constructing a Franken-Makefile for each of my C++ projects and their dependencies for each machine that I use. Each one was messy, bug-prone, and full of machine- and OS-dependent conditionals. I eventually got fed up and decided to convert my active projects to CMake. I (naively) figured that I could get a baseline <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> working quickly in order to continue working on projects as I slowly added all the fancy CMake bells and whistles that I had heard so much about.</p> <p>Unfortunately, I was sadly mistaken. The learning curve was <em>very</em> steep. It seemed that the more that I read about CMake, the more confused I became. Even worse, it became apparent that the “bells and whistles” are actually strictly required to get a project up and running. 
My Franken-Makefile was starting to look very appealing again.</p> <h2 id="success">Success…?</h2> <p>After many days of reading documentation, debugging, and frustration, I was able to make <a href="https://github.com/mmorse1217/cmake-project-template">this project template</a> that does 90% of the things I want:</p> <ul> <li>Compile source code into a static library</li> <li>Link the source code into an executable</li> <li>Handle unit testing</li> <li>Import third party libraries <em>and their dependencies</em> transitively</li> <li>Export the static library <em>and its dependencies</em> transitively</li> </ul> <p>By “transitive,” I mean that if I import <code class="language-plaintext highlighter-rouge">LibA</code> in my project, and <code class="language-plaintext highlighter-rouge">LibB</code> is a dependency of <code class="language-plaintext highlighter-rouge">LibA</code>, then <code class="language-plaintext highlighter-rouge">LibB</code> will be automatically included in my project as well. This is enough for my workflow for the moment.</p> <p>Throughout this process, I had a hard time finding a complete working example with inline comments. CMake requires several files in several places to be written by hand, which can be confusing coming from Makefiles. Meanwhile, reading the official documentation was like drinking from a firehose. Even when I found a solution, I often found myself asking the question: “ok, but where should this code actually <em>go</em>?”</p> <p>I’m going to review <a href="https://github.com/mmorse1217/cmake-project-template">my sample project</a> one file at a time, comments and all, with some added commentary between files as needed. This is somewhat long and may make some people’s eyes glaze over, but I think that adding the text right next to the CMake code as comments helps to follow what is happening. 
To kick things off, the project structure looks like this:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> ├── CMakeLists.txt ├── LICENSE ├── README.md ├── cmake │ ├── CMakeDemo-config.cmake │ └── FindCMakeDemo.cmake ├── include │ ├── CMakeLists.txt │ └── source_file.hpp ├── src │ ├── CMakeLists.txt │ └── source_file.cpp └── tests ├── CMakeLists.txt ├── catch.hpp └── test_cmake_demo.cpp </code></pre></div></div> <p>We will start at the lower levels of the project, <code class="language-plaintext highlighter-rouge">src/</code>, <code class="language-plaintext highlighter-rouge">include/</code> and <code class="language-plaintext highlighter-rouge">tests/</code>, then discuss the root-level <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code>, then <code class="language-plaintext highlighter-rouge">cmake/</code>.</p> <h4 id="source-and-include-files-srccmakeliststxt-and-includecmakeliststxt">Source and include files: <code class="language-plaintext highlighter-rouge">src/CMakeLists.txt</code> and <code class="language-plaintext highlighter-rouge">include/CMakeLists.txt</code></h4> <p>The files <code class="language-plaintext highlighter-rouge">src/CMakeLists.txt</code> and <code class="language-plaintext highlighter-rouge">include/CMakeLists.txt</code> are extremely simple and nearly identical. Here is <code class="language-plaintext highlighter-rouge">src/CMakeLists.txt</code>:</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Make an explicit list of all source files in CMakeDemo_SRC. This is important</span> <span class="c1"># because CMake is not a build system: it is a build system generator. Suppose</span> <span class="c1"># you add a file foo.cpp to src/ after running "cmake ..". If you set</span> <span class="c1"># CMakeDemo_SRC with file(GLOB ... 
), this change is not passed to the makefile;</span> <span class="c1"># the makefile doesn't know that foo.cpp exists and will not re-run cmake. Your</span> <span class="c1"># collaborator's builds will fail and it will be unclear why. Whether you use</span> <span class="c1"># file(GLOB ...) or not, you will need to re-run cmake, but with an explicit</span> <span class="c1"># file list, you know beforehand why your code isn't compiling. </span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_SRC source_file.cpp <span class="p">)</span> <span class="c1"># Form the full path to the source files...</span> <span class="nf">PREPEND</span><span class="p">(</span>CMakeDemo_SRC<span class="p">)</span> <span class="c1"># ... and pass the variable to the parent scope.</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_SRC <span class="si">${</span><span class="nv">CMakeDemo_SRC</span><span class="si">}</span> PARENT_SCOPE<span class="p">)</span> </code></pre></div></div> <p>This simply makes a list of files that is visible in the “parent scope,” i.e., from within the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> that contains <code class="language-plaintext highlighter-rouge">add_subdirectory(src)</code>. The <code class="language-plaintext highlighter-rouge">PREPEND</code> function just adds the full path to the beginning of each file. This is used to tell CMake what files are associated with a certain <em>target</em>. A target is an executable or a library; each target has a list of <em>properties</em>. 
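As a tiny generic illustration (not a line from this template), each of the following commands just attaches a property to a hypothetical target <code class="language-plaintext highlighter-rouge">foo</code>:</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical target named foo, built from foo.cpp
add_library(foo STATIC foo.cpp)
# Attach a compile definition to foo; PUBLIC means consuming targets inherit it
target_compile_definitions(foo PUBLIC USE_FAST_MATH)
# Record foo's include path as a property; targets linking foo inherit it too
target_include_directories(foo PUBLIC include/)
</code></pre></div></div> <p>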
This is the core operation in CMake: associating targets with properties.</p> <h4 id="testing-code-testscmakeliststxt">Testing code: <code class="language-plaintext highlighter-rouge">tests/CMakeLists.txt</code></h4> <p>The file <code class="language-plaintext highlighter-rouge">tests/CMakeLists.txt</code> is very similar to <code class="language-plaintext highlighter-rouge">src/CMakeLists.txt</code>, but also contains our first target definition: <code class="language-plaintext highlighter-rouge">TestCMakeDemo</code>. This is where having a list of source and header files comes in handy:</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cmake_minimum_required</span><span class="p">(</span>VERSION 3.1<span class="p">)</span> <span class="nb">set</span><span class="p">(</span>CMAKE_CXX_STANDARD 11<span class="p">)</span> <span class="c1"># Explicitly list the test source code and headers. The Catch header-only unit</span> <span class="c1"># test framework is stored in with the test source.</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_TEST_SRC test_cmake_demo.cpp <span class="p">)</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_TEST_HEADER catch.hpp <span class="p">)</span> <span class="nf">PREPEND</span><span class="p">(</span>CMakeDemo_TEST_SRC<span class="p">)</span> <span class="c1"># Make an executable target that depends on the test source code we specified</span> <span class="c1"># above.</span> <span class="nb">add_executable</span><span class="p">(</span>TestCMakeDemo <span class="si">${</span><span class="nv">CMakeDemo_TEST_SRC</span><span class="si">}</span> <span class="si">${</span><span class="nv">CMakeDemo_TEST_HEADER</span><span class="si">}</span><span class="p">)</span> <span class="c1"># Enable testing via CTest</span> <span class="nb">enable_testing</span><span class="p">()</span> <span class="c1"># Add our test as runnable via 
CTest</span> <span class="nb">add_test</span><span class="p">(</span>NAME TestCMakeDemo COMMAND TestCMakeDemo<span class="p">)</span> <span class="c1"># Link our unit tests against the library we compiled</span> <span class="nb">target_link_libraries</span><span class="p">(</span>TestCMakeDemo CMakeDemo<span class="p">)</span> </code></pre></div></div> <p>There are a couple other things happening here. The <code class="language-plaintext highlighter-rouge">enable_testing()</code> and <code class="language-plaintext highlighter-rouge">add_test()</code> calls are related to CTest, which is how CMake runs unit tests. After building our CMake targets, we can run all registered unit tests with the command <code class="language-plaintext highlighter-rouge">ctest</code>. This can be helpful if tests live in multiple directories or spread across multiple files. <code class="language-plaintext highlighter-rouge">enable_testing()</code> tells CMake to allow for unit testing via CTest after building all targets. The <code class="language-plaintext highlighter-rouge">add_test</code> call registers our test target, <code class="language-plaintext highlighter-rouge">TestCMakeDemo</code>, with CTest.</p> <h4 id="project-compilation-cmakeliststxt">Project compilation: <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code></h4> <p>The biggest file is the root-level <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code>:</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># It's important to specify the minimum CMake version upfront required by</span> <span class="c1"># CMakeLists.txt. 
This is so that a user can clearly understand the reason the </span> <span class="c1"># build will fail before the build actually occurs, instead of searching for the</span> <span class="c1"># CMake function that was used that is causing the failure.</span> <span class="nb">cmake_minimum_required</span><span class="p">(</span>VERSION 3.1<span class="p">)</span> <span class="c1"># Set the global package-wide C++ standard. This will be inherited by all</span> <span class="c1"># targets specified in the project. One can also specify the C++ standard in a</span> <span class="c1"># target-specific manner, using:</span> <span class="c1"># set_target_properties(foo PROPERTIES CXX_STANDARD 11)</span> <span class="c1"># for a target foo</span> <span class="nb">set</span><span class="p">(</span>CMAKE_CXX_STANDARD 11<span class="p">)</span> <span class="c1"># Set the project name and version number. This allows for a user of your</span> <span class="c1"># library or tool to specify a particular version when they include it, as in </span> <span class="c1"># find_package(CMakeDemo 1.0 REQUIRED)</span> <span class="nb">project</span><span class="p">(</span>CMakeDemo VERSION 1.0<span class="p">)</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_VERSION 1.0<span class="p">)</span> <span class="c1"># enable unit testing via "make test" once the code has been compiled.</span> <span class="nb">include</span><span class="p">(</span>CTest<span class="p">)</span> <span class="c1"># Function to prepend the subdirectory to source files in subdirectories</span> <span class="nb">function</span><span class="p">(</span>PREPEND var <span class="p">)</span> <span class="nb">set</span><span class="p">(</span>listVar <span class="s2">""</span><span class="p">)</span> <span class="nb">foreach</span><span class="p">(</span>f <span class="si">${${</span><span class="nv">var</span><span class="si">}}</span><span class="p">)</span> <span class="nb">list</span><span 
class="p">(</span>APPEND listVar <span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_SOURCE_DIR</span><span class="si">}</span><span class="s2">/</span><span class="si">${</span><span class="nv">f</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span> <span class="nb">endforeach</span><span class="p">(</span>f<span class="p">)</span> <span class="nb">set</span><span class="p">(</span><span class="si">${</span><span class="nv">var</span><span class="si">}</span> <span class="s2">"</span><span class="si">${</span><span class="nv">listVar</span><span class="si">}</span><span class="s2">"</span> PARENT_SCOPE<span class="p">)</span> <span class="nb">endfunction</span><span class="p">(</span>PREPEND<span class="p">)</span> <span class="c1"># After a normal build, we can specify the location of various outputs of the</span> <span class="c1"># build. We put executables and static libraries outside the build directory in</span> <span class="c1"># bin/ and lib/, respectively.</span> <span class="nb">set</span><span class="p">(</span>CMAKE_RUNTIME_OUTPUT_DIRECTORY <span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_SOURCE_DIR</span><span class="si">}</span><span class="s2">/bin"</span><span class="p">)</span> <span class="nb">set</span><span class="p">(</span>CMAKE_ARCHIVE_OUTPUT_DIRECTORY <span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_SOURCE_DIR</span><span class="si">}</span><span class="s2">/lib"</span><span class="p">)</span> <span class="c1"># Find LAPACK on the system. This is mostly for demonstration.</span> <span class="nb">find_package</span><span class="p">(</span>LAPACK REQUIRED<span class="p">)</span> <span class="c1"># Include source code and headers. This runs the CMakeLists.txt in each</span> <span class="c1"># subdirectory. These can define their own libraries, executables, etc. 
as targets, </span> <span class="c1"># but here we define all exportable targets in the root CMakeLists.txt.</span> <span class="nb">add_subdirectory</span><span class="p">(</span>src<span class="p">)</span> <span class="nb">add_subdirectory</span><span class="p">(</span>include<span class="p">)</span> <span class="c1"># Add the test directory. It is optional and can be disabled during with</span> <span class="c1"># cmake -DBUILD_TESTING=OFF ..</span> <span class="c1"># To run unit tests produced here, we only need to run:</span> <span class="c1"># make test</span> <span class="c1"># or</span> <span class="c1"># ctest </span> <span class="c1"># In case your tests are printing to console, you can view their output to</span> <span class="c1"># stdout with:</span> <span class="c1"># ctest -V</span> <span class="nb">if</span><span class="p">(</span>BUILD_TESTING<span class="p">)</span> <span class="nb">add_subdirectory</span><span class="p">(</span>tests<span class="p">)</span> <span class="nb">endif</span><span class="p">()</span> <span class="c1"># Add the library CMakeDemo as a target, with the contents of src/ and include/</span> <span class="c1"># as dependencies.</span> <span class="nb">add_library</span><span class="p">(</span>CMakeDemo STATIC <span class="si">${</span><span class="nv">CMakeDemo_SRC</span><span class="si">}</span> <span class="si">${</span><span class="nv">CMakeDemo_INC</span><span class="si">}</span><span class="p">)</span> <span class="c1"># These variables slightly modify the install location to allow for version</span> <span class="c1"># specific installations.</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_INCLUDE_DEST <span class="s2">"include/CMakeDemo-</span><span class="si">${</span><span class="nv">CMakeDemo_VERSION</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_LIB_DEST <span class="s2">"lib/CMakeDemo-</span><span 
class="si">${</span><span class="nv">CMakeDemo_VERSION</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span> <span class="c1"># generator expressions are needed for the include directories, since installing </span> <span class="c1"># headers changes the include path.</span> <span class="c1"># Specify that CMakeDemo requires the files located in the include/ directory at</span> <span class="c1"># compile time. This would normally look like </span> <span class="c1"># target_include_directories(CMakeDemo PUBLIC include/)</span> <span class="c1"># PUBLIC means that other libraries including CMakeDemo should also include the</span> <span class="c1"># directory include/.</span> <span class="c1"># However, there is a catch. If we are installing the project in</span> <span class="c1"># CMAKE_INSTALL_PREFIX, we can't specify include/ in the build directory: we have </span> <span class="c1"># copied the contents of include to CMAKE_INSTALL_PREFIX/include and we would</span> <span class="c1"># like other projects to include this directory instead of include/. 
The following</span> <span class="c1"># CMake command handles this.$&lt;BUILD_INTERFACE:...&gt; and</span> <span class="c1"># $&lt;INSTALL_INTERFACE:...&gt; are macros whose values change depending on if we are</span> <span class="c1"># simply building the code or if we are installing it.</span> <span class="nb">target_include_directories</span><span class="p">(</span>CMakeDemo PUBLIC <span class="c1"># headers to include when building from source</span>$&lt;BUILD_INTERFACE:<span class="si">${</span><span class="nv">CMakeDemo_SOURCE_DIR</span><span class="si">}</span>/include&gt;$&lt;BUILD_INTERFACE:<span class="si">${</span><span class="nv">CMakeDemo_BINARY_DIR</span><span class="si">}</span>/include&gt; <span class="c1"># headers to include when installing </span> <span class="c1"># (implicitly prefixes with${CMAKE_INSTALL_PREFIX}).</span> $&lt;INSTALL_INTERFACE:include&gt; <span class="p">)</span> <span class="c1"># Specify that CMakeDemo requires LAPACK to link properly. Ideally, LAPACK would</span> <span class="c1"># specify LAPACK::LAPACK for linking so that we can avoid using the variables.</span> <span class="c1"># However, each package is different and one must check the documentation to </span> <span class="c1"># see what variables are defined.</span> <span class="nb">target_link_libraries</span><span class="p">(</span>CMakeDemo <span class="si">${</span><span class="nv">LAPACK_LIBRARIES</span><span class="si">}</span><span class="p">)</span> <span class="c1"># Install CMakeDemo in CMAKE_INSTALL_PREFIX (defaults to /usr/local on linux). </span> <span class="c1"># To change the install location, run </span> <span class="c1"># cmake -DCMAKE_INSTALL_PREFIX=&lt;desired-install-path&gt; ..</span> <span class="c1"># install(...) specifies installation rules for the project. It can specify</span> <span class="c1"># location of installed files on the system, user permissions, build</span> <span class="c1"># configurations, etc. 
Here, we are only copying files.</span> <span class="c1"># install(TARGETS ...) specifies rules for installing targets. </span> <span class="c1"># Here, we are taking a target or list of targets (CMakeDemo) and telling CMake</span> <span class="c1"># the following:</span> <span class="c1"># - put shared libraries associated with CMakeDemo in ${CMakeDemo_LIB_DEST}</span> <span class="c1"># - put static libraries associated with CMakeDemo in${CMakeDemo_LIB_DEST}</span> <span class="c1"># - put include files associated with CMakeDemo in ${CMakeDemo_INCLUDE_DEST}</span> <span class="c1"># We also need to specify the export that is associated with CMakeDemo; an export </span> <span class="c1"># is just a list of targets to be installed.</span> <span class="c1"># So we are associating CMakeDemo with CMakeDemoTargets.</span> <span class="nb">install</span><span class="p">(</span> <span class="c1"># targets to install</span> TARGETS CMakeDemo <span class="c1"># name of the CMake "export group" containing the targets we want to install</span> EXPORT CMakeDemoTargets <span class="c1"># Dynamic, static library and include destination locations after running</span> <span class="c1"># "make install"</span> LIBRARY DESTINATION <span class="si">${</span><span class="nv">CMakeDemo_LIB_DEST</span><span class="si">}</span> ARCHIVE DESTINATION <span class="si">${</span><span class="nv">CMakeDemo_LIB_DEST</span><span class="si">}</span> INCLUDES DESTINATION <span class="si">${</span><span class="nv">CMakeDemo_INCLUDE_DEST</span><span class="si">}</span> <span class="p">)</span> <span class="c1"># We now need to install the export CMakeDemoTargets that we defined above. 
This</span> <span class="c1"># is needed in order for another project to import CMakeDemo using </span> <span class="c1"># find_package(CMakeDemo)</span> <span class="c1"># find_package(CMakeDemo) will look for CMakeDemo-config.cmake to provide</span> <span class="c1"># information about the targets contained in the project CMakeDemo. Fortunately,</span> <span class="c1"># this is specified in the export CMakeDemoTargets, so we will install this too.</span> <span class="c1"># install(EXPORT ...) will install the information about an export. Here, we</span> <span class="c1"># save it to a file {$CMakeDemo_LIB_DEST}/CMakeDemoTargets.cmake and prepend </span> <span class="c1"># everything inside CMakeDemoTargets with the namespace CMakeDemo::.</span> <span class="nb">install</span><span class="p">(</span> <span class="c1"># The export we want to save (matches name defined above containing the</span> <span class="c1"># install targets)</span> EXPORT CMakeDemoTargets <span class="c1"># CMake file in which to store the export's information</span> FILE CMakeDemoTargets.cmake <span class="c1"># Namespace prepends all targets in the export (when we import later, we</span> <span class="c1"># will use CMakeDemo::CMakeDemo)</span> NAMESPACE CMakeDemo:: <span class="c1"># where to place the resulting file (here, we're putting it with the library)</span> DESTINATION <span class="si">${</span><span class="nv">CMakeDemo_LIB_DEST</span><span class="si">}</span> <span class="p">)</span> <span class="c1"># install(FILES ...) simply puts files in a certain place with certain</span> <span class="c1"># properties. 
We're just copying include files to the desired include directory</span> <span class="c1"># here.</span> <span class="nb">install</span><span class="p">(</span>FILES <span class="si">${</span><span class="nv">CMakeDemo_INC</span><span class="si">}</span> DESTINATION <span class="si">${</span><span class="nv">CMakeDemo_INCLUDE_DEST</span><span class="si">}</span><span class="p">)</span> <span class="c1"># Write a "version file" in case someone wants to only load a particular version of</span> <span class="c1"># CMakeDemo </span> <span class="nb">include</span><span class="p">(</span>CMakePackageConfigHelpers<span class="p">)</span> <span class="nf">write_basic_package_version_file</span><span class="p">(</span> CMakeDemoConfigVersion.cmake VERSION <span class="si">${</span><span class="nv">CMakeDemo_VERSION</span><span class="si">}</span> COMPATIBILITY AnyNewerVersion <span class="p">)</span> <span class="c1"># Copies the resulting CMake config files to the installed library directory</span> <span class="nb">install</span><span class="p">(</span> FILES <span class="s2">"cmake/CMakeDemo-config.cmake"</span> <span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_BINARY_DIR</span><span class="si">}</span><span class="s2">/CMakeDemoConfigVersion.cmake"</span> DESTINATION <span class="si">${</span><span class="nv">CMakeDemo_LIB_DEST</span><span class="si">}</span> <span class="p">)</span> </code></pre></div></div> <p>An important note: in the above code, everything after</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">add_library</span><span class="p">(</span>CMakeDemo STATIC <span class="si">${</span><span class="nv">CMakeDemo_SRC</span><span class="si">}</span> <span class="si">${</span><span class="nv">CMakeDemo_INC</span><span class="si">}</span><span class="p">)</span> </code></pre></div></div> <p>is needed for installing the project in <code class="language-plaintext 
highlighter-rouge">/usr/local/</code>, <em>except for the <code class="language-plaintext highlighter-rouge">target_include_directories()</code> call</em>. If you only need to compile your code and don’t care about installation, you can remove these lines without a problem, provided that the <code class="language-plaintext highlighter-rouge">target_include_directories()</code> is replaced with the simpler call mentioned in the comments.</p> <h4 id="specify-dependencies-cmakecmakedemo-configcmake">Specify dependencies: <code class="language-plaintext highlighter-rouge">cmake/CMakeDemo-config.cmake</code></h4> <p>The contents of <code class="language-plaintext highlighter-rouge">cmake/CMakeDemo-config.cmake</code> are fairly straightforward. The purpose of the file is to indicate the dependencies of the project and describe how to configure them within CMake.</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">#get_filename_component(SELF_DIR "${CMAKE_CURRENT_LIST_FILE}" PATH)</span> <span class="nb">include</span><span class="p">(</span>CMakeFindDependencyMacro<span class="p">)</span> <span class="c1"># Capturing values from configure (optional)</span> <span class="c1">#set(my-config-var @my-config-var@)</span> <span class="c1"># Same syntax as find_package</span> <span class="nf">find_dependency</span><span class="p">(</span>LAPACK REQUIRED<span class="p">)</span> <span class="c1"># Any extra setup</span> <span class="c1"># Add the targets file. include() just loads and executes the CMake code in the</span> <span class="c1"># file passed to it. 
Note that the file loaded here is the same one generated in</span> <span class="c1"># the second install() command in the root-level CMakeLists.txt</span> <span class="nb">include</span><span class="p">(</span><span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_LIST_DIR</span><span class="si">}</span><span class="s2">/CMakeDemoTargets.cmake"</span><span class="p">)</span> </code></pre></div></div> <h4 id="find-module-cmakefindcmakedemocmake">Find module: <code class="language-plaintext highlighter-rouge">cmake/FindCMakeDemo.cmake</code></h4> <p>The final file in the project is <code class="language-plaintext highlighter-rouge">cmake/FindCMakeDemo.cmake</code>. This file is used by projects that want to import the <code class="language-plaintext highlighter-rouge">CMakeDemo</code> project as an external library. It is what allows other projects to add your library as a dependency without any explicit reference to its install path, like this:</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">find_package</span><span class="p">(</span>CMakeDemo REQUIRED<span class="p">)</span> <span class="nb">target_link_libraries</span><span class="p">(</span>target CMakeDemo<span class="p">)</span> </code></pre></div></div> <p>Without a file called <code class="language-plaintext highlighter-rouge">cmake/FindCMakeDemo.cmake</code> present in your project, the build will fail and tell you that <code class="language-plaintext highlighter-rouge">CMakeDemo</code> hasn’t been properly initialized.
The file is split into two parts: the first part finds the library and include files on your system, according to some prescribed rule; the second part populates and exports the CMake targets for users to include.</p> <div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># - Try to find the CMakeDemo library</span> <span class="c1"># Once done this will define</span> <span class="c1">#</span> <span class="c1"># CMakeDemo_FOUND - system has CMakeDemo</span> <span class="c1"># CMakeDemo_INCLUDE_DIR - CMakeDemo include directory</span> <span class="c1"># CMakeDemo_LIB - CMakeDemo library directory</span> <span class="c1"># CMakeDemo_LIBRARIES - CMakeDemo libraries to link</span> <span class="nb">if</span><span class="p">(</span>CMakeDemo_FOUND<span class="p">)</span> <span class="nb">return</span><span class="p">()</span> <span class="nb">endif</span><span class="p">()</span> <span class="c1"># We prioritize libraries installed in /usr/local with the prefix .../CMakeDemo-*, </span> <span class="c1"># so we make a list of them here</span> <span class="nb">file</span><span class="p">(</span>GLOB lib_glob <span class="s2">"/usr/local/lib/CMakeDemo-*"</span><span class="p">)</span> <span class="nb">file</span><span class="p">(</span>GLOB inc_glob <span class="s2">"/usr/local/include/CMakeDemo-*"</span><span class="p">)</span> <span class="c1"># Find the library with the name "CMakeDemo" on the system. Store the final path</span> <span class="c1"># in the variable CMakeDemo_LIB</span> <span class="nb">find_library</span><span class="p">(</span>CMakeDemo_LIB <span class="c1"># The library is named "CMakeDemo", but can have various library forms, like</span> <span class="c1"># libCMakeDemo.a, libCMakeDemo.so, libCMakeDemo.so.1.x, etc. 
This should</span> <span class="c1"># search for any of these.</span> NAMES CMakeDemo <span class="c1"># Provide a list of places to look based on prior knowledge about the system.</span> <span class="c1"># We want the user to override /usr/local with environment variables, so</span> <span class="c1"># this is included here.</span> HINTS <span class="si">${</span><span class="nv">CMakeDemo_DIR</span><span class="si">}</span> <span class="si">${</span><span class="nv">CMAKEDEMO_DIR</span><span class="si">}</span> $ENV{CMakeDemo_DIR} $ENV{CMAKEDEMO_DIR} ENV CMAKEDEMO_DIR <span class="c1"># Provide a list of places to look as defaults. /usr/local shows up because</span> <span class="c1"># that's the default install location for most libs. The globbed paths are</span> <span class="c1"># placed here as well.</span> PATHS /usr /usr/local /usr/local/lib <span class="si">${</span><span class="nv">lib_glob</span><span class="si">}</span> <span class="c1"># Constrain the end of the full path to the detected library, not including</span> <span class="c1"># the name of the library itself.</span> PATH_SUFFIXES lib <span class="p">)</span> <span class="c1"># Find the path to the file "source_file.hpp" on the system. Store the final</span> <span class="c1"># path in the variable CMakeDemo_INCLUDE_DIR.
The HINTS, PATHS, and</span> <span class="c1"># PATH_SUFFIXES arguments have the same meaning as in find_library().</span> <span class="nb">find_path</span><span class="p">(</span>CMakeDemo_INCLUDE_DIR source_file.hpp HINTS <span class="si">${</span><span class="nv">CMakeDemo_DIR</span><span class="si">}</span> <span class="si">${</span><span class="nv">CMAKEDEMO_DIR</span><span class="si">}</span> $ENV{CMakeDemo_DIR} $ENV{CMAKEDEMO_DIR} ENV CMAKEDEMO_DIR PATHS /usr /usr/local /usr/local/include <span class="si">${</span><span class="nv">inc_glob</span><span class="si">}</span> PATH_SUFFIXES include <span class="p">)</span> <span class="c1"># Check that both the paths to the include and library directory were found.</span> <span class="nb">include</span><span class="p">(</span>FindPackageHandleStandardArgs<span class="p">)</span> <span class="nf">find_package_handle_standard_args</span><span class="p">(</span>CMakeDemo <span class="s2">"</span><span class="se">\n</span><span class="s2">CMakeDemo not found --- You can download it using:</span><span class="se">\n\t</span><span class="s2">git clone https://github.com/mmorse1217/cmake-project-template</span><span class="se">\n</span><span class="s2"> and set the CMAKEDEMO_DIR environment variable accordingly"</span> CMakeDemo_LIB CMakeDemo_INCLUDE_DIR<span class="p">)</span> <span class="c1"># These variables don't show up in the GUI version of CMake. Not required but</span> <span class="c1"># people seem to do this...</span> <span class="nb">mark_as_advanced</span><span class="p">(</span>CMakeDemo_INCLUDE_DIR CMakeDemo_LIB<span class="p">)</span> <span class="c1"># Finish defining the variables specified above.
Variable names here follow</span> <span class="c1"># CMake convention.</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_INCLUDE_DIRS <span class="si">${</span><span class="nv">CMakeDemo_INCLUDE_DIR</span><span class="si">}</span><span class="p">)</span> <span class="nb">set</span><span class="p">(</span>CMakeDemo_LIBRARIES <span class="si">${</span><span class="nv">CMakeDemo_LIB</span><span class="si">}</span><span class="p">)</span> <span class="c1"># If the above CMake code was successful and we found the library, and there is</span> <span class="c1"># no target defined, let's make one.</span> <span class="nb">if</span><span class="p">(</span>CMakeDemo_FOUND AND NOT TARGET CMakeDemo::CMakeDemo<span class="p">)</span> <span class="nb">add_library</span><span class="p">(</span>CMakeDemo::CMakeDemo UNKNOWN IMPORTED<span class="p">)</span> <span class="c1"># Set location of interface include directory, i.e., the directory</span> <span class="c1"># containing the header files for the installed library</span> <span class="nb">set_target_properties</span><span class="p">(</span>CMakeDemo::CMakeDemo PROPERTIES INTERFACE_INCLUDE_DIRECTORIES <span class="s2">"</span><span class="si">${</span><span class="nv">CMakeDemo_INCLUDE_DIRS</span><span class="si">}</span><span class="s2">"</span> <span class="p">)</span> <span class="c1"># Set location of the installed library</span> <span class="nb">set_target_properties</span><span class="p">(</span>CMakeDemo::CMakeDemo PROPERTIES IMPORTED_LINK_INTERFACE_LANGUAGES <span class="s2">"CXX"</span> IMPORTED_LOCATION <span class="s2">"</span><span class="si">${</span><span class="nv">CMakeDemo_LIBRARIES</span><span class="si">}</span><span class="s2">"</span> <span class="p">)</span> <span class="nb">endif</span><span class="p">()</span> </code></pre></div></div> <h2 id="putting-it-all-together">Putting it all together</h2> <p>To compile the project with CMake, we prefer <em>out-of-source</em> builds, which separate
the source code and the compiled object files. We make a separate build directory and then compile the code there:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">mkdir </span>build <span class="nb">cd </span>build cmake .. make </code></pre></div></div> <p>This will compile all targets specified in the project. To install our compiled targets in <code class="language-plaintext highlighter-rouge">/usr/local</code>, we run</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make <span class="nb">install</span> </code></pre></div></div> <p>from within the <code class="language-plaintext highlighter-rouge">build</code> directory. To run unit tests, we can run</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make <span class="nb">test</span> </code></pre></div></div> <p>or</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ctest </code></pre></div></div> <p>again from within the <code class="language-plaintext highlighter-rouge">build</code> directory.</p> <h2 id="some-closing-thoughts">Some closing thoughts</h2> <p>My time learning the ropes of CMake has been somewhat of a wild ride. Since I’m fairly new to it, I don’t think I appreciate its power yet. My biggest problem with Makefiles is the different conditionals for different operating systems, setting certain flags for different compilers, etc. CMake does address the compiler flag issue between <code class="language-plaintext highlighter-rouge">icc</code>, <code class="language-plaintext highlighter-rouge">gcc</code>, and <code class="language-plaintext highlighter-rouge">clang</code>.
However, if your find-module requires several conditionals in order to handle different operating systems, is it really platform independent?</p> <p>A big problem that I had with Makefiles was simple dependency mistakes. In the past, I have accidentally written Makefiles that seemed to work at first, but ultimately didn’t properly trigger recompilation in certain files when their dependencies changed, which wasted a lot of time. Moreover, parallel compilation is <a href="https://www.cmcrossroads.com/article/pitfalls-and-benefits-gnu-make-parallelization">inherently bottlenecked</a> by how well you express your dependencies in <code class="language-plaintext highlighter-rouge">make</code>-speak. CMake seems to have solved this issue, at least for my projects; my compilation times have definitely improved.</p> <p>In terms of final opinions, it seems that people on the Internet have intense feelings about CMake. I am pretty indifferent about it. It solves some problems, but seems to create about as many problems as it solves.
This <a href="https://izzys.casa/2019/02/everything-you-never-wanted-to-know-about-cmake/">article</a> describes some of the induced insanity nicely.</p> <p>Finally, here are some links that came in handy in my travels:</p> <ul> <li><a href="https://youtu.be/bsXLMQ6WgIk">C++Now 2017: Daniel Pfeifer “Effective CMake”</a> (also <a href="https://github.com/boostcon/cppnow_presentations_2017/blob/master/05-19-2017_friday/effective_cmake__daniel_pfeifer__cppnow_05-19-2017.pdf">here</a> are the slides themselves)</li> <li><a href="https://foonathan.net/2016/03/cmake-install/">foonathan::blog(): Tutorial: Easily supporting CMake install and find_package()</a></li> <li><a href="https://gist.github.com/mbinna/c61dbb39bca0e4fb7d1f73b0d66a4fd1">Effective Modern CMake</a></li> <li><a href="https://gitlab.kitware.com/cmake/community/-/wikis/doc/tutorials/Exporting-and-Importing-Targets">CMake Documentation: Exporting and Importing Targets</a></li> <li><a href="https://cliutils.gitlab.io/modern-cmake/">An Introduction to Modern CMake</a></li> </ul>An attempt to understand what all the hype is about...Fixing some hiccups in org mode2020-05-02T00:00:00+00:002020-05-02T00:00:00+00:00https://mjmorse.com/blog/emacs-org-hiccups<p>In a clean installation of Emacs 26.3 with Spacemacs, I have two glaring bugs that mess up my workflow:</p> <ol> <li> <p>Archiving a <code class="language-plaintext highlighter-rouge">TODO</code> with <code class="language-plaintext highlighter-rouge">C-c C-x C-a</code> produces an error: <code class="language-plaintext highlighter-rouge">org-copy-subtree:
Invalid function: org-preserve-local-variables</code>. I have also seen this error when using <code class="language-plaintext highlighter-rouge">org-refile</code>.</p> </li> <li> <p>Performing an <code class="language-plaintext highlighter-rouge">org-agenda</code> tag filter via <code class="language-plaintext highlighter-rouge">C-c a m</code> for any tag causes Emacs to hang indefinitely.</p> </li> </ol> <p>It seems that both of these are addressed by explicitly deleting packages in <code class="language-plaintext highlighter-rouge">.emacs.d/elpa</code>, which forces Emacs to reinstall them. To address 1., run (<a href="https://github.com/syl20bnr/spacemacs/issues/11801">source</a>):</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd</span> ~/.emacs.d/elpa/ find org<span class="k">*</span>/<span class="k">*</span>.elc <span class="nt">-print0</span> | xargs <span class="nt">-0</span> <span class="nb">rm</span> </code></pre></div></div> <p>To address 2., run (<a href="https://emacs.stackexchange.com/questions/48505/help-debugging-org-mode-hangs-on-agenda-tag-search">source</a>):</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">rm</span> <span class="nt">-rf</span> ~/.emacs.d/elpa/<span class="k">*</span> </code></pre></div></div> <p>I’m not an Emacs expert, so it’s not immediately clear to me why this works. More mysteriously, it seems that the command to solve problem 2. does not solve problem 1.
A <code class="language-plaintext highlighter-rouge">.elc</code> file is a compiled Elisp file, so this seems to force a recompile of <code class="language-plaintext highlighter-rouge">org</code> packages themselves.</p>A list of subtle tweaks to fix some parts of org-mode for Emacs 26.3Squashing Git Commits2020-04-28T00:00:00+00:002020-04-28T00:00:00+00:00https://mjmorse.com/blog/git-rebase<p>Let’s say you have a bunch of commits in your git history and you would like to collapse some of the recent ones into a single commit. You can do this with <code class="language-plaintext highlighter-rouge">git rebase</code>; this is affectionately referred to as “squashing” commits. To do so:</p> <ol> <li>Determine either i.) the number of commits that you would like to squash into a single commit or ii.) the commit <em>before</em> the commits that you want to squash.</li> <li> <ul> <li>If you want to squash the last N commits: <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> git rebase -i HEAD~N </code></pre></div> </div> </li> <li>If you have the hash of the commit before the commits to squash: <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> git rebase -i &lt;commit-hash&gt; </code></pre></div> </div> </li> </ul> </li> <li>In the subsequent file that will open in your default editor, pick the commit(s) that you want to survive the rebase.
In this case, we pick only the top commit (denoted <code class="language-plaintext highlighter-rouge">pick</code> in the leftmost column), and replace the other commits’ <code class="language-plaintext highlighter-rouge">pick</code>’s with an <code class="language-plaintext highlighter-rouge">s</code> (for squash). Save and exit the editor.</li> <li>Rewrite the commit message and delete the old commits.</li> </ol> <p>For a full worked out example, see <a href="https://www.internalpointers.com/post/squash-commits-into-one-git">this post</a>.</p>Let's say you have a bunch of commits in your git history...Vim and Language Servers2020-04-27T00:00:00+00:002020-04-27T00:00:00+00:00https://mjmorse.com/blog/vim-and-language-servers<p>I spend the majority of my time on a computer using Vim in one form or another. The most frequent criticism of this that I’ve heard is something to the effect of: “There’s no autocomplete/error-checking/smart-refactoring/something else in Vim! How can you program like this?!”</p> <p>Although this isn’t technically true (the built-in autocomplete features are actually <a href="https://stackoverflow.com/a/5169683">pretty good</a>), it’s a decent point. Autocomplete and its intelligent cousins, broadly referred to as “intellisense”, are useful tools. Unfortunately, the process of setting up and properly configuring plugins for these features can be a daunting task. I’ve tried many times to do so; <a href="https://github.com/ycm-core/YouCompleteMe">YCM</a> comes to mind.
However, I ultimately remove them out of frustration or dissatisfaction and return to the ever-faithful <a href="https://github.com/ervandew/supertab">supertab</a>, which is a text-based autocomplete plugin that is mildly smarter than the built-in ones. I would be lying if I said that I didn’t miss these features, though.</p> <h2 id="microsoft-to-the-rescue-with-language-servers">Microsoft to the rescue with Language Servers</h2> <p>Recently, I’ve been hearing a lot of praise for <a href="https://code.visualstudio.com/">VSCode</a>, a new(ish) editor from Microsoft. It seems that the secret sauce of VSCode is the <a href="https://microsoft.github.io/language-server-protocol/">Language Server Protocol</a> (LSP), which is a standardized protocol introduced by Microsoft for communication between an editor and something called a Language Server. From the <a href="https://microsoft.github.io/language-server-protocol/">LSP webpage</a>:</p> <blockquote> <p>A <em>Language Server</em> is meant to provide the language-specific smarts and communicate with development tools over a protocol that enables inter-process communication.</p> </blockquote> <p>It’s basically an interface layer between an editor and a language. Instead of the editor directly calling <code class="language-plaintext highlighter-rouge">python</code> or <code class="language-plaintext highlighter-rouge">clang</code> to analyze the code, it makes a request to a Language Server to provide the information needed to support the smart feature.</p> <p>Why is this abstraction useful? Well, if there are $$m$$ editors and $$n$$ languages, full language support for all editors requires implementing $$m \cdot n$$ plugins for each editor-language pair. Instead, the Language Server model requires each editor to interface with a Language Server via JSON and for each language to implement a Language Server, which is $$m$$ editor plugins and $$n$$ Language Servers.
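To make the "via JSON" part concrete, here is a small sketch of what a single LSP message looks like on the wire: a Content-Length header, a blank line, then a JSON-RPC 2.0 body. The initialize method comes from the LSP specification; a real editor sends a much larger capabilities object than this.

```shell
# Frame a minimal LSP "initialize" request the way it appears on the wire:
# a Content-Length header, a blank line, then the JSON-RPC 2.0 body.
body='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"rootUri":null,"capabilities":{}}}'
msg="$(printf 'Content-Length: %s\r\n\r\n%s' "${#body}" "$body")"
printf '%s\n' "$msg"
```

An editor writes messages like this to the server's stdin (or a TCP socket) and reads similarly framed responses back; a plugin like coc.nvim handles the framing for you, which is exactly why one server can serve many editors.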
Essentially, if you have a Language Server installed on your machine, each editor on your machine with an LSP plugin can use it. More importantly, because a Language Server is a real server, the source code and build system can live on a remote server and communicate with client editors remotely over TCP. This also has the nice benefit of keeping your editor fast and snappy while the Language Server does the heavy lifting asynchronously.</p> <p>This sounded great to me, so I decided to dive back into the wild world of Vim plugins and configurations to integrate this into my workflow. My goals were:</p> <ul> <li>acquire the superpowers of smart code completion in Vim</li> <li>avoid cluttering up my local environment</li> <li>automate the setup process</li> </ul> <p>After some preliminary research, it seemed that <a href="https://github.com/neoclide/coc.nvim">coc.nvim</a> was far and away the best LSP-compliant plugin for Vim. Other popular alternatives were <a href="https://github.com/autozimu/LanguageClient-neovim">LanguageClient-neovim</a> and <a href="https://github.com/prabirshrestha/vim-lsp">vim-lsp</a>, but I settled on coc.nvim because i.) it is extremely popular and ii.) the maintainers are unbelievably active. These are very promising signs for the future of an open-source project. It seems that others have success with other plugins, so by all means try these out for yourself.</p> <h2 id="setting-up-vim-with-language-servers">Setting up Vim with Language Servers</h2> <p>I was pleasantly surprised by how painless this was to setup. 
First, coc.nvim runs on <code class="language-plaintext highlighter-rouge">nodejs</code>, so I need to install it along with <code class="language-plaintext highlighter-rouge">yarn</code>:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl <span class="nt">-sL</span> https://deb.nodesource.com/setup_14.x | <span class="nb">sudo</span> <span class="nt">-E</span> bash - <span class="nb">sudo </span>apt update <span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> nodejs yarn </code></pre></div></div> <p>Next, I have to install the Language Servers themselves. I am using Language Servers here that are easy to install, rather than the “best.” For Python, I’m using Palantir’s <a href="https://github.com/palantir/python-language-server">version</a>, but Microsoft seems to <a href="https://github.com/microsoft/python-language-server">have their own</a>. I also need <code class="language-plaintext highlighter-rouge">pyflakes</code> for linting (you can choose your favorite linter later):</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip <span class="nb">install </span>python-language-server pyflakes </code></pre></div></div> <p>For C++, I’m using <a href="https://clangd.llvm.org/">clangd</a>. To minimize coc.nvim configuration, I updated the system default for <code class="language-plaintext highlighter-rouge">clangd</code> to use <code class="language-plaintext highlighter-rouge">clangd-9</code>:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> clangd-9 <span class="nb">sudo </span>update-alternatives <span class="nt">--install</span> /usr/bin/clangd clangd /usr/bin/clangd-9 100 </code></pre></div></div> <p>Setting up Latex is a bit more involved.
There seemed to be two dominant Language Servers, <a href="https://texlab.netlify.app/">texlab</a> and <a href="https://github.com/astoff/digestif">digestif</a>. It wasn’t immediately clear which option was better; both seemed about equally active, both supported most features I cared about, and both were implemented in languages that I had no experience in (Rust and Lua, respectively). I somewhat randomly picked <code class="language-plaintext highlighter-rouge">texlab</code>, which means that I need to install Rust, Latex and <code class="language-plaintext highlighter-rouge">texlab</code>:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Explicitly install tzdata, required by texlab, by hand to allow for a scripted install, </span> <span class="nb">export </span><span class="nv">DEBIAN_FRONTEND</span><span class="o">=</span>noninteractive <span class="nb">ln</span> <span class="nt">-fs</span> /usr/share/zoneinfo/America/New_York /etc/localtime <span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> tzdata dpkg-reconfigure <span class="nt">--frontend</span> noninteractive tzdata <span class="c"># Install Latex</span> <span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\</span> texlive-latex-extra <span class="se">\</span> texlive-science <span class="se">\</span> curl <span class="c"># Install dependencies for latex Language Server</span> curl <span class="nt">--proto</span> <span class="s1">'=https'</span> <span class="nt">--tlsv1</span>.2 <span class="nt">-sSf</span> https://sh.rustup.rs | sh <span class="nt">-s</span> <span class="nt">--</span> <span class="nt">-y</span> ~/.cargo/bin/cargo <span class="nb">install</span> <span class="nt">--git</span> https://github.com/latex-lsp/texlab.git </code></pre></div></div> <p>Next, I need to add the coc extensions for each language server I will use. 
This can be done either by typing <code class="language-plaintext highlighter-rouge">:CocInstall &lt;coc-extension&gt;</code> from within Vim, or by specifying the <code class="language-plaintext highlighter-rouge">coc_global_extensions</code> variable in your <code class="language-plaintext highlighter-rouge">.vimrc</code>. I’ve installed a few extras to provide extra code snippet functionality and json parsing:</p> <div class="language-vim highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">let</span> <span class="nv">g:coc_global_extensions</span> <span class="p">=</span> <span class="p">[</span> <span class="se"> \</span> <span class="s1">'coc-json'</span><span class="p">,</span> <span class="se"> \</span> <span class="s1">'coc-clangd'</span><span class="p">,</span> <span class="se"> \</span> <span class="s1">'coc-python'</span><span class="p">,</span> <span class="se"> \</span> <span class="s1">'coc-snippets'</span><span class="p">,</span> <span class="se"> \</span> <span class="s1">'coc-ultisnips'</span><span class="p">,</span> <span class="se"> \</span> <span class="s1">'coc-texlab'</span><span class="p">,</span> <span class="se"> \</span> <span class="p">]</span> </code></pre></div></div> <p>The final step is writing the <code class="language-plaintext highlighter-rouge">coc-settings.json</code> configuration file. This file should live in the <code class="language-plaintext highlighter-rouge">~/.vim</code> directory and contain a single JSON dictionary, with a key <code class="language-plaintext highlighter-rouge">languageserver</code>, whose value contains the configuration required for each Language Server. The <a href="https://github.com/neoclide/coc.nvim/wiki/Language-servers">coc.nvim Github page</a> has a list of sample entries for some popular languages, which I shamelessly copied to build my own configuration file. For Python, I was able to copy the entry as is. 
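For reference, a minimal <code class="language-plaintext highlighter-rouge">languageserver</code> entry for the Python server looks something like the following; the field names follow the coc.nvim wiki, but treat the exact values as a sketch rather than a verbatim copy of the wiki entry:

```json
{
  "languageserver": {
    "python": {
      "command": "pyls",
      "filetypes": ["python"]
    }
  }
}
```

The <code class="language-plaintext highlighter-rouge">command</code> field names the Language Server executable on your PATH, and <code class="language-plaintext highlighter-rouge">filetypes</code> tells coc.nvim which buffers to attach the server to.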
For Latex, I needed to change the command field to point to <code class="language-plaintext highlighter-rouge">texlab</code>’s location. Miraculously, C++ needed <em>no</em> configuration, which made me unreasonably happy. My simple <code class="language-plaintext highlighter-rouge">coc-settings.json</code> is available <a href="https://github.com/mmorse1217/dotfiles/blob/master/coc-settings.json">here</a> in case the formatting is unclear.</p> <h2 id="some-downsides-of-cocnvim">Some downsides of coc.nvim</h2> <p>After setting up my Language Servers and playing around with my autocomplete, I noticed a several drawbacks of coc.nvim:</p> <ol> <li> <p>I felt slightly betrayed. I almost always have to install a “coc extension” for a particular language in order to use all of the features of coc.nvim. This seems to defeat the purpose of installing a Language Server in the first place. I’m willing to overlook this since I’m only using Vim for development these days and VSCode also has this behavior. Moreover, it seems that they are truly extensions to a vanilla Language Server, which means that coc should work to some degree without them (although I haven’t tried).</p> </li> <li> <p>Language servers are yet another dependency slowly taking over my machine. For Python and C++, installing <a href="https://github.com/microsoft/python-language-server">python-language-server</a> and <a href="https://clangd.llvm.org/">clangd</a> was painless, but each had a few more dependencies than I would like. But I drew the line at <a href="https://github.com/latex-lsp/texlab">texlab</a>, which had so many dependencies that it made me question whether I really wanted autocomplete at all. Beyond installing Rust, which I have no need for, the <code class="language-plaintext highlighter-rouge">cargo build</code> to install <code class="language-plaintext highlighter-rouge">texlab</code> command tried to install <em>338 Javascript libraries</em>. 
This made me feel both violently ill and as though someone just stole my wallet.</p> </li> <li> <p>Continuing from the previous point, suppose that I have a large C++ project with many dependencies. Not only will I need to install the dependencies in order to compile the project, but I will also need them in order to use autocomplete. This may seem pedantic, but if the project already lives in a virtual environment, container, or on a remote machine, this defeats the purpose of the isolation. The Language Server needs these dependencies locally as well, because <a href="https://github.com/neoclide/coc.nvim/issues/761">coc.nvim doesn’t seem to support remote Language Servers over TCP</a>.</p> </li> <li> <p>Installing a Language Server for each language that I need on each machine that I use is tedious. Compared to just starting Vim and calling <code class="language-plaintext highlighter-rouge">PlugInstall</code>, this is much more work and I’m extremely lazy.</p> </li> </ol> <p>There must be a better way to deal with this.</p> <h2 id="hiding-the-ugly-bits">Hiding the ugly bits</h2> <p>Fortunately, there’s a somewhat simple solution to my primary complaints. I don’t need these Language Servers at all times on my main machine, only when I’m programming. Usually, <code class="language-plaintext highlighter-rouge">supertab</code> is sufficient (if not overkill) to edit anything else. This means that I can put my code complete tools in the environment where I’m actually using them: inside of a project-specific Docker container. This sandboxes the required dependencies and, as a bonus, automates the configuration of the Language Servers.</p> <p>This requires <code class="language-plaintext highlighter-rouge">bash</code> scripting my Vim installation, building and installing the Language Servers, and installing other Vim plugins. 
Since I have already <a href="https://github.com/mmorse1217/terraform">started automating my development environment</a>, I can reuse these scripts and dotfiles inside the Dockerfile without changes. Here’s a trimmed Dockerfile for a sample C++ project whose dependencies are first configured in <code class="language-plaintext highlighter-rouge">project-dependencies</code>, which is then extended with Vim and <code class="language-plaintext highlighter-rouge">clangd</code>:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>FROM ubuntu:18.04 as project-dependencies
# Set up and install compilers and dependencies for project
RUN ...
...
CMD ["/bin/bash"]

# Make a new image based on the project dependency image
FROM project-dependencies as project-dev
# Specify some environment variables for:
# 1. enabling coc.nvim in our .vimrc
# 2. disabling any installation prompts
# 3. setting up the proper number of terminal colors inside the container
ENV VIM_DEV=1 DEBIAN_FRONTEND=noninteractive TERM=xterm-256color
# Clone my repo of configuration scripts
RUN git clone https://github.com/mmorse1217/terraform --recursive
WORKDIR /terraform
# Symlink dotfiles
RUN bash dotfiles/setup.sh
RUN apt-get upgrade -y &amp;&amp; apt install -y sudo git vim
# Setup Language Servers for c++
# install nodejs + yarn for coc.nvim backend + clangd
RUN bash vim/lang-servers/setup.sh
RUN bash vim/lang-servers/clangd.sh
# install vim plugins, including coc.nvim
RUN bash vim/install_plugins.sh
CMD ["/bin/bash"]
</code></pre></div></div> <p>I also tweaked my <code class="language-plaintext highlighter-rouge">.vimrc</code> plugin list to check for the <code class="language-plaintext highlighter-rouge">VIM_DEV</code> environment variable before loading coc.nvim:</p> <div class="language-vim highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">call</span> plug#begin<span class="p">(</span><span 
class="s1">'~/.vim/vim-plug'</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">exists</span><span class="p">(</span><span class="s1">'$VIM_DEV'</span><span class="p">)</span>
    Plug <span class="s1">'neoclide/coc.nvim'</span><span class="p">,</span> <span class="p">{</span><span class="s1">'branch'</span><span class="p">:</span> <span class="s1">'release'</span><span class="p">}</span>
<span class="k">else</span>
    Plug <span class="s1">'ervandew/supertab'</span>
<span class="k">endif</span>
<span class="c">" other plugins ...</span>
<span class="k">call</span> plug#end
</code></pre></div></div> <p>Here’s another Dockerfile for working on a general Latex project, with <code class="language-plaintext highlighter-rouge">texlab</code> and all of its dependencies:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>FROM ubuntu:18.04 as vim
# update and install all packages
RUN apt-get update
# need this for fast fuzzy file searching with vim
RUN apt install -y sudo silversearcher-ag
# Clone environment configuration
RUN git clone https://github.com/mmorse1217/terraform.git --recursive
WORKDIR terraform
# Symlink dotfiles
RUN bash dotfiles/setup.sh
# Same as above
ENV VIM_DEV=1 DEBIAN_FRONTEND=noninteractive TERM=xterm-256color
# Configure one dependency of texlab explicitly to avoid any
# required terminal input on build
RUN ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime &amp;&amp; \
    sudo apt-get install -y tzdata &amp;&amp; \
    dpkg-reconfigure --frontend noninteractive tzdata
# build vim from source
RUN bash vim/build_from_source.sh
# Setup Language Servers
# install nodejs + yarn for coc.nvim backend
RUN bash vim/lang-servers/setup.sh
# Install texlab + latex (takes a while...)
RUN bash vim/lang-servers/texlab.sh
# install vim plugins, including coc.nvim
RUN bash vim/install_plugins.sh
CMD ["/bin/bash"]
</code></pre></div></div> <p>Once we have these Dockerfiles, we can build images and create a container in the usual fashion: below is an example with the Latex + <code class="language-plaintext highlighter-rouge">texlab</code> Dockerfile above. Note that here we are mounting the local directory as a volume so that we can edit the code on the host machine from inside the container (and vice versa).</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker build -t vim-latex .
$ docker create -it -v `pwd`:/src --name latex-proj vim-latex
</code></pre></div></div> <p>Then we can start the container and start a new <code class="language-plaintext highlighter-rouge">bash</code> session inside …</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker start latex-proj
$ docker attach latex-proj
root@3160b1a2b8fb:~#
</code></pre></div></div> <p>… and verify that we have code complete features working when we edit a <code class="language-plaintext highlighter-rouge">tex</code> file. We can even compile the <code class="language-plaintext highlighter-rouge">tex</code> file inside the container, and the pdf along with the build artifacts will appear on the host machine (no X11 forwarding required). Not only is this workflow reproducible with two commands, but it leaves the host machine free of dependencies once you are finished.</p> <h2 id="wrapping-up">Wrapping up</h2> <p>I’m fairly happy with Language Servers and coc.nvim so far. They are finally providing the level of quality that many plugins have promised before but failed to deliver on.</p> <p>The only thing that I can imagine that would improve the situation is the ability to “concatenate” prebuilt Docker images. 
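</p>

<p>A partial version of this already exists: a Dockerfile can copy files directly out of another prebuilt image with <code class="language-plaintext highlighter-rouge">COPY --from</code>. This is not true layer concatenation, since everything copied is squashed into a single new layer and the source paths must be known in advance, but it does let a published image be reused without rebuilding it. A hypothetical sketch, where the <code class="language-plaintext highlighter-rouge">vim-clangd</code> image name and the copied paths are assumptions of mine rather than a real published image:</p>

```dockerfile
# Hypothetical sketch: "vim-clangd:latest" and the paths below are
# assumptions, not a real published image.
FROM project-dependencies as project-dev

# COPY --from can name any image, not just an earlier build stage.
# The copied files land in one new layer; the source image's original
# layers are not preserved, so this is reuse, not true concatenation.
COPY --from=vim-clangd:latest /root/.vim /root/.vim
COPY --from=vim-clangd:latest /usr/local/bin/clangd /usr/local/bin/clangd

CMD ["/bin/bash"]
```

<p>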
The <code class="language-plaintext highlighter-rouge">project-dependencies</code> image is likely built during development or continuous integration, or pulled from Docker Hub (or both). In an ideal world, one could prebuild <code class="language-plaintext highlighter-rouge">project-dependencies</code> and a <code class="language-plaintext highlighter-rouge">vim-clangd</code> image above, pull them from Docker Hub, and add the layers from one image to another. This would require much less time and computation. However, this seems to be a Pandora’s box due to the generality of containers and appears to <a href="https://github.com/moby/moby/issues/3378">be a hot-button issue</a>. For now, I’ll settle for maximizing code reuse via bash scripts, concatenating Dockerfiles, and taking a coffee break.</p>

Setting up autocomplete tools in Vim using Language Servers

Export Inkscape SVG to PDF from command line
2019-10-14T00:00:00+00:00
https://mjmorse.com/blog/export-pdf-inkscape

<p>Inkscape comes with a command line interface for scripting various commands. To export an Inkscape SVG file <code class="language-plaintext highlighter-rouge">image.svg</code> to a PDF <code class="language-plaintext highlighter-rouge">image.pdf</code>, we can use the following command</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>inkscape --file=image.svg --without-gui --export-pdf=image.pdf </code></pre></div></div> <p>Check the man page for alternative file types. 
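</p>

<p>For converting many figures at once, the command wraps naturally in a small shell function. This is a sketch of my own (the function name is mine, and it assumes the pre-1.0 Inkscape CLI flags shown above):</p>

```shell
# svg2pdf: export an Inkscape SVG to a PDF with the same base name.
# "${1%.svg}" strips the .svg suffix, so image.svg -> image.pdf.
svg2pdf() {
    inkscape --file="$1" --without-gui --export-pdf="${1%.svg}.pdf"
}
```

<p>Then <code class="language-plaintext highlighter-rouge">for f in *.svg; do svg2pdf "$f"; done</code> exports a whole directory.</p>

<p>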
By default, this renders all drawn elements in the SVG to the PDF, rather than the selected page area. To render only the page selected in Inkscape, add the flag <code class="language-plaintext highlighter-rouge">--export-area-page</code>.</p> <p><a href="https://graphicdesign.stackexchange.com/questions/5880/how-to-export-an-inkscape-svg-file-to-a-pdf-and-maintain-the-integrity-of-the-im">Original SE answer</a></p>

Inkscape comes with a command line interface for scripting various commands...

Tmux cheatsheet
2019-10-14T00:00:00+00:00
https://mjmorse.com/blog/tmux-cheatsheet

<p>I check Daniel Miessler’s <a href="https://danielmiessler.com/study/tmux/">Tactical tmux</a> constantly, so I decided to make a more condensed cheatsheet that I can check locally. This isn’t a tutorial so much as a list of reference commands, with related configurations included for good measure. 
I’m by no means a power-user, so I’ll be updating this as I accumulate <code class="language-plaintext highlighter-rouge">tmux</code>-related knowledge.</p> <h3 id="bash-interface">Bash interface</h3> <ul> <li><code class="language-plaintext highlighter-rouge">tmux new -s session</code> start a new <code class="language-plaintext highlighter-rouge">tmux</code> session with name <code class="language-plaintext highlighter-rouge">session</code></li> <li><code class="language-plaintext highlighter-rouge">tmux detach</code> detach from current session</li> <li><code class="language-plaintext highlighter-rouge">tmux ls</code> list current sessions</li> <li><code class="language-plaintext highlighter-rouge">tmux a -t session</code> attach to the session with name <code class="language-plaintext highlighter-rouge">session</code></li> <li><code class="language-plaintext highlighter-rouge">tmux kill-session -t session</code> kill the session with name <code class="language-plaintext highlighter-rouge">session</code></li> </ul> <h3 id="commands">Commands</h3> <p>All commands are prefixed by <code class="language-plaintext highlighter-rouge">ctrl-b</code>.</p> <ul> <li><code class="language-plaintext highlighter-rouge">d</code> detach from the current session</li> <li><code class="language-plaintext highlighter-rouge">c</code> create a new window</li> <li><code class="language-plaintext highlighter-rouge">%</code> split the window into two side-by-side panes</li> <li><code class="language-plaintext highlighter-rouge">"</code> split the window into two stacked panes</li> <li><code class="language-plaintext highlighter-rouge">n</code> change to next window</li> <li><code class="language-plaintext highlighter-rouge">p</code> change to previous window</li> <li><code class="language-plaintext highlighter-rouge">:resize-pane -L 5</code> Expand the size of the current pane by 5 columns to the <em>left</em></li> <li><code class="language-plaintext highlighter-rouge">:resize-pane -R 5</code> Expand the size of the
current pane by 5 columns to the <em>right</em></li> <li><code class="language-plaintext highlighter-rouge">:respawn-pane -k</code> kill the command running in the current pane and respawn the pane</li> </ul> <h3 id="tmuxconf"><code class="language-plaintext highlighter-rouge">.tmux.conf</code></h3> <p>Vim-style pane navigation bindings (also prefixed by <code class="language-plaintext highlighter-rouge">ctrl-b</code>):</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R
</code></pre></div></div> <h3 id="vimrc"><code class="language-plaintext highlighter-rouge">.vimrc</code></h3> <p>Add the following lines to your <code class="language-plaintext highlighter-rouge">.vimrc</code> to let Vim inside tmux use the same color scheme as your terminal:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if $TERM == 'screen'
    set t_Co=256
endif
</code></pre></div></div> <p><a href="https://unix.stackexchange.com/a/201793">Original SE link</a></p>

A minimal tmux cheatsheet.