A typical software application is built from a code base containing thousands of lines, and a complex application easily reaches into the millions as its developers add ever more functionality. However, each line of code in a project is a place where somebody could have made a mistake, and even a single coding mistake among millions of lines can bring everything crashing down if it falls in a critical area. Many programmers rely on simple visual inspection and peer review to find lines of code that might contain mistakes, an approach that proves unworkable once a code base reaches significant size, not to mention that imperfect human eyes miss things all the time, even in small code bases. It's inevitable, then, that we'll need tools of some sort to analyze our code base. Ideally, these tools would be rolled into an automated build that processes all code after each change is committed, thereby giving us a system of continuous integration. On a team with multiple developers, such a system is invaluable; if any individual commits a change found to cause errors, we can easily revert the code to its previous, known-good state.
Today, we'll take a look at valgrind, a tool that comes in very handy as part of an automated build for C/C++ code. In applications built from these languages, the memory leak is one of the most common types of bugs as well as one of the most pernicious, often causing the user's system to slow to a crawl before an inexorable process crash hits. With valgrind's help, we can flush out such leaks, along with several other types of issues related to memory management. Furthermore, by incorporating valgrind into a project as a build step, we can detect critical bugs as soon as they are coded, hopefully preventing their propagation to production sources and keeping them out of the hands of users.
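To make that concrete, here is a minimal sketch of the kind of defect valgrind's memcheck tool reports. It's an illustrative example written for this article, not code taken from the valgrind-testapp project.

```c
/* leak.c - a contrived memory leak for valgrind to find.
 * Illustrative only; not part of the valgrind-testapp sources. */
#include <stdlib.h>
#include <string.h>

int main (void) {
	char *buf;

	buf = malloc (64);                /* allocate a 64-byte heap buffer */
	if (! buf) {
		return (1);
	}
	strcpy (buf, "hello, valgrind");  /* use the buffer... */
	return (0);                       /* ...but never free it, leaking 64 bytes on every run */
}
```

Compiling with debug symbols (`gcc -g -o leak leak.c`) and running `valgrind --leak-check=full --error-exitcode=1 ./leak` produces a report pinpointing the allocation that was never freed, while the `--error-exitcode` option makes valgrind exit with a nonzero status when it finds problems, giving an automated build a simple way to fail the job.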
This article refers to sample source files, which are available for cloning in the valgrind-testapp project on GitHub.
Call it a fact of life: computer systems fail, sometimes catastrophically. One afternoon not too long ago, a web server of mine became the latest in a long line of systems throughout history to do just that. It was midway through just another day of programming in the office when I noticed this server go completely offline without a warning or apparent cause. Contacting the data center support staff, we soon discovered that the server had been accidentally wiped and reinstalled in what I can only assume was a bad click or fat finger type of error. This server had been running several web sites, including membranesoftware.com and the forums we use for blog comments, but due to this mishap it was now rendered dead in the water, a purposeless brick.
Back in the old days, fixing our dead server would mean carefully reinstalling and reconfiguring the many software packages involved with a set of web sites, including nginx, Apache, PHP, MongoDB, and others, not to mention any custom software on the sites that we hope runs exactly the same way when transported into a shiny new system environment (hint: sometimes it doesn't). Dealing with all of this mess takes time and effort, which is not exactly ideal when there's other work to be done. In this case we were prepared, however, and reduced an afternoon's worth of work to just a few commands. Today, we'll see how that was possible thanks to Docker, a containerization layer providing an efficient and reliable paradigm for deploying software applications. We'll also look at a working project on GitHub that anyone could use to run a web server on any host able to start a Docker container.
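As a taste of how a container can stand in for all of that manual setup, here is a hypothetical sketch of pulling and running a stock web server image. The image, container name, port, and host path below are placeholders for illustration, not the actual commands or configuration used for membranesoftware.com or the GitHub project.

```sh
# Hypothetical example: the image, container name, port, and host path
# are placeholders, not the real site configuration.
docker pull nginx:latest
docker run -d --name web -p 80:80 \
    -v /srv/site:/usr/share/nginx/html:ro \
    nginx:latest
```

With the site's content stored on the host (or restored from a backup), a couple of commands like these bring a working web server online on any machine that can run Docker, with no package installation or service configuration needed on the host itself.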