In the programming courses that I have taught, I have provided students with a Makefile. It has been remarkably valuable for speeding up common tasks, and students report that they keep using it in later courses. This document provides a tutorial on how to use it. The Makefile is available at:

It currently supports C, C++, Java, and Python programs. The only requirement is a Unix-like command line. Section 1 provides an applied example of how to use the Makefile for a single project. If you want to reuse the Makefile for several projects, see Section 3.1. See Section 3.11 if you are using the Makefile on a fresh system. The command make help lists the supported commands and options.

1. Typical project lifecycle

This section shows an example use of the Makefile. Suppose your objective is to create a small program called div to do a simple calculation.

1.1. Getting the Makefile

Start by creating an empty folder on your computer. Make sure that neither the folder nor any of its parents contains spaces or special characters in its name. You can download the generic Makefile to your new folder:

$ mkdir div
$ cd div
$ wget jeisson.work/Makefile

If your system does not have wget installed, try cURL instead:

$ curl jeisson.work/Makefile -O

Now you have a folder with a Makefile that provides you a number of actions. You can list them:

$ make help

If you are curious, the first line of the Makefile indicates its version and some license details:

$ make version
# Makefile v3.5.0 2023-Oct-22 Jeisson Hidalgo-Cespedes ECCI-UCR CC-BY 4.0

1.2. Create a project

Now populate the directory with some files arranged in a typical project layout. We are going to use the C++ programming language (cpp) for this example. You may use c, java, or py as an alternative language instead of cpp.

$ make project=cpp

Now make will create some empty files and download others for C++. An internet connection is required. Let’s use the tree command to see the project structure (you may need to install tree on your system):

$ tree -a
.
├── .gitignore
├── Makefile
├── readme.adoc
├── src
│   └── solution.cpp
└── tests
    ├── input001.txt
    └── output001.txt

3 directories, 6 files

The purpose of each of these files and folders is explained in the next sections. You can add more language-specific files later. For example, running make project=py will add a solution.py to the src/ directory. You can provide several language names separated by spaces, as long as they are enclosed in single or double quotes for the command-line interpreter, for example:

make 'project=cpp java c'

Now you can edit your project files using a text editor or integrated development environment (IDE). Here we use Visual Studio Code (VSCode) as an example. If VSCode is already installed on your system, you can open it from the command line:

code .

You can see the resulting files for this project by visiting the div folder.

1.3. Analyze the problem

The analysis is the first phase of the problem-solving process. It implies understanding the problem. The products of the analysis phase are mainly documents. The readme file is a good place to explain the problem to be solved and to provide a user manual for the expected solution. The make project command generated a readme.adoc with some common sections in AsciiDoc notation. AsciiDoc is a very convenient notation, but you are free to replace it with any other you want, such as Markdown, LaTeX, or HTML. The following excerpt explains the problem we will use to demonstrate the Makefile.

= Integer division

How many 17-page exams can you print with a ream of 500 sheets? How many blank sheets do you have left? How many 50-staple bars do you need to staple the exams? Integer division (`div`) helps answer questions like these, while floating-point division complicates them. However, the integer-division operation is not available on most pocket or digital calculators.

This simple program reads lines from standard input. Each line contains two integer numbers, a dividend `a` and a divisor `b`. The program calculates the quotient `q` and the remainder `r`, and prints them as a relation `a = b * q + r` on its own line to standard output.

_Input example_:

[source]
----
17 2
18 2
500 9
55 50
----

_Output example_:

[source]
----
17 = 2 * 8 + 1
18 = 2 * 9 + 0
500 = 9 * 55 + 5
55 = 50 * 1 + 5
----

The third line of the output example indicates that 55 whole exams can be printed with a 500-sheet ream, and 5 sheets remain unused. The fourth line indicates that you will require two 50-staple bars: one bar is consumed entirely and just 5 staples are needed from the second bar. If 0 is provided as a divisor, the program prints `invalid data` for that line.

A word of advice: describe the problem as something that your program has solved (e.g: "This program calculates the integer division…​"), instead of using the typical homework imperative wording (e.g: "Write a program that calculates the integer division…​"). It is recommended that you add a user manual, your contact information, and license restrictions to your project’s readme document.

1.4. Create black-box test cases

Black-box testing compares the output that your executable program generates against the expected output for a specific input. The make project command generated one test case within the tests/ subfolder. Test cases are numbered. A test case is a pair of files:

input###.txt
output###.txt

where ### stands for the test case number, for example input001.txt. You can create more test cases (tc). The action make tc=N makes sure you have at least N test cases in the tests/ folder. For example, if you have 1 test case and you want to have 3, type:

$ make tc=3
touch tests/input002.txt
touch tests/input003.txt
touch tests/output002.txt
touch tests/output003.txt
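The test-case number is padded to three digits. As an illustration, the naming scheme can be sketched in C++ (test_filename is a hypothetical helper, not part of the Makefile):

```cpp
#include <cstdio>
#include <string>

// Build a test-case filename such as tests/input001.txt from its kind
// ("input" or "output") and its number, zero-padded to three digits.
std::string test_filename(const std::string& kind, int number) {
  char padded[16];
  std::snprintf(padded, sizeof padded, "%03d", number);
  return "tests/" + kind + padded + ".txt";
}
```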

The tc action will create empty files for the missing test cases, in this example for test cases 002 and 003. Now you can edit your new test cases using your preferred text editor or IDE. Remember good testing practices, e.g. test extreme values, negative values, null values, permute values if it makes sense, provide no values at all, or provide invalid data such as text instead of numbers, or any other situation that makes your software more robust for your valued clients. For the integer-division problem, 0 values are interesting as divisors, and negative values produce different results for remainder and modulo functions depending on the programming language. Here are the contents of our two new example test cases:

$ cat tests/input002.txt
0 0
0 1
1 1
1 0
0 2
1 2
2 1
2 2
3 2
2 3

$ cat tests/output002.txt
invalid data
0 = 1 * 0 + 0
1 = 1 * 1 + 0
invalid data
0 = 2 * 0 + 0
1 = 2 * 0 + 1
2 = 1 * 2 + 0
2 = 2 * 1 + 0
3 = 2 * 1 + 1
2 = 3 * 0 + 2

$ cat tests/input003.txt
0 -1
-1 -1
-1 0
1 -1
-1 1
2 -1
-2 1
-2 -1
1 -2
-1 2
-1 -2
3 -2
-3 2
-3 -2
2 -3
-2 3
-2 -3

$ cat tests/output003.txt
0 = -1 * 0 + 0
-1 = -1 * 1 + 0
invalid data
1 = -1 * -1 + 0
-1 = 1 * -1 + 0
2 = -1 * -2 + 0
-2 = 1 * -2 + 0
-2 = -1 * 2 + 0
1 = -2 * 0 + 1
-1 = 2 * 0 + -1
-1 = -2 * 0 + -1
3 = -2 * -1 + 1
-3 = 2 * -1 + -1
-3 = -2 * 1 + -1
2 = -3 * 0 + 2
-2 = 3 * 0 + -2
-2 = -3 * 0 + -2
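The negative cases above follow C++'s truncated division: the quotient truncates toward zero and the remainder takes the dividend's sign. A minimal self-contained check (divide is a hypothetical helper mirroring the program's output format):

```cpp
#include <string>

// Format one output line of div, relying on C++'s truncated division:
// -1 / 2 is 0 and -1 % 2 is -1, matching the expected outputs above.
std::string divide(long dividend, long divisor) {
  if (divisor == 0) {
    return "invalid data";
  }
  const long quotient = dividend / divisor;
  const long remainder = dividend % divisor;
  return std::to_string(dividend) + " = " + std::to_string(divisor) + " * "
      + std::to_string(quotient) + " + " + std::to_string(remainder);
}
```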

If you call make tc=N with an N that is lower than the actual number of test cases in the tests/ folder, nothing is changed. If you want to remove some test cases, you have to do it manually, taking into account whether they are already under version control.

1.5. Design a solution

Once you have created test cases, you will have a deeper insight into the problem and the user's needs. You can now solve the problem using cheap and abstract artifacts called models, before dealing with the low-level idiosyncrasies of programming languages. In this phase you design a solution model. Models depend on the computing paradigm you are using. Some examples:

  1. For functional programming, models are math and you may want to use LaTeX documents.

  2. For procedural programming, models are algorithms and you may want to use flowcharts or pseudocode.

  3. For object-oriented programming, models are UML (Unified Modeling Language) diagrams and you may want to use a UML design tool.

To solve our integer-division problem we are going to use the procedural programming paradigm and pseudocode artifacts. For this particular paradigm, the Makefile can create some initial files:

$ make project=design

The make project=design command creates:

  1. a design/ subfolder

  2. a design/solution.pseudo file containing the main() procedure

  3. a design/readme.adoc that includes (imports) the solution.pseudo file

Now the project directory structure looks like:

$ tree
.
├── Makefile
├── design
│   ├── readme.adoc
│   └── solution.pseudo
├── readme.adoc
├── src
│   └── solution.cpp
└── tests
    ├── input001.txt
    ├── input002.txt
    ├── input003.txt
    ├── output001.txt
    ├── output002.txt
    └── output003.txt

You can use your preferred text editor or IDE to create your design. For VSCode you can install an extension that provides syntax highlighting for .pseudo files, such as Pseudocode. Remember there is no standard for pseudocode. There are notations oriented to natural language, math notation, or a programming language. The following design uses the Pseudocode extension’s nomenclature:

procedure main()
  while there are input data do
    input dividend, divisor
    if divisor is 0 then
      output "invalid data"
    else
      set quotient = dividend div divisor
      set remainder = dividend mod divisor
      output "{dividend} = {divisor} * {quotient} + {remainder}"
    end if
  end while
end procedure

Remember to test (verify) your design before implementing it. You can trace it instruction by instruction using a paper sheet or a spreadsheet: write the variables down in a table and trace their values. If your solution effectively solves the problem, you can move on to the next phase.

1.6. Build the solution

Once you have a tested design, you translate the model to a programming language. This phase is called implementation or coding. By convention the source code files are stored in the src/ folder (short for "source code"). You can create subfolders within src/ to keep your code organized as your project grows. Use your preferred text editor or IDE to modify the source files.

If you want to preserve the pseudocode in the source code, copy your pseudocode into your source code files and convert the pasted pseudocode into comments. The following regular expression preserves the pseudocode indentation:

  • Search pattern: ^(\s*)(\S)

  • Replace by: \1// \2 or $1// $2. For Python use # instead of //.
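The effect of this search-and-replace can be checked with C++'s own <regex> library (a sketch applying the same pattern to one line, as an editor would):

```cpp
#include <regex>
#include <string>

// Prefix the first non-space character of a line with "// ",
// preserving the leading indentation captured by group 1.
std::string comment_out(const std::string& line) {
  static const std::regex pattern(R"(^(\s*)(\S))");
  return std::regex_replace(line, pattern, "$1// $2",
                            std::regex_constants::format_first_only);
}
```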

Finally, insert the code in your programming language below each pseudocode comment. The following is the result for C++:

// Copyright 2023 Jeisson Hidalgo <jeisson.hidalgo@ucr.ac.cr> CC-BY-4
#include <iostream>

/**
 * @brief Read dividends and divisors from stdin and print quotients and
 * remainders of their integer division.
 *
 * @return Status code to the operating system, 0 means success.
 */
int main() {
  long dividend = 0, divisor = 0;
  // while there are input data do
  while (std::cin >> dividend >> divisor) {
    // input dividend, divisor
    // if divisor is 0 then
    if (divisor == 0) {
      // output "invalid data"
      std::cout << "invalid data" << std::endl;
    } else {
      // set quotient = dividend div divisor
      const long quotient = dividend / divisor;
      // set remainder = dividend mod divisor
      const long remainder = dividend % divisor;
      // output "{dividend} = {divisor} * {quotient} + {remainder}"
      std::cout << dividend << " = " << divisor << " * " << quotient << " + "
          << remainder << std::endl;
    }  // end if
  }  // end while
}  // end procedure

In order to compile your solution you need a compiler installed on your system. If this is not the case, you may run make instdeps to install the dependencies for the languages supported by our Makefile, such as a C/C++ compiler, the Java JDK, a Python interpreter, and other tools for file comparison, quality assurance, documentation, and linting. You need administrative permissions to install these programs.

$ make instdeps

Compiling a solution is historically the main goal of makefiles. Our Makefile’s default rule compiles your source code (in C/C++/Java) into an executable program for debugging. Simply issue the make command.

$ make
mkdir -p build/
g++ -c -Wall -Wextra -g -std=c++17 -Isrc -MMD src/solution.cpp -o build/solution.o
mkdir -p bin/
g++ -Wall -Wextra -g -Isrc build/solution.o -o bin/div

The make command runs the default rule, which is the make debug action. make debug creates a build/ subdirectory to store the object code files (.o for C/C++ and .class for Java). It detects which object files are outdated and compiles only their corresponding source files. If your solution comprises several source files, you can compile a number of files in parallel using the -j option. For example, if your computer has 4 CPU cores, you may issue:

$ make -j4

When all source files are compiled, make debug creates a bin/ subfolder to store executables. The name of the executable is obtained automatically from the project’s directory. You may override it by editing your Makefile and setting the desired name in the APPNAME variable (Section 3.2 provides more details). For C/C++ the default rule make debug creates an unoptimized executable containing a copy of the source code for debugging purposes. If you want an optimized executable with no source code for publishing to your users, use the release action:

$ make clean
$ make release -j4

The make clean action removes the bin/ and build/ subfolders and other automatically generated files. It is necessary when an already compiled solution exists and you need to compile the executable using different arguments (also known as flags). In this case, if you already have an executable compiled for debugging, you need to remove (clean) the executable and intermediate object files (.o) in order to re-generate them for an optimized executable. This action is usually called a rebuild. You may combine several actions in one command:

$ make clean release

The previous command is acceptable when just a few files need to be compiled. If you want to compile in parallel, you may use the AND operator of the command-line interpreter (&&):

$ make clean && make release -j4

For Java a .jar file is generated within the bin/ folder. Python does not require compilation, therefore no files are generated by make debug or make release actions.

1.7. Running the solution

You can naturally run your executable directly from the command line. However, the make run action assists in this step:

$ make run
    bin/div
17 3
    17 = 3 * 5 + 2
17 -3
    17 = -3 * -5 + 2
-17 3
    -17 = 3 * -5 + -2
-17 0
    invalid data
0 17
    0 = 17 * 0 + 0

The make run action runs your executable, and the command used is printed (bin/div in the previous listing). Then your executable runs as usual. Our div program repeatedly expects two integers on standard input, therefore it waits while the user types values interactively until the end-of-file character (Ctrl+D) is provided. To make the interaction more readable, the program responses in the previous example were indented.

If your program expects values as command-line arguments, you can pass them by overriding the ARGS variable. For example:

$ make run ARGS=238
$ make run 'ARGS=1 2 "hello world"'

If you want to make some arguments permanent, edit your Makefile and provide them in the ARGS= variable. Remember that spaces are argument delimiters, and you should enclose several words in quotes (single or double) if they should be handled as one single argument.
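For reference, each space-separated word of ARGS reaches your program as one element of argv. The following sketch (collect_args is a hypothetical helper) shows how a C++ program could gather them:

```cpp
#include <string>
#include <vector>

// Collect the program arguments after the executable name, exactly as
// a main(argc, argv) invoked through `make run ARGS=...` receives them.
std::vector<std::string> collect_args(int argc, const char* argv[]) {
  std::vector<std::string> args;
  for (int i = 1; i < argc; ++i) {
    args.push_back(argv[i]);
  }
  return args;
}
```

Note that a quoted argument such as "hello world" arrives as one single argv element, spaces included.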

You can chain several rules when you call make, for example:

$ make clean release run

If you have several source files, you may want to compile a number of files in parallel. The next example compiles 4 files in parallel for debugging and finally runs the generated executable:

$ make clean && make debug -j4 && make run

If you edit source files and issue the make run command, it will detect that your executable is outdated and will compile the source files and update the executable automatically (but just one file at a time). Therefore, an up-to-date version of the executable is always run.

1.8. Test implementation

The make test action runs your executable against all test cases stored in the tests/ subfolder. If your solution passes all test cases, you will only see the issued test commands with no output (cheers!), as in the following example:

$ make test
icdiff --no-headers tests/output001.txt <(bin/div < tests/input001.txt) ||:
icdiff --no-headers tests/output002.txt <(bin/div < tests/input002.txt) ||:
icdiff --no-headers tests/output003.txt <(bin/div < tests/input003.txt) ||:

If your solution fails some test cases, you will see a comparison of the expected output in the left column and your solution’s output in the right column. Differences are colorized by the icdiff command. For example:

$ make test
icdiff --no-headers tests/output001.txt <(bin/div < tests/input001.txt) ||:
icdiff --no-headers tests/output002.txt <(bin/div < tests/input002.txt) ||:
invalid data                     0 = 0 * 0 + 0
0 = 1 * 0 + 0                    0 = 1 * 0 + 0
1 = 1 * 1 + 0                    1 = 1 * 1 + 0
invalid data                     1 = 0 * 0 + 1
0 = 2 * 0 + 0                    0 = 2 * 0 + 0
icdiff --no-headers tests/output003.txt <(bin/div < tests/input003.txt) ||:
0 = -1 * 0 + 0                   0 = -1 * 0 + 0
-1 = -1 * 1 + 0                  -1 = -1 * 1 + 0
invalid data                     -1 = 0 * 0 + -1
1 = -1 * -1 + 0                  1 = -1 * -1 + 0
-1 = 1 * -1 + 0                  -1 = 1 * -1 + 0

In the previous example, test cases 002 and 003 failed because the solution reported results for invalid divisions by zero instead of printing invalid data. If icdiff is not installed, the diff command will be used instead; however, its comparison is mainly intended for programs rather than humans. You can override the DIFF variable to use another program, for example:

$ make test DIFF=diff
diff tests/output001.txt <(bin/div < tests/input001.txt) ||:
diff tests/output002.txt <(bin/div < tests/input002.txt) ||:
1c1
< invalid data
---
> 0 = 0 * 0 + 0
4c4
< invalid data
---
> 1 = 0 * 0 + 1
diff tests/output003.txt <(bin/div < tests/input003.txt) ||:
3c3
< invalid data
---
> -1 = 0 * 0 + -1

If you want to permanently change the diff tool, you can edit your Makefile and set the DIFF variable. If you edit your source files and issue the make test command, it will update your executable before testing it.

Remember that a test case in black-box testing is a pair of files input###.txt and output###.txt identified by the same natural number ###. The make test action runs an instance of your program (known as a process) for each test case. The standard input of your process is redirected to the input file of the test case (using the shell’s < operator). The contents that your process writes to standard output are captured (using the shell’s <() expression) and compared against the expected output file of the test case using the diff tool.
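Conceptually, the comparison performed for each test case can be sketched as the following hypothetical C++ helper; in practice the Makefile delegates this work to the diff tool:

```cpp
#include <sstream>
#include <string>

// Return true when the program's captured output matches the expected
// output line by line, which is what a clean diff in `make test` means.
bool matches(const std::string& expected, const std::string& actual) {
  std::istringstream exp(expected), act(actual);
  std::string e, a;
  while (true) {
    const bool more_e = static_cast<bool>(std::getline(exp, e));
    const bool more_a = static_cast<bool>(std::getline(act, a));
    if (!more_e || !more_a) return more_e == more_a;  // same length?
    if (e != a) return false;  // first differing line fails the test
  }
}
```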

1.9. Quality assurance

If you use C/C++, you will want to reduce the probability that the solution you deliver to your clients has anomalies such as invalid memory accesses, memory leaks, reads from uninitialized memory, race conditions, or non-portable code. The following commands generate several versions of your program and test them against the available test cases. You may replace test with run if you want. These commands are explained in Section 2.

# Google Sanitizers
$ make -j4 clean asan test
$ make -j4 clean msan test
$ make -j4 clean tsan test
$ make -j4 clean ubsan test

# Valgrind
$ make -j4 clean memcheck
$ make -j4 clean helgrind

1.10. Code style (linting)

Linters are static analysis tools that perform further checks on your source code in order to warn about potential common anomalies or violations of a code style convention. If you repair all linter diagnostics, you will deliver a better-quality product to your clients, and your team will work more fluently, keeping the organization’s code base consistent.

There are many code style conventions, and you may have your own. For educational purposes, the convention itself is not the most important goal, but the habit of adhering to some convention is. The reusable Makefile provides support for the following code conventions:

Language   Convention                                    Tool

C/C++      Google C++ code style                         cppcheck
Java       Google Java code style, Sun Java code style   checkstyle
Python      —                                            pylint
The command make lint will check the files stored in the src/ folder using the corresponding linter. It is a very good practice to run the linter as soon as you start implementing your solution in any programming language. You will develop the habit of adhering to a convention. Later, in your own projects, you may change the convention, but the habit will remain.

$ make lint

1.11. External documentation

You usually write code not only for computers, but also for humans, including yourself when you come back to it some time later. Documentation is the way to explain reusable code and non-trivial designs to make them easier to understand. Some programming languages have very strict conventions for documenting interfaces, others do not:

Language   Convention           Tool

C/C++      (not a convention)   doxygen
Java       JavaDoc              javadoc
Python     PyDoc                pydoc

After documenting your code, you may want to view the result. The command make doc will create a doc/ folder and populate it with a website generated from the documentation in your source files. Using a file explorer you can locate the index.html file and open it in a web browser. The doc/ folder should not be under source version control (e.g. git).

$ make doc

For C/C++, make doc will simply call doxygen. Doxygen is a tool that works similarly to make: it requires a Doxyfile in the same fashion that make requires a Makefile. If your project does not have a Doxyfile, the Makefile will generate one (by running doxygen -gs) and configure it to extract documentation from the src/ folder.

2. Dynamic code analysis

The C/C++ programming languages give you almost complete control of the underlying architecture. This feature makes these languages suitable for building systems, such as operating systems, embedded systems, real-time systems, and so on. However, these languages demand considerable programming education from humans to build anomaly-free solutions. A number of static and dynamic code analysis tools have been developed to help programmers detect anomalies. As expected, all of them make your executable run slower. In this section we cover two families of dynamic code analysis tools:

  1. Google Sanitizers inflate your executable file by adding several tests that are evaluated when your executable runs normally. This technique is called source code instrumentation. You need the source code in order to compile it with Sanitizers enabled. Once the inflated executable is generated, you run it as usual. If any anomaly is detected, a report is written to the standard error output.

  2. Valgrind runs your executable in a simulated environment. This technique is known as binary code instrumentation. You may think of Valgrind as a special operating system that runs your program while collecting lots of data, and reports valuable statistics to you. For example, it can record all heap allocations (malloc) and de-allocations (free) that you request from the operating system, and report whether your executable finished with some non-freed heap allocations (memory leaks). Because Valgrind works with executables, you do not require source code access to use it.

Please bear in mind that both tools are extremely useful but not exhaustive or perfect. You may get false positives, that is, a diagnostic of a problem that does not really exist, for example a race condition when printing to standard output. It is a false positive because the C/C++ standard library protects file output using internal mutual-exclusion mechanisms. You may also get anomaly reports about third-party code that you use, such as the C/C++ standard library. Finally, if you get no diagnostics at all, that is not a guarantee that your program is completely free of anomalies. As with any other form of testing, it only reduces the probability of anomalies in the solution that you will deliver to your valuable clients, and you get this extremely useful insight free of charge.

2.1. Address Sanitizer (ASan)

Google’s Address Sanitizer (ASan) checks for invalid memory accesses (e.g. array index out of bounds) and heap memory leaks. In order to generate an instrumented executable, you may need to remove a previous non-instrumented executable and its object files (make clean). Then you use make asan to call the compiler with the -f flags that enable instrumentation:

$ make asan -j4
g++ -c -Wall -Wextra -g -fsanitize=address -fno-omit-frame-pointer -std=c++17 -Isrc -MMD src/solution.cpp -o build/solution.o
g++ -Wall -Wextra -g -fsanitize=address -fno-omit-frame-pointer -Isrc build/solution.o -o bin/div

$ make run
bin/div
17 -3
17 = -3 * -5 + 2

The -fsanitize=address flag instructs the GCC or Clang compilers to build an executable that is inflated with a number of tests checking every memory access. After the executable is generated, you run it as usual. If an anomaly is detected by the tests injected into your executable, a detailed report is printed to the standard error output. In that case, study the report to find the files and lines of your source code that likely provoked the anomaly.
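As an illustration, the following hypothetical helper leaks heap memory. A build made with make asan reports the leak at program exit, while a regular build runs silently:

```cpp
#include <string>

// leak() allocates a std::string on the heap and never frees it.
// An ASan-instrumented executable prints a leak report with the
// allocation's call stack to standard error when the program exits.
std::string* leak(const std::string& text) {
  return new std::string(text);  // never deleted: a heap memory leak
}
```

The program behaves normally in every other respect, which is exactly why leaks are easy to miss without instrumentation.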

If you have test cases, you may run your instrumented executable against them. You can combine these commands into one, as in the following two examples. The first one is useful for small projects, the second one for projects containing several sources. Remember to change 4 to the number of CPUs in your system.

$ make clean asan test
$ make clean && make asan -j4 && make test

2.2. Memory Sanitizer (MSan)

Google’s Memory Sanitizer (MSan) checks for usage of uninitialized memory. MSan was separated from ASan because the algorithms for detecting uninitialized memory usage are significantly slower than those for invalid accesses or memory leaks. At the time of writing this tutorial, GCC 12 and previous versions do not support MSan. Therefore the Makefile uses Clang instead.

$ make msan -j4
clang++ -c -Wall -Wextra -g -fsanitize=memory -std=c++17 -Isrc -MMD src/solution.cpp -o build/solution.o
clang++ -Wall -Wextra -g -fsanitize=memory -Isrc build/solution.o -o bin/div

$ make run
bin/div
17 -3
17 = -3 * -5 + 2

If you have test cases, you may run your instrumented executable against them. You can combine these commands into one, as in the following two examples. The first one is useful for small projects, the second one for projects containing several sources. Remember to change 4 to the number of CPUs in your system.

$ make clean msan test
$ make clean && make msan -j4 && make test

2.3. Thread Sanitizer (TSan)

Google’s Thread Sanitizer (TSan) checks for race conditions (i.e. concurrent write access to the same memory position) and other anomalies of concurrent code. In order to generate an instrumented executable, you may need to remove a previous non-instrumented executable and its object files (make clean). Then you use make tsan to call the compiler with the -f flags that enable instrumentation:

$ make tsan -j4
g++ -c -Wall -Wextra -fsanitize=thread -g -std=c++17 -Isrc -MMD src/solution.cpp -o build/solution.o
g++ -Wall -Wextra -fsanitize=thread -g -Isrc build/solution.o -o bin/div

$ make run
bin/div
17 -4
17 = -4 * -4 + 1

The -fsanitize=thread flag instructs the GCC or Clang compilers to build an executable that is inflated with a number of tests checking for race conditions, thread leaks, and other anomalies. After the executable is generated, you run it as usual. If an anomaly is detected by the tests injected into your executable, a detailed report is printed to the standard error output. In that case, study the report to find the files and lines of your source code that likely provoked the anomaly.

If you have test cases, you may run your instrumented executable against them. You can combine these commands into one, as in the following two examples. The first one is useful for small projects, the second one for projects containing several sources. Remember to change 4 to the number of CPUs in your system.

$ make clean tsan test
$ make clean && make tsan -j4 && make test

2.4. Undefined Behavior Sanitizer (UBSan)

Google’s Undefined Behavior Sanitizer (UBSan) checks for code that may behave differently from one environment to another. In order to generate an instrumented executable, you may need to remove a previous non-instrumented executable and its object files (make clean). Then you use make ubsan to call the compiler with the -f flags that enable instrumentation:

$ make ubsan -j4
g++ -c -Wall -Wextra -fsanitize=undefined -g -std=c++17 -Isrc -MMD src/solution.cpp -o build/solution.o
mkdir -p bin/
g++ -Wall -Wextra -fsanitize=undefined -g -Isrc build/solution.o -o bin/div

$ make run
bin/div
-17 -3
-17 = -3 * 5 + -2

The -fsanitize=undefined flag instructs the GCC or Clang compilers to build an executable that is inflated with a number of tests checking for code that may run without errors on your computer but fail on another computer or platform. After the executable is generated, you run it as usual. If an anomaly is detected by the tests injected into your executable, a detailed report is printed to the standard error output. In that case, study the report to find the files and lines of your source code that likely provoked the anomaly.

If you have test cases, you may run your instrumented executable against them. You can combine these commands into one, as in the following two examples. The first one is useful for small projects, the second one for projects containing several sources. Remember to change 4 to the number of CPUs in your system.

$ make clean ubsan test
$ make clean && make ubsan -j4 && make test

2.5. Valgrind’s MemCheck

Valgrind’s Memory Checker (MemCheck) runs your existing executable in a simulated environment that records data about memory accesses. It detects invalid memory accesses, memory leaks, and uninitialized memory reads, among others. An executable built for debugging (instead of release) is recommended, because it contains a copy of the source code. Therefore, errors reported by MemCheck will be contextualized to your source files, making them easier to fix. Example of usage:

$ make memcheck
valgrind -q -s --sigill-diagnostics=yes --leak-check=full bin/div
17 -1
17 = -1 * -17 + 0
==29302== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Because MemCheck is Valgrind’s default tool, the Makefile does not pass the --tool=memcheck argument to valgrind. If your project does not have an executable or it is outdated, make memcheck will update it with debugging information (the -g flag for GCC). If your executable requires arguments, be sure to set the ARGS variable in your Makefile.

2.6. Valgrind’s Helgrind

Valgrind’s Helgrind runs your existing executable in a simulated environment that records data about concurrency. It detects race conditions, among other problems. An executable built for debugging (instead of release) is recommended, because it contains a copy of the source code. Therefore, errors reported by Helgrind will be contextualized to your source files, making them easier to fix. Example of usage:

$ make helgrind
valgrind -q -s --sigill-diagnostics=yes --tool=helgrind bin/div
0 0
invalid data
==29730== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

If your project does not have an executable, or it is outdated, make helgrind will rebuild it with debugging information (the -g flag for GCC). If your executable requires arguments, be sure to set the ARGS variable in your Makefile.

3. Advanced topics

This chapter covers some useful advanced scenarios.

3.1. Multi-project reuse

If you want to use the Makefile in several projects, do not copy it into each project. You will end up with uncontrolled redundancy that is difficult to maintain. Instead, reuse a single Makefile: store it in a "shared" folder, e.g. common/Makefile. For each project, create a one-line Makefile that includes the shared one, such as:

include ../common/Makefile

Now you can use the one-line Makefile as if it were the normal one. Let’s consider two setup scenarios: (1) starting from scratch, when you have neither a Makefile nor projects; (2) when you already have a project and want to reuse its existing Makefile for a new project.

Scenario 1. If you have neither a Makefile nor projects, you can create them and reuse the Makefile. The following commands assume you want to create two projects, named project1 and project2, both inside a folder named repo.

# 1. Get the Makefile into repo/common/ folder
cd repo
mkdir common
cd common
wget jeisson.work/Makefile # curl jeisson.work/Makefile -O
cd ..

# 2. Create first project, let's say for C
mkdir project1
cd project1
echo include ../common/Makefile > Makefile
make project=c
mv .gitignore ..
cd ..

# 3. Repeat to create second project, let's say for Java
mkdir project2
cd project2
echo include ../common/Makefile > Makefile
make project=java
rm .gitignore
cd ..

The following commands show an overview of your new repo structure and the contents of one of the including Makefiles.

$ cd repo

$ tree -a
.
├── common
│   └── Makefile
├── .gitignore
├── project1
│   ├── Makefile
│   ├── readme.adoc
│   ├── src
│   │   └── solution.c
│   └── tests
│       ├── input001.txt
│       └── output001.txt
└── project2
    ├── Makefile
    ├── readme.adoc
    ├── src
    │   ├── package-info.java
    │   └── Solution.java
    └── tests
        ├── input001.txt
        └── output001.txt

8 directories, 13 files

$ cat project1/Makefile
include ../common/Makefile

Scenario 2. If you already have a project, let’s say project1, and you want to reuse its existing Makefile for the new project2, you could simply include the existing ../project1/Makefile in your new project2/Makefile. If you prefer a tidier arrangement, you can move the existing Makefile to a shared folder (e.g. common) and include it in every project in your repository. The following commands show this second approach. They assume repo/ is an actual Git repository; if that is not the case for your files, just remove the git word from the respective commands.

# 1. Move existing Makefile into repo/common/ folder
cd repo
mkdir common
git mv project1/Makefile common

# 2. Include the moved Makefile into project1/
echo include ../common/Makefile > project1/Makefile
git mv project1/.gitignore .

# 3. Create second project, let's say for Java
mkdir project2
cd project2
echo include ../common/Makefile > Makefile
make project=java
rm .gitignore
cd ..

3.2. Overriding variables

(Writing pending)

$ make helpvars

You can override a variable in the command-line call:

VAR=value  Overrides a variable, e.g CC=mpicc DEFS=-DGUI. See helpvars

Editing the original Makefile is not recommended. Instead, override variables after the include line of your project’s Makefile. Example:

include ../common/Makefile

FLAGS=-pthread
ARGS=2 3 5
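
To convince yourself that a command-line override takes precedence over a value defined inside a Makefile, you can experiment with a throwaway Makefile; the file name /tmp/demo.mk and the GREETING variable are illustrative:

```shell
# A Makefile that defines GREETING and a target that prints it
printf 'GREETING=hello\nshow:\n\t@echo $(GREETING)\n' > /tmp/demo.mk
make -f /tmp/demo.mk show                   # prints: hello
make -f /tmp/demo.mk show GREETING=bonjour  # command line wins: bonjour
```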

3.3. Updating test cases

You can run your executable solution to update the output files of your test cases.

<L>out     Generate test case output using language L: cpp|java|py
tc=N       Generates N empty test cases in tests/
test       Run executable against test cases in folder tests/

Test case files may use extensions other than .txt.
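
Conceptually, make test feeds each tests/inputNNN.txt to the executable and compares its stdout against the corresponding outputNNN.txt. A rough sketch of that idea in plain shell, using a stand-in shell function instead of a real executable; all names here are illustrative:

```shell
mkdir -p /tmp/tcdemo/tests && cd /tmp/tcdemo
printf '2 3\n' > tests/input001.txt   # test input
printf '5\n'   > tests/output001.txt  # expected output
# Stand-in "solution" that adds two numbers read from stdin
solution() { read a b; echo $((a + b)); }
# Compare the actual output against the expected file
solution < tests/input001.txt | diff - tests/output001.txt \
  && echo "test case 001: passed"
```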

3.4. Version control support

To avoid retyping credentials, configure your name, your email, and how long your password is kept in memory.

gitconfig  Configure name/email. Cache password in memory for some time

To ignore generated files in version control:

.gitignore Generate a .gitignore file

An Internet connection is required. The .gitignore file is generated automatically by make project.

3.5. Combined targets

all        Run targets: test lint doc

3.6. Clean binaries

clean      Remove generated directories and files

3.7. Debug and release binaries

debug      Build an executable for debugging [default]
release    Build an optimized executable

3.8. Install dependencies

instdeps   Install needed packages on Debian/RedHat-based distributions

3.9. Run your executable

run        Run executable using ARGS value as arguments

This target rebuilds the executable if it is outdated. Arguments are taken from the ARGS variable.

3.10. Update Makefile

update     Update this Makefile to the latest version
version    Show this Makefile version

3.11. Configure your system

If you make a fresh operating system installation, or you log in on a new lab computer, you may find yourself repeatedly configuring each environment. This section summarizes some common commands for this goal. The steps in this section should be run only once for each operating system installation you have.

If you already have a repository for your code projects, and it already contains the Makefile, you may need to install Git and clone your repository:

$ sudo apt install git
$ cd Documents
$ git clone <url>
$ cd <repo/project>

If you do not need a repository, you can get the Makefile directly:

$ cd Documents
$ mkdir <project>
$ cd <project>
$ wget jeisson.work/Makefile

At this point you should have access to a copy of the Makefile. If you have administrative rights, you can install the dependencies, that is, the programs used by the Makefile, such as compilers and interpreters for C, C++, Java, and Python.

$ make instdeps

If you work with version control using Git, you may need to configure the current environment. You can issue:

$ make gitconfig "GITUSER=Ana Soto" GITEMAIL=ana@soto.com GITTIME=3600

The previous command configures Git at the global level with your full name and email. These data are used when you create a commit with git commit. GITTIME tells Git to store your username and password in RAM for 3600 seconds (1 hour), so you only have to type them the first time you run git pull or git push within the next hour on a repository accessed over the HTTPS protocol.
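
Based on the description above, make gitconfig is roughly equivalent to the following plain Git commands. This equivalence is an assumption; the exact commands the Makefile runs may differ:

```shell
git config --global user.name  "Ana Soto"
git config --global user.email ana@soto.com
# Cache HTTPS credentials in memory for 3600 seconds (1 hour)
git config --global credential.helper 'cache --timeout=3600'
```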

If you do not provide a full name, the Makefile will try to get it from your system (Linux only; on macOS you may use "GITUSER=$(id -F)"). If you do not override the GITTIME variable, the Makefile stores your credentials for 3 hours. The email is not inferred from your system, so using the defaults you may issue:

$ make gitconfig GITEMAIL=ana@soto.com

If the Makefile is used only for personal projects, that is, projects only you work on, you may edit the Makefile and change these variables, e.g.:

GITUSER=Ana Soto
GITEMAIL=ana@soto.com
GITTIME=28800

Each time you log in to a new system and clone the repository, you can configure Git with:

$ make gitconfig

3.12. Future work

  1. make install

  2. Reduce Makefile redundancy among languages.

  3. Generate static and dynamic libraries

  4. Automatically build in parallel?

3.13. Acknowledgements