Makecmdgoals recursive relationship

The X Consortium, for example, uses this flavor. The symbolic link farm has the same advantages as the source copy, and it ameliorates the worst of its disadvantages: link farm copies take up considerably less space, and are faster to create (though still not free), than normal copies. Nevertheless, symlinks can be annoying to work with, and adding new directories or files to the source tree can be problematic. In the next method, you write your makefiles such that every reference to every target is prefixed with the pathname where it exists.

The pathname can, and probably should, be calculated internally to your makefiles based on the current host architecture, or compiler flags, or both. Often the target directory is a simple subdirectory of the current directory, but it could also be someplace completely different; this allows, for example, building sources that exist on read-only media without copying them elsewhere first. If you write your makefiles carefully, you can accommodate both styles by simply changing a variable value or two.
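The "prefixed target" idea above can be sketched as a short makefile fragment. The variable OBJDIR, the use of uname -m, and the file names are illustrative assumptions, not taken from the text:

```makefile
# Hypothetical sketch: every target reference carries the path of
# its build directory, computed here from the host architecture.
OBJDIR := objs/$(shell uname -m)

# The object lives under $(OBJDIR); the build directory is an
# order-only prerequisite so it is created on demand.
$(OBJDIR)/player.o: player.c | $(OBJDIR)
	$(CC) $(CFLAGS) -c $< -o $@

$(OBJDIR):
	mkdir -p $@
```

Changing OBJDIR (say, to include compiler flags) redirects the whole build without touching the rules.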

The problem here is with the makefiles: prefixing every reference to every target with a pathname makes for a lot of redundancy, and that can get unwieldy quickly. The VPATH method avoids the prefixes altogether. Like the source copy method, we write our makefiles to create all targets in the current working directory. Then the makefile uses VPATH to locate the source files, so we can write the source filenames normally, without any path prefix.

Now all that has to be done is invoke the build from within the target directory and voila! The makefiles are tidy and easy to understand, without pathnames prefixed everywhere. The most popular example of this method is the set of build environments created with a combination of GNU autoconf and GNU automake.
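A minimal sketch of this VPATH style, assuming the build is invoked from the target directory and the sources live one level up (SOURCEDIR is an invented name):

```makefile
# Run make from the build directory; VPATH lets make find the
# sources, so filenames appear without any path prefix.
SOURCEDIR := ../src
VPATH     := $(SOURCEDIR)
CPPFLAGS  += -I$(SOURCEDIR)

player: player.o codec.o
	$(CC) $(LDFLAGS) $^ -o $@
```

All targets land in the current (build) directory; only VPATH knows where the sources actually are.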

There, the configure script is run from a remote directory, and it sets things up for you in that remote directory without modifying the original sources. But even worse, the makefile for your build is back in the source directory; one workaround is to set up a symbolic link in the target directory pointing back to the makefile in the source directory.

What does it need to be just what we want? Well, we need to avoid having to change directories by hand. So, what the advanced VPATH method describes is a way of convincing make itself to change directories for you, rather than requiring you to do it yourself. The algorithm is simple: if make is in the target directory, it builds the requested targets; if not, it re-invokes itself from the target directory. How can this be done? Basically, we enclose almost the entire makefile in an if-then-else statement, where the test of the if statement checks the current directory.
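The if-then-else skeleton described above might look like the following. BUILDDIR and the target names are assumptions, and production versions of this trick handle more corner cases than this sketch does:

```makefile
BUILDDIR := build

ifeq ($(notdir $(CURDIR)),$(BUILDDIR))
# We are already in the build directory: the real rules live here.
VPATH := $(SRCDIR)
all: player
player: player.o
	$(CC) $^ -o $@
else
# We are in the source directory: create the build directory and
# re-invoke make from there, pointing back at this makefile.
export SRCDIR := $(CURDIR)
.PHONY: $(MAKECMDGOALS) all
all $(MAKECMDGOALS):
	@mkdir -p $(BUILDDIR)
	@$(MAKE) --no-print-directory -C $(BUILDDIR) \
	    -f $(SRCDIR)/Makefile $(MAKECMDGOALS)
endif
```

The else branch forwards whatever goals the user named (via MAKECMDGOALS) into the build directory, so the user never has to cd there.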

In this case, the best we can do is to ensure that the db makefile is always executed before executing the ui makefile. This higher-level dependency must be encoded by hand.
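Encoding that db-before-ui ordering by hand in a top-level recursive makefile might look like this (the directory names follow the text; everything else is a sketch):

```makefile
.PHONY: all db ui
all: db ui

db:
	$(MAKE) -C db

# The higher-level dependency, written out manually: ui must not
# be built until db has run (so, e.g., yacc headers exist).
ui: db
	$(MAKE) -C ui
```

Note that make cannot discover this ordering itself; if the dependency is forgotten here, nothing warns you.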

We were astute enough in the first version of our makefile to recognize this but, in general, this is a very difficult maintenance problem. As code is written and modified, the top-level makefile will fail to properly record the intermodule dependencies. To continue the example, if the yacc grammar in db is updated and the ui makefile is run before the db makefile (by executing it directly instead of through the top-level makefile), the ui makefile does not know there is an unsatisfied dependency in the db makefile and that yacc must be run to update the header file.

Instead, the ui makefile compiles its program with the old yacc header. If new symbols have been defined and are now being referenced, then a compilation error is reported. Thus, the recursive make approach is inherently more fragile than a single makefile. The problem worsens when code generators are used more extensively. Suppose that the use of an RPC stub generator is added to ui and the headers are referenced in db.

Now we have mutual reference to contend with. To resolve this, it may be required to visit db to generate the yacc header, then visit ui to generate the RPC stubs, then visit db to compile the files, and finally visit ui to complete the compilation process. The number of passes required to create and compile the source for a project is dependent on the structure of the code and the tools used to create it.

This kind of mutual reference is common in complex systems. The standard solution in real-world makefiles is usually a hack. To ensure that all files are up to date, every makefile is executed when a command is given to the top-level makefile.

Notice that this is precisely what our mp3 player makefile does. When the top-level makefile is run, each of the four sub-makefiles is unconditionally run. In complex cases, makefiles are run repeatedly to ensure that all code is first generated then compiled. Often this iterative execution is a complete waste of time, but occasionally it is required.
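A sketch of that "run every sub-makefile unconditionally" top level, with module names assumed for the mp3 player example:

```makefile
# Because the subdirectory targets are phony, every sub-make is
# run on every invocation, whether or not anything has changed.
SUBDIRS := lib/codec lib/db lib/ui app/player

.PHONY: all $(SUBDIRS)
all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@
```

Each sub-make then decides for itself whether any work is needed, which is where the wasted time comes from.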

Avoiding Duplicate Code

The directory layout of our application includes three libraries. The makefiles for these libraries are very similar. This makes sense because the three libraries serve different purposes in the final application but are all built with similar commands. This kind of decomposition is typical of large projects and leads to many similar makefiles and lots of makefile code duplication.

Code duplication is bad, even makefile code duplication. It increases the maintenance costs of the software and leads to more bugs. It also makes it more difficult to understand algorithms and identify minor variations in them.

So we would like to avoid code duplication in our makefiles as much as possible. This is most easily accomplished by moving the common pieces of a makefile into a common include file.

For example, in the codec makefile, the only lines that change for each library are the name of the library itself and the source files the library contains. The duplicate code is moved into the common file.
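A hedged sketch of the arrangement: each library makefile is reduced to its differences, and the shared rules live in an include file. The names library, sources, and common.mk are assumptions in the spirit of the text:

```makefile
# lib/codec/Makefile -- only the per-library differences remain.
library := libcodec.a
sources := codec.c

include ../../common.mk
```

```makefile
# common.mk (fragment) -- the shared rules every library uses.
objects = $(subst .c,.o,$(sources))

.PHONY: library
library: $(library)

$(library): $(objects)
	$(AR) $(ARFLAGS) $@ $^
```

Adding a fourth library is then a two-line makefile plus an include.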

The original makefiles used the default target all. That would cause problems with nonlibrary makefiles that need to specify a different set of prerequisites for their default goal.

So the shared code version uses a default target of library. Notice that because this common file contains targets, it must be included after the default target in nonlibrary makefiles. For library makefiles, the program variable is empty; for program makefiles, the library variable is empty. This makefile also uses a variable to denote code generated by yacc. As more makefile code is moved into the common makefile, it evolves into a generic makefile for the entire project.

Nonrecursive make

Multidirectory projects can also be managed without recursive makes.

The difference here is that the source manipulated by the makefile lives in more than one directory. To accommodate this, references to files in subdirectories must include the path to the file—either absolute or relative. Often, the makefile managing a large project has many targets, one for each module in the project. For our mp3 player example, we would need targets for each of the libraries and each of the applications.

It can also be useful to add phony targets for collections of modules such as the collection of all libraries. The default goal would typically build all of these targets.

Often the default goal builds documentation and runs a testing procedure as well. The most straightforward use of nonrecursive make includes targets, object file references, and dependencies in a single makefile.

This is often unsatisfying to developers familiar with recursive make because information about the files in a directory is centralized in a single file while the source files themselves are distributed in the filesystem.

To address this issue, the Miller paper on nonrecursive make suggests using one make include file for each directory, containing that directory's file lists and module-specific rules. The top-level makefile includes these sub-makefiles, collecting information from each module in four variables that are initialized as simple variables. The top-level makefile itself contains little more than the list of modules and the include directives, and it uses simple variables, those assigned with :=. The library names and source file lists use relative paths, as discussed earlier.

Finally, the include file defines a rule for updating the local library. These variables must be simple variables because each module will append to them using the same local variable name. An explicit assignment is required to initialize them, even though they are assigned null values, since variables are recursive by default.
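The arrangement described in the last two paragraphs might be sketched like this; the module paths and variable names are assumptions in the spirit of the text:

```makefile
# Top-level makefile: initialize the accumulator variables as
# simple variables, then pull in one module.mk per directory.
modules   := lib/codec lib/db lib/ui app/player
programs  :=
libraries :=
sources   :=

include $(addsuffix /module.mk,$(modules))
```

```makefile
# lib/codec/module.mk: paths are relative to the top level, and
# each module appends to the shared accumulator variables.
local_src := $(wildcard lib/codec/*.c)

sources   += $(local_src)
libraries += lib/codec/libcodec.a

lib/codec/libcodec.a: $(subst .c,.o,$(local_src))
	$(AR) $(ARFLAGS) $@ $^
```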

The next section computes the object file list, objects, and the dependency file list from the sources variable. These variables are recursive because, at this point in the makefile, the sources variable is empty; it will not be populated until later, when the include files are read. In this makefile it would be perfectly reasonable to move the definition of these variables after the includes and change their type to simple variables, but keeping the basic file lists (e.g., sources) defined in one place has its own appeal.
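The computed lists just described have to be recursive variables, since sources is still empty when they are defined; a sketch:

```makefile
# Recursive (=) on purpose: sources is not filled in until the
# module.mk files are read, so these must expand lazily.
objects      = $(subst .c,.o,$(sources))
dependencies = $(subst .c,.d,$(sources))
```

Had := been used here, both lists would be permanently empty.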

Also, in other makefile situations, mutual references between variables require the use of recursive variables.

This allows the compiler to find the headers; the vpath directive, likewise, allows make to find headers stored in other directories. Variables for mv, rm, and sed are defined to avoid hardcoding program names into the makefile. Notice the case of the variables: we are following the conventions suggested in the make manual. Variables that are internal to the makefile are lowercased; variables that might be set from the command line are uppercased.

In the next section of the makefile, things get more interesting. We would like to begin the explicit rules with the default target, all. Unfortunately, the prerequisite for all is the variable programs. This variable is evaluated immediately, but is set by reading the module include files. So, we must read the include files before the all target is defined. Unfortunately again, the include modules contain targets, the first of which will be considered the default goal. To work through this dilemma, we can specify the all target with no prerequisites, source the include files, then add the prerequisites to all later.
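The dilemma and its resolution can be sketched in three steps (the modules list is assumed, as before):

```makefile
# 1. Declare the default goal first, with no prerequisites, so
#    targets in the include files cannot become the default.
all:

# 2. Read the module include files, which populate $(programs).
include $(addsuffix /module.mk,$(modules))

# 3. Now add the real prerequisites to all.
all: $(programs)
```

This works because make merges prerequisite lists from multiple rules for the same target.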

The remainder of the makefile is already familiar from previous examples, but how make applies implicit rules is worth noting. Our source files now reside in subdirectories; because the targets are written with their relative paths, the stems matched by the pattern rules include those paths, and make automagically does the Right Thing. There is one final glitch.

Although make is handling paths correctly, not all the tools used by the makefile are. In particular, when using gcc, the generated dependency file does not include the relative path to the target object file; that is, the output of gcc -M omits the directory part. To fix this problem, we can alter the sed command to add the relative path information. Portable makefiles are often very complex due to the vagaries of the diverse set of tools they are forced to rely upon.
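A hedged sketch of such a dependency rule; the exact sed expression varies with the tool versions in use:

```makefile
# Generate %.d from %.c, rewriting gcc -M output so the object
# file (and the .d file itself) keep their relative directory.
%.d: %.c
	$(CC) $(CPPFLAGS) -M $< | \
	sed 's|^\(.*\)\.o *:|$(dir $@)\1.o $@:|' > $@
```

The substitution prefixes the object name with the directory of the dependency file and adds the .d file as a target, so a change to a header regenerates the dependency list too.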

We now have a decent nonrecursive makefile, but there are maintenance problems. The include files are all very similar, so a change to one will likely involve a change to all of them. For small projects like our mp3 player this is annoying; for large projects with several hundred include files it can be fatal. By using consistent variable names and regularizing the contents of the include files, we position ourselves nicely to cure these ills.

The make-library function now performs the bulk of the tasks for an include file. This function is defined at the top of our project makefile. The source-to-object function translates a list of source files to their corresponding object files, including sources that are themselves generated by other tools.

In addition to modifying source-to-object, we need another function to compute the yacc and lex output files so the clean target can perform proper clean up. The generated-source function simply accepts a list of sources and produces a list of intermediate files as output: Using a simple patsubst, we can extract the relative path from the top-level makefile.
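Hedged sketches of the two functions discussed, assuming yacc (.y) and lex (.l) files appear in the source lists:

```makefile
# Map .c, .y, and .l sources to their object files.
source-to-object = $(subst .c,.o,$(filter %.c,$1)) \
                   $(subst .y,.o,$(filter %.y,$1)) \
                   $(subst .l,.o,$(filter %.l,$1))

# List the intermediate files yacc and lex generate, so the
# clean target can remove them.
generated-source = $(subst .y,.c,$(filter %.y,$1)) \
                   $(subst .y,.h,$(filter %.y,$1)) \
                   $(subst .l,.c,$(filter %.l,$1))
```

Both are invoked with $(call source-to-object,$(sources)) and the like from the include files.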

This eliminates another variable and reduces the differences between include files. Our final optimization, at least for this example, uses wildcard to acquire the source file list.
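Both discovery styles, wildcard for a clean and shallow tree and find for a deep one, might look like this sketch (paths assumed):

```makefile
# For a known, shallow layout: let wildcard find the module files.
modules := $(wildcard lib/*/module.mk app/*/module.mk)

# For an arbitrarily deep tree, a find command may be preferable:
# modules := $(shell find . -name module.mk)

include $(modules)
```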

This works well in most environments where the source tree is kept clean. However, I have worked on projects where this is not the case. If you are using a modern source code control system, such as CVS, keeping dead code in the source tree is unnecessary, since it resides in the repository, and using wildcard becomes feasible. The include directives can also be optimized; under some circumstances, it might be preferable to define the modules list with a find command. My own experience with a large Java project indicates that a single top-level makefile, effectively inserting all the module files directly, is also workable.

This project included many separate modules, about two dozen libraries, and half a dozen applications. There were several makefiles for disjoint sets of code, each a few thousand lines long. A common include file containing global variables, user-defined functions, and pattern rules was of similar size. Whether you choose a single makefile or break out module information into include files, the nonrecursive make solution is a viable approach to building large projects.

It also solves many traditional problems found in the recursive make approach.

Components of Large Systems

For the purposes of this discussion, there are two styles of development popular today: the free software model and the commercial development model. In the free software model, each developer is largely on his own.

The principals of the project want things to work well and want to receive contributions from a large community, but they are mostly interested in contributions from the skilled and well-motivated. This is not a criticism. In this point of view, software should be written well, and not necessarily to a schedule. In the commercial development model, developers come in a wide variety of skill levels and all of them must be able to develop software to contribute to the bottom line.

To handle these issues, the development process is managed by an engineering support team that coordinates the build process, configuration of software tools, coordination of new development and maintenance work, and the management of releases.

In this environment, efficiency concerns dominate the process. It is the commercial development model that tends to create elaborate build systems. The primary reason for this is pressure to reduce the cost of software development by increasing programmer efficiency. This, in turn, should lead to increased profit. It is this model that requires the most support from make. Nevertheless, the techniques we discuss here apply to the free software model as well when their requirements demand it.

This section contains a lot of high-level information with very few specifics and no examples.

Requirements

Of course, requirements vary with every project and every work environment. Here we cover a wide range of requirements that are often considered important in many commercial development environments.

The most common feature desired by development teams is the separation of source code from binary code. That is, the object files generated from a compile should be placed in a separate binary tree. This, in turn, allows many other features to be added. Separate binary trees offer many advantages: It is easier to manage disk resources when the location of large binary trees can be specified.

Many versions of a binary tree can be managed in parallel. For instance, a single source tree may have optimized, debug, and profiling binary versions available. Multiple platforms can be supported simultaneously. A properly implemented source tree can be used to compile binaries for many platforms in parallel. Source trees can be protected with read-only access. This provides added assurance that the builds reflect the source code in the repository.

Some targets, such as clean, can be implemented trivially and will execute dramatically faster if a tree can be treated as a single unit rather than searching the tree for files to operate on. Most of the above points are themselves important build features and may be project requirements. Being able to maintain reference builds of a project is often an important system feature. The idea is that a clean check-out and build of the source is performed nightly, typically by a cron job.

Since the resulting source and binary trees are unmodified with respect to the CVS source, I refer to these as reference source and binary trees.