kazinator 4 days ago

CPPFLAGS and CXXFLAGS are different. CPPFLAGS are for options for the C preprocessor, regardless of what language it is being used for. CXXFLAGS are for C++. The XX's are ++ rotated 45 degrees.

Don't be fooled by the convention used in some necks of the woods of a .cpp suffix for C++ files; CPPFLAGS have to do with the "cpp" program, not the .cpp suffix.

LDLIBS is the sister of LDFLAGS. Both of these variables hold options for the linker command line, split into two groups: LDFLAGS are the early options that go before the object files; LDLIBS are the -l options that name libraries, like -lssl -ldl -lcrypto ... these go after the object files.

If you're writing a Makefile, with your own custom recipes for linking, be sure you interpolate both LDFLAGS and LDLIBS in the right places.
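
For example, a minimal sketch of a custom link rule (the file names are placeholders):

  # LDFLAGS (e.g. -L/opt/ssl/lib) go before the objects,
  # LDLIBS (e.g. -lssl -lcrypto) go after them
  program: main.o util.o
          $(CC) $(LDFLAGS) main.o util.o $(LDLIBS) -o $@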

  • zabzonk 14 hours ago

    A brief, simplified explanation of why the -l option order matters - the linker is only going to look for unresolved symbols in libraries after it has linked the object files and found that there actually _are_ unresolved symbols (hopefully provided by the libraries). Thus -l options placed before the linking of the object files are effectively ignored.
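
    For example, given a main.c that calls sqrt() (a sketch):

      $ cc -static -lm main.c -o app   # fails: undefined reference to 'sqrt'
      $ cc -static main.c -lm -o app   # works: -lm comes after the objects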

    • wpollock 13 hours ago

      This is only true when statically linking. Order does not matter when dynamically linking. (This was a surprise to me but I have tested it.)

      • o11c 11 hours ago

        I'm pretty sure distro-specific spec files can make it matter.

        But you really, really should be using `pkg-config` (and not some obsolete `foo-config`) so you don't have to worry about it regardless.

        (If you really want to do it yourself, learn about the `tsort` program first. Although allegedly obsolete for its original purpose, it is still useful to automate some sanity, and applies just as well between libraries as within them.)
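
        For example (each input pair means "left must come before right" on the link line):

          $ printf '%s\n' 'libfoo libbar' 'libbar libbaz' | tsort
          libfoo
          libbar
          libbaz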

        • cozzyd 9 hours ago

          This depends on the default value of the --as-needed linker option. Fedora/Red Hat and Debian/Ubuntu diverge here.

          • wpollock 6 hours ago

            Good to know, thanks! I only tested on Fedora. If anyone cares, here's a link to a tarball for testing this; read the simple Makefile for details:

            <https://wpollock.com/AUnix2/dll-demo.tgz>

            (I wrote this long ago, as a linking demo for system administration students.)

      • bluGill 12 hours ago

        Order matters if you violate the one definition rule - the first one wins on Linux. This is illegal undefined behavior that doesn't work on Windows, and I don't know what macOS does.

        Hopefully this is useless trivia.

  • emmelaich 7 hours ago

    Maybe it's upcoming, but Julia should mention `pkg-config` to get the various options for compiling.

ashishb 14 hours ago

I love Makefiles. Make is the easiest build system.

In my projects, whether the language is Go, Rust, Python, TypeScript, or even Android, there is a standard set of make commands that, where applicable, always work:

  - make format
  - make lint
  - make build
  - make docker_build
  - make docker_run
One might migrate from one build system to another, e.g. from pipenv to poetry to uv, but the high-level `make format` command doesn't change.
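
A sketch of what those targets can look like (the concrete commands are just examples; swap in your language's tools):

  .PHONY: format lint build
  format:
          gofmt -w .            # or black, cargo fmt, prettier, ...
  lint:
          golangci-lint run     # or ruff, cargo clippy, eslint, ...
  build:
          go build ./...
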
  • gylterud 13 hours ago

    I love the make reimagining from Plan 9: mk. In my experience mk is easier and slightly saner than make. On Unix-like systems you can get it from plan9port, along with other gems, such as acme.

    https://plan9.io/sys/doc/mk.html

    https://9fans.github.io/plan9port/

    • o11c 11 hours ago

      How so?

      From what I see it is almost* strictly less useful than BSD make, which (IMO) is strictly less useful than Solaris make, all of which are strictly less useful than GNU make. It only might be more useful than SysV make.

      * Regex support is unique, but you know what they say about solving problems with regexes, and you can usually do any useful thing with `subst` and `word`. The `Pcmp -s` at first seems unique, but can easily be replaced by some stamp/dep file logic, with much better performance.

      Don't get confused by the fact that `automake` chooses to maintain compatibility with versions of `make` more than 4 decades old.

      • gylterud 7 hours ago

        I haven’t tried every version of make there is, but a few things about mk off the top of my head:

        - Readability: $target, $prereq, and $stem are much easier to read than Make’s $@ and $<. Any whitespace can be used to indent.

        - Flags after the target do useful things such as auto-deleting the target on error (no more confusion about partial files), controlling verbosity, and specifying virtualness.

        - Stricter when dealing with overlapping targets.

        - Has a flag for printing why each target is being generated.

        - Regex in targets is sometimes really useful! Taken together with the strictness about overlapping targets, this leads to less confusion overall.

    • xelxebar 9 hours ago

      This was going through my head, too. Though I am somewhat doubtful whether it would be a win on Linux. Mk's ergonomics come from leveraging the cohesiveness of Plan 9 as a system, IMHO.

      That said, the automatic variables ($target, $stem, etc.) are more descriptive than those of make ($@, $*, etc.), since the former are, I think, just normal shell variables passed in at rule runtime.

  • ElectricalUnion 14 hours ago

    Do you pull the entire toolchain inside the Makefile if it doesn't exist on the host? Or are you just using it as a glorified alias over the (rather arcane) podman container run/docker run calls?

    Also, how do you solve the problem of actually bootstrapping across different versions of docker/podman/make? Do you have some sort of ./makew, like the wrapper Maven uses to bootstrap itself?

    • toast0 7 hours ago

      You certainly can have Make pull in dependencies... But I would usually just have Make look for the dependencies and ask the user to install them.

      You would want to track the dependencies anyway: when you update your compiler, IMHO, that invalidates the objects compiled with it, so the compiler itself is a dependency of each object.
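
      A sketch of that in GNU make (resolving the compiler path this way is an assumption about your setup):

        CC      := gcc
        CC_PATH := $(shell command -v $(CC))

        # objects are rebuilt when the compiler binary itself changes
        %.o: %.c $(CC_PATH)
                $(CC) $(CFLAGS) -c -o $@ $<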

      That said, I don't distribute much software. What works for personal software and small team software may not be effective for widely distributed software that needs to build in many environments.

    • ashishb 13 hours ago

      I have a `make install_deps` command for that.

      But it is closer to being nice aliases than to fetching the toolchain.

  • onionisafruit 13 hours ago

    I do the same except I use shell scripts. script/fmt, script/lint etc are consistent across projects even though the implementation details differ. I only use make for actually making files. It’s great for that, but it’s a pretty crappy replacement for a shell script.

    • ashishb 13 hours ago

      Make is a good replacement for shell scripts.

      Especially given how bad the defaults in bash are: https://ashishb.net/programming/better-bash/

      > I do the same except I use shell scripts. script/fmt, script/lint

      Do you create a separate file for every single target then?

      • akdev1l 13 hours ago

        > Make is a good replacement for shell scripts.

        It's not; you need to start thinking about .PHONY targets and other stuff quite quickly.
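
        For example (the linter command is just a placeholder):

          # without .PHONY, a file or directory named "lint" in the repo
          # would make this target look up to date and skip the recipe
          .PHONY: lint
          lint:
                  golangci-lint run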

  • frainfreeze 14 hours ago

    I've been doing the same for the past 10 years. Plus an automatic help target using comments and a Perl snippet.
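
    One common variant of that pattern (this sketch uses `##` comments and awk; a Perl one-liner works the same way):

      help:  ## Show this help
              @grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST) \
                | awk -F':.*## ' '{printf "%-15s %s\n", $$1, $$2}'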

jwrallie 13 hours ago

If you have a single file C program you can also use Make without a Makefile.

I always had the idea that Make reads a Makefile, so when I saw this for the first time it blew my mind.

So if you have file.c you can just call “make file” and that is it.
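
With GNU make, the built-in rule produces something like:

  $ ls
  file.c
  $ make file
  cc file.c -o file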

  • fuzztester 9 hours ago

    You can even do just:

      make 
    
    in many cases.

    For example, google:

    ./configure; make; make install

    to check out a common method of building packages from source on Unix.

    Done it dozens of times, for packages like Oracle, MySQL, Python, and many others.

    Although readymade binary packages exist for many such products, what you get by the above method is a good amount of configurability, for special needs or environments (hence the name of the configure command).

    • wahern 6 hours ago

      If you don't specify a target, make builds the first target declared. This is traditionally "all", and it's common to define "all:" as the first target in a Makefile (it doesn't need a recipe--a bare "all:" is okay; you can fully define it later in the file). Alternatively, GNU Make supports the ".DEFAULT_GOAL" variable, which you can define anywhere to set the implicit target.

      Somewhat confusingly, there's also a special target, ".DEFAULT", which you can define as a catchall; it's run for any requested target that has no definition of its own.
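
      A tiny sketch of all three (the names are arbitrary):

        all: build               # first target: what a bare `make` runs

        .DEFAULT_GOAL := build   # GNU Make: overrides the first-target rule

        build:
                cc -o app main.c

        .DEFAULT:                # catchall for any target with no rule
                @echo "no rule for '$@'"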

  • fuzztester 12 hours ago

    The tool is called "make", not "Make", both in the docs and in the file system.

    And the file it processes can be called Makefile or makefile or even other names (see the -f option).

    https://man7.org/linux/man-pages/man1/make.1.html

    And that applies even to pre-GNU, pre-Linux make.

    Source: been using Unix since some years before Linux was created.

    • jwrallie 5 hours ago

      In my defense, I’m sure I’m not the only person ever to have called it Make [0]. It’s OK to capitalize proper nouns in English. You can even find this capitalization in your reference [1], though you could argue that’s not the original version. I know what to call it in the shell, but I’m not convinced that’s the correct way to refer to it everywhere.

      [0] https://en.m.wikipedia.org/wiki/Make_(software)

      [1] https://www.gnu.org/software/make/

brunokim 13 hours ago

Autotools looks like a great idea: try to compile some test programs and see if they work, to learn the specifics of your environment. Yet I share Julia's feeling: I hope to never have to learn how it works.

  • bluGill 12 hours ago

    Nobody else knows how it works either, which is why every configure script checks for a working Fortran compiler. That is also why you can never be sure things like cross-compiling work, even though they can.

    I use CMake - not a great system, but at least it isn't hard and it always works. I've heard good things about a few others as well. There is no reason to use autotools.

    • alextingle 9 hours ago

      My configure scripts never checked for a FORTRAN compiler. I'm not going to claim that using autotools is a pleasant experience, but it's not that bad, and it's very well documented.

      • bluGill 30 minutes ago

        The exception that proves the rule...

jiehong 14 hours ago

I wish more projects would include a Dockerfile in which a compiler and all dependencies are installed, so that ‘docker build .’ runs the right make command.

So you get a working build example, and if you’re lucky and the base image exists for your architecture, it’ll compile a binary for your system too (assuming static linking I guess).
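
Even a tiny one helps (the base image and packages here are guesses; adjust per project):

  FROM debian:stable-slim
  RUN apt-get update && apt-get install -y build-essential pkg-config
  WORKDIR /src
  COPY . .
  RUN make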

  • jrop 14 hours ago

    This and/or a shell.nix that bundles all dependencies outside of those provided by the language's package manager.

  • akdev1l 13 hours ago

    > it’ll compile a binary for your system too (assuming static linking I guess).

    I’ve tried what you describe, and I’ve found that this is not a good assumption to make:

    glibc doesn’t support static linking, so this doesn’t work in the general case.

singpolyma3 14 hours ago

> C doesn’t have a dependency manager

On the contrary, it has many. My favorite is apt, but maybe you prefer cargo or homebrew or rubygems.

  • dgfitz 13 hours ago

    Huh?

    That doesn’t make sense. C absolutely does not have a dependency manager.

    • singpolyma3 19 minutes ago

      I just listed several. Other comments in this thread list more of them.

    • akdev1l 13 hours ago

      C the language doesn’t, but if you’re truly writing a C program you’ll just default to using the platform’s package manager unless you have specific requirements.

      So in that sense dnf/apt/zypper/pacman are all “C” package managers (not sure I agree with the OP but I think this is what they meant)

      • ethagnawl 13 hours ago

        What about projects like Conan and vcpkg?

        • akdev1l 12 hours ago

          I think technically those are targeted at C++, but sure, they probably work too.

          They will likely have fewer packages than the other established ones, though.

    • bluGill 12 hours ago

      Which is a good thing, because C is targeted at complex projects that often involve more than one language, so a language-specific package manager is harmful.

zabzonk 14 hours ago

If you feel the need to say "C/C++", you probably want to think twice.

o11c 11 hours ago

Once again, this site misses a lot of details, some of which are quite important:

* build-essential - this actually installs what is considered "essential" for building Debian packages. This happens to include some common compilers, other tools, and libraries, but also a bunch of junk you probably don't need. Still, this is harmless.

* There is `pkg-config` for detailing dependencies that are already installed; there's just no standard way to automatically download and run code from the internet (and good riddance).
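
In a Makefile that typically looks like (libcurl is just an example):

  CFLAGS += $(shell pkg-config --cflags libcurl)
  LDLIBS += $(shell pkg-config --libs libcurl)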

* Sometimes you have to run `autoreconf -i` to generate `configure` in the first place. This normally happens if you're running from a git clone rather than a tarball. Autotools is mostly useless if you're only working on Linux, marginally useful if you're also working on a different modern OS (especially since the first step is "install the GNU version of all the build tools"), but extremely useful if you're on MyWeirdPrePosixProprietaryUnix. Despite this, `./configure` remains the single best interface for end-user installing - cmake in particular is an atrocity.

* There is a little confusion about "build/host/target", due to the question "what machine am I running on, at what time?" If you aren't cross-compiling you can ignore all this though.

* For more about `./configure` and `make` options, see https://www.gnu.org/prep/standards/html_node/Managing-Releas... - there's also some good reference material in the documentation for `autotools` and `make`. I won't repeat all the things there, many of which people really need to learn before posting.

* One annoying thing is that `./configure` and `make` disagree about whether libraries go in `LIBS` or `LDLIBS`. You should normally set such variables when running `./configure` so you don't have to remember them later; one exception is `DESTDIR`.
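
For example (a sketch; -lrt is a placeholder):

  $ ./configure LIBS='-lrt'            # configure's spelling
  $ make
  $ make DESTDIR="$PWD/stage" install  # DESTDIR is the exception: set at make time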

* `as` and `ld` are called internally by the compiler; you should never call them directly, since the compiler needs to mangle flags.

* The `checkinstall` tool, if supported for your distro, can help call `make install` in a way that makes uninstallation reliable.

* rpath is the correct solution instead of `LD_LIBRARY_PATH`, and can be automated by copying `-L` arguments to `-R`. There's some FUD floating around the internet, but it only affects `setuid` (and other privileged) programs on multiuser systems, and even then only if they set it to certain values.
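
For example (the paths and library names are placeholders):

  $ cc -o app main.o -L/opt/foo/lib -Wl,-rpath,/opt/foo/lib -lfoo
  $ ./app   # finds libfoo.so at runtime without LD_LIBRARY_PATH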