C++ and C rely, heavily, on skill and discipline instead of automated checks to stay safe. Over time, and in larger groups of people that always fails. People just aren't that disciplined and they get overconfident of their own skills (or level of discipline). Decades of endless memory leaks, buffer overflows, etc. and the related security issues, crash bugs, data corruption, etc. shows that no code base is really immune to this.
The best attitude in programmers (regardless of the language) is the awareness that "my code probably contains embarrassing bugs, I just haven't found them yet". Act accordingly.
There are of course lots of valid reasons to continue to use C/C++ on projects where it is used and there are a lot such projects. Rewrites are disruptive, time consuming, expensive, and risky.
It is true that there are ways in C++ to mitigate some of these issues. Mostly this boils down to using tools, libraries, and avoiding some of the more dark corners of the language and standard library. And if you have a large legacy code base, adopting some of these practices is prudent.
However, a lot of this stuff boils down to discipline and skill. You need to know what to use and do, and why. And then you need to be disciplined enough to stick with that. And hope that everybody around you is equally skilled and disciplined.
However, for new projects, there usually are valid alternatives. Even performance and memory are not the arguments they used to be. Rust seems to be building a decent reputation for combining compile time safety with performance and robustness; often beating C/C++ implementations of things where Rust is used to provide a drop in replacement. Given that, I can see why major companies are reluctant to take on new C/C++ projects. I don't think there are many (or any) upsides to the well documented downsides.
People innately admire difficult skills, regardless of their usefulness. Acrobatic skateboarding is impressive, even when it would be faster and safer to go in a straight line or use a different mode of transport.
To me skill and effort is misplaced and wasted when it's spent on manually checking invariants that a compiler could check better automatically, or implementing clever workarounds for language warts that no longer provide any value.
Removal of busywork and pointless obstacles won't make smart programmers dumb and lazy. It allows smart programmers to use their brainpower on bigger more ambitious problems.
To me a compiler's effort is misplaced and wasted when it's spent on checking invariants that could be checked by a linter or a sidecar analysis module.
Checking of whole-program invariants can be accurate and done basically for free if the language has suitable semantics.
For example, if a language has non-nullable types, then you get this information locally for free everywhere, even from 3rd party code. When the language doesn't track it, then you need a linter that can do symbolic execution, construct call graphs, data flows, find every possible assignment, and still end up with a lot of unknowns and waste your time on false positives and false negatives.
Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms.
And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues.
If the compiler is not checking them then it can't assume them, and that reduces the opportunities for optimizations. If the checks don't run on the compiler then they're not running every time; if you do want them to run every time then they may as well live in the compiler instead.
These type comments always remind me that we forget where we come from in terms of computation, every time.
It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.
It's easy to publicly shame people who do hard things for a long time in the light of newer tools. However, many people who likes these languages are using them longer than the languages we champion today were mere ideas.
I personally like Go in these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but never pry C++ from my cold, dead hands.
For note, I don't like C & C++ because they are hard. I like them because they provide a more transparent window the processor, which is a glorified, hardware implemented PDP-11 emulator.
Last, we shall not forget that all processors are C VMs, anyway.
> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.
The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s.
Plus, you are vastly overestimating the expense of borrow checking, it is very fast, and not the reason for Rust's compile times being slow. You absolutely could have done borrow checking much earlier, even with less computing power available.
> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.
IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer.
> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.
I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed.
I remember, I was there in the 1980's coding, hence why I know C and C++ were not the only alternatives, rather the ones that eventually won in the end.
> And no, thinking that C is "closer to the processor" today is incorrect
THIS thinking is about 5 years out of date.
Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but it's been more than 5 years that this opinion was essentially nullified by
Okay, If C is "not close to the process", what's closer?
Assembler? After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on, that has a lower bound which none of the data gets close to?
You're repeating something that was fashionable years ago.
Standard C doesn't have inline assembly, even though many compilers provide it as an extension. Other languages do.
> After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on
The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there.
Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.
> Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.
Well, you're talking about languages that don't have standards, they have a reference implementation.
IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics.
> LLVM IR is closer. Still higher level than Assembly
So your reasoning for repeating the once-fashionable statement is because "an intermediate representation that no human codes in is closer than the source code"?
> C++ and C rely, heavily, on skill and discipline instead of automated checks to stay safe.
You can't sensibly talk about C and C++ as a single language. One is the most simple language there is, most of the rules to which can be held in the head of a single person while reading code.
The other is one of the most complex programming languages to ever have existed, in which even world-renowned experts in lose their facility for the language after a short break from it.
And yet, they both still suffer from the flaw that the parent comment cites. Describing a shared property doesn't imply a claim that they're the same language.
> And yet, they both still suffer from the flaw that the parent comment cites.
I dunno; the flaw is not really comparable, is it? The skill and discipline required to write C bug-free is an orders of magnitude less than the skill and discipline required to write C++.
Unless you read GGPs post to mean a flaw different to "skill and discipline required".
I'd argue that their point was that the required amount of skill and discipline of either is higher than it's worth at this point for new projects. The difference doesn't matter if even the lower of the two is too high.
Most people don't write C, nor use the C compiler, even when writing C. You use C++ and the C++ compiler. For (nearly) all intents and purposes, C++ has subsumed and replaced C. Most of the time when someone says something is "written in C" it actually means it's C++ without the +± features. It's still C++ on the C++ compiler.
Actual uses of actual C are pretty esoteric and rare in the modern era. Everything else is varying degrees of C++.
Skype was written without exception handling and RTTI, although using a lot of C++ features. You can write good C++ code without these dependencies. You don't use STL but with cautious use of hand-built classes you go far.
Today I wouldn't recommnend Skype built in any language except Rust. But the Skype founders Ahti Heinla, Jaan Tallinn and Priit Kasesalu found exactly the right balance of C and C++ for the time.
I also wrote a few lines of code in that dialect of C++ (no exceptions). And it didn't feel much different from modern C++ (exception are really fatal errors)
And regarding to embedded, the same codebase was embedded in literally all the ubiquitous TVs of the time, even DECT phones. I bet there are only a few (if any) application codebases of significant size to have been deployed at that scale.
> Have you written significant amounts of C or C++?
Yes.
> Most of the time when someone says something is "written in C" it actually means it's C++ without the +± features.
Those "someone's" have not written a significant amount of C. Maybe they wrote a significant amount of C++.
The cognitive load when dealing with C++ code is in no way comparable to the cognitive load required when dealing with C code, outside of code-golfing exercises which is as unidiomatic as can be for both languages.
Other then hardcore embedded guys and/or folks dealing with legacy C code, I and most folks i know almost always use C++ in various forms i.e. "C++ as a better C", "Object-Oriented C++ with no template shenanigans", "Generic programming in C++ with templates and no OO", "Template metaprogramming magic", "use any subset of C++ from C++98 to C++23" etc. And of course you can mix-and-match all of the above as needed.
C++'s multi-paradigm support is so versatile that i don't know why folks on HN keep moaning about its complexity; it is the price you pay for the power you get. It is the only language that i can program in for itty-bitty MCUs all the way to large complicated distributed systems on multiple servers plus i can span all of applications to systems to bare-metal programming.
> I don't think there are many (or any) upsides to the well documented downsides.
C++ template metaprogramming still remains extremely powerful. Projects like CUTLASS, etc could not be written to give best performance in as ergonomic a way in Rust.
There is a reason why the ML infra community mostly goes with Python-like DSL's, or template metaprogramming frameworks.
Last I checked there are no alternatives at scale for this.
Even new projects have good reasons to use c++. Maybe the ecosystem is built around it. Maybe competent c++ programmers are easier to find than rust ones. Maybe you need lots of dynamic loading. Maybe you want drop-in interop with C. Maybe you’re just more comfortable with c++.
I agree with the discipline aspect. C++ has a lot going against it. But despite everything it will continue to be mainstream for a long time, and by the looks of it not in the way of COBOL but more like C.
This is a good article but it only scratches the surface, as is always the case when it comes to C++.
When I made a meme about C++ [1] I was purposeful in choosing the iceberg format. To me it's not quite satisfying to say that C++ is merely complex or vast. A more fitting word would be "arcane", "monumental" or "titanic" (get it?). There's a specific feeling you get when you're trying to understand what the hell is an xvalue, why std::move doesn't move or why std::remove doesn't remove.
The Forest Gump C++ is another meme that captures this feeling very well (not by me) [2].
What it comes down to is developer experience (DX), and C++ has a terrible one. Down to syntax and all the way up to package management a C++ developper feels stuck to a time before they were born. At least we have a lot of time to think about all that while our code compiles. But that might just be the price for all the power it gives you.
- Use a build system like make, you can't just `c++ build`
- Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
- Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
- Oh also understand the compiler doesn't actually output what you want, you also need a linker
- That linker also doesn't know where to find things, so you need the external tool to use it
- Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
Now you can see why things like IDEs became default tools for teaching students how to write C and C++, because there's no "open a text editor and then `c++ build file.cpp` to get output" for anything except hello world examples.
It's really not that big of a deal once you know how it works, and there are tools like CMake and IDEs that will take care of it.
On Windows and OSX it's even easier - if you're okay writing only for those platforms.
It's more difficult to learn, and it seems convoluted for people coming from Python and Javascript, but there are a lot of advantages to not having package management and build tooling tightly integrated with the language or compiler, too.
I agree -- I've been at it long enough -- cmake etc makes stuff pretty darn easy.
But in industrial settings where multi groups share and change libs something like debpkg may be used. You add caching and you can go quite deep quickly esp after bolting on cdci.
One must cop to the fact that a go build or zig build is just fundamentally better.
Yeah, I definitely agree the newer tools are better, but sometimes the arguments against C++ get blown out of proportion.
It definitely has a lot of flaws, but in practice most of them have solutions or workarounds, and on a day-to-day basis most C++ programmers aren't struggling with this stuff.
"It's not really that big a deal once you know how it works" is the case with pretty much everything though. The question is whether the amount of time needed to learn how something works is worthwhile though, and the sheer number of things you need to invest the time to learn in a language like C++ compared to more modern languages is a big deal. Looking at a single one of them in isolation like the build system essentially just zooms in one problem far enough to remove the other ones from the picture.
So I come from the C/C++ world, that's part of why I disagree with these takes. I wouldn't say any process involving CMake is "not that big of a deal" because I routinely see veteran developers struggle to edit cmake files to get their code to compile and link.
This is pure Stockholm syndrome. If I were forced to choose between creating a cross-platform C++ project from scratch or taking an honest to god arrow to the knee, the arrow would be less painful.
The main reason I don't want to use C/C++ are the header files. You have to write everything in a header file and then in an implementation file. Every time you want to change a function you need to do this at least twice. And you don't even get fast compilation speed compared to some languages because your headers will #include some library that is immense and then every header that includes that header will have transitive header dependencies, and to solve this you use precompiled headers which you might have to set up manually dependending on what IDE you are using.
It gets better with experience. You can have a minimal base layer of common but rarely changing functionality. You can reduce static inline functions in headers. You reduce data structure definitions, but put only forward declarations in header files. (Don't use C++ methods, at least don't put them in an API, because they force you to expose your implementation details needlessly). You can separate data structures from functions in different header files. Grouping functions together with types is often a bad idea since most useful functionality combines data from two or more "unrelated" types -- so you'd rather make function headers "by topic" than putting them alongside types.
I just created a subsystem for a performance intensive application -- a caching layer for millions or even billions of objects. The implementation encompasses over a 1000 LOC, but the header only includes <stdint.h>. There are about 5 forward struct declarations and maybe a dozen function calls in that API.
To a degree it might be stockholm syndrome, but I feel like having had to work around a lot of C's shortcomings I actually learned quite a lot that helps me in architecting bigger systems now. Turns out a lot of the flexibility and ease that you get from more modern languages mostly allows you to code more sloppily, but being sloppy only works for smaller systems.
If you were forced to choose between creating a cross-platform project in one of the trendy language, but of course, which must also work on tiny hardware with a weird custom OSes on some hobbyist hardware, and with 30-year-old machines in some large organization's server farm - then you would choose the C++ project, since you will be able to make that happen, with some pain. And with the other languages - you'll probably just give up or need to re-develop all userspace for a bunch of platforms, so that it can accommodate the trendy language build tool. And even that might not be enough.
Also: If you are on platforms which support, say, CMake - then the multi-platform C++ project is not even that painful.
> but of course, which must also work on tiny hardware with a weird custom OSes on some hobbyist hardware, and with 30-year-old machines in some large organization's server farm - then you would choose the C++ projectt, since you will be able to make that happen, with some pain.
With the old and proprietary toolchains involved, I would bet dollars to doughnuts that there's a 50% odds of C++11 being the latest supported standard. In that context, modern C++ is the trendy language.
Why? There are lots of cross platform libraries and most aspects are not platform specific. It's really not a big deal. Use FLTK and you get most of the cross platform stuff for free in a small package.
It is a big deal even after you know how it works.
The thing is, the languages like Rust only make this easier within their controlled "garden". But for C and C++, you build in the "world outside the garden" to begin with, where you are not guaranteed of everyone having prepared everything for you. So, it's harder, and you may need third-party tools or putting in some elbow grease, or both. The upside is that when rustaceans or go-phers and such wander outside their respective gardens, most of them are completely lost and have no idea what to do; but C and C++ people are kinda-sorta at home there, already.
I used to write a lot of C++ in 2017. Now in 2025 I have no memory of how to do that anymore. It's bespoke Makefile nonsense with zero hope of standardization. It's definitively something that doesn't grow with experience. Meanwhile my gradle setups have been almost unchanged since that time if it wasn't for the stupid backwards incompatible gradle releases.
> I used to write a lot of C++ in 2017... It's bespoke Makefile nonsense
1. Makefiles are for build systems; they are not C++.
2. Even for building C++ - in 2017, there was no need to write bespoke Makefiles, or any Makefiles. You could, and should, have written CMake; and your CMake files would be usable and relevant today.
> Meanwhile my gradle setups have been almost unchanged since that time
... but, typically, with far narrower applicability.
> Use a build system like make, you can't just `c++ build`
This is a strength not a weakness because it allows you to choose your build system independently of the language. It also means that you get build systems that can support compiling complex projects using multiple programming languages.
> Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
This is a strength not a weakness because it allows you to organize your dependencies and their locations on your computer however you want and are not bound by whatever your language designer wants.
> Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
This is a strength not a weakness because you are not bound to a particular way of how this should work.
> Oh also understand the compiler doesn't actually output what you want, you also need a linker
This is a strength not a weakness because now you can link together parts written in different programming languages which allows you to reuse good code instead of reinventing the universe.
> That linker also doesn't know where to find things, so you need the external tool to use it
This is a strength not a weakness for the reasons already mentioned above.
> Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
This is a strength not a weakness because you can have fully offline builds including ways to distribute dependencies to air-gapped systems and are not reliant on one specific online service to do your job.
Also all of this is a non-issue if you use a half-modern build system. Conflating the language, compiler, build system and package manager is one of the main reason why I stay away from "modern" programming languages. You are basically arguing against the Unix philosophy of having different tools that work together with each tool focusing on one specific task. This allows different tools to evolve independently and for alternatives to exist rather than a single tool that has to fit everyone.
Massive cope, there's no excuse for the lack of decent infrastructure. I mean, the C++ committee for years said explicitly that they don't care about infrastructure and build systems, so it's not really surprising.
Not sure how relevant the "in order to use a tool, you need to learn how to use the tool".
Or from the other side: not sure what I should think about the quality of the work produced by people who don't want to learn relatively basic skills... it does not take two PhDs to understand how to use pkg-config.
I'm just pointing out that one reason devex sucks in C++ is because the fact you need a wide array of tools, that are non portable, and require learning and teaching magic incantations at the command line or in build scripts to work, doesn't foster what one could call a "good" experience.
Frankly the idea that your compiler driver should not be a basic build system, package manager, and linker is an idea best left in the 80s where it belongs.
For most people this is a feature not a bug as you suggest. It may come across as PITA, and for many people will do, but as far as I am concerned, while also having experienced the pain of package managers in C++, this is the right way. In the end it's always about the trade-offs. And all the (large) codebases that used conan, bazel or vcpkg induced a magnitude more issues that you would have to handle which otherwise in a plain CMake you would not have. Package managers are for convenience but not all projects can afford themselves the trouble this convenience brings with it.
Coming from a different ground (TypeScript) I agree, in a sense that there is a line where apparent convenience because a trouble. JS ecosystem is known for its hype for build tools. Long term all of them become a problem due to trying to be more convenient, leading to more and more abstractions and hidden behaviors, which turns into a mess impossible to debug or solve when user diverges from author's happy path. Thus I promote using only the necessities, and gluing them together by yourself. Even if something doesn't work, at least it can be tracked down and solved.
> require learning and teaching magic incantations at the command line
That's exactly my point: if you think that calling `cmake --build build` is "magic", then maybe you don't have the right profile to use C++ in the first place, because you will have to learn some harder concepts there (like... pointers).
To be honest, I find it hard to understand how a software developer can write code and still consider that command line instructions are "magic incantations". To me it's like saying that calling a function like `println("Some text, {}, {}", some_parameter, some_other_parameter)` is a "magic incantation". Calling a function with parameters counts as "the basics" to me.
that idea that packages and builds belongs to simple problem, large projects need things like more than one laguage and so end up fighting the language
Every modern language seems to have an answer to this problem that C and C++ refuse to touch because it's out of scope for their respective committees and standards orgs
C++ has a plethora of available build and package management systems. They just aren't bundled with the compiler. IMO that is a good thing, because it keeps the compiler writers honest.
Coming from c++, pip and python dependency management is the bane of my life. How do you make a python software leveraging pytorch that will ship as a single .exe and be able to target whatever gpu the user has without downloads?
Funnily enough a lot of the challenges in this particular case is related to PyTorch and CUDA being native code (mostly in C++). Of course combined with the fact that pip is not really adequate as a native/C++ code package manager.
Perhaps if C++ had a decent standardized package manager, the Python package system reuse that? ;p
"Massively loved" and "good decision" are orthogonal axes. See the current npm drama. People love wantonly importing dependencies the way they love drinking. Both feel great but neither is good for you.
Not that npm-style package management is the best we can do or anything, but I would be more sympathetic to this argument if C or C++ had a clearly better security story than JS, Python, etc. (pick your poison), but they're also disasters in this area.
What happens in practice is people end up writing their own insecure code instead of using someone else's insecure code. Of course, we can debate the tradeoffs of one or the other!
This isn't only about security. This is about interoperability, in the real world we mix (and should mix!) C, C++, Rust, python.... In the real world lawyers audit every dependency to ensure they can legally use it. In the real world we are responsible for our dependencies and so need to audit the code.
I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit because then they can gleefully point at it and pretend that any first-party package manager for C/C++ would inevitably result in the same, nevermind the other languages that do not have this issue, or have it to a far, far lesser extent. Do these cultists just not use dependencies? Are they just [probably inexpertly] reinventing every wheel? Or do they use system packages like that's any better *cough* AUR exploits *cought*. While dependency hell on nodejs (and even Rust if we're honest) is certainly a concern, it's npm's permissiveness and lack of auditing that's the real problem. That's why Debian is so praised.
What makes me a C++ "cultist"? I like the language, but I don't think it's a cult. And yes, they do implement their own wheel all the time (usually expertly) because libraries are reserved for functions that really need it: writing left pad is really easy. They also use third-party libraries all the time, too. They just generally pay attention to the source of that library. Google and Facebook also publish a lot of C++ libraries under one umbrella (abseil and folly respectively), and people often use one of them.
STOP SAYING CULTIST! The word has very strong meaning and does not apply to anyone working with C or C++. I take offense at being called a cultist just because I say C++ is not nearly as bad as the haters keep claiming it is - as well I should.
> Or do they use system packages like that's any better cough AUR exploits cought.
AUR stands for "Arch User Repository". It's not the official system repository.
> I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit
I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.
My problem with language package managers is that people love them precisely because they don't want to learn how to deal with dependencies. Which is actually the problem: if I pull a random Rust library, it will itself pull many transitive dependencies. I recently compared two implementations of the same standard (C++ vs Rust): in C++ it had 8 dependencies (I can audit that myself). In Rust... it had 260 of them. 260! I won't even read through all those names.
"It's too hard to add a dependency in C++" is, in my opinion, missing the point. In C++, you have to actually deal with the dependency. You know it exists, you have seen it at least once in your life. The fact that you can't easily pull 260 dependencies you have never heard about is a feature, not a bug.
I would be totally fine with great tooling like cargo, if it looked like the problem of random third-party dependencies was under control. But it is not. Not remotely.
> Do these cultists just not use dependencies?
I choose my dependencies carefully. If I need a couple functions from an open source dependency I don't know, I can often just pull those two functions and maintain them myself (instead of pulling the dependency and its 10 dependencies).
> Are they just [probably inexpertly] reinventing every wheel?
I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.
Would it make me more of an expert if I was pulling, running and distributing random code from the Internet without having the smallest clue about who wrote it?
Do I need to complain about how hard CMake is and compare a command line to a "magic incantation" to be considered an expert?
> AUR stands for "Arch User Repository". It's not the official system repository.
Okay... and? The point being made was that the issue of package managers remains: do you really think users are auditing all those "lib<slam-head-on-keyboard>" dependencies that they're forced to install? Whether they install those dependencies from the official repository or from homebrew, or nix, or AUR, or whatever, is immaterial, the developer washed their hands of this, instead leaving it to the user who in all likelihood knows significantly less than the developers to be able to make an informed decision, so they YOLO it. Third-party repositories would not exist if they had no utility. But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted. Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.
> "It's too hard to add a dependency in C++"
It's not hard to add a dependency. I actually prefer the dependencies-as-git-submodules approach to package managers: it's explicit and you know what you're getting and from where. But using those dependencies is a different story altogether. Don't you just love it when one or more of your dependencies has a completely different build system to the others? So now you have to start building dependencies independently, whose artefacts are in different places, etc, etc, this shouldn't be a problem.
> I, for one, do not love it when there is an exploit in a language package manager.
Oh please, I believe that about as much as ambulance chasers saying they don't love medical emergencies. Otherwise, why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm as if anyone is actually asking for that, instead of, say, what Zig or Go has? It's because of the cultism, and every npm exploit further entrenches it.
C++ usage has nothing to do with static/dynamic linking. One is a language and the other is a way of using libraries. Dynamic linking gives you small binaries with a lot of cross-compatibility, and static linking gives you big binaries with known function. Most production C++ out there follows the same pattern as Rust and Go and uses static linking (where do you think Rust and Go got that pattern from?). Python is a weird language that has tons of dynamic linking while also having a big package manager, which is why pip is hell to use and PyTorch is infamously hard to install.
Dynamic linking shifts responsibility for the linked libraries over to the user and their OS, and if it's an Arch user using AUR they are likely very interested in assuming that risk for themselves. 99.9% of Linux users are using Debian or Ubuntu with apt for all these libs, and those maintainers do pay a lot of attention to libraries.
> But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted.
So you do understand my point about AUR. AUR is like adding a third-party repo to your Debian configuration. So it's not a good example if you want to talk about official repositories.
Debian is a good example (it's not the only distribution that has that concept), which proves my point and not yours: this is better than unchecked repositories in terms of security.
> Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.
Nobody says that ever. Either you make up your cult just to win an argument, or you don't understand what C/C++ people say. The whole goddamn point is to have a trusted system repository, and if you need to pull something that is not there, then you do it properly.
Which is better than pulling random stuff from random repositories, again.
> I actually prefer the dependencies-as-git-submodules approach
Oh right. So you do it wrong, it's good to know and it will answer your next complaint:
> Don't you just love it when one or more of your dependencies has a completely different build system to the others
I don't give a damn because I handle dependencies properly (not as git submodules). I don't have a single project where the dependencies all use the same build system. It's just not a problem at all, because I do it properly. What do I do then? Well exactly the same as what your system package manager does.
> this shouldn't be a problem.
I agree with you. Call it a footgun if you wish, you are the one pulling the trigger. It isn't a problem for me.
> why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm
Where did I do that?
> It's because of the cultism, and every npm exploit further entrenches it.
It's because npm is a good example of what happens when it goes out of control. Pip has the same problem, and Rust as well. But npm seems to be the worse, I guess because it's used by more people?
Your defensiveness is completely hindering you and I cannot be bothered with that so here are some much needed clarifications:
> I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.
If you do neither of those things then did it ever occur to you that this might not be about YOU?
> I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.
Yeah, hi, no you didn't explain that. You're probably mistaking me for someone else in some other conversation you had. The only comment of yours prior to mine in the thread is you saying "I can use pkg-config just fine." And again, you're thinking that I'm calling YOU incompetent, or even that I'm calling you incompetent. But okay, I'm sure your code never has bugs, never has memory issues, is never poorly designed or untested, that you can whip out an OpenGL alternative whatever in no time and it be just as stable and battle-tested, and to say otherwise must be calling you incompetent. That makes total sense.
> AUR stands for "Arch User Repository". It's not the official system repository.
> So it's not a good example if you want to talk about official repositories.
I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.
--
Overall, getting bored of this, though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny. Have a nice day.
Start by not calling everybody disagreeing with you a cultist, next time.
> I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.
It's not that it is unclear, it's just that it doesn't make sense. When we compare npm to a system package manager in this context, the thing we compare is whether or not is it curated. Agreed, I was maybe not using the right words (I should have said curated package managers vs not curated package managers), but it did not occur to me that it was unclear because comparing npm to a system package manager makes no sense otherwise. It's all just installing binaries somewhere on disk.
AUR is much like npm in that it is not curated. So if you find that it is a security problem: great! We agree! If you want to pull something from AUR, you should read its PKGBUILD first. And if it pulls tens of packages from AUR, you should think twice before you actually install it. Just like if someone tells you to do `curl https://some_website.com/some_script.sh | sudo sh`, no matter how convenient that is.
Most Linux distributions have a curated repository, which is the default for the "system package manager". Obviously, if users add custom, not curated repositories, it's a security problem. AUR is a bad example because it isn't different from npm in that regard.
> though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny
Well I did elaborate at least one bit, but I doubt you are interested in more details than what I wrote: "What do I do then? Well exactly the same as what your system package manager does."
I install the dependencies somewhere (just like the system package manager does), and I let my build system find them. It could be with CMake's `find_package`, it could be with pkg-config, whatever knows how to find packages. There is no need to install the dependencies in the place where the system package manager installs stuff: it can go anywhere you want. And you just tell CMake or pkg-config or Meson or whatever you use to look there, too.
Using git submodules is just a bad idea for many reasons, including the fact that you need all of them to use the same build system (which you mentioned), or that a clean build usually implies rebuilding the dependencies (for nothing) or that it doesn't work with package managers (system or not). And usually, projects that use git submodule only support that, without offering a way to use the system package(s).
They are massively loved because people don't want to learn how it works. But the result is that people massively don't understand how package management works, and miss the real cost of dependencies.
Modern languages don't generally play nice with linux distributions, IMO.
C and C++ have an answer to the dependency problem, you just have to learn how to do it. It's not rocket science, but you have to learn something. Modern languages remove this barrier, so that people who don't want to learn can still produce stuff. Good for them.
no they don't - at least not a good answer. It generally amounts to running a different build system and waiting - this destroys parralism and slows the build down.
I'm not going to defend the fact that the C++ devex sucks. There are really a lot of reasons for it, some of which can't sensibly be blamed on the language and some of which absolutely can be. (Most of it probably just comes down to the language and tooling being really old and not having changed in some specific fundamental ways.)
However, it's definitely wrong to say that the typical tools are "non-portable". The UNIX-style C++ toolchains work basically anywhere, including Windows, although I admit some of the tools require MSys/Cygwin. You can definitely use GNU Makefiles with pkg-config using MSys2 and have a fine experience. Needless to say, this also works on Linux, macOS, FreeBSD, Solaris, etc. More modern tooling like CMake and Ninja work perfectly fine on Windows and don't need any special environment like Cygwin or MSys, can use your MSVC installation just fine.
I don't really think applying the mantra of Rust package management and build processes to C++ is a good idea. C++'s toolchain is amenable to many things that Rust and Cargo aren't. Instead, it'd be better to talk about why C++ sucks to use, and then try to figure out what steps could be taken to make it suck less. Like:
- Building C++ software is hard. There's no canonical build system, and many build systems are arcane.
This one really might be a tough nut to crack. The issue is that creating yet another system is bound to just cause xkcd 927. As it is, there are many popular ways to build, including GNU Make, GNU Autotools + Make, Meson, CMake, Visual Studio Solutions, etc.
CMake is the most obvious winner right now. It has achieved defacto standard support. It works on basically any operating system, and IDEs like CLion and Visual Studio 2022 have robust support for CMake projects.
Most importantly, building with CMake couldn't be much simpler. It looks like this:
And you have a build in .build. I think this is acceptable. (A one-step build would be simpler, but this is definitely more flexible, I think it is very passable.)
This does require learning CMake, and CMake lists files are definitely a bit ugly and sometimes confusing. Still, they are pretty practical, and rather easy to get started with, so I think it's a clear win. CMake is the "defacto" way to go here.
- Managing dependencies in C++ is hard. Sometimes you want external dependencies, sometimes you want vendored dependencies.
This problem's even worse. CMake helps a little here, because it has really robust mechanisms for finding external dependencies. However, while robust, the mechanism is definitely a bit arcane; it has two modes, the legacy Find scripts mode, and the newer Config mode, and some things like version constraints can have strange and surprising behavior (it differs on a lot of factors!)
But sometimes you don't want to use external dependencies, like on Windows, where it just doesn't make sense. What can do you really do here?
I think the most obvious thing to do is use vcpkg. As the name implies, it's Microsoft's solution to source-level dependencies. Using vcpkg with Visual Studio and CMake is relatively easy, and it can be configured with a couple of JSON files (and there is a simple CLI that you can use to add/remove dependencies, etc.) When you configure your CMake build, your dependencies will be fetched and built appropriately for your targets, and then CMake's find package mechanism can be used just as it is used for external dependencies.
CMake itself is also capable of vendoring projects within itself, and it's absolutely possible to support all three modalities of manual vendoring, vcpkg, and external dependencies. However, for obvious reasons this is generally not advisable. It's really complicated to write CMake scripts that actually work properly in every possible case, and many cases need to be prevented because they won't actually work.
All of that considered, I think the best existing solution here is CMake + vcpkg. When using external dependencies is desired, simply not using vcpkg is sufficient and the external dependencies will be picked up as long as they are installed. This gives an experience much closer to what you'd expect from a modern toolchain, but without limiting you from using external dependencies which is often unavoidable in C++ (especially on Linux.)
- Cross-compiling with C++ is hard.
In my opinion this is mostly not solved by the "defacto" toolchains. :)
It absolutely is possible to solve this. Clang is already better off than most of the other C++ toolchains in that it can handle cross-compiling with selecting cross-compile targets at runtime rather than build time. This avoids the issue in GCC where you need a toolchain built for each target triplet you wish to target, but you still run into the issue of needing libc/etc. for each target.
Both CMake and vcpkg technically do support cross-compilation to some extent, but I think it rarely works without some hacking around in practice, in contrast to something like Go.
If cross-compiling is a priority, the Zig toolchain offers a solution for C/C++ projects that includes both effortless cross-compiling as well as an easy to use build command. It is probably the closest to solving every (toolchain) problem C++ has, at least in theory. However, I think it doesn't really offer much for C/C++ dependencies yet. There were plans to integrate vcpkg for this I think, but I don't know where they went.
If Zig integrates vcpkg deeply, I think it would become the obvious choice for modern C++ projects.
I get that by not having a "standard" solution, C++ remains somewhat of a nightmare for people to get started in, and I've generally been doing very little C++ lately because of this. However I've found that there is actually a reasonable happy path in modern C++ development, and I'd definitely recommend beginners to go down that path if they want to use C++.
> Using vcpkg [...] When you configure your CMake build, your dependencies will be fetched and built appropriately for your targets, and then CMake's find package mechanism can be used just as it is used for external dependencies.
Yes! I believe this is powerful: if CMake is used properly, it does not have to know where the dependencies come from, it will just "find" them. So they could be installed on the system, or fetched by a package manager like vcpkg or conan, or just built and installed manually somewhere.
> Cross-compiling with C++ is hard.
Just wanted to mention the dockcross project here. I find it very useful (you just build in a docker container that has the toolchain setup for cross-compilation) and it "just works".
You also don't do "rustc build". Cargo is a build system too.
The whole point of pkg-config is to tell the compiler where those packages are.
I mean yeah, that's the point of having a tool like that. It's fine that the compiler doesn't know that, because its job is turning source into executables, not being the OS glue.
I'm not sure "having a linker" is a weakness? What are talking about?
It is true that you need to use the package manager to install the dependencies. This is more effort than having a package manager download them for you automatically, but on the other hand you don't end up in a situation where you need virtual environments for every application because they've all downloaded slightly different versions of the same packages. It's a bit of a philosophical argument as to what is the better solution.
The argument that it is too hard for students seems a bit overblown. The instructions for getting this up and running are:
1. apt install build-essential
2. extract the example files (Makefile and c file), cd into the directory
3. type "make"
4. run your program with ./programname
I'd argue that is fewer steps than setting up almost any IDE. The Makefile is 6 lines and is easy to adapt to any similar size project. The only major weakness is headers, in which case you can do something like:
If you change any header it will trigger a full system rebuild, but on C projects this is fine for a long time. It's just annoying that you have to create a new entry for every c file you add to the project instead of being able to tell make to add that to every object automatically. I suspect there is a very arcane way to do this, but I try to keep it as simple as possible.
I'm not your parent, but the overall point of this kind of thing is that all of these individual steps are more annoying and error-prone than one command that just takes care of it. `cargo build` is all you need to build the vast majority of Rust projects. No need to edit the Makefile for those headers, or remember which commands you need to install the various dependencies, and name them individually, figuring out which name maps to your distro's naming scheme, etc. It's not just "one command vs five" it's "one command for every project vs five commands that differ slightly per project and per platform". `make` can come close to this, and it's why people love `./configure; make`, and there's no inherent reason why this couldn't paper over some more differences to make it near universal, but that still only gets you Unix platforms.
> but on the other hand you don't end up in a situation where you need virtual environments for every application because they've all downloaded slightly different versions of the same packages.
The real downside here is that if you need two different programs with two different versions of packages, you're stuck. This is often mitigated by things like foo vs foo2, but I have been in a situation where two projects both rely on different versions of foo2, and cannot be unified. The per-project dependency strategy handles this with ease, the global strategy cannot.
> There are a lot of problems, but having to carefully construct the build environment is a minor one time hassle.
I've observed the existence in larger projects of "build engineers" whose sole job is to keep the project building on a regular cadence. These jobs predominantly seem to exist in C++ land.
> These jobs predominantly seem to exist in C++ land.
You wish.
These jobs exist for companies with large monorepos in other languages too and/or when you have many projects.
Plenty of stuff to handle in big companies (directory ownership, Jenkins setup, in-company dependency management and release versioning, developer experience in genernal, etc.)
Most of what I have seen came from technical debt aquired over decades. With some of the build engineers hired to "manage" that themselves not being treated as programmers and just adding on top of the mess with "fixes" that are never reviewed or even checked in. Had a fun time once after we reinstalled the build server and found out that the last build engineer created a local folder to store various dependencies instead of of using vcpkg to fetch everything as we had mandated for several years by then.
You have a team of 20 engineers on a project you want to maintain velocity on. With that many coooks, you have patches on top of patches of your build system where everyone does the bare minimum to meet the near term task only and it devolves into a mess no one wants to touch over enough time.
Your choice: do you have the most senior engineers spend time sporadically maintaining the build system, perhaps declaring fires to try to pay off tech debt, or hire someone full time, perhaps cheaper and with better expertise, dedicated to the task instead?
CI is an orthogonal problem but that too requires maintenance - do you maintain it ad-hoc or make it the official responsibility for someone to keep maintained and flexible for the team’s needs?
I think you think I’m saying the task is keeping the build green whereas I’m saying someone has to keep the system that’s keeping the build green going and functional.
> You have a team of 20 engineers on a project you want to maintain velocity on. With that many coooks, you have patches on top of patches of your build system ...
The scenario you are describing does not make sense for the commonly accepted industry definition of "build system." It would make sense if, instead, the description was "application", "product", or "system."
Many software engineers use and interpret the phrase "build system" to be something akin to make[0] or similar solution used to produce executable artifacts from source code assets.
I can only relate to you what I’ve observed. Engineers were hired to rewrite the Make-based system into Bazel and maintain it for single executable distributed to the edge. I’ve also observed this for embedded applications and other stuff.
I’m not sure why you’re dismissing it as something else without knowing any of the details or presuming I don’t know what I’m talking about.
I dont know if you're jokingor just naïve, but cmake and the like are massive time sinks if you want anything beyond "here's a few source files, make me an application"
Wow, I don't understand what anything means in those memes. And I'm so glad I don't!
It seems to me that the people/committees who built C++ just spent decades inventing new and creative ways for developers to shoot themselves in the foot. Like, why does the language need to offer a hundred different ways to accomplish each trivial task (and 98 of them are bad)?
I don't plan on ever using C++ again, but FWIW in Rust there are lots of cases where you specify `move` and stuff doesn't get moved, or don't specify it and it does, and it's also a specific feeling.
but it's true that when a user first sees `std::move(x)` with no argument saying _where_ to move it to, they either get frustrated or understand they have to get philosophical :-)
> in C++, you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
This... doesn't really hold water. You have to learn about what the insane move semantics are (and the syntax for move ctors/operators) to do fairly basic things with the language. Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood. Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
> You have to learn about what the insane move semantics are (and the syntax for move ctors/operators) to do fairly basic things with the language
That is simply not true. You can write a lot of C++ code without even touching move stuff. Hell, we've been fine without move semantics for the last 30 years :P
> Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood
Partially true. operator*() is used through the standard library a lot, because it nicely wraps pointer semantics. Still, you don't have to know about implementation details, as they depend on how the standard library implements the underlying containers.
AFAIK operator<<() is mainly (ab)used by streams. And you can freely skip that part; many C++ developers find them unnecessarily slow and complex.
> Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
As long as you keep things simple, errors are going to be simple. The problem with "modern C++" is that people overuse these new features without fully comprehending their pros and cons, simply because they look cool.
> Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood
you don't need to understand what an overloaded operator is doing any more than you have to understand the implementation of every function you call, recursively
I mean, you kinda do. Otherwise you won’t understand why bit-shifting to std::cout prints something, which is pretty much day 1 of C++ hello world introduction (yes, I know there are introductions that don’t use that silly syntax sugar. They’re rare, like it or not.)
Like, sure, you don’t have to understand cout’s implementation of operator <<, but you have to know a) that it’s overloadable in the first place, b) that overloads can be arbitrary functions on arbitrary types (surprising if coming from languages that support more limited operator overloading), and c) probably how to google/go-to-documentation on operators for a type to see what bit-shifting a string into stdio does.
Exactly, the "you dont have" part lasts until the first error message and then it's 10 feet of stl template instantiation error avalanche. And stl implementation is really advanced c++.
Also, a lot of "modern" code cannot be debugged at all (because putting in a print statement breaks constexprness) and your only recourse is reading code.
This is in big part also because of committee, that prefers hundred-line template monster "can_this_call_that_v" to a language feature, probably thinking that by not including something in language standard and offloading it to library they do good job.
I'm pretty sure the post you are responding to is not seriously suggesting using floating point multiplication and exponentiation as a performance optimization ;)
(oh and I think you can write a whole book on the different ways to initialize variables in C++).
The result is you might be able to use C++ to write something new, and stick to a style that's readable... to you! But it might not make everyone else who "knows C++" instantly able to work on your code.
Overloaded operators are great. But overloaded operators that do something entirely different than their intended purpose is bad. So a + operator that does an add in your custom numeric data type is good. But using << for output is bad.
The first programming language that used overloaded operators I really got into was Scala, and I still love it. I love that instead of Java's x.add(y); I can overload + so that it calls .add when between two objects of type a. It of course has to be used responsibly, but it makes a lot of code really more readable.
> The first programming language that used overloaded operators I really got into was Scala, and I still love it. I love that instead of Java's x.add(y); I can overload + so that it calls .add when between two objects of type a. It of course has to be used responsibly, but it makes a lot of code really more readable.
The problem, for me, with overloaded operators in something like C++ is that it frequently feels like an afterthought.
Doing "overloaded operators" in Lisp (CLOS + MOP) has much better "vibes" to me than doing overloaded operators in C++ or Scala.
Those languages need a dedicated operator because they are loosely typed which would make it ambiguous like + in JavaScript.
But C++ doesn't have that problem. Sure, a separate operator would have been cleaner (but | is already used for bitwise or) but I have never seen any bug that resulted from it and have never felt it to be an issue when writing code myself.
Though then you can have code like "hello" + "world" that doesn't compile and "hello" + 10 that will do something completely different. In some situations you could actually end up with that by gradual modification of the original code..
Tangential, but Lua is the most write-only language I have had pleasure working with. The implementation and language design are 12 out of 10, top class. But once you need to read someone else's code, and they use overloads liberally to implement MCP and OODB and stuff, all in one codebase, and you have no idea if "." will index table, launch Voyager, or dump core, because everything is dispatched at runtime, it's panic followed by ennui.
It works with arrays (both fixed size, and dynamically sized) and arrays; between arrays and elements; but not between two scalar types that don't overload opBinary!"~", so no it won't work between two `ushorts` to produce a `uint`
Python managed to totally confuse this. "+" for built-in arrays is concatenation. "+" for NumPy arrays is elementwise addition. Some functions accept both types. That can end badly.
Regrettably, “intended purpose” is highly subjective.
Sure, << for stream output is pretty unintuitive and silly. But what about pipes for function chaining/composition (many languages overload thus), or overriding call to do e.g. HTML element wrapping, or overriding * for matrices multiplied by simple ints/vectors?
Reasonable minds can and do differ about where the line is in many of those cases. And because of that variability of interpretation, we get extremely hard to understand code. As much as I have seen value in overloading at times, I’m forced to agree that it should probably not exist entirely.
The thing is code without operator overloading is also hard to understand because you might have this math thing (BigIntegers, Matrices) and you can't use standard notation.
Let's say I have matrices, and I've overloaded * for multiplying a matrix by a matrix, and a matrix by a vector, and a matrix by a number. And now I write
a = b * c;
If I'm trying to understand this as one of a series of steps of linear algebra that I'm trying to make sure are right, that is far more comprehensible than
a = mat_mult(b,c);
because it uses math notation, and that's closer to the way linear algebra is written.
But if I take the exact same line and try to understand exactly which functions get called, because I'm worried about numerical stability or performance or something, then the first approach hides the details and the second one is easier to understand.
This is always the way it goes with abstraction. Abstraction hides the details, so we can think at a higher level. And that's good, when you're trying to think at the higher level. When you're not, then abstraction just hides what you're really trying to understand.
If you've done any university-level maths you should have seen the + sign used in many other contexts than adding numbers, why should that be a problem when programming?
There is usually another operator used for concatenation in math though: | or || or ⊕
The first two are already used for bitwise and logical or and the third isn't available in ASCII so I still think overloading + was a reasonable choice and doesn't cause any actual problems IME.
So, what programmers wanted (yes, already before C++ got this) was what are called "destructive move semantics".
These assignment semantics work how real life works. If I give you this Rubik's Cube now you have the Rubik's Cube and I do not have it any more. This unlocks important optimisations for non-trivial objects which have associated resources, if I can give you a Rubik's Cube then we don't need to clone mine, give you the clone and then destroy my original which is potentially much more work.
C++ 98 didn't have such semantics, and it had this property called RAII which means when a local variable leaves scope we destroy any values in that variable. So if I have a block of code which makes a local Rubik's Cube and then the block ends the Rubik's Cube is destroyed, I wrote no code to do that it just happens.
Thus for compatibility, C++ got this terrible "C++ move" where when I give you a Rubik's Cube, I also make a new hollow Rubik's Cube which exists just to say "I'm not really a Rubik's Cube, sorry, that's gone" and this way, when the local variable goes out of scope the destruction code says "Oh, it's not really a Rubik's Cube, no need to do more work".
For trivial objects, moving is not an improvement, the CPU can do less work if we just copy the object, and it may be easier to write code which doesn't act as though they were moved when in fact they were not - this is obviously true for say an integer, and hopefully you can see it will work out better for say an IPv6 address, but it's often better for even larger objects in some cases. Rust has a Copy marker trait to say "No, we don't need to move this type".
In particular, move is important if there is something like a unique_ptr. To make a copy, I have to make a deep copy of whatever the unique_ptr points to, which could be very expensive. To do a move, I just copy the bits of the unique_ptr, but now the original object can't be the one that owns what's pointed to.
Sure. Notice std::unique_ptr<T> is roughly equivalent to Rust's Option<Box<T>>
The C++ "move" is basically Rust's core::mem::take - we don't just move the T from inside our box, we have to also replace it, in this case with the default, None, and in C++ our std::unique_ptr now has no object inside it.
But while Rust can carefully move things which don't have a default, C++ has to have some "hollow" moved-from state because it doesn't have destructive move.
I personally think that operator overloading itself is justified, but the pervasive scope of operator overloading is bad. To me the best solution is from OCaml: all operators are regular functions (`a + b` is `(+) a b`) and default bindings can't be changed but you can import them locally, like `let (+) = my_add in ...`. OCaml also comes with a great convenience syntax where `MyOps.(a + b * c)` is `MyOps.(+) a (MyOps.(*) b c)` (assuming that MyOps defines both `(+)` and `(*)`), which scopes operator overloading in a clear and still convenient way.
A benefit of operator overloads is that you can design drop-in replacements for primitive types to which those operators apply but with stronger safety guarantees e.g. fully defining their behavior instead of leaving it up to the compiler.
This wasn't possible when they were added to the language and wasn't really transparent until C++17 or so but it has grown to be a useful safety feature.
However C++ offers several overloads where you don't get to provide a drop-in replacement.
Take the short-circuiting boolean operators || and &&. You can overload these in C++ but you shouldn't because the overloaded versions silently lose short-circuiting. Bjarne just didn't have a nice way to write that so, it's not provided.
So while the expression `foo(a) && bar(b)` won't execute function bar [when foo is "falsy"] if these functions just return an ordinary type which doesn't have the overloading, if they do enable overloading both functions are always executed then the results given to the overloading function.
Edited:: Numerous tweaks because apparently I can't boolean today.
This is a failed attempt at muddying the waters. You don't know what move semantics is? You go learn what they are. Best/simplest way is to just disable your copy ctor.
You need to know the operator overload semantics for a particular use case? It is not exactly hidden lore, there are even man pages (libstdc++-doc, man 3 std::ostream) or just use std::println.
You are stuck instantiating std::vector? Then you will be stuck in any language anyway.
> in C++, you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
Only if you have full control on what others are writing. In reality, you're going to read a lot, lots of "clever" codes. And I'm saying as a person who have written a good amount of template meta programming codes. Even for me, some codes take hours to understand and I was usually able to cut 90% of its code after that.
I’m probably guilty of gratuitous template stuff, because it adds fun to the otherwise boring code I spend a lot of time on. But I feel like the 90% cutdowns are when someone used copy-paste instead of templates, overloads, and inheritance. I don’t think both problems happen at the same time, though, or maybe I misunderstood.
When people are obsessed with over-abstraction and over-generalization, you can often see FizzBuzz Enterprise in action where a single switch statement is more than enough.
I see that more with inheritance including pure virtual interface for things that only have one implementation and actor patterns that make the execution flow unnecessarily hard to follow. Basically, Java written in C++.
Most templates are much easier to read in comparison.
Agreed that template itself is not the problem but people are. It is still arguable that template is much more of fun to write clever codes because of its meta programming capability as well as its runtime performance advantages.
With a pure virtual interface you can at least track down the execution path as long as you can spot down where the object is created, but with template black magics? Good luck. Static dispatch with all those type traits and SFINAE practically makes it impossible to know before running it. Concept was supposed to solve this but this won't automatically solve all those problems lurking in legacy codes.
Being able to cut 90% of code sounds like someone was getting paid by LoC (which is also a practice from a time when C++ was considered a "modern" language).
Yes, but not always. For example, what can now be written as single "requires requires" or a short chain of "else if constexpr" statements, used to be sprawling, incomprehensible template class hierarchy, before that feature got added.
> Countless companies have cited how they improved their security or the amount of reported bugs or memory leaks by simply rewriting their C++ codebases in Rust. Now is that because of Rust? I’d argue in some small part, yes.
Just delete this. Even an hour's familiarity with Rust will give you a visceral understanding that "Rewrites of C++ codebases to Rust always yield more memory-safe results than before" is absolutely not because "any rewrite of an existing codebase is going to yield better results". If you don't have that, skip it, because it weakens the whole piece.
C++ will always stay relevant. Software has eaten the world. That transition is almost complete now. The languages that were around when it happened will stay deeply embedded in our fundamental tech stacks for another couple decades at least, if not centuries. And C and C++ are the lion's share of that.
COBOL sticks around 66 years after its first release. Fortran is 68 years old and is still enormously relevant. Much, much more software was written in newer languages and has become so complex that replacements have become practically impossible (Fuchsia hasn't replaces Linux in Google products, wayland isn't ready to replace X11 etc)
It seems likely that C++ will end up in a similar place as COBOL or Fortran, but I don't see that as a good future for a language.
These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.
C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.
C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's going to be eclipsed by other languages and relegated to shrinking niches.
As long as people write software (no pun intended), software will follow trends. For instance, in many scientific ecosystems, Matlab was successfully replaced by Scipy. Which happens to get replaced by Julia. Things don't neccessarily have to stay the same. Interestingly, such a generational trend currently happens with Rust, despite there has been numerous other popular languages such as D or Zig which didn't have the same traction.
Sure, there are still Fortran codes. But I can hardly imagine that Fortran still plays a big role in another 68 years from now on.
Matlab/Scipy/Julia are totally different since those function more like user interfaces, they are directly user facing. You're not building an app with matlab (though you might be with scipy and julia, it's not the primary use case), you're working with data. C++ on the other hand underpins a lot of key infrastructure.
I am not saying that these languages will stay around forever, mind you. But we have solidified the tech stacks involving these languages by making them ridiculously complex. Replacement of a programming language in one of the core components can only come through gradual and glacially slow evolution at this point. "Rewrite it in XYZ" as a clean slate approach on a big scale is simply a pipe dream.
Re Matlab: I still see it thriving in the industry, for better or worse. Many engineers just seem to love it. I haven't seen many users of Julia yet. Where do you see those? I think that Julia deserves a fair chance, but it just doesn't have a presence in the fields I work in.
You’re thinking of software that is being written today. GP is talking about software we use every day in every device on the planet that hasn’t changed since it was written 30+ years ago.
> For instance, in many scientific ecosystems, Matlab was successfully replaced by Scipy. Which happens to get replaced by Julia
If by scientific ecosystems you mean people making prototypes for papers, then yes. But in commercial, industrial setting there is still no alternative for many of Matlab toolboxes, and as for Julia, as cool as it is, you need to be careful to distinguish between real usage and vetted marketing materials created by JuliaSim.
Especially the 'backend' languages that do all the heavy lifting for domain-specific software. Just in my vertical of choice, financial software, there are literally billions of lines of Java and .NET code powering critical systems. The code is the documentation, and there's little appetite to rewrite all that at enormous cost and risk.
Perhaps AI will get reliable enough to pour through these double-digit million LOC codebases and convert them flawlessly, but that looks like it's decades off at this point.
I'm not so sure. The user experience has really crystallized over the years. It's not hard to imagine a smart tv or something like it just reimplementing that experience in hardware in the not too distant future (say 2055 if transistor and memory scaling stall in 2035).
We live in a special time when general processing efficiency has always been increasing. The future is full of domain specific hardware (enabling the continued use of COBOL code written for slower mainframes). Maybe this will be a half measure like cuda or your c++ will just be a thin wrapper around a makeYoutube() ASIC
Of course if there is a breakthrough in general purpose computing or a new killer app it will wipe out all those products which is why they don't just do it now
"you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language."
You could also inherit a massive codebase old enough to need a prostate exam that was written by many people who wanted to prove just how much of the language spec they could use.
If selecting a job mostly under the Veil of Ignorance, I'll take a large legacy C project over C++ any day.
I've been programming in c++ for 25 years (15 professionally) and I really don't see any reason to keep using it apart from dealing with legacy codebases.
Most arguments in the article boil down to "c++ has the reputation of X, which is partly true, but you can avoid problems with discipline". Amusingly, this also applies to assembly. This is _exactly_ why I don't want to code in c++ anymore: I don't want the constant cognitive load to remember not to shoot myself in the foot, and I don't want to spend time debugging silly issues when I screw up. I don't want the outdated tooling, compilation model and such.
Incidentally, I've also been coding in Rust for 5 years or so, and I'm always amazed that code that compiles actually works as intended and I can spend time on things that matter.
Going back to c++ makes me feel like a caveman coder, every single time.
Exactly. I’ve been using „Stupid Rust“ for years now where I just liberally clone if I can’t have my way. It‘s not bitten me yet and once the code compiles, it works.
> You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
Maybe you can do that. But you are probably working in a team. And inevitably someone else in your team thinks that operator overloading and template metaprogramming are beautiful things, and you have to work with their code. I speak from experience.
This is true and I will concede this point. Appreciate your feedback!
However if I may raise my counter point I like to have a rule that C++ should be written mostly as if you were writing C as much as possible until you need some of it's additional features and complexities.
Problem is when somebody on the team does not share this view though, that much is true :)
Counter-counter point: if you're going to actively avoid using the majority of a language's features and for the most part write code in it as if it were a different language, doesn't that suggest the language is deeply flawed?
(Note: I'm not saying it is deeply flawed, just that this particular way of using it suggests so).
I wouldn't necessarily put it like that no. I'd say all languages have features that fit certain situations but should be avoided in other situations.
It's like a well equiped workshop, just because you have access to a chainsaw but do not need to use it to build a table does not mean it's a bad workshop.
C is very barebones, languages like C++. C#, Rust and so on are not. Just because you don't need all of it's features does not make those languages inherently bad.
Great question or in this case counter-counter point though.
> However if I may raise my counter point I like to have a rule that C++ should be written mostly as if you were writing C as much as possible until you need some of it's additional features and complexities.
How do you define “need” for extra features? C and C++ can fundamentally both do the same thing so if you’re going to write C style C++, why not just write C and avoid all of C++’s foot guns?
RAII. It's the major C++ feature I miss in C, and the one that fixes most memory leak problems in C. Also, std::vector, which solves the remaining memory leak (and most bounds problems) in C. And std::string, which solves the remaining memory leak problems.
A pet peeve of mine is when people claim C++ is a superset of C. It really isn't. There's a lot of little nuanced differences that can bite you.
Ignore the fact that having more keywords in C++ precludes the legality of some C code being C++. (`int class;`)
void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
C++20 does have C11's designated initialization now, which helps in some cases, but that was a pain for a long time.
enums and conversion between integers is very strict in C++.
`char * message = "Hello"` is valid C but not C++ (since you cannot mutate the pointed to string, it must be `const` in C++)
C99 introduced variadic macros that didn't become standard C++ until 2011.
C doesn't allow for empty structs. You can do it in C++, but sizeof(EmptyStruct) is 1. And if C lets you get away with it in some compilers, I'll bet it's 0.
Anyway, all of these things and likely more can ruin your party if you think you're going to compile C code with a C++ compiler.
Also don't forget if you want code to be C callable in C++ you have to use `extern "C"` wrappers.
> It really isn't. There's a lot of little nuanced differences that can bite you.
These are mostly inconsequential when using code other people write. It is trivial to mix C and C++ object files, and where the differences (in headers) do matter, they can be ifdefed away.
> void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
This makes sense because void* -> T* is a downcast. I find the C behavior worse.
> enums and conversion between integers is very strict in C++.
As it should, but unscoped enums are promoted to integers the same way they are in C
> `char * message = "Hello"` is valid C but not C++
Code smell anyway, you can and should use char[] in both languages
You didn't mention the difference in inline semantics which IMO has more impact than what you cited
And I think you're downplaying many of the ones I mentioned, but I think this level of "importance" is subjective to the task at hand and one's level of frustrations.
Not for temporaries initialized from a string constant. That would create a new array on the stack which is rarely what you want.
And for globals this would preclude the the data backing your string from being shared with other instances of the same string (suffix) unless you use non-standard compiler options, which is again undesirable.
In modern C++ you probably want to convert to a string_view asap (ideally using the sv literal suffix) but that has problems with C interoperability.
Right, I've checked that char foo = "bar"; is indeed the same as the const char variant (both reference a string literal in rodata), which IMO makes it worse.
About string literals, the C23 standard states:
It is unspecified whether these arrays are distinct provided their elements have the appropriate values. If the program attempts to modify such an array, the behavior is undefined.
therefore `char foo = "bar";` is very bad practice (compared to using const char).
I assumed you wanted a mutable array of char initializable from a string literal, which is provided by std::string and char[] (depending on usecase).
> In modern C++ you probably want to convert to a string_view asap (ideally using the sv literal suffix)
The two will also continue to diverge over time, after all, C2y should have the defer feature, which C++ will likely never add. Even if we used polyfills to let C++ compilers support it, the performance characteristics could be quite different; if we compare a polyfill (as suggested in either N3488 or N3434) to a defer feature, C++ would be in for a nasty shock as the "zero cost abstractions" language, compared to how GCC does the trivial re-ordering and inlining even at -O1, as quickly tested here: https://godbolt.org/z/qoh861Gch
I used the [[gnu::cleanup]] attribute macro (as in N3434) since it was simple and worked with the current default GCC on CE, but based on TS 25755 the implementation of defer and its optimisation should be almost trivial, and some compilers have already added it. Oh, and the polyfills don't support the braceless `defer free(p);` syntax for simple defer statements, so there goes the full compatibility story...
While there are existing areas where C diverged, as other features such as case ranges (N3370, and maybe N3601) are added that C++ does not have parity with, C++ will continue to drift further away from the "superset of C" claim some of the 'adherents' have clung to for so long. Of course, C has adopted features and syntax from C++ (C2y finally getting if-declarations via N3356 comes to mind), and some features are still likely to get C++ versions (labelled breaks come to mind, via N3355, and maybe N3474 or N3377, with C++ following via P3568), so the (in)compatibility story is simply going to continue getting more nuanced and complicated over time, and we should probably get this illusion of compatibility out of our collective culture sooner rather than later.
> A pet peeve of mine is when people claim C++ is a superset of C. It really isn't. There's a lot of little nuanced differences that can bite you.
> Ignore the fact that having more keywords in C++ precludes the legality of some C code being C++. (`int class;`)
Your very first example reverses the definitions of superset and subset. "C++ is a superset of C" implies that C++ will have at least as many, if not more, keywords than C.
> void * implicit casting in C just works, but in C++ it must be an explicit cast
In C, casting a `void *` is a code smell, I feel.
Most confusing one is how the meaning of `const` differs between C and C++; I'm pretty certain the C `const` keyword is broken compared to `const` in C++.
Not sure I understand, since they're available in c++ designated initializes are one of the features I use most, to the point of making custom structs to pass the arguments if a type cannot be changed to be an aggregate. It makes a huge positive difference in readability and has helped me solve many subtle bugs ; and not initializing things in order will throw a warning so you catch it immediately in your ide
The problem is that there are a lot of APIs (even system and system-ish ones) that don't want to specify the order of their fields (or outright differ between platforms). Or that can't use a meaningful order due to ABI compatibility, yet the caller wants to pass fields in a meaningful order.
platform APIs like this are likely much less less than 1% of the things I call in my codebases. The few files i have open right now have absolutely no such call.
When it comes to programming, I generally decide my thoughts based on pain-in-my-ass levels. If I constantly have to fiddle with something to get it working, if it's fragile, if it frequently becomes a pain point - then it's not great.
And out of all the tools and architecture I work with, C++ has been some of the least problematic. The STL is well-formed and easy to work with, creating user-defined types is easy, it's fast, and generally it has few issues when deploying. If there's something I need, there's a very high chance a C or C++ library exists to do what I need. Even crossing multiple major compiler versions doesn't seem to break anything, with rare exceptions.
The biggest problem I have with C++ is how easy it is to get very long compile times, and how hard it feels like it is to analyze and fix that on a 'macro' (whole project) level. I waste ungodly amounts of time compiling. I swear I'm going to be on deaths door and see GCC running as my life flashes by.
Some others that have been not-so-nice:
* Python - Slow enough to be a bottleneck semi-frequently, hard to debug especially in a cross-language environment, frequently has library/deployment/initialization problems, and I find it generally hard to read because of the lack of types, significant whitespace, and that I can't easily jump with an IDE to see who owns what data. Also pip is demon spawn. I never want to see another Wheel error until the day I die.
* VSC's IntelliSense - My god IntelliSense is picky. Having to manually specify every goddamn macro, one at a time in two different locations just to get it to stop breaking down is a nightmare. I wish it were more tolerant of having incomplete information, instead of just shutting down completely.
* Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
* CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
CMake is not a great language, but great effort has been put into cleaning up how things should be done. However you can't just upgrade, someone needs to go through the effort of using all that new stuff. In almost all projects the build system is an after thought that developers touch as little as possible to make things work and so it builds cruft constantly.
You can do much better in CMake if you put some effort into cleaning it up - I have little hope anyone will do this though. We have a hard time getting developers to clean up messes in production code and that gets a lot more care and love.
I agree. Unless the project is huge, it's totally possible to use CMake in a maintainable way. It just requires some effort (not so much, but not nothing).
If you are willing to give up incremental compilation, concatenating all C++ files into a single file and compiling that on a single core will often outperform a multi-core compilation. The reason is that the compiler spends most of its time parsing headers and when you concentrate everything into a single file (use the C preprocessor for this), it only needs to parse headers once.
Merely parsing C++ code requires a higher time complexity than parsing C code (linear time parsers cannot be used for C++), which is likely where part of the long compile times originate. I believe the parsing complexity is related to templates (and the headers are full of them), but there might be other parts that also contribute to it. Having to deal with far more abstractions is likely another part.
That said, I have been incrementally rewriting a C++ code base at a health care startup into a subset of C with the goal of replacing the C++ compiler with a C compiler. The closer the codebase comes to being C, the faster it builds.
> Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
You really should not have global data. Modules are the way to go and have been since Fortran90.
> CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
> I never want to see another Wheel error until the day I die.
What exactly do you mean by a "Wheel error"? Show me a reproducer and a proper error message and I'll be happy to help to the best of my ability.
By and large, the reason pip fails to install a package is because doing so requires building non-Python code locally, following instructions included in the package. Only in rare cases are there problems due to dependency conflicts, and these are usually resolved by creating a separate environment for the thing you're trying to install — which you should generally be doing anyway. In the remaining cases where two packages simply can't co-exist, this is fundamentally Python's fault, not the installer's: module imports are cached, and quite a lot of code depends on the singleton nature of modules for correctness, so you really can't safely load up two versions of a dependency in the same process, even if you hacked around the import system (which is absolutely doable!) to enable it.
As for finding significant whitespace (meaning indentation used to indicate code structure; it's not significant in other places) hard to read, I'm genuinely at a loss to understand how. Python has types; what it lacks is manifest typing, and there are many languages like this (including Haskell, whose advocates are famous for explaining how much more "typed" their language is than everyone else's). And Python has a REPL, the -i switch, and a built-in debugger in the standard library, on top of not requiring the user to do the kinds of things that most often need debugging (i.e. memory management). How can it be called hard to debug?
Unfortunately that Wheel situation was far enough back now that I don't have details on hand. I just know it was awful at the time.
As for significant whitespace, the problem is that I'm often dealing with files with several thousand lines of code and heavily nested functions. It's very easy to lose track of scope in that situation. Am I in the inner loop, or this outer loop? Scrolling up and down, up and down to figure out where I am. Feels easier to make mistakes as well.
It works well if everything fits on one screen, it gets harder otherwise, at least for me.
As for types, I'm not claiming it's unique to Python. Just that it makes working with Python harder for me. Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
As for debugging, it's great if you have pure Python. Mix other languages in and suddenly it becomes pain. There's no way to step from another language into Python (or vice-versa), at least not cleanly and consistently. This isn't always true for compiled->compiled. I can step from C++ into Fortran just fine.
Pip has changed a lot in the last few years, and there are many new ecosystem standards, along with greater adoption of existing ones.
> I'm often dealing with files with several thousand lines of code and heavily nested functions.
This is the problem. Also, a proper editor can "fold" blocks for you.
> Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
If you want to use annotations, you can, and have been able to since 3.0. Since 3.5 (see https://peps.python.org/pep-0484/; it's been over a decade now), there's been a standard for understanding annotations as type information, which is recognized by multiple different third-party tools and has been iteratively refined ever since. It just isn't enforced by the language itself.
> Mix other languages in and suddenly it becomes pain.... This isn't always true for compiled->compiled.
Sure, but then you have to understand the assembly that you've stepped into.
>This is the problem. Also, a proper editor can "fold" blocks for you.
I can't fix that. I just work here. I've got to deal with the code I've got to deal with. And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
>Sure, but then you have to understand the assembly that you've stepped into.
Assembly? I haven't touched raw assembly since college.
> And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
How exactly are they more helpful than following the line of the indentation that you're supposed to have as a matter of good style anyway? Do you not have formatting tools? How do you not have a tool that can find the top of a level of indentation, but do have one that can find a paired brace?
>Assembly? I haven't touched raw assembly since college.
How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
> How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
Executables with debug symbols contain the names of the source files it was built from. Your debugger understands the debug symbols, or you can use tools like `addr2line` to find the source file and line number of an instruction in an executable.
Debugger does not need to understand the source language. It's possible to cross language boundaries in just vanilla GDB for example.
>How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
I don't know what IDE GP might be using, but mixed-language debuggers for native code are pretty simple as long as you just want to step over. Adding support for Fortran to, say, Visual Studio wouldn't be a huge undertaking. The mechanism to detect where to put the cursor when you step into a function is essentially the same as for C and C++. Look at the instruction pointer, search the known functions for an address that matches, and jump to the file and line.
> Yes, C++ can be unsafe if you don’t know what you’re doing. But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
I think this is one of the worst (and most often repeated arguments) about C++. C and C++ are inherently unsafe in ways that trip up _all_ developers even the most seasoned ones, even when using ALL the modern C++ features designed to help make C++ somewhat safer.
There are two levels on which this argument feels weak:
* The author is confusing memory safety with other kinds of safety. This is evident from the fact that they say you can write unsafe code in GC languages like python and javascript. unsafe != memory unsafe. Rust only gives you memory safety, it won't magically fix all your bugs.
* The slippery slope trick. I've seen this so often, people say because Rust has unsafe keyword it's the same as c/c++. The reason it's not is because in c/c++ you don't have any idea where to look for undefined behaviour. In Rust at least the code points you to look at the unsafe blocks. The difference is of degree which for practial purposes makes a huge difference.
The problem with C++ vs. unsafety is that there is really no boundary: All code is by default unsafe. You will need to go to great lengths to make it all somewhat safe, and then to even greater lengths to ensure any libraries you use won't undermine your safety.
In Rust, if you have unsafe code, the onus is on you to ensure its soundness at the module level. And yes, that's harder than writing the corresponding C++, but it makes the safe code using that abstraction a lot easier to reason about. And if you don't have unsafe code (which is possible for a lot of problems), you won't need to worry about UB at all. Imagine never needing to keep all the object lifetimes in your head because the compiler does it for you.
Good article overall. There's one part I don't really agree with:
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
This has me scratching my head a bit. In spite of C++ being nearly a superset of C, they are very different languages, and idiomatic C++ doesn't look very much like C. In fact, I'd argue that most of the stuff C++ adds to C allows you to write code that's much cleaner than the equivalent C code, if you use it the intended way. The one big exception I can think of is template metaprogramming, since the template code can be confusing, but if done well, the downstream code can be incredibly clean.
There's an even bigger problem with this recommendation, which is how it relates to something else talked about in the article, namely "safety." I agree with the author that modern C++ can be a safe language, with programmer discipline. C++ offers a very good discipline to avoid resource leaks of all kinds (not just memory leaks), called RAII [1]. The problem here is that C++ code that leverages RAII looks nothing like C.
Stepping back a bit, I feel there may be a more fundamental fallacy in this "C++ is Hard to Read" section in that the author seems to be saying that C++ can be hard to read for people who don't know the language well, and that this is a problem that should be addressed. This could be a little controversial, but in my opinion you shouldn't target your code to the level of programmers who don't know the language well. I think that's ultimately neither good for the code nor good for other programmers. I'm definitely not an expert on all the corners of C++, but I wouldn't avoid features I am familiar with just because other programmers might not be.
Great article. Modern C++ has come a really long way. I think lots of people have no idea about the newer features of the standard library and how much they minimize footguns.
Lambdas, a modern C++ feature, can borrow from the stack and escape the stack. (This led to one of the more memorable bugs I've been part of debugging.) It's hard to take any claims about modern C++ seriously when the WG thought this was an acceptable feature to ship.
Capturing lambdas are no different from handwritten structures with operator() ("functors"), therefore it makes no sense castrating them.
Borrowing from stack is super useful when your lambda also lives in the stack; stack escaping is a problem, but it can be made harder by having templates take Fn& instead of const Fn& or Fn&&; that or just a plain function pointer.
Like, I'm not god's gift to programming or anything, but I'm decently good at it, and I wrote a use-after-return bug due to a lambda reference last week.
It looks like (a) this is a warning, not an error (why? the code is always wrong) and (b) the warning was added in clang 21 which came out this year. I also suspect that it wouldn't be able to detect complex cases that require interprocedural analysis.
The bug I saw happened a few years ago, and convinced me to switch to Rust where it simply cannot happen.
I'm glad, but my problem is with the claim that modern C++ is safer. They added new features that are very easy to misuse.
Meanwhile in Rust you can freely borrow from the stack in closures, and the borrow checker ensures that you'll not screw up. That's what (psychological) safety feels like.
Lambdas are syntactic sugar over functors, and it was possible all along to define a functor that stores a local address and then return it from the scope, thus leaving a dangling pointer. They don't introduce any new places for bugs to creep in, other than confusing programmers who are used to garbage-collected languages. That C++11 is safer than C++98 is still true, as this and other convenience features make it harder to introduce bugs from boilerplate code.
The ergonomics matter a lot. Of course a lambda is equivalent to a functor that stores a local reference, but making errors with lambdas requires disturbingly little friction.
In any case, if you want safety and performance, use Rust.
>making errors with lambdas requires disturbingly little friction
Not any less than other parts of the language. If you capture by reference you need to mind your lifetimes. If you need something more dynamic then capture by copy and use pointers as needed. It unfortunate the developer who introduced that bug you mentioned didn't keep that in mind, but this is not a problem that lambdas introduced; it's been there all along. The exact same thing would've happened if they had stored a reference to a dynamic object in another dynamic object. If the latter lives longer than the former you get a dangling reference.
>In any case, if you want safety and performance, use Rust.
Personally, I prefer performance and stability. I've already had to fix broken dependencies multiple times after a new rustc version was released. Wake me up when the language is done evolving on a monthly basis.
Yeah, it's great that the C++ community starts to take safety in consideration, but one has to admit that safety always comes as the last priority, behind compatibility, convenience, performance and expressiveness.
Its worse. The day I discovered that std::array is explicitly not range/bounds checked by default I really wanted to write some angry letters to the committee members.
Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around? I promptly went into my standard library and reversed that decision because if i'm going to the trouble to use a C++ array class, it better damn well give me a tiny bit of additional protection. The .at() call should have been the version that reverted to C array behavior without the bounds checking.
And its these kinds of decisions repeated over and over. I get its a committee. Some of the decisions won't be the best, but by 2011 everyone had already been complaining about memory safety issues for 15+ years and there wasn't enough politics on the comittee to recognize that a big reason for using C++ over C was the ability of the language to protect some of the sharper edges of C?
>Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around?
Because the point was not to make an array type that's safe by default, but rather to make an array type that behaves like an object, and can be returned, copied, etc. I mean, I agree with you, I think operator[]() should range-check by default, but you're simply misunderstanding the rationale for the class.
Which goes to the GP's point, which is that security and robustness are not on the radar.
And my point in providing a concrete example, where a decision was made to prioritize unsafe behavior in a known problematic area, when they could just as well have made a half dozen other decisions which would have solved a long standing problem rather than just perpetuating it with some new syntactic sugar.
I didn't dispute that, I was simply addressing the point about std::array. The class is not meant to be "arrays, but as good as they could possibly be". It's "arrays, but as first-class objects instead of weird language constructs".
That said, making std::array::operator[]() range-checking would have been worse, because it would have been the only overload that did that. Could they have, in the same version, made all the overloads range-checking? Maybe, I don't know.
Good news! Contracts were approved for c++26 so they should be in compilers by like 2031 and then you can configure arrays and vectors to abort on out-of-bounds errors instead of corrupting your program.
Let no one accuse the committee of being unresponsive.
This is like writing an article entitled "In Defense of Guns", and then belittling the fact it can kill by saying "You always have to track your bullets".[1]
[1] Not me making this up - I started getting into guns and this is what people say.
To me it's as if someone releases a new gun model and people single that gun out and complain that if you shoot someone with it they may die. Like it's a critique of guns as a concept not of that particular one.
In a complete tangent I think that "smart guns" that only let you shoot bullseye targets, animals and designated un-persons are not far off.
I eagerly await the day when they do away with the distinction between ".cpp" and ".hpp" files and the textual substitution nature of "#include" and replace them all with a proper module system.
It's hard enough to get programmers to care enough about how their code affects build times. Modules make it impossible for them to care, and will lead to horrible problems when building large projects.
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
You know, not sure I even agree with the memory leaks part. If you define a memory leak very narrowly as forgetting to free a pointer, this is correct. But in my experience working with many languages including C/C++, forgotten pointers are almost never the problem. You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects or bursty memory allocation patterns. And these occur in all languages.
Nice thing about Rust is not that you cannot write such code, it is you know exactly where you used peaky memory or re-interpreted something as a unsigned integer or replaced your program stack with something else. All of such cases require unsafe blocks in Rust. It is a screaming indicator "here be dragons". It is the do not press this red button unless you intend to.
In C and C++ no such thing exists. It is walking in a minefield. It is worse with C++ because they piled so much stuff, nobody knows on the top of their head how a variable is initialized. The initialization rules are insane: https://accu.org/journals/overload/25/139/brand_2379/
So if you are doing peaky memory stuff with complex partially self-initializing code in C++, there are so many ways of blowing yourself and your entire team up without knowing which bit of code you committed years ago caused it.
> All of such cases require unsafe blocks in Rust.
It's true that Rust makes it much harder to leak memory compared to C and even C++, especially when writing idiomatic Rust -- if nothing else, simply because Rust forces the programmer to think more deeply about memory ownership.
But it's simply not the case that leaking memory in Rust requires unsafe blocks. There's a section in the Rust book explaining this in detail[1] ("memory leaks are memory safe in Rust").
> You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects
I use Rust in a company in a team who made the C++ -> Rust switch for many system services we provide on our embedded devices. I use Rust daily. I am aware that leaking is actually safe.
C++'s design encourages that kind of allocation "leak" though. The article suggests using smart pointers, so let's take an example from there and mix make_shared with weak_ptr. Congrats, you've now extended the lifetime of the allocation to whatever the lifetime of your weak pointer is.
Rc::Weak does the same thing in Rust, but I rarely see anyone use it.
std::weak_ptr is rarely used in C++ too - "I don't care if this thing goes away but still want to keep a reference to it" is just not good design in most cases.
You are correct, it does not affect the lifetime of the pointed object (pointee).
But a shared_ptr manages at least 3 things: control block lifetime, pointee lifetime, and the lifetime of the underlying storage. The weak pointer shares ownership of the control block but not the pointee. As I understand this is because the weak_ptr needs to modify the control block to try and lock the pointer and to do so it must ensure the control block's lifetime has not ended. (It manages the control blocks lifetime by maintaining a weak count in the control block but that is not really why it shares ownership.)
As a bonus trivia, make_shared uses a single allocation for both the control block and the owned object's storage. In this case weak pointers share ownership of the allocation for the pointee in addition to the control block itself. This is viewed as an optimization except in the case where weak pointers may significantly outlive the pointee and you think the "leaked" memory is significant.
It has no effect on the lifetime of the object, but it can affect the lifetime of the allocation. The reason is that weak_ptr needs the control block, which make_shared bundles into the same allocation as the object for optimization reasons.
Quoting cppreference [0]:
If any std::weak_ptr references the control block created by std::make_shared after the lifetime of all shared owners ended, the memory occupied by T persists until all weak owners get destroyed as well, which may be undesirable if sizeof(T) is large.
What's worse in languages like Go, which I love, is that you won't even immediately how to solve this unless you have experience dropping down into doing things you just would have normally done in C or C++.
Even the Go authors themselves on Go's website display a process of debugging memory usage that looks identical to a workflow you would have done in C++. So, like, what's the point? Just use C++.
I really do think Go is nice, but at this point I would relegate it to the workplace where I know I am working with a highly variable team of developers who in almost all cases will have a very poor background in debugging anything meaningful at all.
C++ as a language is truly a mess. But as far as a niche it has a pretty unique offering, even today.
My background is games and I've been heavily in Unreal lately. The language feels modern enough with smart pointers and such. Their standard library equivalent is solid.
The macros still feel very hacky and, ironically, Unreal actually does its own prepass over the source to parse for certain macros.... kind of shows that it's not a good language feature if that's needed. BUT the syntax used fits right into the language, so it feels idiomatic enough.
Templates are as powerful as they are just a mess to read.
Does anything come close to the speed and flexibility of the language? I think the biggest reason C++ sticks around is momentum but beyond that nothing _really_ replaces the messy but performance critical nature of it.
You need something like std::launder in any systems language for certain situations, it isn’t a C++ artifact.
Before C++ added it we relied on undefined behavior that the compilers agreed to interpret in the necessary way if and only if you made the right incantations. I’ve seen bugs in the wild because developers got the incantations wrong. std::launder makes it explicit.
For the broader audience because I see a lot of code that gets this wrong, std::launder does not generate code. It is a compiler barrier that blocks constant folding optimizations of specific in-memory constants at the point of invocation. It tells the compiler that the constant it believes lives at a memory address has been modified by an external process. In a C++ context, these are typically restricted to variables labeled ‘const’.
This mostly only occurs in a way that confuses the compiler if you are doing direct I/O into the process address space. Unless you are a low-level systems developer it is unlikely to affect you.
If you are doing something equivalent to placement new on top of existing objects, the compiler often sees that. If that is your case you can avoid it in most cases. That is not what std::launder is for. It is for an exotic case.
std::launder is a tool for object instances that magically appear where other object instances previously existed but are not visible to the compiler. The typical case is some kind of DMA like direct I/O. The compiler can’t see this at compile time and therefore assumes it can’t happen. std::launder informs the compiler that some things it believes to be constant are no longer true and it needs to update its priors.
Alas none of gcc/clang/msvc(?) have implemented start_lifetime_as, so if you want to create an object in-place and obtain a mutable pointer to it, you're stuck with the:
I feel like C++ is a bunch of long chains of solutions creating problems that require new solutions, that start from claiming that it can do things better than C.
Problem 1: You might fail to initialize an object in memory correctly.
Solution 1: Constructors.
Problem 2: Now you cannot preallocate memory as in SLAB allocation since the constructor does an allocator call.
Solution 2: Placement new
Problem 3: Now the type system has led the compiler to assume your preallocated memory cannot change since you declared it const.
Solution 3: std::launder()
If it is not clear what I mean about placement new and const needing std::lauder(), see this:
C has a very simple solution that avoids this chain. Use structured programming to initialize your objects correctly. You are not going to escape the need to do this with C++, but you are guaranteed to have to consider a great many things in C++ that would not have needed consideration in C since C avoided the slippery slope of syntactic sugar that C++ took.
But the c++ solution is transparent to the user. You can write entire useful programs that will use std:: containers willy-nilly and all propagate their allocators automatically and recursively without you having to lift a finger because all the steps you've mentioned have been turned in a reusable library, once.
That's a false comparison. There's a huge difference between a standard container library, and the combination of a (a) best-in-class byte code interpreter with a (b) caching, optimizing JIT, supported by (c) a best-in-class garbage collector.
I would argue that it's reasonable to say that creating a robust data structure library at the level of the STL shouldn't be that arcane.
I absolutely agree - your chain of reasoning follows as well.
It doesn't seem like it at first, but the often praised constructor/destructor is actually a source of incredible complexity, probably more than virtual.
Problem 1 happens, say, 10% of the time when using a C struct.
Problem 2 happens only when doing SLAB allocations - say, 1% of the time when using a C++ class. (Might be more or less, depending on what problem space you're in.)
Problem 3 happens only if you are also declaring your allocated stuff const - say, maybe 20% of the time?
So, while not perfect, each solution solves most of the problem for most of the people. Complaining about std::launder is complaining that solution 2 wasn't perfect; it's not in any way an argument that solution 1 wasn't massively better than problem 1.
I would really like to see more people who have never written C++ before port a Rust program to C++. In my opinion, one can argue it may be easy to port initially but it is an order of magnitude more complex to maintain.
Whereas the other around, porting a C++ program to Rust without knowing Rust is challenging initially (to understand the borrow checker) but orders of magnitude easier to maintain.
Couple that with easily being about to `cargo add` dependencies and good language server features, and the developer experience in Rust blows C++ out of the water.
I will grant that change is hard for people. But when working on a team, Rust is such a productivity enhancer that should be a no-brainer for anyone considering this decision.
I'm a developer since 30 years. I program C#, Rust, Java, some TS etc. I can probably go to most repositories on github and at least clone and build them. I have failed - repeatedly - to build even small C++ libraries despite reasonable effort. And that's not even _writing any C++_. Just installing the tooling around CMake etc is completely Kafkaesque.
The funniest thing happened when I needed to compile a C file as part of a little Rust project, and it turned out one of the _easiest_ ways I've experienced of compiling a tiny bit of C (on Windows) was to put it inside my Rust crate and have cargo do it via a C compiler crate.
I work on large C++ projects with 1-2 dozen third party C and C++ library dependencies, and they're all built from source (git submodules) as part of one CMake build.
Just look at this: https://pvs-studio.com/en/blog/posts/cpp/1129/ - 11 parts about C++ undefined behavior from people who specialize in finding this stuff. And that’s only the tip of the iceberg.
I use C++ daily, and it’s an overcomplicated language. The really good thing about Rust or Zig is that (mostly) everything is explicit, and that’s a big win in my opinion.
In defense of C++, I can only say that lots of interesting projects in the world are written in it.
> You can write simple and readable code in C++ if you want to. You can also write complex and unreadable code in C++ if you want to. It’s all about personal or team preference.
Problem is, if you’re using C++ for anything serious, like the aforementioned game development, you will almost certainly have to use the existing libraries; so you’re forced to match whatever coding style they chose to use for their codebase. And in the case of Unreal, the advice “stick to the STL” also has to be thrown out since Unreal doesn’t use the STL at all. If you could use vanilla, by-the-books C++ all the time, it’d be fine, but I feel like that’s quite rare in practice.
Stating that you can also write unsafe code in memory safe languages is like saying that you can also die from a car crash while wearing a safety belt. Of course you can, but it is still a much better idea to wear the safety belt rather than not to.
C++ is the third programming language I ever tried to learn, I got bored and gave up on both Python and JavaScript after like a month, I now have 150 active hours of learning (I tracked) in C++, and I love it, somehow I find it mentally more stimulating, not sure why.
4th, after basic, assembly & pascal. For a long, long time; it was my default language for personal projects.
But just keeping track of all the features and the exotic ways they interact is a full time job. There are people who have dedicated entire lives to understanding even a tiny corner of the language, and they still don't manage.
Not worth the effort for me, there are other languages.
Whenever I open one of these sites that asks me to confirm tracking, if it doesn’t have an easy way to cancel or reject, I just leave the page. The banner had hundreds of different companies listing “legitimate” reasons to track, and after turning off around 10, I noticed that I had hundreds to go. Sorry, I hope people enjoy your website. I just cannot see a reason to accept that amount of tracking. I don’t care that much about C++ anyway
None of it makes it through a pi-hole filter. Website is clean and coherent, no popups. What that implies for the attempt to track is slightly unclear to me, but I don't have great faith in the pop up box being honoured even if it loads.
When NIST released its summary judgement against C++ and other languages it deemed memory unsafe, the problem became less technical and more about politics and perception. If you're looking to work within two arms' length of the US Government, you have to consider the "written in C++" label seriously, regardless of how correct the code may be.
Nothing is going to happen for the foreseeable future, at least in the parts of government I tend to work with. It doesn't even come up in discussions of critical high-reliability system. They are still quite happy to buy and use C++, so I expect that is what they will be getting.
The government is still happily commissioning new software projects that use C++. That may change in a few years, and some organizations may already be treating C++ more critically, but so far it's been unimpactful.
ADA is still the law, but, yes, Ada the language was mandated for 5 or 6 years and everyone got waivers for it anyways.
A big difference between the Ada mandate and this current push is that the current effort is not to go to one language, but to a different category of languages (specifically, "memory safe" or ones with stronger guarantees of memory safety). That leaves it much more open than the Ada mandate did. This would be much more palatable for contractors compared to the previous mandate.
I would argue that rewrite in C++ will make it a lot better. Rust does have some nice memory safe features that are nice enough that you should question why someone did a rewrite and stuck with C++, but that C++ rewrite would fix a lot.
Fresh codebases have more bugs than mature codebases. Rewriting does not fix bugs; it is a fresh codebase that may have different bugs but extremely rarely fewer bugs than the codebase most of the bugs have been patched out of. Rewriting it in Rust reduces the bugs because Rust inherently prevents large categories of bugs. Rewriting it in C++ has no magical properties that initially writing it in C++ doesn't, especially if you weren't around for the writing of the original. Maybe if there is some especially persnickety known bug that would require a major rearchitecture and you plan to implement this architecture this time around, but that is not the modal bug, and the article is especially talking about memory safety bugs which are a totally separate kind of thing from that.
A rewrite of your C++0x codebase that's grown from 2009 until now will most definitely fix loads of memory bugs, since C++ has very much evolved in this area since then. The added value of the borrow checker compared with modern C++ is a lot less than compared with legacy C++.
That said, I still think it's a rather weak argument, even if we do accept that the rewrite will do most of the bug removal, since we aren't stupid and move to smart pointers, more stl usage and for each loops. "Most" is not "all".
I think there is significant merit to rewriting a legacy C++ (or C) codebase in very modern C++. I've done it before and it not only greatly reduced the total amount of code but also substantially improved the general safety. Faster code and higher quality. Because both implementations are "C++", there is a much more incremental path and the existing testing more or less just works.
By contrast, my experience with C++ to Rust rewrites is that the inability of Rust to express some useful and common C++ constructs causes the software architecture to diverge to the point where you might as well just be rewriting it from scratch because it is too difficult to track the C++ code.
You left out the full argument (to be clear, I don't agree with the author, but in order to disagree with him you have to quote the full argument):
The author is arguing that the main reason rewriting a C++ codebase in Rust makes it more memory-safe is not because it was done in Rust, but because it benefits from lessons learned and knowledge about the mistakes done during the first iteration. He acknowledges Rust will also play a part, but that it's minor compared to the "lessons learned" factor.
I'm not sure I buy the argument, though. I think rewrites usually introduce new bugs into the codebase, and if it's not the exact same team doing the rewrite, then they may not be familiar with decisions made during the first version. So the second version could have as many flaws, or worse.
The argument could be made that rewriting in general can make a codebase more robust, regardless of the language. But that's not what the article does; it makes it specifically about memory safety:
> That’s how I feel when I see these companies claim that rewriting their C++ codebases in Rust has made them more memory safe. It’s not because of Rust, it’s because they took the time to rethink and redesign...
If they got the program to work at all in Rust, it would be memory-safe. You can't claim that writing in a memory-safe language is a "minor" factor in why you get memory safety. That could never be proven or disproven.
Did you read what they wrote? Their point is that doing a fresh rewrite of old code in any language will often inherently fix some old issues - including memory safety ones.
Because it's a re-write, you already know all the requirements. You know what works and what doesn't. You know what kind of data should be laid out and how to do it.
Because of that, a fresh re-write will often erase bugs (including memory ones) that were present originally.
That claim appears to contradict the second-system effect [0].
The observation is that second implementation of a successful system is often much less successful, overengineered, and bloated, due to programmer overconfidence.
On the other hand, I am unsure of how frequently the second-system effect occurs or the scenarios in which it occurs either. Perhaps it is less of a concern when disciplined developers are simply doing rewrites, rather than feature additions. I don't know.
I won't say the second-system effect doesn't exist, but I wouldn't say it applies every single time either. There's too many variables. Sometimes a rewrite is just a rewrite. Sometimes the level of bloat or feature-creep is tiny. Sometimes the old code was so bad that the rewrite fully offsets any bloat.
The second system effect isn't that a rewrite necessarily has more bugs/problems. The second system effect is that a follow-on project with all of everybody's dreamed-of bells and whistles that everybody in marketing wants is going to have more problems/bugs, and may not even be finishable at all.
I'm not sure what I feel about the article's point on boost. It does contribute a lot to the standard library and does provide some excellent libraries, like boost.Unordered
I'm old enough to recall when boost first came out, and when it matured into a very nice library. What's happened in the last 15 years that boost is no longer something I would want to reach for?
Qt is... fine... as long as you're willing to commit and use only Qt instead of the standard library. It's from before the STL came out, so the two don't mesh together really at all.
In my experience I've had no issues. Occasionally have to use things like toStdString() but otherwise I use a mix of std and qt, and haven't had any problems.
That's basically what I mean. You have to call conversion functions when your interface doesn't match, and your ability to use static polymorphism goes down. If the places where the two interact are few it works fine, but otherwise it's a headache.
I use boost and Qt but completely disagree. Every new version of boost brings extremely useful libraries that will never be in std: boost.pfr was a complete game changer, boost.mp11 ended the metaprogramming framework wars, there's also the recently added support for MQTT, SQL, etc. Boost.Beast is now the standard http and websocket client/server in c++. Boost.json has a simple API and is much more performant than nlohmann. Etc etc.
Don't lump c++ in with c. C++ is a nightmare amalgamation of every bad idea in software development. C is a thing of great beauty with a cheek mole. C++ is a metastatic cancer.
Write disciplined, readable c, use valgrind and similar tools, and reap unequalled performance and maintainability
I believe most C++ gripes are a classic case of PEBKAC.
One of the most common complaints is the lack of a package manager. I think this stems from a fundamental misunderstanding of how the ecosystem works. Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Another perpetual gripe is that C++ is bad because it is overly complex and baroque, usually from C folks like Linus Torvalds[1]. It's pretty ironic, considering the very compiler they use for C (GCC), is written in C++ and not in C.
> Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Okay, but is that actually a good idea? Merely saying that something is idiomatic isn't a counterargument to an allegation that the ecosystem has converged on a bad idiom.
For software that's going to be distributed through that same package manager, yes, sure, that's the right way to handle dependencies. But if you're distributing your app in a format that makes the dependencies self-contained, or not distributing it at all (just running it on your own machines), then I don't see what you gain from letting your operating system decide which versions of your dependencies to use. Also this doesn't work if your distro doesn't happen to package the dependency you need. Seems better to minimize version skew and other problems by having the files that govern what versions of dependencies to use (the manifest and lockfile) checked into source control and versioned in lockstep with the application code.
Also, the GCC codebase didn't start incorporating C++ as an implementation language until eight years after Linus wrote that message.
GCC was originally written in GNU C. Around GCC 4.9, its developers decided to switch to a subset of C++ to use certain features, but if you look at the codebase, you will see that much of it is still GNU C, compiled as GNU C++.
There is nothing you can do in C++ that you cannot do in C due to Turing Completeness. Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD’s sys/tree.h, illumnos’ libuutil or glib for some easy to use balanced binary search trees in C.
> There is nothing you can do in C++ that you cannot do in C due to Turing Completeness.
While this is technically true, a more satisfying rationale is provided by Stroustrup here[0].
> Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD’s sys/tree.h, illumnos’ libuutil or glib for some easy to use balanced binary search trees in C.
Constructs such as sys/tree.h[1] replicate the functionality of C++ classes and templates via the C macro processor. While they are quite useful, asserting that macro-based definitions provide the same type safety as C++ types is simply not true.
As to the whether macro use results in "creating enormous error messages" or not, that depends on the result of the textual substitution. I can assure you that I have seen reams of C compilation error messages due to invalid macro definitions and/or usage.
> find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
It's really not about being hard to grasp. Once you need a different dependency version than the system provides, you can't easily do it. (Apart from manual copies) Even if the library has the right soname version preventing conflicts (which you can do in C, but not really C++ interfaces), you still have multiple versions of headers to deal with. You're losing features by not having a real package manager.
I write C++ daily and I really can't take seriously arguments how C++ is safe if you know what you're doing like come on. Any sufficiently large and complex codebases tend to have bugs and footguns and using tools like memory safe languages limit blast radius considerably.
Smart pointers are neat but they are not a solution for memory safety. Just using standard containers and iterators can lead to lots of footguns, or utils like string_view.
The safety part in this article is incorrect. There's a google doc somewhere where Google did an internal experiment and determined that safety c annot be achieved in C++ without an owning reference (essentially what Rust has).
Am I missing anything in the article about this problem in particular? Owning references are a part of modern C++, which should be covered by the author's arguments.
> Yes, C++ can be made safer; in fact, it can even be made memory safe.
The claim from this document:
> We attempted to represent ownership and borrowing through the C++ type system, however the language does not lend itself to this. Thus memory safety in C++ would need to be achieved through runtime checks.
Maybe we're thinking of different things, but I don't think C++ has owning references, modern or not? There's regular references (&) which are definitely not owning, and owning pointers (unique_ptr and friends), but neither of those quite match Rust's &.
I tried to use C++ on a new project and what finally killed me was the IDE. I wanted to use the newest language version, with modules. Visual Studio's autocompletion and jump-to-definition fell apart with modules. This was demoralizing, because I was under the impression that Visual Studio was the best C++ IDE, and if it can't handle language features 4 years after their release, what does that mean? CLion or Emacs+clangd were also a pain. The editing/IDE experience has been much nicer in Rust.
> you can write perfectly fine code without ever needing to worry about the more complex features of the language
Not really because of undefined behaviour. You must be aware of and vigilant about the complexities of C++ because the compiler will not tell you when you get it wrong.
I would argue that Rust is at least in the same complexity league as C++. But it doesn't matter because you don't need to remember that complexity to write code that works properly (almost all of the time anyway, there are some footguns in async Rust but it's nothing on C++).
> Now is [improved safety in Rust rewrites] because of Rust? I’d argue in some small part, yes. However, I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.
A factor, sure. The biggest? Doubtful. It isn't only Rust's safety that helps here, it's its excellent type system.
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
Somehow managed to fit two fallacies in one sentence!
1. The fallacy of the grey - no language is perfect therefore they are all the same.
2. "I don't make mistakes."
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Not true. As I said already Rust's very strong type system helps to make applications less buggy even ignoring memory safety bugs.
> Yes, C++ can be made safer; in fact, it can even be made memory safe. There are a number of libraries and tools available that can help make C++ code safer, such as smart pointers, static analysis tools, and memory sanitizers
lol
> Avoid boost like the plague.
Cool, so the ecosystem isn't confusing but you have to avoid one of the most popular libraries. And Boost is fine anyway. It has lots of quite high quality libraries, even if they do love templates too much.
> Unless you are writing a large and complex application that requires the specific features provided by Boost, you are better off using other libraries that are more modern and easier to use.
Uhuh what would you recommend instead of Boost ICL?
I guess it's a valiant attempt but this is basically "in defense of penny farthings" when the safety bicycle was invented.
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Even if we take this claim at face value, isn’t that great?
Memory safety is a HUGE source of bugs and security issues. So the author is hand-waving away a really really good reason to use Rust (or other memory safe by default language).
Overall I agree this seems a lot like “I like C++and I’m good at it so it’s fine” with justifications created from there.
I think this is a case of two distinct populations being inappropriately averaged.
There are many high-level C++ applications that would probably be best implemented in a modern GC language. We could skip the systems language discussion entirely because it is weird that we are using one.
There are also low-level applications like high-performance database kernels where the memory management models are so different that conventional memory safety assumptions don’t apply. Also, their performance is incredibly tightly coupled to the precision of their safety models. It is no accident that these have proven to be memory safe in practice; they would not be usable if they weren’t. A lot of new C++ usage is in these areas.
Rust to me slots in as a way to materially improve performance for applications that might otherwise be well-served by Java.
Database kernels have some of the strictest resource behavior constraints of all software. Every one I have worked on in vaguely recent memory has managed memory. There is no dynamic allocation from the OS. Many invariants important to databases rely on strict control of resource behavior. An enormous amount of optimization is dependent on this, so performance-engineered systems generally don’t have issues with memory safety.
Modern database kernels are memory-bandwidth bound. Micro-managing the memory is a core mechanic as a consequence. It is difficult to micro-manage memory with extreme efficiency if it isn’t implicitly safe. Companies routinely run formal model checkers like TLA+ on these implementations. It isn’t a rando spaffing C++ code.
I’ve used PostgreSQL a lot but no one thinks of it as highly optimized.
This is true. But it has some weird gaps that make it difficult to express fundamental things in the low-level systems world without using a lot of “unsafe”. Or you can do it safely and sacrifice a lot of performance. I am a fan of formal verification and use it quite a lot but Rust is far more restrictive than formal verification requires.
Rust is a systems language but it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models, among many other well-known quirks as a systems language. Other verifiable safety models exist that don’t have these issues. C++, for better or worse, can deal with this stuff in a straightforward way.
You are allowed to use a lot of `unsafe` if you really need to. How much `unsafe` do you use in C++?
> it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models,
Sure, it means it can't prove memory safety. But that just takes you back to parity with C++. It feels bad in Rust because normally you can do way better than that, but this isn't an argument for C++.
> Yes, C++ can be unsafe if you don’t know what you’re doing
I feel like I always hear this argument for continuing to use C++.
I, on the other hand, want a language that doesn't make me feel like I'm walking a tightrope with every line of code I write. Not sure why people can't just admit the humans are not robots and will write incorrect code.
The article says "I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.".
Yeah, sorry, but no, ask some long-term developers about how this often goes.
It depends on the codebase. If the code base deserves to be a case study in how not to do programming, then a rewrite will definitely yield better results.
I once encountered this situation with C# code written by an undergraduate, rewrote it from scratch in C++ and got a better result. In hindsight, the result would have been even better in C since I spent about 80% of my time fighting with C++ to try to use every language feature possible. I had just graduated from college and my code whole better, did a number of things wrong too (although far fewer to my credit). I look back at it in hindsight and think less is more when it comes to language features.
I actually am currently maintaining that codebase at a health care startup (I left shortly after it was founded and rejoined not that long ago). I am incrementally rewriting it to use a C subset of C++ whenever I need to make a change to it. At some point, I expect to compile it as C and put C++ behind me.
Data structures like maps and vectors from the standard library are still incredibly useful and make a fantastic addition to C if your focus relies on POD types, though if real time performance with heap cohesion is a problem then you’re right to go pure C
I've been a software developer for nearly 2 decades at this point, contributed to several rewrites and oversaw several rewrites of legacy software.
From my experience I can assure you that rewriting a legacy codebase to modern C++ will yield a better and safer codebase overall.
There are multiple factors that contribute to this, such one of which is what I reffer to as "lessons learnt" if you have a stable team of developers maintaining a legacy codebase they will know where the problematic areas are and will be able to avoid re-creating them in a rewrite.
An additonal factor to consider is that a lot of legacy C++ codebases can not be upgraded to use modern language features like smart pointers. The value smart pointers provide in a full rewrite can not be overstated.
Then there's also the factor that is a bit anecdotal which is I find that there are less C++ devs in general as there was 15 years ago, but those that stayed / survived are generally better and more experienced with very few enthusiastic juniors coming in.
I'm sorry you did not enjoy the article though, but thank you for giving it your time and reading it that part I really appreciate.
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
Also, avoid using C++ classes while you're at it.
I recently had to go back to writing C++ professionally after a many-year hiatus. We code in C++23, and I got a book to refresh me on the basics as well as all the new features.
And man, doing OO in C++ just plain sucks. Needing to know things like copy and swap, and the Rule of Three/Five/Zero. Unless you're doing trivial things with classes, you'll need to know these things. If you don't need to know those things, you might as well stick to structs.
Now I'll grant C++23 is much nicer than C++03 (just import std!) I was so happy to hear about optional, only to find out how fairly useless it is compared to pretty much every language that has implemented a "Maybe" type. Why add the feature if the compiler is not going to protect you from dereferencing without checking?
I really don't like Object Oriented programming anywhere. Maybe Smalltalk had it right, but I've not messed with Pharo or anything else enough to get a feel for it.
CLOS seems pretty good, but then again I'm a bit inexperienced. Bring back Dylan!
std::optional does have dereference checking, but it's a run-time check: std::optional<T>::value(). Of course, you'll get an exception if the optional is empty, because there's nothing else for the callee to do.
And that's the problem. In other languages that have a Maybe type, it's a compile time check. If your code is not handling the "empty" case, it will simply fail to compile.
I honestly don't see any value in std::optional compared to the behavior pre-std::optional. What does it bring to the table for pointers, for example?
Nothing, but I don't think anyone uses std::optional<T *>. However, if you need to specify an optional integer, std::optional is much clearer than encoding the null value as a negative or other such hacks. Another use of std::optional is to delay construction of an object without an extra dynamic allocation. That's way more convenient than using placement new.
But the dereference operator invokes UB if there is no value.
Which is a recurring theme in C++: the default behavior is unsafe (in order to be faster), and there is a method to do the safe thing. Which is exactly the opposite of what it should be.
The one thing I'll say here is age of the language really is and always has been a superficial argument; it's only six years apart from Python, and it's far less controversial of a language choice: https://en.wikipedia.org/wiki/History_of_Python .
Either way, it's hard not to draw parallels between all the drama in US politics and the arguments about language choice sometimes; it feels like both sides lack respect for the other, and it makes things unnecessarily tense.
My go to for formatting would be clang-format, and for testing gtest. For more extensive formatting (that involves the compiler) clang-tidy goes a long way
Clang tidy does both: it can run clang's analyzer [0] (also available with clang++ --analyze or the scan-build wrapper script which provides nicer HTML-based output for complex problems found), has it's own lightweight code analysis checks but also has checks that are more about formatting and ensuring idiomatic code than it is about typical static analysis.
MVSC [1] and GCC [2] also have built-in static analyzers available via cl /analyze or g++ -fanalyzer these days.
There is also cppcheck [3], include-what-you-use [4] and a whole bunch more.
Running unit tests with the address sanitizer and UB sanitizer enabled go a long way towards addressing most memory safety bugs. The kind of C++ you write then is a far cry from what the haters complain about with bad old VC6 era C++.
It's "great" mainly in the sense of being very large, and making your code very lage - and slow to build. I would not recommend it unless you absolutely must have some particular feature not existing elsewhere.
> Yes, C++ can be unsafe if you don’t know what you’re doing. But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
C++ can be unsafe even when you know what you're doing, since it is quite easy get something wrong by accident: index off-by-one can mean out-of-bounds access to an array, which can mean anything really. So, it's not that "all languages" are like that. That seems like a "moving the goalpost" type of logical fallacy.
And I say that as a person who writes C++ for fun an profit (well, salary) and has wasted many an hour on earning my StackOverflow C++ gold badge :-)
The post also includes other arguments which I find week regard C++ being dated. It has changed and has seen many improvements, but those have been almost exclusively _additions_, not removals or changes. Which means that the rickety old stuff is basically all still there. And then there is the ABI stability issue, which is not exactly about being old, but it is is about sticking to what's older and never (?) changing it.
Bottom line for me: C++ is useful and flexible but has many warts and some pitfalls. I'd still use it over Rust for just about anything (bias towards my experience here), but if a language came along with similar design goals to C++; a robust public standardization and implementation community; less or none of the poor design choices of C; nicer built-in constructs as opposed to having to pull yourself up by the bootstraps using the standard library; etc - I would consider using that. (And no, that language is not D.)
>So, it's not that "all languages" are like that. That seems like a "moving the goalpost" type of logical fallacy.
I think what's mean is that Rust's type system only removes one specific kind of unsafety, but if you're clueless you can still royally screw things up, in any language. No type system can stop you from hosing a database by doing things in the wrong order, say. Whether trading <insert any given combination of things Rust does that you don't like> for that additional safety is worth it is IMO a more interesting question than whether it exists at all.
Personally, I mostly agree with you. I don't much care for traits, or the lack of overloading and OO, or how fast Rust is still evolving, and wish I could have Rust's safety guarantees in a language that was more like C++. It really feels like you could get 90% of the way there without doing anything too radical, just forbidding a handful of problematic features; a few off the top of my head: naked pointers, pointer arithmetic, manual memory management, not checking array accesses by default, not initializing variables by default, allowing switches to be non-exhaustive.
the word "only" doesn't really belong in that sentence, because these are very common in root-cause analysis of flaws by the "Common Weakness Enumeration" initiative:
and having said that - I agree with you back :-) ... in fact, I think this is basically "the plan" for C++ regarding security: They'll make some static analysis warnings be considered errors for parts of your code marked "safe", and let them fly in areas marked "unsafe".
If the C++ committe can make that stick - in the public discourse and in US government circles I guess - then they will have essentially "eaten Rust's lunch". Because Rust is quite restrictive, it's somewhat of a moving target, it's kind of fussy w.r.t. use on older systems, and - it's said to be somewhat restrictive. If you take away its main selling point of safety-by-default, then there would probably not be enough of a motivation to drop C++, decades of backwards compatibility, and a huge amount of C++ and C libraries, in favor of Rust.
And this would not be the first time C++ is eating the lunch of a potential successor/competitor language; D comes to mind.
It doesn't mention the horrific template error messages. I'd heard that this was an area targeted for improvement a while ago... Is it better these days?
Qualitatively better. C++20 'concepts' obviated the need for the arcane metaprogramming tricks responsible for generating the vast majority of that template vomit.
Now you mostly get an error to the effect of "constraint foo not satisfied by type bar" at the point of use that tells you specifically what needs to change about the type or value to satisfy the compiler.
> C++20 'concepts' obviated the need for the arcane metaprogramming tricks responsible for generating the vast majority of that template vomit.
1. Somewhat exaggerated claim. It reduced that need; and for when you can assumpe everyting is C++20 or later.
2. Even to the extent the need for TMP was obviated in principle - it will take decades for TMP to go away in popular libraries and in people's application code. At that time, maybe, we would stoopp seeing these endless compliation artifacts.
This reads the same way as any other 'defense', 'sales pitch', or what have you, but from a Rust evangelist. The author likes to use C++ and now he must explain to the world why his decision is okay/correct/good/etc.. If you like it that much, just use the thing; no one actually cares.
> You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
It is incredibly funny how this argument has been used for literally decades, but in reality, you don't see simple, readable, nor maintainable code, instead, most of the C++ code bases out there are an absolute mess.. This argument reminds me of something...
Python’s “there should be one obvious way to do it” slogan often collides with reality these days too, since the language sprawled into multiple idioms just like C++: for printing you can use print("hi"), f-strings like f"hi {x}", .format(), % formatting, or concatenation with +; for loops you can iterate with for i in range(n), list comprehensions [f(i) for i in seq], generator expressions (f(i) for i in seq), or map/filter/lambda; unpacking can be done with a,b=pair, tuple() casting, slicing, *args capture, or dictionary unpacking with *; conditionals can be written with if/else blocks, one-line ternary x if cond else y, and/or short-circuit hacks, or pattern matching match/case; default values can come from dict.get(k,default), x or default, try/except, or setdefault; swapping variables can be done with a,b=b,a, with a temp var, with tuple packing/unpacking, or with simultaneous assignment; joining strings can be done with "".join(list), concatenation in a loop, reduce(operator.add, seq), or f-strings; reading files can be open().read(), iterating line by line with for line in f, using pathlib.Path.read_text(), or with open(...) as f; building lists can be done with append in a loop, comprehensions, list(map(...)), or unpacking with [*a,*b]; dictionaries can be merged with {*a,*b}, a|b (Python 3.9+), dict(a,*b), update(), or comprehensions; equality and membership checks can be ==, is, in, any(...), all(...), or chained comparisons; function arguments can be passed positionally, by name, unpacked with * and \*, or using functools.partial; iteration with indexes can be for i in range(len(seq)), for i,x in enumerate(seq), zip(range(n),seq), or itertools; multiple return values can be tuples, lists, dicts, namedtuples, dataclasses, or objects; even truthiness tests can be if x:, if bool(x):, if len(x):, or if x != []:. Whew!
> But Dave, what we mean by outdated is that other languages have surpassed C++ and provide a better developer experience.
> Matter of personal taste, I guess, C++ is still one of the most widely used programming languages with a huge ecosystem of libraries and tools. It’s used in a wide range of applications, from game development to high-performance computing to embedded systems. Many of the most popular and widely used software applications in the world are written in C++.
> I don’t think C++ is outdated by any stretch of the imagination;
The second paragraph in this quote has zero connection to the first and the third paragraphs.
> C++ has a large ecosystem built over the span of 40 years or so, with a lot of different libraries and tools available.
Yes, exactly: it's outdated.
> the simple rule of thumb is to use the standard library wherever possible; it’s well-maintained and has a lot of useful features.
That's got to be the funniest joke in this whole article. First of all, no, its API is not really that well thought out and it took several language standards to finally make smart pointers and tuples truly convenient to use; and which implementation of "the standard library" do you even mean, by the way? There are several implementations of it, you know, of very varying quality.
And then there is an argument against using the Boost in this article which hilariously can be well applied to C++ itself. Don't use it unless you have to! There are languages that are more modern and easier to use!
> Fact is, if you wanna get into something like systems programming or game development then starting with Python or JavaScript won’t really help you much. You will eventually need to learn C or C++.
The key word is eventually. You don't start learning to e.g. play guitar on a cheap, half-broken piece of wood because you'll spend more time on fighting the instrument and fiddling with it than actually learning how to play it.
> New standards (C++20, C++23) keep modernizing the language, ensuring it stays competitive with younger alternatives. If you peel back the layers of most large-scale systems we rely on daily, you’ll almost always find C++ humming away under the hood.
Notice the dishonesty of placing these two sentences together: it seems to imply (with plausible deniability) that those "large-scale systems we rely on daily" are written in "modern" C++. No, they are absolutely not.
I think, if one of the most prominent C++ experts in the world(herb sutter), who chaired the C++ standards committee for 20+ years, who has evangelized the language for even longer than that - decides that complexity in the language has gotten out of control and sits down to write a simpler and safer dialect, then that is indicative of a problem with the language.
My viewpoint on the language is that there are certain types of engineers who thrive in the complexity that is easy to arrive at in a C++ code base. These engineers are undoubtedly very smart, but, I think, lack a sense of aesthetics that I can never get past. Basically, the r/atbge of programming languages (Awful Taste But Great Execution).
But you stated it was "the truth" and so we might reasonably wonder why you think so, unless it's that you just believe anything you read.
"ABI: Now or never" by Titus Winters addresses some perf leaks C++ had years ago, which it can't fix (if it retains its ABI promise). They're not big but they accumulate over time and the whole point of that document was to explain what the price is if (unlike Rust) you refuse to take steps to address it.
Rust has some places where it can't match C++ perf, but unlike that previous set Rust isn't obliged to keep one hand tied behind its back. So this gently tips the scales further towards Rust over time.
Worse, attempts to improve C++ safety often make its performance worse. There is no equivalent activity in Rust, they already have safety. So these can heap more perf woes on a C++ codebase over time.
It seems you agree? I'd love to hear your thoughts on it. I had gotten the impression (second-hand) that they were roughly equally matched in this regard.
I don't think there could be any purer of an expression of the Blub Paradox.
> Just use whatever parts of the language you like without worrying about what's most performant!
It's not about performant. It's about understanding someone else's code six months after they've been fired, and thus restricting what they can possibly have done. And about not being pervasively unsafe.
> "I don’t think C++ is outdated by any stretch of the imagination", "matter of personal taste".
Except of course for header files, forward declarations, Make, the true hell of C++ dependency management (there's an explicit exhortation not to use libraries near the bottom), a thousand little things like string literals actually being byte pointers no matter how thoroughly they're almost compatible with std::string, etc. And of course the pervasive unsafety. Yes, it sure was last updated in 2023, the number of ways of doing the same thing has been expanded from four to five but the module system still doesn't work.
> You can write unsafe code in Python! Rewriting always makes the code more safe whether it's in Rust or not!
No. Nobody who has actually used Rust can reasonably arrive at this opinion. You can write C++ code that is sound; Rust-fluent people often do. The design does not come naturally just because of the process of rewriting, this is an entirely ridiculous thing to claim. You will make the same sorts of mistakes you made writing it fresh, because you are doing the same thing as you were when writing it fresh. The Rust compiler tells you things you were not thinking of, and Rust-fluent people write sound C++ code because they have long since internalized these rules.
And the crack about Python is just stupid. When people say 'unsafe' and Rust in the same sentence, they are obviously talking about UB, which is a class of problem a cut above other kinds of bugs in its pervasiveness, exploitability, and ability to remain hidden from code review. It's 'just' memory safety that you're controlling, which according to Microsoft is 70% of all security related bugs. 70% is a lot! (plus thread safety, if this was not mentioned you know they have not bothered using Rust)
In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
> Just use smart pointers!
Yes, let me spam angle brackets on every single last function. 'Write it the way you want to write it' is the first point in the article, and here is the exact 'write it this way' that was critiquing. And you realistically won't do it on every function so it is just a matter of time until one of the functions you use regular references with creates a problem.
> In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
Yes, this is a serious flaw in the author's argument. Does he think the exact same team that built version 1.0 in C++ is the one writing 2.0 in Rust? Maybe that happens sometimes, I guess, but to draw a general lesson from that seems weird.
C++ and C rely, heavily, on skill and discipline instead of automated checks to stay safe. Over time, and in larger groups of people that always fails. People just aren't that disciplined and they get overconfident of their own skills (or level of discipline). Decades of endless memory leaks, buffer overflows, etc. and the related security issues, crash bugs, data corruption, etc. shows that no code base is really immune to this.
The best attitude in programmers (regardless of the language) is the awareness that "my code probably contains embarrassing bugs, I just haven't found them yet". Act accordingly.
There are of course lots of valid reasons to continue to use C/C++ on projects where it is used and there are a lot such projects. Rewrites are disruptive, time consuming, expensive, and risky.
It is true that there are ways in C++ to mitigate some of these issues. Mostly this boils down to using tools, libraries, and avoiding some of the more dark corners of the language and standard library. And if you have a large legacy code base, adopting some of these practices is prudent.
However, a lot of this stuff boils down to discipline and skill. You need to know what to use and do, and why. And then you need to be disciplined enough to stick with that. And hope that everybody around you is equally skilled and disciplined.
However, for new projects, there usually are valid alternatives. Even performance and memory are not the arguments they used to be. Rust seems to be building a decent reputation for combining compile time safety with performance and robustness; often beating C/C++ implementations of things where Rust is used to provide a drop in replacement. Given that, I can see why major companies are reluctant to take on new C/C++ projects. I don't think there are many (or any) upsides to the well documented downsides.
People innately admire difficult skills, regardless of their usefulness. Acrobatic skateboarding is impressive, even when it would be faster and safer to go in a straight line or use a different mode of transport.
To me skill and effort is misplaced and wasted when it's spent on manually checking invariants that a compiler could check better automatically, or implementing clever workarounds for language warts that no longer provide any value.
Removal of busywork and pointless obstacles won't make smart programmers dumb and lazy. It allows smart programmers to use their brainpower on bigger more ambitious problems.
To me a compiler's effort is misplaced and wasted when it's spent on checking invariants that could be checked by a linter or a sidecar analysis module.
Checking of whole-program invariants can be accurate and done basically for free if the language has suitable semantics.
For example, if a language has non-nullable types, then you get this information locally for free everywhere, even from 3rd party code. When the language doesn't track it, then you need a linter that can do symbolic execution, construct call graphs, data flows, find every possible assignment, and still end up with a lot of unknowns and waste your time on false positives and false negatives.
Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms.
And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues.
If the compiler is not checking them then it can't assume them, and that reduces the opportunities for optimizations. If the checks don't run on the compiler then they're not running every time; if you do want them to run every time then they may as well live in the compiler instead.
These type comments always remind me that we forget where we come from in terms of computation, every time.
It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.
It's easy to publicly shame people who do hard things for a long time in the light of newer tools. However, many people who likes these languages are using them longer than the languages we champion today were mere ideas.
I personally like Go in these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but never pry C++ from my cold, dead hands.
For note, I don't like C & C++ because they are hard. I like them because they provide a more transparent window the processor, which is a glorified, hardware implemented PDP-11 emulator.
Last, we shall not forget that all processors are C VMs, anyway.
> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.
The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s.
Plus, you are vastly overestimating the expense of borrow checking, it is very fast, and not the reason for Rust's compile times being slow. You absolutely could have done borrow checking much earlier, even with less computing power available.
1: https://smallcultfollowing.com/babysteps/blog/2012/11/18/ima...
> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.
IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer.
> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.
I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed.
EDIT: changed Cyclone's "2002" to "2006".
I remember, I was there in the 1980's coding, hence why I know C and C++ were not the only alternatives, rather the ones that eventually won in the end.
> we shall not forget that all processors are C VMs
This idea is some 10yrs behind. And no, thinking that C is "closer to the processor" today is incorrect
It makes you think it is close which in some sense is even worse
> This idea is some 10yrs behind.
Akshually[1] ...
> And no, thinking that C is "closer to the processor" today is incorrect
THIS thinking is about 5 years out of date.
Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but it's been more than 5 years that this opinion was essentially nullified by
Assembler? After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on, that has a lower bound which none of the data gets close to?You're repeating something that was fashionable years ago.
===========
[1] There's always one. Today, I am that one :-)
Standard C doesn't have inline assembly, even though many compilers provide it as an extension. Other languages do.
> After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on
The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there.
> Some people literally believe that C maps directly to the machine, when it does not.
Maybe they did, 5 years (or more) ago when that essay came out. it was wrong even then, but repeating it is even more wrong.
> This is just a factual inaccuracy.
No. It's what we call A Strawman Argument, because no one in this thread claimed that C was uniquely close to the hardware.
Jumping in to destroy the argument when no one is making it is almost textbook example of strawmanning.
Claiming that a processor is a "C VM" implies that it's specifically about C.
Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.
> Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.
Well, you're talking about languages that don't have standards, they have a reference implementation.
IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics.
> Okay, If C is "not close to the process", what's closer?
LLVM IR is closer. Still higher level than Assembly
The problem is thus:
char a,b,c; c = a+b;
Could not be more different between x86 and ARM
> LLVM IR is closer. Still higher level than Assembly
So your reasoning for repeating the once-fashionable statement is because "an intermediate representation that no human codes in is closer than the source code"?
> C++ and C rely, heavily, on skill and discipline instead of automated checks to stay safe.
You can't sensibly talk about C and C++ as a single language. One is the most simple language there is, most of the rules to which can be held in the head of a single person while reading code.
The other is one of the most complex programming languages to ever have existed, in which even world-renowned experts in lose their facility for the language after a short break from it.
And yet, they both still suffer from the flaw that the parent comment cites. Describing a shared property doesn't imply a claim that they're the same language.
> And yet, they both still suffer from the flaw that the parent comment cites.
I dunno; the flaw is not really comparable, is it? The skill and discipline required to write C bug-free is an orders of magnitude less than the skill and discipline required to write C++.
Unless you read GGPs post to mean a flaw different to "skill and discipline required".
I'd argue that their point was that the required amount of skill and discipline of either is higher than it's worth at this point for new projects. The difference doesn't matter if even the lower of the two is too high.
Have you written significant amounts of C or C++?
Most people don't write C, nor use the C compiler, even when writing C. You use C++ and the C++ compiler. For (nearly) all intents and purposes, C++ has subsumed and replaced C. Most of the time when someone says something is "written in C" it actually means it's C++ without the +± features. It's still C++ on the C++ compiler.
Actual uses of actual C are pretty esoteric and rare in the modern era. Everything else is varying degrees of C++.
Sending out a strong disagree from the embedded systems world. C is king here.
(Broad, general, YMMV statement): The general C++ arc for an embedded developer looks like this:
1.) discover exceptions are way too expensive in embedded. So is RTTI.
2.) So you turn them off and get a gimped set of C++ with no STL.
3.) Then you just go back to C.
Skype was written without exception handling and RTTI, although using a lot of C++ features. You can write good C++ code without these dependencies. You don't use STL but with cautious use of hand-built classes you go far.
Today I wouldn't recommnend Skype built in any language except Rust. But the Skype founders Ahti Heinla, Jaan Tallinn and Priit Kasesalu found exactly the right balance of C and C++ for the time.
I also wrote a few lines of code in that dialect of C++ (no exceptions). And it didn't feel much different from modern C++ (exception are really fatal errors)
And regarding to embedded, the same codebase was embedded in literally all the ubiquitous TVs of the time, even DECT phones. I bet there are only a few (if any) application codebases of significant size to have been deployed at that scale.
> Have you written significant amounts of C or C++?
Yes.
> Most of the time when someone says something is "written in C" it actually means it's C++ without the +± features.
Those "someone's" have not written a significant amount of C. Maybe they wrote a significant amount of C++.
The cognitive load when dealing with C++ code is in no way comparable to the cognitive load required when dealing with C code, outside of code-golfing exercises which is as unidiomatic as can be for both languages.
Right on the money!
Other then hardcore embedded guys and/or folks dealing with legacy C code, I and most folks i know almost always use C++ in various forms i.e. "C++ as a better C", "Object-Oriented C++ with no template shenanigans", "Generic programming in C++ with templates and no OO", "Template metaprogramming magic", "use any subset of C++ from C++98 to C++23" etc. And of course you can mix-and-match all of the above as needed.
C++'s multi-paradigm support is so versatile that i don't know why folks on HN keep moaning about its complexity; it is the price you pay for the power you get. It is the only language that i can program in for itty-bitty MCUs all the way to large complicated distributed systems on multiple servers plus i can span all of applications to systems to bare-metal programming.
> I don't think there are many (or any) upsides to the well documented downsides.
C++ template metaprogramming still remains extremely powerful. Projects like CUTLASS, etc could not be written to give best performance in as ergonomic a way in Rust.
There is a reason why the ML infra community mostly goes with Python-like DSL's, or template metaprogramming frameworks.
Last I checked there are no alternatives at scale for this.
Even new projects have good reasons to use c++. Maybe the ecosystem is built around it. Maybe competent c++ programmers are easier to find than rust ones. Maybe you need lots of dynamic loading. Maybe you want drop-in interop with C. Maybe you’re just more comfortable with c++.
I agree with the discipline aspect. C++ has a lot going against it. But despite everything it will continue to be mainstream for a long time, and by the looks of it not in the way of COBOL but more like C.
This is a good article but it only scratches the surface, as is always the case when it comes to C++.
When I made a meme about C++ [1] I was purposeful in choosing the iceberg format. To me it's not quite satisfying to say that C++ is merely complex or vast. A more fitting word would be "arcane", "monumental" or "titanic" (get it?). There's a specific feeling you get when you're trying to understand what the hell is an xvalue, why std::move doesn't move or why std::remove doesn't remove.
The Forest Gump C++ is another meme that captures this feeling very well (not by me) [2].
What it comes down to is developer experience (DX), and C++ has a terrible one. Down to syntax and all the way up to package management a C++ developper feels stuck to a time before they were born. At least we have a lot of time to think about all that while our code compiles. But that might just be the price for all the power it gives you.
[1] https://victorpoughon.github.io/cppiceberg/
[2] https://mikelui.io/img/c++_init_forest.gif
In Linuxland you at least have pkg-config to help with package management. It's not perfect but neither is any other package management solution.
If I'm writing a small utility or something the Makefile typically looks something like this:
Consider that to do this you must:
- Use a build system like make, you can't just `c++ build`
- Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
- Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
- Oh also understand the compiler doesn't actually output what you want, you also need a linker
- That linker also doesn't know where to find things, so you need the external tool to use it
- Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
Now you can see why things like IDEs became default tools for teaching students how to write C and C++, because there's no "open a text editor and then `c++ build file.cpp` to get output" for anything except hello world examples.
It's really not that big of a deal once you know how it works, and there are tools like CMake and IDEs that will take care of it.
On Windows and OSX it's even easier - if you're okay writing only for those platforms.
It's more difficult to learn, and it seems convoluted for people coming from Python and Javascript, but there are a lot of advantages to not having package management and build tooling tightly integrated with the language or compiler, too.
I agree -- I've been at it long enough -- cmake etc makes stuff pretty darn easy.
But in industrial settings where multi groups share and change libs something like debpkg may be used. You add caching and you can go quite deep quickly esp after bolting on cdci.
One must cop to the fact that a go build or zig build is just fundamentally better.
Go build is fundamentally better? How so? Go build is so light on features that adding generated files to source control is a norm in go land.
Yeah, I definitely agree the newer tools are better, but sometimes the arguments against C++ get blown out of proportion.
It definitely has a lot of flaws, but in practice most of them have solutions or workarounds, and on a day-to-day basis most C++ programmers aren't struggling with this stuff.
"It's not really that big a deal once you know how it works" is the case with pretty much everything though. The question is whether the amount of time needed to learn how something works is worthwhile though, and the sheer number of things you need to invest the time to learn in a language like C++ compared to more modern languages is a big deal. Looking at a single one of them in isolation like the build system essentially just zooms in one problem far enough to remove the other ones from the picture.
So I come from the C/C++ world, that's part of why I disagree with these takes. I wouldn't say any process involving CMake is "not that big of a deal" because I routinely see veteran developers struggle to edit cmake files to get their code to compile and link.
This is pure Stockholm syndrome. If I were forced to choose between creating a cross-platform C++ project from scratch or taking an honest to god arrow to the knee, the arrow would be less painful.
I don't want any arrows in my knees but I agree.
The main reason I don't want to use C/C++ are the header files. You have to write everything in a header file and then in an implementation file. Every time you want to change a function you need to do this at least twice. And you don't even get fast compilation speed compared to some languages because your headers will #include some library that is immense and then every header that includes that header will have transitive header dependencies, and to solve this you use precompiled headers which you might have to set up manually dependending on what IDE you are using.
It's all too painful.
It gets better with experience. You can have a minimal base layer of common but rarely changing functionality. You can reduce static inline functions in headers. You reduce data structure definitions, but put only forward declarations in header files. (Don't use C++ methods, at least don't put them in an API, because they force you to expose your implementation details needlessly). You can separate data structures from functions in different header files. Grouping functions together with types is often a bad idea since most useful functionality combines data from two or more "unrelated" types -- so you'd rather make function headers "by topic" than putting them alongside types.
I just created a subsystem for a performance intensive application -- a caching layer for millions or even billions of objects. The implementation encompasses over a 1000 LOC, but the header only includes <stdint.h>. There are about 5 forward struct declarations and maybe a dozen function calls in that API.
To a degree it might be stockholm syndrome, but I feel like having had to work around a lot of C's shortcomings I actually learned quite a lot that helps me in architecting bigger systems now. Turns out a lot of the flexibility and ease that you get from more modern languages mostly allows you to code more sloppily, but being sloppy only works for smaller systems.
If you were forced to choose between creating a cross-platform project in one of the trendy language, but of course, which must also work on tiny hardware with a weird custom OSes on some hobbyist hardware, and with 30-year-old machines in some large organization's server farm - then you would choose the C++ project, since you will be able to make that happen, with some pain. And with the other languages - you'll probably just give up or need to re-develop all userspace for a bunch of platforms, so that it can accommodate the trendy language build tool. And even that might not be enough.
Also: If you are on platforms which support, say, CMake - then the multi-platform C++ project is not even that painful.
> but of course, which must also work on tiny hardware with a weird custom OSes on some hobbyist hardware, and with 30-year-old machines in some large organization's server farm - then you would choose the C++ projectt, since you will be able to make that happen, with some pain.
With the old and proprietary toolchains involved, I would bet dollars to doughnuts that there's a 50% odds of C++11 being the latest supported standard. In that context, modern C++ is the trendy language.
Why? There are lots of cross platform libraries and most aspects are not platform specific. It's really not a big deal. Use FLTK and you get most of the cross platform stuff for free in a small package.
It is a big deal even after you know how it works.
The thing is, the languages like Rust only make this easier within their controlled "garden". But for C and C++, you build in the "world outside the garden" to begin with, where you are not guaranteed of everyone having prepared everything for you. So, it's harder, and you may need third-party tools or putting in some elbow grease, or both. The upside is that when rustaceans or go-phers and such wander outside their respective gardens, most of them are completely lost and have no idea what to do; but C and C++ people are kinda-sorta at home there, already.
What is "outside the garden" for Rust?
I used to write a lot of C++ in 2017. Now in 2025 I have no memory of how to do that anymore. It's bespoke Makefile nonsense with zero hope of standardization. It's definitively something that doesn't grow with experience. Meanwhile my gradle setups have been almost unchanged since that time if it wasn't for the stupid backwards incompatible gradle releases.
> It's bespoke Makefile nonsense with zero hope of standardization
technically Makefile is standardized (by POSIX), contrary to most alternatives.
/extremely pedantic
I would rather deal with Makefiles than Gradle.
I think we can afford to strive for more than just "not quite the absolute worst" (for however we decide to measure quality).
> I used to write a lot of C++ in 2017... It's bespoke Makefile nonsense
1. Makefiles are for build systems; they are not C++. 2. Even for building C++ - in 2017, there was no need to write bespoke Makefiles, or any Makefiles. You could, and should, have written CMake; and your CMake files would be usable and relevant today.
> Meanwhile my gradle setups have been almost unchanged since that time
... but, typically, with far narrower applicability.
> Use a build system like make, you can't just `c++ build`
This is a strength not a weakness because it allows you to choose your build system independently of the language. It also means that you get build systems that can support compiling complex projects using multiple programming languages.
> Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
This is a strength not a weakness because it allows you to organize your dependencies and their locations on your computer however you want and are not bound by whatever your language designer wants.
> Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
This is a strength not a weakness because you are not bound to a particular way of how this should work.
> Oh also understand the compiler doesn't actually output what you want, you also need a linker
This is a strength not a weakness because now you can link together parts written in different programming languages which allows you to reuse good code instead of reinventing the universe.
> That linker also doesn't know where to find things, so you need the external tool to use it
This is a strength not a weakness for the reasons already mentioned above.
> Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
This is a strength not a weakness because you can have fully offline builds including ways to distribute dependencies to air-gapped systems and are not reliant on one specific online service to do your job.
Also all of this is a non-issue if you use a half-modern build system. Conflating the language, compiler, build system and package manager is one of the main reason why I stay away from "modern" programming languages. You are basically arguing against the Unix philosophy of having different tools that work together with each tool focusing on one specific task. This allows different tools to evolve independently and for alternatives to exist rather than a single tool that has to fit everyone.
> This is a strength not a weakness
Massive cope, there's no excuse for the lack of decent infrastructure. I mean, the C++ committee for years said explicitly that they don't care about infrastructure and build systems, so it's not really surprising.
The reality is that for any moderately complex C++ application, actually compiling C++ code is only a small part of what the build system does.
Well yeah
We have autoconf/automake checking if you're on a big endian PDP8 or if your compiler has support for cutting edge features like "bool"
I can use pkg-config just fine.
Not sure how relevant the "in order to use a tool, you need to learn how to use the tool".
Or from the other side: not sure what I should think about the quality of the work produced by people who don't want to learn relatively basic skills... it does not take two PhDs to understand how to use pkg-config.
I'm just pointing out that one reason devex sucks in C++ is because the fact you need a wide array of tools, that are non portable, and require learning and teaching magic incantations at the command line or in build scripts to work, doesn't foster what one could call a "good" experience.
Frankly the idea that your compiler driver should not be a basic build system, package manager, and linker is an idea best left in the 80s where it belongs.
For most people this is a feature not a bug as you suggest. It may come across as PITA, and for many people will do, but as far as I am concerned, while also having experienced the pain of package managers in C++, this is the right way. In the end it's always about the trade-offs. And all the (large) codebases that used conan, bazel or vcpkg induced a magnitude more issues that you would have to handle which otherwise in a plain CMake you would not have. Package managers are for convenience but not all projects can afford themselves the trouble this convenience brings with it.
Coming from a different ground (TypeScript) I agree, in a sense that there is a line where apparent convenience because a trouble. JS ecosystem is known for its hype for build tools. Long term all of them become a problem due to trying to be more convenient, leading to more and more abstractions and hidden behaviors, which turns into a mess impossible to debug or solve when user diverges from author's happy path. Thus I promote using only the necessities, and gluing them together by yourself. Even if something doesn't work, at least it can be tracked down and solved.
> For most people this is a feature not a bug as you suggest.
Exactly: it makes many things nicer to use than the language package managers, e.g. when maintaining a Linux distribution.
But people generally don't know how one maintains a Linux distribution, so they can't really see the use-case, I guess.
> require learning and teaching magic incantations at the command line
That's exactly my point: if you think that calling `cmake --build build` is "magic", then maybe you don't have the right profile to use C++ in the first place, because you will have to learn some harder concepts there (like... pointers).
To be honest, I find it hard to understand how a software developer can write code and still consider that command line instructions are "magic incantations". To me it's like saying that calling a function like `println("Some text, {}, {}", some_parameter, some_other_parameter)` is a "magic incantation". Calling a function with parameters counts as "the basics" to me.
that idea that packages and builds belongs to simple problem, large projects need things like more than one laguage and so end up fighting the language
Every modern language seems to have an answer to this problem that C and C++ refuse to touch because it's out of scope for their respective committees and standards orgs
On the front page right now:
Shai-Hulud malware attack: Tinycolor and over 40 NPM packages compromised (stepsecurity.io)
935 points by jamesberthoty 16 hours ago | flag | hide | 730 comments
Maybe obstreperous dependency management ends up being the winning play in 2025 :)
Just think of how many _more_ vulns C and C++ could be responsible for if they had package modern managers! :)
Seems like a false dichotomy
Completely unrelated.
C++ has a plethora of available build and package management systems. They just aren't bundled with the compiler. IMO that is a good thing, because it keeps the compiler writers honest.
You say that as if Cargo, MSBuild, and pip aren’t massively loved by their communities.
Coming from c++, pip and python dependency management is the bane of my life. How do you make a python software leveraging pytorch that will ship as a single .exe and be able to target whatever gpu the user has without downloads?
Funnily enough a lot of the challenges in this particular case is related to PyTorch and CUDA being native code (mostly in C++). Of course combined with the fact that pip is not really adequate as a native/C++ code package manager.
Perhaps if C++ had a decent standardized package manager, the Python package system reuse that? ;p
just wait for next week and python will get a better package manager!
"Massively loved" and "good decision" are orthogonal axes. See the current npm drama. People love wantonly importing dependencies the way they love drinking. Both feel great but neither is good for you.
Not that npm-style package management is the best we can do or anything, but I would be more sympathetic to this argument if C or C++ had a clearly better security story than JS, Python, etc. (pick your poison), but they're also disasters in this area.
What happens in practice is people end up writing their own insecure code instead of using someone else's insecure code. Of course, we can debate the tradeoffs of one or the other!
This isn't only about security. This is about interoperability, in the real world we mix (and should mix!) C, C++, Rust, python.... In the real world lawyers audit every dependency to ensure they can legally use it. In the real world we are responsible for our dependencies and so need to audit the code.
I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit because then they can gleefully point at it and pretend that any first-party package manager for C/C++ would inevitably result in the same, nevermind the other languages that do not have this issue, or have it to a far, far lesser extent. Do these cultists just not use dependencies? Are they just [probably inexpertly] reinventing every wheel? Or do they use system packages like that's any better *cough* AUR exploits *cought*. While dependency hell on nodejs (and even Rust if we're honest) is certainly a concern, it's npm's permissiveness and lack of auditing that's the real problem. That's why Debian is so praised.
What makes me a C++ "cultist"? I like the language, but I don't think it's a cult. And yes, they do implement their own wheel all the time (usually expertly) because libraries are reserved for functions that really need it: writing left pad is really easy. They also use third-party libraries all the time, too. They just generally pay attention to the source of that library. Google and Facebook also publish a lot of C++ libraries under one umbrella (abseil and folly respectively), and people often use one of them.
STOP SAYING CULTIST! The word has very strong meaning and does not apply to anyone working with C or C++. I take offense at being called a cultist just because I say C++ is not nearly as bad as the haters keep claiming it is - as well I should.
> Or do they use system packages like that's any better cough AUR exploits cought.
AUR stands for "Arch User Repository". It's not the official system repository.
> I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit
I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.
My problem with language package managers is that people love them precisely because they don't want to learn how to deal with dependencies. Which is actually the problem: if I pull a random Rust library, it will itself pull many transitive dependencies. I recently compared two implementations of the same standard (C++ vs Rust): in C++ it had 8 dependencies (I can audit that myself). In Rust... it had 260 of them. 260! I won't even read through all those names.
"It's too hard to add a dependency in C++" is, in my opinion, missing the point. In C++, you have to actually deal with the dependency. You know it exists, you have seen it at least once in your life. The fact that you can't easily pull 260 dependencies you have never heard about is a feature, not a bug.
I would be totally fine with great tooling like cargo, if it looked like the problem of random third-party dependencies was under control. But it is not. Not remotely.
> Do these cultists just not use dependencies?
I choose my dependencies carefully. If I need a couple functions from an open source dependency I don't know, I can often just pull those two functions and maintain them myself (instead of pulling the dependency and its 10 dependencies).
> Are they just [probably inexpertly] reinventing every wheel?
I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.
Would it make me more of an expert if I was pulling, running and distributing random code from the Internet without having the smallest clue about who wrote it?
Do I need to complain about how hard CMake is and compare a command line to a "magic incantation" to be considered an expert?
> AUR stands for "Arch User Repository". It's not the official system repository.
Okay... and? The point being made was that the issue of package managers remains: do you really think users are auditing all those "lib<slam-head-on-keyboard>" dependencies that they're forced to install? Whether they install those dependencies from the official repository or from homebrew, or nix, or AUR, or whatever, is immaterial, the developer washed their hands of this, instead leaving it to the user who in all likelihood knows significantly less than the developers to be able to make an informed decision, so they YOLO it. Third-party repositories would not exist if they had no utility. But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted. Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.
> "It's too hard to add a dependency in C++"
It's not hard to add a dependency. I actually prefer the dependencies-as-git-submodules approach to package managers: it's explicit and you know what you're getting and from where. But using those dependencies is a different story altogether. Don't you just love it when one or more of your dependencies has a completely different build system to the others? So now you have to start building dependencies independently, whose artefacts are in different places, etc, etc, this shouldn't be a problem.
> I, for one, do not love it when there is an exploit in a language package manager.
Oh please, I believe that about as much as ambulance chasers saying they don't love medical emergencies. Otherwise, why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm as if anyone is actually asking for that, instead of, say, what Zig or Go has? It's because of the cultism, and every npm exploit further entrenches it.
C++ usage has nothing to do with static/dynamic linking. One is a language and the other is a way of using libraries. Dynamic linking gives you small binaries with a lot of cross-compatibility, and static linking gives you big binaries with known function. Most production C++ out there follows the same pattern as Rust and Go and uses static linking (where do you think Rust and Go got that pattern from?). Python is a weird language that has tons of dynamic linking while also having a big package manager, which is why pip is hell to use and PyTorch is infamously hard to install.
Dynamic linking shifts responsibility for the linked libraries over to the user and their OS, and if it's an Arch user using AUR they are likely very interested in assuming that risk for themselves. 99.9% of Linux users are using Debian or Ubuntu with apt for all these libs, and those maintainers do pay a lot of attention to libraries.
> But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted.
So you do understand my point about AUR. AUR is like adding a third-party repo to your Debian configuration. So it's not a good example if you want to talk about official repositories.
Debian is a good example (it's not the only distribution that has that concept), which proves my point and not yours: this is better than unchecked repositories in terms of security.
> Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.
Nobody says that ever. Either you make up your cult just to win an argument, or you don't understand what C/C++ people say. The whole goddamn point is to have a trusted system repository, and if you need to pull something that is not there, then you do it properly.
Which is better than pulling random stuff from random repositories, again.
> I actually prefer the dependencies-as-git-submodules approach
Oh right. So you do it wrong, it's good to know and it will answer your next complaint:
> Don't you just love it when one or more of your dependencies has a completely different build system to the others
I don't give a damn because I handle dependencies properly (not as git submodules). I don't have a single project where the dependencies all use the same build system. It's just not a problem at all, because I do it properly. What do I do then? Well exactly the same as what your system package manager does.
> this shouldn't be a problem.
I agree with you. Call it a footgun if you wish, you are the one pulling the trigger. It isn't a problem for me.
> why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm
Where did I do that?
> It's because of the cultism, and every npm exploit further entrenches it.
It's because npm is a good example of what happens when it goes out of control. Pip has the same problem, and Rust as well. But npm seems to be the worse, I guess because it's used by more people?
Your defensiveness is completely hindering you and I cannot be bothered with that so here are some much needed clarifications:
> I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.
If you do neither of those things then did it ever occur to you that this might not be about YOU?
> I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.
Yeah, hi, no you didn't explain that. You're probably mistaking me for someone else in some other conversation you had. The only comment of yours prior to mine in the thread is you saying "I can use pkg-config just fine." And again, you're thinking that I'm calling YOU incompetent, or even that I'm calling you incompetent. But okay, I'm sure your code never has bugs, never has memory issues, is never poorly designed or untested, that you can whip out an OpenGL alternative whatever in no time and it be just as stable and battle-tested, and to say otherwise must be calling you incompetent. That makes total sense.
> AUR stands for "Arch User Repository". It's not the official system repository.
> So it's not a good example if you want to talk about official repositories.
I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.
--
Overall, getting bored of this, though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny. Have a nice day.
> Your defensiveness
Start by not calling everybody disagreeing with you a cultist, next time.
> I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.
It's not that it is unclear, it's just that it doesn't make sense. When we compare npm to a system package manager in this context, the thing we compare is whether or not is it curated. Agreed, I was maybe not using the right words (I should have said curated package managers vs not curated package managers), but it did not occur to me that it was unclear because comparing npm to a system package manager makes no sense otherwise. It's all just installing binaries somewhere on disk.
AUR is much like npm in that it is not curated. So if you find that it is a security problem: great! We agree! If you want to pull something from AUR, you should read its PKGBUILD first. And if it pulls tens of packages from AUR, you should think twice before you actually install it. Just like if someone tells you to do `curl https://some_website.com/some_script.sh | sudo sh`, no matter how convenient that is.
Most Linux distributions have a curated repository, which is the default for the "system package manager". Obviously, if users add custom, not curated repositories, it's a security problem. AUR is a bad example because it isn't different from npm in that regard.
> though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny
Well I did elaborate at least one bit, but I doubt you are interested in more details than what I wrote: "What do I do then? Well exactly the same as what your system package manager does."
I install the dependencies somewhere (just like the system package manager does), and I let my build system find them. It could be with CMake's `find_package`, it could be with pkg-config, whatever knows how to find packages. There is no need to install the dependencies in the place where the system package manager installs stuff: it can go anywhere you want. And you just tell CMake or pkg-config or Meson or whatever you use to look there, too.
Using git submodules is just a bad idea for many reasons, including the fact that you need all of them to use the same build system (which you mentioned), or that a clean build usually implies rebuilding the dependencies (for nothing) or that it doesn't work with package managers (system or not). And usually, projects that use git submodule only support that, without offering a way to use the system package(s).
They are massively loved because people don't want to learn how it works. But the result is that people massively don't understand how package management works, and miss the real cost of dependencies.
MSBuild also does C++.
Modern languages don't generally play nice with linux distributions, IMO.
C and C++ have an answer to the dependency problem, you just have to learn how to do it. It's not rocket science, but you have to learn something. Modern languages remove this barrier, so that people who don't want to learn can still produce stuff. Good for them.
no they don't - at least not a good answer. It generally amounts to running a different build system and waiting - this destroys parralism and slows the build down.
I'm not going to defend the fact that the C++ devex sucks. There are really a lot of reasons for it, some of which can't sensibly be blamed on the language and some of which absolutely can be. (Most of it probably just comes down to the language and tooling being really old and not having changed in some specific fundamental ways.)
However, it's definitely wrong to say that the typical tools are "non-portable". The UNIX-style C++ toolchains work basically anywhere, including Windows, although I admit some of the tools require MSys/Cygwin. You can definitely use GNU Makefiles with pkg-config using MSys2 and have a fine experience. Needless to say, this also works on Linux, macOS, FreeBSD, Solaris, etc. More modern tooling like CMake and Ninja work perfectly fine on Windows and don't need any special environment like Cygwin or MSys, can use your MSVC installation just fine.
I don't really think applying the mantra of Rust package management and build processes to C++ is a good idea. C++'s toolchain is amenable to many things that Rust and Cargo aren't. Instead, it'd be better to talk about why C++ sucks to use, and then try to figure out what steps could be taken to make it suck less. Like:
- Building C++ software is hard. There's no canonical build system, and many build systems are arcane.
This one really might be a tough nut to crack. The issue is that creating yet another system is bound to just cause xkcd 927. As it is, there are many popular ways to build, including GNU Make, GNU Autotools + Make, Meson, CMake, Visual Studio Solutions, etc.
CMake is the most obvious winner right now. It has achieved defacto standard support. It works on basically any operating system, and IDEs like CLion and Visual Studio 2022 have robust support for CMake projects.
Most importantly, building with CMake couldn't be much simpler. It looks like this:
And you have a build in .build. I think this is acceptable. (A one-step build would be simpler, but this is definitely more flexible, I think it is very passable.)This does require learning CMake, and CMake lists files are definitely a bit ugly and sometimes confusing. Still, they are pretty practical, and rather easy to get started with, so I think it's a clear win. CMake is the "defacto" way to go here.
- Managing dependencies in C++ is hard. Sometimes you want external dependencies, sometimes you want vendored dependencies.
This problem's even worse. CMake helps a little here, because it has really robust mechanisms for finding external dependencies. However, while robust, the mechanism is definitely a bit arcane; it has two modes, the legacy Find scripts mode, and the newer Config mode, and some things like version constraints can have strange and surprising behavior (it differs on a lot of factors!)
But sometimes you don't want to use external dependencies, like on Windows, where it just doesn't make sense. What can do you really do here?
I think the most obvious thing to do is use vcpkg. As the name implies, it's Microsoft's solution to source-level dependencies. Using vcpkg with Visual Studio and CMake is relatively easy, and it can be configured with a couple of JSON files (and there is a simple CLI that you can use to add/remove dependencies, etc.) When you configure your CMake build, your dependencies will be fetched and built appropriately for your targets, and then CMake's find package mechanism can be used just as it is used for external dependencies.
CMake itself is also capable of vendoring projects within itself, and it's absolutely possible to support all three modalities of manual vendoring, vcpkg, and external dependencies. However, for obvious reasons this is generally not advisable. It's really complicated to write CMake scripts that actually work properly in every possible case, and many cases need to be prevented because they won't actually work.
All of that considered, I think the best existing solution here is CMake + vcpkg. When using external dependencies is desired, simply not using vcpkg is sufficient and the external dependencies will be picked up as long as they are installed. This gives an experience much closer to what you'd expect from a modern toolchain, but without limiting you from using external dependencies which is often unavoidable in C++ (especially on Linux.)
- Cross-compiling with C++ is hard.
In my opinion this is mostly not solved by the "defacto" toolchains. :)
It absolutely is possible to solve this. Clang is already better off than most of the other C++ toolchains in that it can handle cross-compiling with selecting cross-compile targets at runtime rather than build time. This avoids the issue in GCC where you need a toolchain built for each target triplet you wish to target, but you still run into the issue of needing libc/etc. for each target.
Both CMake and vcpkg technically do support cross-compilation to some extent, but I think it rarely works without some hacking around in practice, in contrast to something like Go.
If cross-compiling is a priority, the Zig toolchain offers a solution for C/C++ projects that includes both effortless cross-compiling as well as an easy to use build command. It is probably the closest to solving every (toolchain) problem C++ has, at least in theory. However, I think it doesn't really offer much for C/C++ dependencies yet. There were plans to integrate vcpkg for this I think, but I don't know where they went.
If Zig integrates vcpkg deeply, I think it would become the obvious choice for modern C++ projects.
I get that by not having a "standard" solution, C++ remains somewhat of a nightmare for people to get started in, and I've generally been doing very little C++ lately because of this. However I've found that there is actually a reasonable happy path in modern C++ development, and I'd definitely recommend beginners to go down that path if they want to use C++.
> Using vcpkg [...] When you configure your CMake build, your dependencies will be fetched and built appropriately for your targets, and then CMake's find package mechanism can be used just as it is used for external dependencies.
Yes! I believe this is powerful: if CMake is used properly, it does not have to know where the dependencies come from, it will just "find" them. So they could be installed on the system, or fetched by a package manager like vcpkg or conan, or just built and installed manually somewhere.
> Cross-compiling with C++ is hard.
Just wanted to mention the dockcross project here. I find it very useful (you just build in a docker container that has the toolchain setup for cross-compilation) and it "just works".
You also don't do "rustc build". Cargo is a build system too.
The whole point of pkg-config is to tell the compiler where those packages are.
I mean yeah, that's the point of having a tool like that. It's fine that the compiler doesn't know that, because its job is turning source into executables, not being the OS glue.
I'm not sure "having a linker" is a weakness? What are talking about?
It is true that you need to use the package manager to install the dependencies. This is more effort than having a package manager download them for you automatically, but on the other hand you don't end up in a situation where you need virtual environments for every application because they've all downloaded slightly different versions of the same packages. It's a bit of a philosophical argument as to what is the better solution.
The argument that it is too hard for students seems a bit overblown. The instructions for getting this up and running are:
I'd argue that is fewer steps than setting up almost any IDE. The Makefile is 6 lines and is easy to adapt to any similar size project. The only major weakness is headers, in which case you can do something like: If you change any header it will trigger a full system rebuild, but on C projects this is fine for a long time. It's just annoying that you have to create a new entry for every c file you add to the project instead of being able to tell make to add that to every object automatically. I suspect there is a very arcane way to do this, but I try to keep it as simple as possible.I'm not your parent, but the overall point of this kind of thing is that all of these individual steps are more annoying and error-prone than one command that just takes care of it. `cargo build` is all you need to build the vast majority of Rust projects. No need to edit the Makefile for those headers, or remember which commands you need to install the various dependencies, and name them individually, figuring out which name maps to your distro's naming scheme, etc. It's not just "one command vs five" it's "one command for every project vs five commands that differ slightly per project and per platform". `make` can come close to this, and it's why people love `./configure; make`, and there's no inherent reason why this couldn't paper over some more differences to make it near universal, but that still only gets you Unix platforms.
> but on the other hand you don't end up in a situation where you need virtual environments for every application because they've all downloaded slightly different versions of the same packages.
The real downside here is that if you need two different programs with two different versions of packages, you're stuck. This is often mitigated by things like foo vs foo2, but I have been in a situation where two projects both rely on different versions of foo2, and cannot be unified. The per-project dependency strategy handles this with ease, the global strategy cannot.
clang++ $(pkg-config --cflags --libs libtorch) qwen-3-nvfp4.cpp -o ./qwen-3-infer
Your move.
None of that is a problem
There are a lot of problems, but having to carefully construct the build environment is a minor one time hassle.
Then repeated foot guns going off, no toes left, company bankrupt and banking system crashed, again
> There are a lot of problems, but having to carefully construct the build environment is a minor one time hassle.
I've observed the existence in larger projects of "build engineers" whose sole job is to keep the project building on a regular cadence. These jobs predominantly seem to exist in C++ land.
> These jobs predominantly seem to exist in C++ land.
You wish.
These jobs exist for companies with large monorepos in other languages too and/or when you have many projects.
Plenty of stuff to handle in big companies (directory ownership, Jenkins setup, in-company dependency management and release versioning, developer experience in genernal, etc.)
Most of what I have seen came from technical debt aquired over decades. With some of the build engineers hired to "manage" that themselves not being treated as programmers and just adding on top of the mess with "fixes" that are never reviewed or even checked in. Had a fun time once after we reinstalled the build server and found out that the last build engineer created a local folder to store various dependencies instead of of using vcpkg to fetch everything as we had mandated for several years by then.
I have been "build engineer" across many projects, regardless of the set of programming languages being used, this is not specific to C++.
I’ve only ever seen this on extraordinarily complex codebases that mixed several languages. Pure C++ scales really well these days.
How is that even possible?
Wasn't CI invented to solve just this problem?
You have a team of 20 engineers on a project you want to maintain velocity on. With that many coooks, you have patches on top of patches of your build system where everyone does the bare minimum to meet the near term task only and it devolves into a mess no one wants to touch over enough time.
Your choice: do you have the most senior engineers spend time sporadically maintaining the build system, perhaps declaring fires to try to pay off tech debt, or hire someone full time, perhaps cheaper and with better expertise, dedicated to the task instead?
CI is an orthogonal problem but that too requires maintenance - do you maintain it ad-hoc or make it the official responsibility for someone to keep maintained and flexible for the team’s needs?
I think you think I’m saying the task is keeping the build green whereas I’m saying someone has to keep the system that’s keeping the build green going and functional.
> You have a team of 20 engineers on a project you want to maintain velocity on. With that many coooks, you have patches on top of patches of your build system ...
The scenario you are describing does not make sense for the commonly accepted industry definition of "build system." It would make sense if, instead, the description was "application", "product", or "system."
Many software engineers use and interpret the phrase "build system" to be something akin to make[0] or similar solution used to produce executable artifacts from source code assets.
0 - https://man.freebsd.org/cgi/man.cgi?query=make&apropos=0&sek...
I can only relate to you what I’ve observed. Engineers were hired to rewrite the Make-based system into Bazel and maintain it for single executable distributed to the edge. I’ve also observed this for embedded applications and other stuff.
I’m not sure why you’re dismissing it as something else without knowing any of the details or presuming I don’t know what I’m talking about.
> minor one time hassle
I dont know if you're jokingor just naïve, but cmake and the like are massive time sinks if you want anything beyond "here's a few source files, make me an application"
Wow, I don't understand what anything means in those memes. And I'm so glad I don't!
It seems to me that the people/committees who built C++ just spent decades inventing new and creative ways for developers to shoot themselves in the foot. Like, why does the language need to offer a hundred different ways to accomplish each trivial task (and 98 of them are bad)?
The road to hell is paved with 40 years of backwards compatibility requirements.
Ignorance is not something you should be proud of.
C++ is the only language which invests in archaeology over futurism.
You get to choose between 25 flint-bladed axes, some of which are coated in modern plastic, when you really want a chainsaw.
Being wise enough to know what's worth spending time to learn is, though. Knowledge isn't free.
I don't plan on ever using C++ again, but FWIW in Rust there are lots of cases where you specify `move` and stuff doesn't get moved, or don't specify it and it does, and it's also a specific feeling.
> why std::move doesn't move
Whenever someone asks you about std::move, or -values I have a ready-made answer for that: https://stackoverflow.com/a/27026280/1593077
but it's true that when a user first sees `std::move(x)` with no argument saying _where_ to move it to, they either get frustrated or understand they have to get philosophical :-)
> in C++, you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
This... doesn't really hold water. You have to learn about what the insane move semantics are (and the syntax for move ctors/operators) to do fairly basic things with the language. Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood. Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
> You have to learn about what the insane move semantics are (and the syntax for move ctors/operators) to do fairly basic things with the language
That is simply not true. You can write a lot of C++ code without even touching move stuff. Hell, we've been fine without move semantics for the last 30 years :P
> Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood
Partially true. operator*() is used through the standard library a lot, because it nicely wraps pointer semantics. Still, you don't have to know about implementation details, as they depend on how the standard library implements the underlying containers.
AFAIK operator<<() is mainly (ab)used by streams. And you can freely skip that part; many C++ developers find them unnecessarily slow and complex.
> Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
As long as you keep things simple, errors are going to be simple. The problem with "modern C++" is that people overuse these new features without fully comprehending their pros and cons, simply because they look cool.
Reminds me of the old quote
> everyone only uses 20% of C++, the problem is that everyone uses a different 20%
I suspect that percentage has shrunk quite a bit lately.
> Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood
you don't need to understand what an overloaded operator is doing any more than you have to understand the implementation of every function you call, recursively
I mean, you kinda do. Otherwise you won’t understand why bit-shifting to std::cout prints something, which is pretty much day 1 of C++ hello world introduction (yes, I know there are introductions that don’t use that silly syntax sugar. They’re rare, like it or not.)
Like, sure, you don’t have to understand cout’s implementation of operator <<, but you have to know a) that it’s overloadable in the first place, b) that overloads can be arbitrary functions on arbitrary types (surprising if coming from languages that support more limited operator overloading), and c) probably how to google/go-to-documentation on operators for a type to see what bit-shifting a string into stdio does.
That’s … a lot more to learn than, say, printf.
> I mean, you kinda do. Otherwise you won’t understand why bit-shifting to std::cout prints something,
but it's not bit-shifting, it's "<<", which can do different operations in different contexts.
Exactly, the "you dont have" part lasts until the first error message and then it's 10 feet of stl template instantiation error avalanche. And stl implementation is really advanced c++. Also, a lot of "modern" code cannot be debugged at all (because putting in a print statement breaks constexprness) and your only recourse is reading code.
This is in big part also because of committee, that prefers hundred-line template monster "can_this_call_that_v" to a language feature, probably thinking that by not including something in language standard and offloading it to library they do good job.
I’ve been programming C++ on a daily basis for more than 20 years and literally never use the >> operator. Never. Not rarely, never.
I didn't mention >>.
How do you shift bits to the right?
Manipulating bit patterns isn't really that common in most workloads.
right_shifted = value / (1 << bits)
Division is slow, though. You should use something like:
right_shifted = (int)(value * pow(2, -bits) - 0.5)
Or just rely on the compiler to automatically do trivial conversions. Which are pretty reliable these days.
I'm pretty sure the post you are responding to is not seriously suggesting using floating point multiplication and exponentiation as a performance optimization ;)
Offcourse not, that would be silly. Just overload the ^ operator like any normal person.
#include <iostream> #include <bitset>
class Bits { unsigned value; public: explicit Bits(unsigned v) : value(v) {}
};int main() { Bits x(0b00001111);
I dont trust my intel FPU to accurately representa that ;-)
What about sometimes?
Overloaded operators were a terrible mistake in every programming language I've encountered them in. (Yes, sorry Haskell, you too!)
I don't think move semantics are really that bad personally, and some languages move by default (isn't that Rust's whole thing?).
What I don't like is the implicit ambiguous nature of "What does this line of code mean out of context" in C++. Good luck!
I have hope for C++front/Cpp2. https://github.com/hsutter/cppfront
(oh and I think you can write a whole book on the different ways to initialize variables in C++).
The result is you might be able to use C++ to write something new, and stick to a style that's readable... to you! But it might not make everyone else who "knows C++" instantly able to work on your code.
Overloaded operators are great. But overloaded operators that do something entirely different than their intended purpose is bad. So a + operator that does an add in your custom numeric data type is good. But using << for output is bad.
The first programming language that used overloaded operators I really got into was Scala, and I still love it. I love that instead of Java's x.add(y); I can overload + so that it calls .add when between two objects of type a. It of course has to be used responsibly, but it makes a lot of code really more readable.
> The first programming language that used overloaded operators I really got into was Scala, and I still love it. I love that instead of Java's x.add(y); I can overload + so that it calls .add when between two objects of type a. It of course has to be used responsibly, but it makes a lot of code really more readable.
The problem, for me, with overloaded operators in something like C++ is that it frequently feels like an afterthought.
Doing "overloaded operators" in Lisp (CLOS + MOP) has much better "vibes" to me than doing overloaded operators in C++ or Scala.
Exactly, not allowing operator overload leads to Java hell, where we need verbose functions for calls that should be '+' or similar.
I will die on the hill that string concatenation should have its own operator, and overloading + for the operation is a mistake.
Languages that get it right: SQL, Lua, ML, Perl, PHP, Visual Basic.
I think it's fine when the language has sufficiently strict types for string concatenation.
Unfortunately, many languages allow `string + int`, which is quite problematic. Java is to blame for some of this.
And C++ is even worse since literals are `const char[]` which decays to pointer.
Languages okay by my standard but not yours include: Python, Ruby.
Those languages need a dedicated operator because they are loosely typed which would make it ambiguous like + in JavaScript.
But C++ doesn't have that problem. Sure, a separate operator would have been cleaner (but | is already used for bitwise or) but I have never seen any bug that resulted from it and have never felt it to be an issue when writing code myself.
Though then you can have code like "hello" + "world" that doesn't compile and "hello" + 10 that will do something completely different. In some situations you could actually end up with that by gradual modification of the original code..
Granted this is probably a novice-level problem.
Tangential, but Lua is the most write-only language I have had pleasure working with. The implementation and language design are 12 out of 10, top class. But once you need to read someone else's code, and they use overloads liberally to implement MCP and OODB and stuff, all in one codebase, and you have no idea if "." will index table, launch Voyager, or dump core, because everything is dispatched at runtime, it's panic followed by ennui.
Perl was one for me that I always had a little trouble reading again later.
I guess forth as well... hmmm
> string concatenation should have its own operator,
It does: |
That character was put in ASCII specifically for concatenation in PL/1.
Then came C.
D (as always) is clever: the operator is ~ So no confusion between addition and concatenation and you can keep | for or.
Question, does that work with other types? Say you have two u16 values, can you concatenate them together with ~ into a u32 without any shifting?
No, it doesn't. But I'm not sure that this matter, a sufficiently "smart" compiler understand that this is the same thing.
It works with arrays (both fixed size, and dynamically sized) and arrays; between arrays and elements; but not between two scalar types that don't overload opBinary!"~", so no it won't work between two `ushorts` to produce a `uint`
Alternatively, any implementation of operator+ should have a notional identity element, an inverse element and be commutative.
C++ would be a very different language if you couldn't use floats:
(NaN + 0.0) != 0.0 + NaN
Inf + -Inf != Inf
I suspect the algebraists would also be pissed if you took away their overloads for hypercomplex numbers and other exotic objects.
PHP overloads operators in other ways though.
But why, where does it become a problem?
This is so sad obvious it’s painful.
Arithmetic addition and sequence concatenation are very very different.
——
Scala got this right as well (except strings, Java holdover)
Concatenation is ++
Python managed to totally confuse this. "+" for built-in arrays is concatenation. "+" for NumPy arrays is elementwise addition. Some functions accept both types. That can end badly.
I kind of like that OCaml, which I've also not used a great deal, has different operators for adding floats vs integers etc...
Extremely clear at the "call site" what's going on.
Regrettably, “intended purpose” is highly subjective.
Sure, << for stream output is pretty unintuitive and silly. But what about pipes for function chaining/composition (many languages overload thus), or overriding call to do e.g. HTML element wrapping, or overriding * for matrices multiplied by simple ints/vectors?
Reasonable minds can and do differ about where the line is in many of those cases. And because of that variability of interpretation, we get extremely hard to understand code. As much as I have seen value in overloading at times, I’m forced to agree that it should probably not exist entirely.
> we get extremely hard to understand code
The thing is code without operator overloading is also hard to understand because you might have this math thing (BigIntegers, Matrices) and you can't use standard notation.
Depends on what you're trying to understand.
Let's say I have matrices, and I've overloaded * for multiplying a matrix by a matrix, and a matrix by a vector, and a matrix by a number. And now I write
If I'm trying to understand this as one of a series of steps of linear algebra that I'm trying to make sure are right, that is far more comprehensible than because it uses math notation, and that's closer to the way linear algebra is written.But if I take the exact same line and try to understand exactly which functions get called, because I'm worried about numerical stability or performance or something, then the first approach hides the details and the second one is easier to understand.
This is always the way it goes with abstraction. Abstraction hides the details, so we can think at a higher level. And that's good, when you're trying to think at the higher level. When you're not, then abstraction just hides what you're really trying to understand.
but is it element by element multiplication, or matrix multiplication? I honestly just prefer calling matmul.
If you've done any university-level maths you should have seen the + sign used in many other contexts than adding numbers, why should that be a problem when programming?
There is usually another operator used for concatenation in math though: | or || or ⊕
The first two are already used for bitwise and logical or and the third isn't available in ASCII so I still think overloading + was a reasonable choice and doesn't cause any actual problems IME.
So, what programmers wanted (yes, already before C++ got this) was what are called "destructive move semantics".
These assignment semantics work how real life works. If I give you this Rubik's Cube now you have the Rubik's Cube and I do not have it any more. This unlocks important optimisations for non-trivial objects which have associated resources, if I can give you a Rubik's Cube then we don't need to clone mine, give you the clone and then destroy my original which is potentially much more work.
C++ 98 didn't have such semantics, and it had this property called RAII which means when a local variable leaves scope we destroy any values in that variable. So if I have a block of code which makes a local Rubik's Cube and then the block ends the Rubik's Cube is destroyed, I wrote no code to do that it just happens.
Thus for compatibility, C++ got this terrible "C++ move" where when I give you a Rubik's Cube, I also make a new hollow Rubik's Cube which exists just to say "I'm not really a Rubik's Cube, sorry, that's gone" and this way, when the local variable goes out of scope the destruction code says "Oh, it's not really a Rubik's Cube, no need to do more work".
Yes, there is a whole book about initialization in C++: https://www.cppstories.com/2023/init-story-print/
For trivial objects, moving is not an improvement, the CPU can do less work if we just copy the object, and it may be easier to write code which doesn't act as though they were moved when in fact they were not - this is obviously true for say an integer, and hopefully you can see it will work out better for say an IPv6 address, but it's often better for even larger objects in some cases. Rust has a Copy marker trait to say "No, we don't need to move this type".
In particular, move is important if there is something like a unique_ptr. To make a copy, I have to make a deep copy of whatever the unique_ptr points to, which could be very expensive. To do a move, I just copy the bits of the unique_ptr, but now the original object can't be the one that owns what's pointed to.
Sure. Notice std::unique_ptr<T> is roughly equivalent to Rust's Option<Box<T>>
The C++ "move" is basically Rust's core::mem::take - we don't just move the T from inside our box, we have to also replace it, in this case with the default, None, and in C++ our std::unique_ptr now has no object inside it.
But while Rust can carefully move things which don't have a default, C++ has to have some "hollow" moved-from state because it doesn't have destructive move.
> I don't think move semantics are really that bad personally, and some languages move by default (isn't that Rust's whole thing?).
Rust's move semantics are good! C++'s have a lot of non-obvious footguns.
> (oh and I think you can write a whole book on the different ways to initialize variables in C++).
Yeah. Default init vs value init, etc. Lots of footguns.
Operator overloarding is essential for computer graphics libraries for vector and matrix multiplication, which becomes an illegible mess without.
I personally think that operator overloading itself is justified, but the pervasive scope of operator overloading is bad. To me the best solution is from OCaml: all operators are regular functions (`a + b` is `(+) a b`) and default bindings can't be changed but you can import them locally, like `let (+) = my_add in ...`. OCaml also comes with a great convenience syntax where `MyOps.(a + b * c)` is `MyOps.(+) a (MyOps.(*) b c)` (assuming that MyOps defines both `(+)` and `(*)`), which scopes operator overloading in a clear and still convenient way.
A benefit of operator overloads is that you can design drop-in replacements for primitive types to which those operators apply but with stronger safety guarantees e.g. fully defining their behavior instead of leaving it up to the compiler.
This wasn't possible when they were added to the language and wasn't really transparent until C++17 or so but it has grown to be a useful safety feature.
However C++ offers several overloads where you don't get to provide a drop-in replacement.
Take the short-circuiting boolean operators || and &&. You can overload these in C++ but you shouldn't because the overloaded versions silently lose short-circuiting. Bjarne just didn't have a nice way to write that so, it's not provided.
So while the expression `foo(a) && bar(b)` won't execute function bar [when foo is "falsy"] if these functions just return an ordinary type which doesn't have the overloading, if they do enable overloading both functions are always executed then the results given to the overloading function.
Edited:: Numerous tweaks because apparently I can't boolean today.
This is a failed attempt at muddying the waters. You don't know what move semantics is? You go learn what they are. Best/simplest way is to just disable your copy ctor.
You need to know the operator overload semantics for a particular use case? It is not exactly hidden lore, there are even man pages (libstdc++-doc, man 3 std::ostream) or just use std::println.
You are stuck instantiating std::vector? Then you will be stuck in any language anyway.
> in C++, you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
Only if you have full control on what others are writing. In reality, you're going to read a lot, lots of "clever" codes. And I'm saying as a person who have written a good amount of template meta programming codes. Even for me, some codes take hours to understand and I was usually able to cut 90% of its code after that.
I’m probably guilty of gratuitous template stuff, because it adds fun to the otherwise boring code I spend a lot of time on. But I feel like the 90% cutdowns are when someone used copy-paste instead of templates, overloads, and inheritance. I don’t think both problems happen at the same time, though, or maybe I misunderstood.
When people are obsessed with over-abstraction and over-generalization, you can often see FizzBuzz Enterprise in action where a single switch statement is more than enough.
I see that more with inheritance including pure virtual interface for things that only have one implementation and actor patterns that make the execution flow unnecessarily hard to follow. Basically, Java written in C++.
Most templates are much easier to read in comparison.
Agreed that template itself is not the problem but people are. It is still arguable that template is much more of fun to write clever codes because of its meta programming capability as well as its runtime performance advantages.
With a pure virtual interface you can at least track down the execution path as long as you can spot down where the object is created, but with template black magics? Good luck. Static dispatch with all those type traits and SFINAE practically makes it impossible to know before running it. Concept was supposed to solve this but this won't automatically solve all those problems lurking in legacy codes.
Being able to cut 90% of code sounds like someone was getting paid by LoC (which is also a practice from a time when C++ was considered a "modern" language).
Yes, but not always. For example, what can now be written as single "requires requires" or a short chain of "else if constexpr" statements, used to be sprawling, incomprehensible template class hierarchy, before that feature got added.
“If someone can, they will.”
Some lvalue move copy constructor double rainbow, and you’re left wondering wtf
> make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
Oh boy!
This person needs control.
That is where I left C++, a better C
Faint praise
It's a fine post except for this:
> Countless companies have cited how they improved their security or the amount of reported bugs or memory leaks by simply rewriting their C++ codebases in Rust. Now is that because of Rust? I’d argue in some small part, yes.
Just delete this. Even an hour's familiarity with Rust will give you a visceral understanding that "Rewrites of C++ codebases to Rust always yield more memory-safe results than before" is absolutely not because "any rewrite of an existing codebase is going to yield better results". If you don't have that, skip it, because it weakens the whole piece.
It reminded me of Firefox attempts at rewriting part of C++ codebase and failing - because of C++ complexity; but succeeding later because of Rust https://blog.rust-lang.org/2017/11/14/Fearless-Concurrency-I...
Yeah... if you don't have 100% feature test coverage (and more) you will almost certainly lose features (aka cause bugs)
C++ will always stay relevant. Software has eaten the world. That transition is almost complete now. The languages that were around when it happened will stay deeply embedded in our fundamental tech stacks for another couple decades at least, if not centuries. And C and C++ are the lion's share of that.
COBOL sticks around 66 years after its first release. Fortran is 68 years old and is still enormously relevant. Much, much more software was written in newer languages and has become so complex that replacements have become practically impossible (Fuchsia hasn't replaces Linux in Google products, wayland isn't ready to replace X11 etc)
It seems likely that C++ will end up in a similar place as COBOL or Fortran, but I don't see that as a good future for a language.
These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.
C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.
C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's going to be eclipsed by other languages and relegated to shrinking niches.
It will take generations to fully bootstrap compiler toolchains, language runtimes, and operating systems that depend on either C or C++.
Also depending on how AI assisted tooling evolves, I think it is not only C and C++ that will become a niche.
I already see this happening with the amount of low-code/no-code augmented with AI workflows, that are currently trending on SaaS products.
As long as people write software (no pun intended), software will follow trends. For instance, in many scientific ecosystems, Matlab was successfully replaced by Scipy. Which happens to get replaced by Julia. Things don't neccessarily have to stay the same. Interestingly, such a generational trend currently happens with Rust, despite there has been numerous other popular languages such as D or Zig which didn't have the same traction.
Sure, there are still Fortran codes. But I can hardly imagine that Fortran still plays a big role in another 68 years from now on.
Matlab/Scipy/Julia are totally different since those function more like user interfaces, they are directly user facing. You're not building an app with matlab (though you might be with scipy and julia, it's not the primary use case), you're working with data. C++ on the other hand underpins a lot of key infrastructure.
I am not saying that these languages will stay around forever, mind you. But we have solidified the tech stacks involving these languages by making them ridiculously complex. Replacement of a programming language in one of the core components can only come through gradual and glacially slow evolution at this point. "Rewrite it in XYZ" as a clean slate approach on a big scale is simply a pipe dream.
Re Matlab: I still see it thriving in the industry, for better or worse. Many engineers just seem to love it. I haven't seen many users of Julia yet. Where do you see those? I think that Julia deserves a fair chance, but it just doesn't have a presence in the fields I work in.
I've heard via former employees that Mathworks has conceded that Python ate Matlab's niche and that they're focusing on Simulink
You’re thinking of software that is being written today. GP is talking about software we use every day in every device on the planet that hasn’t changed since it was written 30+ years ago.
What is this software? E.g. Linux is 33 years old; barely a few percent of Linux 1.0 remains in a modern kernel, if we count lines of code.
Maybe GNU Emacs has a larger percentage remaining intact; at least it retains some architectural idiosyncrasies from 1980s.
As of Fortran, modern Fortran is a pretty nice and rich language, very unlike the Fortran-77 I wrote at high school.
> For instance, in many scientific ecosystems, Matlab was successfully replaced by Scipy. Which happens to get replaced by Julia
If by scientific ecosystems you mean people making prototypes for papers, then yes. But in commercial, industrial setting there is still no alternative for many of Matlab toolboxes, and as for Julia, as cool as it is, you need to be careful to distinguish between real usage and vetted marketing materials created by JuliaSim.
Scipy is a wrapper of Numpy, which is a wrapper of C and Fortran.
Especially the 'backend' languages that do all the heavy lifting for domain-specific software. Just in my vertical of choice, financial software, there are literally billions of lines of Java and .NET code powering critical systems. The code is the documentation, and there's little appetite to rewrite all that at enormous cost and risk.
Perhaps AI will get reliable enough to pour through these double-digit million LOC codebases and convert them flawlessly, but that looks like it's decades off at this point.
I'm not so sure. The user experience has really crystallized over the years. It's not hard to imagine a smart tv or something like it just reimplementing that experience in hardware in the not too distant future (say 2055 if transistor and memory scaling stall in 2035).
We live in a special time when general processing efficiency has always been increasing. The future is full of domain specific hardware (enabling the continued use of COBOL code written for slower mainframes). Maybe this will be a half measure like cuda or your c++ will just be a thin wrapper around a makeYoutube() ASIC
Of course if there is a breakthrough in general purpose computing or a new killer app it will wipe out all those products which is why they don't just do it now
> Software has eaten the world.
Bit off more than it could chew, no we all have indigestion
"you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language."
You could also inherit a massive codebase old enough to need a prostate exam that was written by many people who wanted to prove just how much of the language spec they could use.
If selecting a job mostly under the Veil of Ignorance, I'll take a large legacy C project over C++ any day.
I've been programming in c++ for 25 years (15 professionally) and I really don't see any reason to keep using it apart from dealing with legacy codebases.
Most arguments in the article boil down to "c++ has the reputation of X, which is partly true, but you can avoid problems with discipline". Amusingly, this also applies to assembly. This is _exactly_ why I don't want to code in c++ anymore: I don't want the constant cognitive load to remember not to shoot myself in the foot, and I don't want to spend time debugging silly issues when I screw up. I don't want the outdated tooling, compilation model and such.
Incidentally, I've also been coding in Rust for 5 years or so, and I'm always amazed that code that compiles actually works as intended and I can spend time on things that matter.
Going back to c++ makes me feel like a caveman coder, every single time.
Exactly. I’ve been using „Stupid Rust“ for years now where I just liberally clone if I can’t have my way. It‘s not bitten me yet and once the code compiles, it works.
> You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
Maybe you can do that. But you are probably working in a team. And inevitably someone else in your team thinks that operator overloading and template metaprogramming are beautiful things, and you have to work with their code. I speak from experience.
This is true and I will concede this point. Appreciate your feedback!
However if I may raise my counter point I like to have a rule that C++ should be written mostly as if you were writing C as much as possible until you need some of it's additional features and complexities.
Problem is when somebody on the team does not share this view though, that much is true :)
Counter-counter point: if you're going to actively avoid using the majority of a language's features and for the most part write code in it as if it were a different language, doesn't that suggest the language is deeply flawed?
(Note: I'm not saying it is deeply flawed, just that this particular way of using it suggests so).
I wouldn't necessarily put it like that no. I'd say all languages have features that fit certain situations but should be avoided in other situations.
It's like a well equiped workshop, just because you have access to a chainsaw but do not need to use it to build a table does not mean it's a bad workshop.
C is very barebones, languages like C++. C#, Rust and so on are not. Just because you don't need all of it's features does not make those languages inherently bad.
Great question or in this case counter-counter point though.
> However if I may raise my counter point I like to have a rule that C++ should be written mostly as if you were writing C as much as possible until you need some of it's additional features and complexities.
How do you define “need” for extra features? C and C++ can fundamentally both do the same thing so if you’re going to write C style C++, why not just write C and avoid all of C++’s foot guns?
RAII. It's the major C++ feature I miss in C, and the one that fixes most memory leak problems in C. Also, std::vector, which solves the remaining memory leak (and most bounds problems) in C. And std::string, which solves the remaining memory leak problems.
Excellent question, I guess it depends on the team mostly how they define which features they need and which are better avoided.
As for why not just go for C. You can write C++ fully as if it were C, you can not ever turn C into C++
A pet peeve of mine is when people claim C++ is a superset of C. It really isn't. There's a lot of little nuanced differences that can bite you.
Ignore the fact that having more keywords in C++ precludes the legality of some C code being C++. (`int class;`)
void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
C++20 does have C11's designated initialization now, which helps in some cases, but that was a pain for a long time.
enums and conversion between integers is very strict in C++.
`char * message = "Hello"` is valid C but not C++ (since you cannot mutate the pointed to string, it must be `const` in C++)
C99 introduced variadic macros that didn't become standard C++ until 2011.
C doesn't allow for empty structs. You can do it in C++, but sizeof(EmptyStruct) is 1. And if C lets you get away with it in some compilers, I'll bet it's 0.
Anyway, all of these things and likely more can ruin your party if you think you're going to compile C code with a C++ compiler.
Also don't forget if you want code to be C callable in C++ you have to use `extern "C"` wrappers.
> It really isn't. There's a lot of little nuanced differences that can bite you.
These are mostly inconsequential when using code other people write. It is trivial to mix C and C++ object files, and where the differences (in headers) do matter, they can be ifdefed away.
> void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
This makes sense because void* -> T* is a downcast. I find the C behavior worse.
> enums and conversion between integers is very strict in C++.
As it should, but unscoped enums are promoted to integers the same way they are in C
> `char * message = "Hello"` is valid C but not C++
Code smell anyway, you can and should use char[] in both languages
You didn't mention the difference in inline semantics which IMO has more impact than what you cited
I was not trying to be exhaustive.
And I think you're downplaying many of the ones I mentioned, but I think this level of "importance" is subjective to the task at hand and one's level of frustrations.
> you can and should use char[] in both languages
Not for temporaries initialized from a string constant. That would create a new array on the stack which is rarely what you want.
And for globals this would preclude the the data backing your string from being shared with other instances of the same string (suffix) unless you use non-standard compiler options, which is again undesirable.
In modern C++ you probably want to convert to a string_view asap (ideally using the sv literal suffix) but that has problems with C interoperability.
Right, I've checked that char foo = "bar"; is indeed the same as the const char variant (both reference a string literal in rodata), which IMO makes it worse.
About string literals, the C23 standard states:
therefore `char foo = "bar";` is very bad practice (compared to using const char).I assumed you wanted a mutable array of char initializable from a string literal, which is provided by std::string and char[] (depending on usecase).
> In modern C++ you probably want to convert to a string_view asap (ideally using the sv literal suffix)
I know as much
The two will also continue to diverge over time, after all, C2y should have the defer feature, which C++ will likely never add. Even if we used polyfills to let C++ compilers support it, the performance characteristics could be quite different; if we compare a polyfill (as suggested in either N3488 or N3434) to a defer feature, C++ would be in for a nasty shock as the "zero cost abstractions" language, compared to how GCC does the trivial re-ordering and inlining even at -O1, as quickly tested here: https://godbolt.org/z/qoh861Gch
I used the [[gnu::cleanup]] attribute macro (as in N3434) since it was simple and worked with the current default GCC on CE, but based on TS 25755 the implementation of defer and its optimisation should be almost trivial, and some compilers have already added it. Oh, and the polyfills don't support the braceless `defer free(p);` syntax for simple defer statements, so there goes the full compatibility story...
While there are existing areas where C diverged, as other features such as case ranges (N3370, and maybe N3601) are added that C++ does not have parity with, C++ will continue to drift further away from the "superset of C" claim some of the 'adherents' have clung to for so long. Of course, C has adopted features and syntax from C++ (C2y finally getting if-declarations via N3356 comes to mind), and some features are still likely to get C++ versions (labelled breaks come to mind, via N3355, and maybe N3474 or N3377, with C++ following via P3568), so the (in)compatibility story is simply going to continue getting more nuanced and complicated over time, and we should probably get this illusion of compatibility out of our collective culture sooner rather than later.
> A pet peeve of mine is when people claim C++ is a superset of C. It really isn't. There's a lot of little nuanced differences that can bite you.
> Ignore the fact that having more keywords in C++ precludes the legality of some C code being C++. (`int class;`)
Your very first example reverses the definitions of superset and subset. "C++ is a superset of C" implies that C++ will have at least as many, if not more, keywords than C.
Other examples make the same mistake.
> void * implicit casting in C just works, but in C++ it must be an explicit cast
In C, casting a `void *` is a code smell, I feel.
Most confusing one is how the meaning of `const` differs between C and C++; I'm pretty certain the C `const` keyword is broken compared to `const` in C++.
> C++20 does have C11's designated initialization now, which helps in some cases, but that was a pain for a long time.
C++ designated initializers are slightly different in that the initialization order must match the declared member order. That is not required in C.
It also completely negates their utility, even though the exact "problem" they always bring up is already solved when you use normal constructors.
Not sure I understand, since they're available in c++ designated initializes are one of the features I use most, to the point of making custom structs to pass the arguments if a type cannot be changed to be an aggregate. It makes a huge positive difference in readability and has helped me solve many subtle bugs ; and not initializing things in order will throw a warning so you catch it immediately in your ide
The problem is that there are a lot of APIs (even system and system-ish ones) that don't want to specify the order of their fields (or outright differ between platforms). Or that can't use a meaningful order due to ABI compatibility, yet the caller wants to pass fields in a meaningful order.
platform APIs like this are likely much less less than 1% of the things I call in my codebases. The few files i have open right now have absolutely no such call.
> You can do it in C++, but sizeof(EmptyStruct) is 1.
Unless you use the C++20 [[no_unique_address]] attribute, in which case it is 0 (if used correctly).
I completely forgot about that!
When it comes to programming, I generally decide my thoughts based on pain-in-my-ass levels. If I constantly have to fiddle with something to get it working, if it's fragile, if it frequently becomes a pain point - then it's not great.
And out of all the tools and architecture I work with, C++ has been some of the least problematic. The STL is well-formed and easy to work with, creating user-defined types is easy, it's fast, and generally it has few issues when deploying. If there's something I need, there's a very high chance a C or C++ library exists to do what I need. Even crossing multiple major compiler versions doesn't seem to break anything, with rare exceptions.
The biggest problem I have with C++ is how easy it is to get very long compile times, and how hard it feels like it is to analyze and fix that on a 'macro' (whole project) level. I waste ungodly amounts of time compiling. I swear I'm going to be on deaths door and see GCC running as my life flashes by.
Some others that have been not-so-nice:
* Python - Slow enough to be a bottleneck semi-frequently, hard to debug especially in a cross-language environment, frequently has library/deployment/initialization problems, and I find it generally hard to read because of the lack of types, significant whitespace, and that I can't easily jump with an IDE to see who owns what data. Also pip is demon spawn. I never want to see another Wheel error until the day I die.
* VSC's IntelliSense - My god IntelliSense is picky. Having to manually specify every goddamn macro, one at a time in two different locations just to get it to stop breaking down is a nightmare. I wish it were more tolerant of having incomplete information, instead of just shutting down completely.
* Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
* CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
CMake is not a great language, but great effort has been put into cleaning up how things should be done. However you can't just upgrade, someone needs to go through the effort of using all that new stuff. In almost all projects the build system is an after thought that developers touch as little as possible to make things work and so it builds cruft constantly.
You can do much better in CMake if you put some effort into cleaning it up - I have little hope anyone will do this though. We have a hard time getting developers to clean up messes in production code and that gets a lot more care and love.
I agree. Unless the project is huge, it's totally possible to use CMake in a maintainable way. It just requires some effort (not so much, but not nothing).
If you are willing to give up incremental compilation, concatenating all C++ files into a single file and compiling that on a single core will often outperform a multi-core compilation. The reason is that the compiler spends most of its time parsing headers and when you concentrate everything into a single file (use the C preprocessor for this), it only needs to parse headers once.
Merely parsing C++ code requires a higher time complexity than parsing C code (linear time parsers cannot be used for C++), which is likely where part of the long compile times originate. I believe the parsing complexity is related to templates (and the headers are full of them), but there might be other parts that also contribute to it. Having to deal with far more abstractions is likely another part.
That said, I have been incrementally rewriting a C++ code base at a health care startup into a subset of C with the goal of replacing the C++ compiler with a C compiler. The closer the codebase comes to being C, the faster it builds.
> Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
You really should not have global data. Modules are the way to go and have been since Fortran90.
> CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
I like how you wrote my feelings so accurately :D
>You really should not have global data. Modules are the way to go and have been since Fortran90.
Legacy code, just have to deal with it. This code predates F90.
> I never want to see another Wheel error until the day I die.
What exactly do you mean by a "Wheel error"? Show me a reproducer and a proper error message and I'll be happy to help to the best of my ability.
By and large, the reason pip fails to install a package is because doing so requires building non-Python code locally, following instructions included in the package. Only in rare cases are there problems due to dependency conflicts, and these are usually resolved by creating a separate environment for the thing you're trying to install — which you should generally be doing anyway. In the remaining cases where two packages simply can't co-exist, this is fundamentally Python's fault, not the installer's: module imports are cached, and quite a lot of code depends on the singleton nature of modules for correctness, so you really can't safely load up two versions of a dependency in the same process, even if you hacked around the import system (which is absolutely doable!) to enable it.
As for finding significant whitespace (meaning indentation used to indicate code structure; it's not significant in other places) hard to read, I'm genuinely at a loss to understand how. Python has types; what it lacks is manifest typing, and there are many languages like this (including Haskell, whose advocates are famous for explaining how much more "typed" their language is than everyone else's). And Python has a REPL, the -i switch, and a built-in debugger in the standard library, on top of not requiring the user to do the kinds of things that most often need debugging (i.e. memory management). How can it be called hard to debug?
Unfortunately that Wheel situation was far enough back now that I don't have details on hand. I just know it was awful at the time.
As for significant whitespace, the problem is that I'm often dealing with files with several thousand lines of code and heavily nested functions. It's very easy to lose track of scope in that situation. Am I in the inner loop, or this outer loop? Scrolling up and down, up and down to figure out where I am. Feels easier to make mistakes as well.
It works well if everything fits on one screen, it gets harder otherwise, at least for me.
As for types, I'm not claiming it's unique to Python. Just that it makes working with Python harder for me. Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
As for debugging, it's great if you have pure Python. Mix other languages in and suddenly it becomes pain. There's no way to step from another language into Python (or vice-versa), at least not cleanly and consistently. This isn't always true for compiled->compiled. I can step from C++ into Fortran just fine.
> Am I in the inner loop, or this outer loop? Scrolling up and down, up and down to figure out where I am.
Find an IDE or extension which provides the nesting context on top of the editor. I think vs code has it built in these days.
Pip has changed a lot in the last few years, and there are many new ecosystem standards, along with greater adoption of existing ones.
> I'm often dealing with files with several thousand lines of code and heavily nested functions.
This is the problem. Also, a proper editor can "fold" blocks for you.
> Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
If you want to use annotations, you can, and have been able to since 3.0. Since 3.5 (see https://peps.python.org/pep-0484/; it's been over a decade now), there's been a standard for understanding annotations as type information, which is recognized by multiple different third-party tools and has been iteratively refined ever since. It just isn't enforced by the language itself.
> Mix other languages in and suddenly it becomes pain.... This isn't always true for compiled->compiled.
Sure, but then you have to understand the assembly that you've stepped into.
>This is the problem. Also, a proper editor can "fold" blocks for you.
I can't fix that. I just work here. I've got to deal with the code I've got to deal with. And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
>Sure, but then you have to understand the assembly that you've stepped into.
Assembly? I haven't touched raw assembly since college.
> And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
How exactly are they more helpful than following the line of the indentation that you're supposed to have as a matter of good style anyway? Do you not have formatting tools? How do you not have a tool that can find the top of a level of indentation, but do have one that can find a paired brace?
>Assembly? I haven't touched raw assembly since college.
How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
> How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
Executables with debug symbols contain the names of the source files it was built from. Your debugger understands the debug symbols, or you can use tools like `addr2line` to find the source file and line number of an instruction in an executable.
Debugger does not need to understand the source language. It's possible to cross language boundaries in just vanilla GDB for example.
>How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
I don't know what IDE GP might be using, but mixed-language debuggers for native code are pretty simple as long as you just want to step over. Adding support for Fortran to, say, Visual Studio wouldn't be a huge undertaking. The mechanism to detect where to put the cursor when you step into a function is essentially the same as for C and C++. Look at the instruction pointer, search the known functions for an address that matches, and jump to the file and line.
No. To everything
Python is up there (down there?) with Windows as a poster child for popularity does not imply quality
If it stayed in its lane as a job control language, and they used semantic versioning then it would be OK.
But the huge balls of spaghetti Python code, that must be run in a virtual environment cause versions drive me mental
> Yes, C++ can be unsafe if you don’t know what you’re doing. But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
I think this is one of the worst (and most often repeated arguments) about C++. C and C++ are inherently unsafe in ways that trip up _all_ developers even the most seasoned ones, even when using ALL the modern C++ features designed to help make C++ somewhat safer.
There are two levels on which this argument feels weak:
* The author is confusing memory safety with other kinds of safety. This is evident from the fact that they say you can write unsafe code in GC languages like python and javascript. unsafe != memory unsafe. Rust only gives you memory safety, it won't magically fix all your bugs.
* The slippery slope trick. I've seen this so often, people say because Rust has unsafe keyword it's the same as c/c++. The reason it's not is because in c/c++ you don't have any idea where to look for undefined behaviour. In Rust at least the code points you to look at the unsafe blocks. The difference is of degree which for practial purposes makes a huge difference.
The problem with C++ vs. unsafety is that there is really no boundary: All code is by default unsafe. You will need to go to great lengths to make it all somewhat safe, and then to even greater lengths to ensure any libraries you use won't undermine your safety.
In Rust, if you have unsafe code, the onus is on you to ensure its soundness at the module level. And yes, that's harder than writing the corresponding C++, but it makes the safe code using that abstraction a lot easier to reason about. And if you don't have unsafe code (which is possible for a lot of problems), you won't need to worry about UB at all. Imagine never needing to keep all the object lifetimes in your head because the compiler does it for you.
Good article overall. There's one part I don't really agree with:
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
This has me scratching my head a bit. In spite of C++ being nearly a superset of C, they are very different languages, and idiomatic C++ doesn't look very much like C. In fact, I'd argue that most of the stuff C++ adds to C allows you to write code that's much cleaner than the equivalent C code, if you use it the intended way. The one big exception I can think of is template metaprogramming, since the template code can be confusing, but if done well, the downstream code can be incredibly clean.
There's an even bigger problem with this recommendation, which is how it relates to something else talked about in the article, namely "safety." I agree with the author that modern C++ can be a safe language, with programmer discipline. C++ offers a very good discipline to avoid resource leaks of all kinds (not just memory leaks), called RAII [1]. The problem here is that C++ code that leverages RAII looks nothing like C.
Stepping back a bit, I feel there may be a more fundamental fallacy in this "C++ is Hard to Read" section in that the author seems to be saying that C++ can be hard to read for people who don't know the language well, and that this is a problem that should be addressed. This could be a little controversial, but in my opinion you shouldn't target your code to the level of programmers who don't know the language well. I think that's ultimately neither good for the code nor good for other programmers. I'm definitely not an expert on all the corners of C++, but I wouldn't avoid features I am familiar with just because other programmers might not be.
[1] https://en.cppreference.com/w/cpp/language/raii.html
IN MY DEFENS
BJARNE ME DEFEND
https://monarchies.fandom.com/wiki/In_my_defens_God_me_defen...
Great article. Modern C++ has come a really long way. I think lots of people have no idea about the newer features of the standard library and how much they minimize footguns.
Lambdas, a modern C++ feature, can borrow from the stack and escape the stack. (This led to one of the more memorable bugs I've been part of debugging.) It's hard to take any claims about modern C++ seriously when the WG thought this was an acceptable feature to ship.
Of course, the article doesn't mention lambdas.
Capturing lambdas are no different from handwritten structures with operator() ("functors"), therefore it makes no sense castrating them.
Borrowing from stack is super useful when your lambda also lives in the stack; stack escaping is a problem, but it can be made harder by having templates take Fn& instead of const Fn& or Fn&&; that or just a plain function pointer.
Borrowing from the stack is definitely useful. I do it all the time, safely (truly safely), in Rust.
Convenience is a difference in kind.
Like, I'm not god's gift to programming or anything, but I'm decently good at it, and I wrote a use-after-return bug due to a lambda reference last week.
Not an issue if you make use of the tools available:
https://godbolt.org/z/xW14hGeoj
It looks like (a) this is a warning, not an error (why? the code is always wrong) and (b) the warning was added in clang 21 which came out this year. I also suspect that it wouldn't be able to detect complex cases that require interprocedural analysis.
The bug I saw happened a few years ago, and convinced me to switch to Rust where it simply cannot happen.
They can, but I find in practice that I never do this so it doesn't matter.
I'm glad, but my problem is with the claim that modern C++ is safer. They added new features that are very easy to misuse.
Meanwhile in Rust you can freely borrow from the stack in closures, and the borrow checker ensures that you'll not screw up. That's what (psychological) safety feels like.
Lambdas are syntactic sugar over functors, and it was possible all along to define a functor that stores a local address and then return it from the scope, thus leaving a dangling pointer. They don't introduce any new places for bugs to creep in, other than confusing programmers who are used to garbage-collected languages. That C++11 is safer than C++98 is still true, as this and other convenience features make it harder to introduce bugs from boilerplate code.
The ergonomics matter a lot. Of course a lambda is equivalent to a functor that stores a local reference, but making errors with lambdas requires disturbingly little friction.
In any case, if you want safety and performance, use Rust.
>making errors with lambdas requires disturbingly little friction
Not any less than other parts of the language. If you capture by reference you need to mind your lifetimes. If you need something more dynamic then capture by copy and use pointers as needed. It unfortunate the developer who introduced that bug you mentioned didn't keep that in mind, but this is not a problem that lambdas introduced; it's been there all along. The exact same thing would've happened if they had stored a reference to a dynamic object in another dynamic object. If the latter lives longer than the former you get a dangling reference.
>In any case, if you want safety and performance, use Rust.
Personally, I prefer performance and stability. I've already had to fix broken dependencies multiple times after a new rustc version was released. Wake me up when the language is done evolving on a monthly basis.
So you agree that modern C++ adds new ways to introduce memory unsafety?
Why wouldn't it be acceptable to ship? This is how everything works in C++. You always have to mind your references.
Exactly! This is my problem with the C++ community's culture. At no point is safety put first.
Yeah, it's great that the C++ community starts to take safety in consideration, but one has to admit that safety always comes as the last priority, behind compatibility, convenience, performance and expressiveness.
Its worse. The day I discovered that std::array is explicitly not range/bounds checked by default I really wanted to write some angry letters to the committee members.
Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around? I promptly went into my standard library and reversed that decision because if i'm going to the trouble to use a C++ array class, it better damn well give me a tiny bit of additional protection. The .at() call should have been the version that reverted to C array behavior without the bounds checking.
And its these kinds of decisions repeated over and over. I get its a committee. Some of the decisions won't be the best, but by 2011 everyone had already been complaining about memory safety issues for 15+ years and there wasn't enough politics on the comittee to recognize that a big reason for using C++ over C was the ability of the language to protect some of the sharper edges of C?
>Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around?
Because the point was not to make an array type that's safe by default, but rather to make an array type that behaves like an object, and can be returned, copied, etc. I mean, I agree with you, I think operator[]() should range-check by default, but you're simply misunderstanding the rationale for the class.
Which goes to the GP's point, which is that security and robustness are not on the radar.
And my point in providing a concrete example, where a decision was made to prioritize unsafe behavior in a known problematic area, when they could just as well have made a half dozen other decisions which would have solved a long standing problem rather than just perpetuating it with some new syntactic sugar.
I didn't dispute that, I was simply addressing the point about std::array. The class is not meant to be "arrays, but as good as they could possibly be". It's "arrays, but as first-class objects instead of weird language constructs".
That said, making std::array::operator[]() range-checking would have been worse, because it would have been the only overload that did that. Could they have, in the same version, made all the overloads range-checking? Maybe, I don't know.
std::array [] is checked if you have the appropriate build settings toggled, which of course you should during development.
The same applies to many of the other baseless complaints I'm seeing here, learn to use your tools fools.
Good news! Contracts were approved for c++26 so they should be in compilers by like 2031 and then you can configure arrays and vectors to abort on out-of-bounds errors instead of corrupting your program.
Let no one accuse the committee of being unresponsive.
This is like writing an article entitled "In Defense of Guns", and then belittling the fact it can kill by saying "You always have to track your bullets".[1]
[1] Not me making this up - I started getting into guns and this is what people say.
To me it's as if someone releases a new gun model and people single that gun out and complain that if you shoot someone with it they may die. Like it's a critique of guns as a concept not of that particular one.
In a complete tangent I think that "smart guns" that only let you shoot bullseye targets, animals and designated un-persons are not far off.
I eagerly await the day when they do away with the distinction between ".cpp" and ".hpp" files and the textual substitution nature of "#include" and replace them all with a proper module system.
Modules are a disaster.
It's hard enough to get programmers to care enough about how their code affects build times. Modules make it impossible for them to care, and will lead to horrible problems when building large projects.
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
You know, not sure I even agree with the memory leaks part. If you define a memory leak very narrowly as forgetting to free a pointer, this is correct. But in my experience working with many languages including C/C++, forgotten pointers are almost never the problem. You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects or bursty memory allocation patterns. And these occur in all languages.
Nice thing about Rust is not that you cannot write such code, it is you know exactly where you used peaky memory or re-interpreted something as a unsigned integer or replaced your program stack with something else. All of such cases require unsafe blocks in Rust. It is a screaming indicator "here be dragons". It is the do not press this red button unless you intend to.
In C and C++ no such thing exists. It is walking in a minefield. It is worse with C++ because they piled so much stuff, nobody knows on the top of their head how a variable is initialized. The initialization rules are insane: https://accu.org/journals/overload/25/139/brand_2379/
So if you are doing peaky memory stuff with complex partially self-initializing code in C++, there are so many ways of blowing yourself and your entire team up without knowing which bit of code you committed years ago caused it.
> All of such cases require unsafe blocks in Rust.
It's true that Rust makes it much harder to leak memory compared to C and even C++, especially when writing idiomatic Rust -- if nothing else, simply because Rust forces the programmer to think more deeply about memory ownership.
But it's simply not the case that leaking memory in Rust requires unsafe blocks. There's a section in the Rust book explaining this in detail[1] ("memory leaks are memory safe in Rust").
[1] https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
My comment is more of an answer to this
> You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects
I use Rust in a company in a team who made the C++ -> Rust switch for many system services we provide on our embedded devices. I use Rust daily. I am aware that leaking is actually safe.
C++'s design encourages that kind of allocation "leak" though. The article suggests using smart pointers, so let's take an example from there and mix make_shared with weak_ptr. Congrats, you've now extended the lifetime of the allocation to whatever the lifetime of your weak pointer is.
Rc::Weak does the same thing in Rust, but I rarely see anyone use it.
std::weak_ptr is rarely used in C++ too - "I don't care if this thing goes away but still want to keep a reference to it" is just not good design in most cases.
Huh? What do you mean? The point of std::weak_ptr is that it's non-owning, so it has no effect on the lifetime of the pointed object.
You are correct, it does not affect the lifetime of the pointed object (pointee).
But a shared_ptr manages at least 3 things: control block lifetime, pointee lifetime, and the lifetime of the underlying storage. The weak pointer shares ownership of the control block but not the pointee. As I understand this is because the weak_ptr needs to modify the control block to try and lock the pointer and to do so it must ensure the control block's lifetime has not ended. (It manages the control blocks lifetime by maintaining a weak count in the control block but that is not really why it shares ownership.)
As a bonus trivia, make_shared uses a single allocation for both the control block and the owned object's storage. In this case weak pointers share ownership of the allocation for the pointee in addition to the control block itself. This is viewed as an optimization except in the case where weak pointers may significantly outlive the pointee and you think the "leaked" memory is significant.
It has no effect on the lifetime of the object, but it can affect the lifetime of the allocation. The reason is that weak_ptr needs the control block, which make_shared bundles into the same allocation as the object for optimization reasons.
Quoting cppreference [0]:
[0] https://en.cppreference.com/w/cpp/memory/shared_ptr/make_sha...They're both problems, and forgotten pointers are more common in C, or C++ before 2011 (unique_ptr vs manual new/delete).
What's worse in languages like Go, which I love, is that you won't even immediately how to solve this unless you have experience dropping down into doing things you just would have normally done in C or C++.
Even the Go authors themselves on Go's website display a process of debugging memory usage that looks identical to a workflow you would have done in C++. So, like, what's the point? Just use C++.
I really do think Go is nice, but at this point I would relegate it to the workplace where I know I am working with a highly variable team of developers who in almost all cases will have a very poor background in debugging anything meaningful at all.
C++ as a language is truly a mess. But as far as a niche it has a pretty unique offering, even today.
My background is games and I've been heavily in Unreal lately. The language feels modern enough with smart pointers and such. Their standard library equivalent is solid.
The macros still feel very hacky and, ironically, Unreal actually does its own prepass over the source to parse for certain macros.... kind of shows that it's not a good language feature if that's needed. BUT the syntax used fits right into the language, so it feels idiomatic enough.
Templates are as powerful as they are just a mess to read.
Does anything come close to the speed and flexibility of the language? I think the biggest reason C++ sticks around is momentum but beyond that nothing _really_ replaces the messy but performance critical nature of it.
The complexity argument is just not true. You do have to know this stuff in c++, you run into it all the time.
I wish I didn’t have to know about std::launder but I do
You need something like std::launder in any systems language for certain situations, it isn’t a C++ artifact.
Before C++ added it we relied on undefined behavior that the compilers agreed to interpret in the necessary way if and only if you made the right incantations. I’ve seen bugs in the wild because developers got the incantations wrong. std::launder makes it explicit.
For the broader audience because I see a lot of code that gets this wrong, std::launder does not generate code. It is a compiler barrier that blocks constant folding optimizations of specific in-memory constants at the point of invocation. It tells the compiler that the constant it believes lives at a memory address has been modified by an external process. In a C++ context, these are typically restricted to variables labeled ‘const’.
This mostly only occurs in a way that confuses the compiler if you are doing direct I/O into the process address space. Unless you are a low-level systems developer it is unlikely to affect you.
Do you see all the concepts you had to describe here?
> Unless you are a low-level systems developer it is unlikely to affect you.
Making new data structure is common. Serializing classes into buffers is common.
If you are doing something equivalent to placement new on top of existing objects, the compiler often sees that. If that is your case you can avoid it in most cases. That is not what std::launder is for. It is for an exotic case.
std::launder is a tool for object instances that magically appear where other object instances previously existed but are not visible to the compiler. The typical case is some kind of DMA like direct I/O. The compiler can’t see this at compile time and therefore assumes it can’t happen. std::launder informs the compiler that some things it believes to be constant are no longer true and it needs to update its priors.
With placement new you need to hold on to the pointer. If you need to get an object back out of a buffer you need launder.
Std::vector needs launder.
> Making new data structure is common. Serializing classes into buffers is common.
You don't want std::launder for any of that. If you must create object instances from random preexisting bytes you want std::bit_cast or https://en.cppreference.com/w/cpp/memory/start_lifetime_as.h...
Alas none of gcc/clang/msvc(?) have implemented start_lifetime_as, so if you want to create an object in-place and obtain a mutable pointer to it, you're stuck with the:
trick until they properly implement it. For MMIO, reintepret_cast from integer is most likely fine.I feel like C++ is a bunch of long chains of solutions creating problems that require new solutions, that start from claiming that it can do things better than C.
Problem 1: You might fail to initialize an object in memory correctly.
Solution 1: Constructors.
Problem 2: Now you cannot preallocate memory as in SLAB allocation since the constructor does an allocator call.
Solution 2: Placement new
Problem 3: Now the type system has led the compiler to assume your preallocated memory cannot change since you declared it const.
Solution 3: std::launder()
If it is not clear what I mean about placement new and const needing std::lauder(), see this:
https://miyuki.github.io/2016/10/21/std-launder.html
C has a very simple solution that avoids this chain. Use structured programming to initialize your objects correctly. You are not going to escape the need to do this with C++, but you are guaranteed to have to consider a great many things in C++ that would not have needed consideration in C since C avoided the slippery slope of syntactic sugar that C++ took.
But the c++ solution is transparent to the user. You can write entire useful programs that will use std:: containers willy-nilly and all propagate their allocators automatically and recursively without you having to lift a finger because all the steps you've mentioned have been turned in a reusable library, once.
I'd file that in the category of "what I can't recreate, I can't understand".
With that argument you could discard JavaScript because V8 is hard to understand.
C++ giving you the ability to create your own containers that equal the standard library is a bonus, it doesn't make those containers harder to use.
That's a false comparison. There's a huge difference between a standard container library, and the combination of a (a) best-in-class byte code interpreter with a (b) caching, optimizing JIT, supported by (c) a best-in-class garbage collector.
I would argue that it's reasonable to say that creating a robust data structure library at the level of the STL shouldn't be that arcane.
Yes, and to get that nice feature you have to pay an enormous cost.
I absolutely agree - your chain of reasoning follows as well. It doesn't seem like it at first, but the often praised constructor/destructor is actually a source of incredible complexity, probably more than virtual.
Problem 1 happens, say, 10% of the time when using a C struct.
Problem 2 happens only when doing SLAB allocations - say, 1% of the time when using a C++ class. (Might be more or less, depending on what problem space you're in.)
Problem 3 happens only if you are also declaring your allocated stuff const - say, maybe 20% of the time?
So, while not perfect, each solution solves most of the problem for most of the people. Complaining about std::launder is complaining that solution 2 wasn't perfect; it's not in any way an argument that solution 1 wasn't massively better than problem 1.
I would really like to see more people who have never written C++ before port a Rust program to C++. In my opinion, one can argue it may be easy to port initially but it is an order of magnitude more complex to maintain.
Whereas the other around, porting a C++ program to Rust without knowing Rust is challenging initially (to understand the borrow checker) but orders of magnitude easier to maintain.
Couple that with easily being about to `cargo add` dependencies and good language server features, and the developer experience in Rust blows C++ out of the water.
I will grant that change is hard for people. But when working on a team, Rust is such a productivity enhancer that should be a no-brainer for anyone considering this decision.
I'm a developer since 30 years. I program C#, Rust, Java, some TS etc. I can probably go to most repositories on github and at least clone and build them. I have failed - repeatedly - to build even small C++ libraries despite reasonable effort. And that's not even _writing any C++_. Just installing the tooling around CMake etc is completely Kafkaesque.
The funniest thing happened when I needed to compile a C file as part of a little Rust project, and it turned out one of the _easiest_ ways I've experienced of compiling a tiny bit of C (on Windows) was to put it inside my Rust crate and have cargo do it via a C compiler crate.
Most of the C++ world has standardized on CMake.
I work on large C++ projects with 1-2 dozen third party C and C++ library dependencies, and they're all built from source (git submodules) as part of one CMake build.
It's not easy but it is fairly simple.
> C++ is very old, in fact, it came out in 1985, to put it into perspective, that’s 4 years before the first version of Windows was released
Nitpick, I guess, but Windows 1.0 was released in November 1985:
https://en.m.wikipedia.org/wiki/Windows_1.0
True, but hardly anyone used it until Windows 3, or even 3.1.
Funny how silly Windows 1 looks compared to Mac OS 1. I wonder if it was the color support taking resources.
Just look at this: https://pvs-studio.com/en/blog/posts/cpp/1129/ - 11 parts about C++ undefined behavior from people who specialize in finding this stuff. And that’s only the tip of the iceberg.
I use C++ daily, and it’s an overcomplicated language. The really good thing about Rust or Zig is that (mostly) everything is explicit, and that’s a big win in my opinion.
In defense of C++, I can only say that lots of interesting projects in the world are written in it.
> You can write simple and readable code in C++ if you want to. You can also write complex and unreadable code in C++ if you want to. It’s all about personal or team preference.
Problem is, if you’re using C++ for anything serious, like the aforementioned game development, you will almost certainly have to use the existing libraries; so you’re forced to match whatever coding style they chose to use for their codebase. And in the case of Unreal, the advice “stick to the STL” also has to be thrown out since Unreal doesn’t use the STL at all. If you could use vanilla, by-the-books C++ all the time, it’d be fine, but I feel like that’s quite rare in practice.
Stating that you can also write unsafe code in memory safe languages is like saying that you can also die from a car crash while wearing a safety belt. Of course you can, but it is still a much better idea to wear the safety belt rather than not to.
Yet we don't have safety belts on a bus for example.
Author of the article here.
Really appreciate and value everyone's feedback on this.
I can not overstate my excitement that my blog post has generated this much discussion and debate.
C++ is the third programming language I ever tried to learn, I got bored and gave up on both Python and JavaScript after like a month, I now have 150 active hours of learning (I tracked) in C++, and I love it, somehow I find it mentally more stimulating, not sure why.
4th, after basic, assembly & pascal. For a long, long time; it was my default language for personal projects.
But just keeping track of all the features and the exotic ways they interact is a full time job. There are people who have dedicated entire lives to understanding even a tiny corner of the language, and they still don't manage.
Not worth the effort for me, there are other languages.
Whenever I open one of these sites that asks me to confirm tracking, if it doesn’t have an easy way to cancel or reject, I just leave the page. The banner had hundreds of different companies listing “legitimate” reasons to track, and after turning off around 10, I noticed that I had hundreds to go. Sorry, I hope people enjoy your website. I just cannot see a reason to accept that amount of tracking. I don’t care that much about C++ anyway
None of it makes it through a pi-hole filter. Website is clean and coherent, no popups. What that implies for the attempt to track is slightly unclear to me, but I don't have great faith in the pop up box being honoured even if it loads.
When NIST released its summary judgement against C++ and other languages it deemed memory unsafe, the problem became less technical and more about politics and perception. If you're looking to work within two arms' length of the US Government, you have to consider the "written in C++" label seriously, regardless of how correct the code may be.
Nothing is going to happen for the foreseeable future, at least in the parts of government I tend to work with. It doesn't even come up in discussions of critical high-reliability system. They are still quite happy to buy and use C++, so I expect that is what they will be getting.
The government is still happily commissioning new software projects that use C++. That may change in a few years, and some organizations may already be treating C++ more critically, but so far it's been unimpactful.
At some point the US government required ADA for all new development.
Yet here we are.
ADA is still the law, but, yes, Ada the language was mandated for 5 or 6 years and everyone got waivers for it anyways.
A big difference between the Ada mandate and this current push is that the current effort is not to go to one language, but to a different category of languages (specifically, "memory safe" or ones with stronger guarantees of memory safety). That leaves it much more open than the Ada mandate did. This would be much more palatable for contractors compared to the previous mandate.
The author argues that if rewriting a C++ codebase in Rust makes it more memory-safe, that's not because Rust is memory-safe. What?
That’s because the author thinks it’s the second system syndrome carrying the weight.
I think Rust is probably doing the majority of the work unless you’re writing everything in unsafe. And why would you? Kinda defeats the purpose.
I would argue that rewrite in C++ will make it a lot better. Rust does have some nice memory safe features that are nice enough that you should question why someone did a rewrite and stuck with C++, but that C++ rewrite would fix a lot.
Fresh codebases have more bugs than mature codebases. Rewriting does not fix bugs; it is a fresh codebase that may have different bugs but extremely rarely fewer bugs than the codebase most of the bugs have been patched out of. Rewriting it in Rust reduces the bugs because Rust inherently prevents large categories of bugs. Rewriting it in C++ has no magical properties that initially writing it in C++ doesn't, especially if you weren't around for the writing of the original. Maybe if there is some especially persnickety known bug that would require a major rearchitecture and you plan to implement this architecture this time around, but that is not the modal bug, and the article is especially talking about memory safety bugs which are a totally separate kind of thing from that.
A rewrite of your C++0x codebase that's grown from 2009 until now will most definitely fix loads of memory bugs, since C++ has very much evolved in this area since then. The added value of the borrow checker compared with modern C++ is a lot less than compared with legacy C++.
That said, I still think it's a rather weak argument, even if we do accept that the rewrite will do most of the bug removal, since we aren't stupid and move to smart pointers, more stl usage and for each loops. "Most" is not "all".
I think there is significant merit to rewriting a legacy C++ (or C) codebase in very modern C++. I've done it before and it not only greatly reduced the total amount of code but also substantially improved the general safety. Faster code and higher quality. Because both implementations are "C++", there is a much more incremental path and the existing testing more or less just works.
By contrast, my experience with C++ to Rust rewrites is that the inability of Rust to express some useful and common C++ constructs causes the software architecture to diverge to the point where you might as well just be rewriting it from scratch because it is too difficult to track the C++ code.
You left out the full argument (to be clear, I don't agree with the author, but in order to disagree with him you have to quote the full argument):
The author is arguing that the main reason rewriting a C++ codebase in Rust makes it more memory-safe is not because it was done in Rust, but because it benefits from lessons learned and knowledge about the mistakes done during the first iteration. He acknowledges Rust will also play a part, but that it's minor compared to the "lessons learned" factor.
I'm not sure I buy the argument, though. I think rewrites usually introduce new bugs into the codebase, and if it's not the exact same team doing the rewrite, then they may not be familiar with decisions made during the first version. So the second version could have as many flaws, or worse.
The argument could be made that rewriting in general can make a codebase more robust, regardless of the language. But that's not what the article does; it makes it specifically about memory safety:
> That’s how I feel when I see these companies claim that rewriting their C++ codebases in Rust has made them more memory safe. It’s not because of Rust, it’s because they took the time to rethink and redesign...
If they got the program to work at all in Rust, it would be memory-safe. You can't claim that writing in a memory-safe language is a "minor" factor in why you get memory safety. That could never be proven or disproven.
My only objection to your initial comment was that you left out the main gist of the argument (your later paraphrase says the same as I did).
I'm not defending TFA, I'm saying if you're going to reject the argument you must quote it in full, without leaving the main part.
Did you read what they wrote? Their point is that doing a fresh rewrite of old code in any language will often inherently fix some old issues - including memory safety ones.
Because it's a re-write, you already know all the requirements. You know what works and what doesn't. You know what kind of data should be laid out and how to do it.
Because of that, a fresh re-write will often erase bugs (including memory ones) that were present originally.
That claim appears to contradict the second-system effect [0].
The observation is that second implementation of a successful system is often much less successful, overengineered, and bloated, due to programmer overconfidence.
On the other hand, I am unsure of how frequently the second-system effect occurs or the scenarios in which it occurs either. Perhaps it is less of a concern when disciplined developers are simply doing rewrites, rather than feature additions. I don't know.
[0] https://en.wikipedia.org/wiki/Second-system_effect
I won't say the second-system effect doesn't exist, but I wouldn't say it applies every single time either. There's too many variables. Sometimes a rewrite is just a rewrite. Sometimes the level of bloat or feature-creep is tiny. Sometimes the old code was so bad that the rewrite fully offsets any bloat.
The second system effect isn't that a rewrite necessarily has more bugs/problems. The second system effect is that a follow-on project with all of everybody's dreamed-of bells and whistles that everybody in marketing wants is going to have more problems/bugs, and may not even be finishable at all.
I'm not sure what I feel about the article's point on boost. It does contribute a lot to the standard library and does provide some excellent libraries, like boost.Unordered
Boost is an awful whole with a couple very nice tiny parts inside.
If you can restrict to using the 'good' parts than it can be OK, but it's pulling in a huge dependency for very little gain these days.
I'm old enough to recall when boost first came out, and when it matured into a very nice library. What's happened in the last 15 years that boost is no longer something I would want to reach for?
C++11 through 17 negated a lot of its usefulness - the standard library does a lot of what Boost originally offered.
Alternative libraries like QT are more coherent and better thought out.
Qt is... fine... as long as you're willing to commit and use only Qt instead of the standard library. It's from before the STL came out, so the two don't mesh together really at all.
In my experience I've had no issues. Occasionally have to use things like toStdString() but otherwise I use a mix of std and qt, and haven't had any problems.
That's basically what I mean. You have to call conversion functions when your interface doesn't match, and your ability to use static polymorphism goes down. If the places where the two interact are few it works fine, but otherwise it's a headache.
I use boost and Qt but completely disagree. Every new version of boost brings extremely useful libraries that will never be in std: boost.pfr was a complete game changer, boost.mp11 ended the metaprogramming framework wars, there's also the recently added support for MQTT, SQL, etc. Boost.Beast is now the standard http and websocket client/server in c++. Boost.json has a simple API and is much more performant than nlohmann. Etc etc.
Don't lump c++ in with c. C++ is a nightmare amalgamation of every bad idea in software development. C is a thing of great beauty with a cheek mole. C++ is a metastatic cancer.
Write disciplined, readable c, use valgrind and similar tools, and reap unequalled performance and maintainability
Does it need to be defended? Does it need to be attacked?
I take the "use it if we can/want/is forced to" and "improve it if you want and can" approaches. Or else, leave it be.
Is there anything new here? Aren't these the same talking points people have been making for the past few years?
I believe most C++ gripes are a classic case of PEBKAC.
One of the most common complaints is the lack of a package manager. I think this stems from a fundamental misunderstanding of how the ecosystem works. Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Another perpetual gripe is that C++ is bad because it is overly complex and baroque, usually from C folks like Linus Torvalds[1]. It's pretty ironic, considering the very compiler they use for C (GCC), is written in C++ and not in C.
[1]: Torvalds' comment on C++ <https://harmful.cat-v.org/software/c++/linus>
> Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Okay, but is that actually a good idea? Merely saying that something is idiomatic isn't a counterargument to an allegation that the ecosystem has converged on a bad idiom.
For software that's going to be distributed through that same package manager, yes, sure, that's the right way to handle dependencies. But if you're distributing your app in a format that makes the dependencies self-contained, or not distributing it at all (just running it on your own machines), then I don't see what you gain from letting your operating system decide which versions of your dependencies to use. Also this doesn't work if your distro doesn't happen to package the dependency you need. Seems better to minimize version skew and other problems by having the files that govern what versions of dependencies to use (the manifest and lockfile) checked into source control and versioned in lockstep with the application code.
Also, the GCC codebase didn't start incorporating C++ as an implementation language until eight years after Linus wrote that message.
GCC was originally written in GNU C. Around GCC 4.9, its developers decided to switch to a subset of C++ to use certain features, but if you look at the codebase, you will see that much of it is still GNU C, compiled as GNU C++.
There is nothing you can do in C++ that you cannot do in C due to Turing Completeness. Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD’s sys/tree.h, illumnos’ libuutil or glib for some easy to use balanced binary search trees in C.
> There is nothing you can do in C++ that you cannot do in C due to Turing Completeness.
While this is technically true, a more satisfying rationale is provided by Stroustrup here[0].
> Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD’s sys/tree.h, illumnos’ libuutil or glib for some easy to use balanced binary search trees in C.
Constructs such as sys/tree.h[1] replicate the functionality of C++ classes and templates via the C macro processor. While they are quite useful, asserting that macro-based definitions provide the same type safety as C++ types is simply not true.
As to the whether macro use results in "creating enormous error messages" or not, that depends on the result of the textual substitution. I can assure you that I have seen reams of C compilation error messages due to invalid macro definitions and/or usage.
0 - https://www.stroustrup.com/compat_short.pdf
1 - https://cgit.freebsd.org/src/tree/sys/sys/tree.h
Where C macros provide functionality C++ classes and/or templates cannot is stringification of their argument(s).
For example:
> find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
It's really not about being hard to grasp. Once you need a different dependency version than the system provides, you can't easily do it. (Apart from manual copies) Even if the library has the right soname version preventing conflicts (which you can do in C, but not really C++ interfaces), you still have multiple versions of headers to deal with. You're losing features by not having a real package manager.
I write C++ daily and I really can't take seriously arguments how C++ is safe if you know what you're doing like come on. Any sufficiently large and complex codebases tend to have bugs and footguns and using tools like memory safe languages limit blast radius considerably.
Smart pointers are neat but they are not a solution for memory safety. Just using standard containers and iterators can lead to lots of footguns, or utils like string_view.
The safety part in this article is incorrect. There's a google doc somewhere where Google did an internal experiment and determined that safety c annot be achieved in C++ without an owning reference (essentially what Rust has).
Am I missing anything in the article about this problem in particular? Owning references are a part of modern C++, which should be covered by the author's arguments.
I think your parent may be slightly confused, in the sense of terminology: "owning reference" is a contradiction in Rust terms.
Here's the document I believe your parent is referring to: https://docs.google.com/document/d/e/2PACX-1vSt2VB1zQAJ6JDMa...
The claim in the article:
> Yes, C++ can be made safer; in fact, it can even be made memory safe.
The claim from this document:
> We attempted to represent ownership and borrowing through the C++ type system, however the language does not lend itself to this. Thus memory safety in C++ would need to be achieved through runtime checks.
It doesn't use "owning reference" anywhere.
> Owning references are a part of modern C++
Maybe we're thinking of different things, but I don't think C++ has owning references, modern or not? There's regular references (&) which are definitely not owning, and owning pointers (unique_ptr and friends), but neither of those quite match Rust's &.
I tried to use C++ on a new project and what finally killed me was the IDE. I wanted to use the newest language version, with modules. Visual Studio's autocompletion and jump-to-definition fell apart with modules. This was demoralizing, because I was under the impression that Visual Studio was the best C++ IDE, and if it can't handle language features 4 years after their release, what does that mean? CLion or Emacs+clangd were also a pain. The editing/IDE experience has been much nicer in Rust.
Terrible article.
> you can write perfectly fine code without ever needing to worry about the more complex features of the language
Not really because of undefined behaviour. You must be aware of and vigilant about the complexities of C++ because the compiler will not tell you when you get it wrong.
I would argue that Rust is at least in the same complexity league as C++. But it doesn't matter because you don't need to remember that complexity to write code that works properly (almost all of the time anyway, there are some footguns in async Rust but it's nothing on C++).
> Now is [improved safety in Rust rewrites] because of Rust? I’d argue in some small part, yes. However, I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.
A factor, sure. The biggest? Doubtful. It isn't only Rust's safety that helps here, it's its excellent type system.
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
Somehow managed to fit two fallacies in one sentence!
1. The fallacy of the grey - no language is perfect therefore they are all the same.
2. "I don't make mistakes."
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Not true. As I said already Rust's very strong type system helps to make applications less buggy even ignoring memory safety bugs.
> Yes, C++ can be made safer; in fact, it can even be made memory safe. There are a number of libraries and tools available that can help make C++ code safer, such as smart pointers, static analysis tools, and memory sanitizers
lol
> Avoid boost like the plague.
Cool, so the ecosystem isn't confusing but you have to avoid one of the most popular libraries. And Boost is fine anyway. It has lots of quite high quality libraries, even if they do love templates too much.
> Unless you are writing a large and complex application that requires the specific features provided by Boost, you are better off using other libraries that are more modern and easier to use.
Uhuh what would you recommend instead of Boost ICL?
I guess it's a valiant attempt but this is basically "in defense of penny farthings" when the safety bicycle was invented.
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Even if we take this claim at face value, isn’t that great?
Memory safety is a HUGE source of bugs and security issues. So the author is hand-waving away a really really good reason to use Rust (or other memory safe by default language).
Overall I agree this seems a lot like “I like C++and I’m good at it so it’s fine” with justifications created from there.
I think this is a case of two distinct populations being inappropriately averaged.
There are many high-level C++ applications that would probably be best implemented in a modern GC language. We could skip the systems language discussion entirely because it is weird that we are using one.
There are also low-level applications like high-performance database kernels where the memory management models are so different that conventional memory safety assumptions don’t apply. Also, their performance is incredibly tightly coupled to the precision of their safety models. It is no accident that these have proven to be memory safe in practice; they would not be usable if they weren’t. A lot of new C++ usage is in these areas.
Rust to me slots in as a way to materially improve performance for applications that might otherwise be well-served by Java.
> It is no accident that these have proven to be memory safe in practice; they would not be usable if they weren’t.
Can't agree there. Why wouldn't they be usable if they weren't memory safe?
Can you give me an example of this mythical "memory safe in practice" database?
Not Postgresql at least: https://www.postgresql.org/support/security/
Database kernels have some of the strictest resource behavior constraints of all software. Every one I have worked on in vaguely recent memory has managed memory. There is no dynamic allocation from the OS. Many invariants important to databases rely on strict control of resource behavior. An enormous amount of optimization is dependent on this, so performance-engineered systems generally don’t have issues with memory safety.
Modern database kernels are memory-bandwidth bound. Micro-managing the memory is a core mechanic as a consequence. It is difficult to micro-manage memory with extreme efficiency if it isn’t implicitly safe. Companies routinely run formal model checkers like TLA+ on these implementations. It isn’t a rando spaffing C++ code.
I’ve used PostgreSQL a lot but no one thinks of it as highly optimized.
Ok but can you give one example?
You can micro-manage memory in Rust if you really want so I'm not sure why this would be a factor.
Rust is a systems programming language, it was absolutely designed with the second use case in mind.
This is true. But it has some weird gaps that make it difficult to express fundamental things in the low-level systems world without using a lot of “unsafe”. Or you can do it safely and sacrifice a lot of performance. I am a fan of formal verification and use it quite a lot but Rust is far more restrictive than formal verification requires.
Rust is a systems language but it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models, among many other well-known quirks as a systems language. Other verifiable safety models exist that don’t have these issues. C++, for better or worse, can deal with this stuff in a straightforward way.
> without using a lot of “unsafe”
You are allowed to use a lot of `unsafe` if you really need to. How much `unsafe` do you use in C++?
> it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models,
Sure, it means it can't prove memory safety. But that just takes you back to parity with C++. It feels bad in Rust because normally you can do way better than that, but this isn't an argument for C++.
Fallacy of grey, now I know how is it is called. I see it used by C++ programmers to dismiss Rust a lot.
Nobody claims that Rust is a perfect language, the argument is that Rust is a better language than C++.
> Yes, C++ can be unsafe if you don’t know what you’re doing
I feel like I always hear this argument for continuing to use C++.
I, on the other hand, want a language that doesn't make me feel like I'm walking a tightrope with every line of code I write. Not sure why people can't just admit the humans are not robots and will write incorrect code.
The article says "I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.".
Yeah, sorry, but no, ask some long-term developers about how this often goes.
It depends on the codebase. If the code base deserves to be a case study in how not to do programming, then a rewrite will definitely yield better results.
I once encountered this situation with C# code written by an undergraduate, rewrote it from scratch in C++ and got a better result. In hindsight, the result would have been even better in C since I spent about 80% of my time fighting with C++ to try to use every language feature possible. I had just graduated from college and my code whole better, did a number of things wrong too (although far fewer to my credit). I look back at it in hindsight and think less is more when it comes to language features.
I actually am currently maintaining that codebase at a health care startup (I left shortly after it was founded and rejoined not that long ago). I am incrementally rewriting it to use a C subset of C++ whenever I need to make a change to it. At some point, I expect to compile it as C and put C++ behind me.
Data structures like maps and vectors from the standard library are still incredibly useful and make a fantastic addition to C if your focus relies on POD types, though if real time performance with heap cohesion is a problem then you’re right to go pure C
Hi author of the article here.
I've been a software developer for nearly 2 decades at this point, contributed to several rewrites and oversaw several rewrites of legacy software.
From my experience I can assure you that rewriting a legacy codebase to modern C++ will yield a better and safer codebase overall.
There are multiple factors that contribute to this, such one of which is what I reffer to as "lessons learnt" if you have a stable team of developers maintaining a legacy codebase they will know where the problematic areas are and will be able to avoid re-creating them in a rewrite.
An additonal factor to consider is that a lot of legacy C++ codebases can not be upgraded to use modern language features like smart pointers. The value smart pointers provide in a full rewrite can not be overstated.
Then there's also the factor that is a bit anecdotal which is I find that there are less C++ devs in general as there was 15 years ago, but those that stayed / survived are generally better and more experienced with very few enthusiastic juniors coming in.
I'm sorry you did not enjoy the article though, but thank you for giving it your time and reading it that part I really appreciate.
A lot of this article boils down to "C++ is great... If you don't use most of the language"
Not a terrible thing on its face
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
Also, avoid using C++ classes while you're at it.
I recently had to go back to writing C++ professionally after a many-year hiatus. We code in C++23, and I got a book to refresh me on the basics as well as all the new features.
And man, doing OO in C++ just plain sucks. Needing to know things like copy and swap, and the Rule of Three/Five/Zero. Unless you're doing trivial things with classes, you'll need to know these things. If you don't need to know those things, you might as well stick to structs.
Now I'll grant C++23 is much nicer than C++03 (just import std!) I was so happy to hear about optional, only to find out how fairly useless it is compared to pretty much every language that has implemented a "Maybe" type. Why add the feature if the compiler is not going to protect you from dereferencing without checking?
Using classes doesn't have to mean OO. Classes in C++ are important for RAII which you definitely should not avoid.
I really don't like Object Oriented programming anywhere. Maybe Smalltalk had it right, but I've not messed with Pharo or anything else enough to get a feel for it.
CLOS seems pretty good, but then again I'm a bit inexperienced. Bring back Dylan!
std::optional does have dereference checking, but it's a run-time check: std::optional<T>::value(). Of course, you'll get an exception if the optional is empty, because there's nothing else for the callee to do.
> but it's a run-time check
And that's the problem. In other languages that have a Maybe type, it's a compile time check. If your code is not handling the "empty" case, it will simply fail to compile.
I honestly don't see any value in std::optional compared to the behavior pre-std::optional. What does it bring to the table for pointers, for example?
Nothing, but I don't think anyone uses std::optional<T *>. However, if you need to specify an optional integer, std::optional is much clearer than encoding the null value as a negative or other such hacks. Another use of std::optional is to delay construction of an object without an extra dynamic allocation. That's way more convenient than using placement new.
But the dereference operator invokes UB if there is no value.
Which is a recurring theme in C++: the default behavior is unsafe (in order to be faster), and there is a method to do the safe thing. Which is exactly the opposite of what it should be.
No, it's the reason people choose C++.
So Boost is dying off? Good to know.
The one thing I'll say here is age of the language really is and always has been a superficial argument; it's only six years apart from Python, and it's far less controversial of a language choice: https://en.wikipedia.org/wiki/History_of_Python .
Either way, it's hard not to draw parallels between all the drama in US politics and the arguments about language choice sometimes; it feels like both sides lack respect for the other, and it makes things unnecessarily tense.
What’s a good (ie: opinionated) code formatter and unit test framework for C++ these days?
I just had a PR on an old C++ project, and spending 8 years in the web ecosystem have raised the bar around tooling expectations.
Rust is particularly sweet to work with in that regard.
My go to for formatting would be clang-format, and for testing gtest. For more extensive formatting (that involves the compiler) clang-tidy goes a long way
I think you meant for more extensive static analysis. Clang-tidy is really awesome. There is also Facebook's Infer.
Clang tidy does both: it can run clang's analyzer [0] (also available with clang++ --analyze or the scan-build wrapper script which provides nicer HTML-based output for complex problems found), has it's own lightweight code analysis checks but also has checks that are more about formatting and ensuring idiomatic code than it is about typical static analysis.
MVSC [1] and GCC [2] also have built-in static analyzers available via cl /analyze or g++ -fanalyzer these days.
There is also cppcheck [3], include-what-you-use [4] and a whole bunch more.
If you can, run all of them on your code.
[0] https://clang-analyzer.llvm.org/
[1] https://learn.microsoft.com/en-us/cpp/build/reference/analyz...
[2] https://gcc.gnu.org/onlinedocs/gcc/Static-Analyzer-Options.h...
[3] https://cppcheck.sourceforge.io/
[4] https://github.com/include-what-you-use/include-what-you-use
Catch2 is great as a unit test framework.
Running unit tests with the address sanitizer and UB sanitizer enabled go a long way towards addressing most memory safety bugs. The kind of C++ you write then is a far cry from what the haters complain about with bad old VC6 era C++.
> Catch2 is great as a unit test framework.
It's "great" mainly in the sense of being very large, and making your code very lage - and slow to build. I would not recommend it unless you absolutely must have some particular feature not existing elsewhere.
Here's a long list of C++ unit testing frameworks: https://en.wikipedia.org/wiki/List_of_unit_testing_framework...
And you might consider:
* doctest: https://github.com/doctest/doctest
* snitch: https://github.com/snitch-org/snitch
* ut/micro-test: https://github.com/boost-ext/ut
The only formatter is clang-format, and it isn't very good. Better than nothing though.
> Yes, C++ can be unsafe if you don’t know what you’re doing.
It is even if if you do
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
But here's the thing, that's not a good argument because...
> will just make it a lot harder to have memory leaks or safety issues.
... in reality it's not "just". "Just makes it better" means it's better
> Yes, C++ can be unsafe if you don’t know what you’re doing. But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
C++ can be unsafe even when you know what you're doing, since it is quite easy get something wrong by accident: index off-by-one can mean out-of-bounds access to an array, which can mean anything really. So, it's not that "all languages" are like that. That seems like a "moving the goalpost" type of logical fallacy.
And I say that as a person who writes C++ for fun an profit (well, salary) and has wasted many an hour on earning my StackOverflow C++ gold badge :-)
The post also includes other arguments which I find week regard C++ being dated. It has changed and has seen many improvements, but those have been almost exclusively _additions_, not removals or changes. Which means that the rickety old stuff is basically all still there. And then there is the ABI stability issue, which is not exactly about being old, but it is is about sticking to what's older and never (?) changing it.
Bottom line for me: C++ is useful and flexible but has many warts and some pitfalls. I'd still use it over Rust for just about anything (bias towards my experience here), but if a language came along with similar design goals to C++; a robust public standardization and implementation community; less or none of the poor design choices of C; nicer built-in constructs as opposed to having to pull yourself up by the bootstraps using the standard library; etc - I would consider using that. (And no, that language is not D.)
>So, it's not that "all languages" are like that. That seems like a "moving the goalpost" type of logical fallacy.
I think what's mean is that Rust's type system only removes one specific kind of unsafety, but if you're clueless you can still royally screw things up, in any language. No type system can stop you from hosing a database by doing things in the wrong order, say. Whether trading <insert any given combination of things Rust does that you don't like> for that additional safety is worth it is IMO a more interesting question than whether it exists at all.
Personally, I mostly agree with you. I don't much care for traits, or the lack of overloading and OO, or how fast Rust is still evolving, and wish I could have Rust's safety guarantees in a language that was more like C++. It really feels like you could get 90% of the way there without doing anything too radical, just forbidding a handful of problematic features; a few off the top of my head: naked pointers, pointer arithmetic, manual memory management, not checking array accesses by default, not initializing variables by default, allowing switches to be non-exhaustive.
> only removes one specific kind of unsafety
the word "only" doesn't really belong in that sentence, because these are very common in root-cause analysis of flaws by the "Common Weakness Enumeration" initiative:
https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
and having said that - I agree with you back :-) ... in fact, I think this is basically "the plan" for C++ regarding security: They'll make some static analysis warnings be considered errors for parts of your code marked "safe", and let them fly in areas marked "unsafe".
If the C++ committe can make that stick - in the public discourse and in US government circles I guess - then they will have essentially "eaten Rust's lunch". Because Rust is quite restrictive, it's somewhat of a moving target, it's kind of fussy w.r.t. use on older systems, and - it's said to be somewhat restrictive. If you take away its main selling point of safety-by-default, then there would probably not be enough of a motivation to drop C++, decades of backwards compatibility, and a huge amount of C++ and C libraries, in favor of Rust.
And this would not be the first time C++ is eating the lunch of a potential successor/competitor language; D comes to mind.
I am not sure C++ needs a defense. Especially after C++ 11 cleanup.
It doesn't mention the horrific template error messages. I'd heard that this was an area targeted for improvement a while ago... Is it better these days?
Qualitatively better. C++20 'concepts' obviated the need for the arcane metaprogramming tricks responsible for generating the vast majority of that template vomit.
Now you mostly get an error to the effect of "constraint foo not satisfied by type bar" at the point of use that tells you specifically what needs to change about the type or value to satisfy the compiler.
> C++20 'concepts' obviated the need for the arcane metaprogramming tricks responsible for generating the vast majority of that template vomit.
1. Somewhat exaggerated claim. It reduced that need; and for when you can assumpe everyting is C++20 or later.
2. Even to the extent the need for TMP was obviated in principle - it will take decades for TMP to go away in popular libraries and in people's application code. At that time, maybe, we would stoopp seeing these endless compliation artifacts.
This reads the same way as any other 'defense', 'sales pitch', or what have you, but from a Rust evangelist. The author likes to use C++ and now he must explain to the world why his decision is okay/correct/good/etc.. If you like it that much, just use the thing; no one actually cares.
As a bonus:
> You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
It is incredibly funny how this argument has been used for literally decades, but in reality, you don't see simple, readable, nor maintainable code, instead, most of the C++ code bases out there are an absolute mess.. This argument reminds me of something...
Python’s “there should be one obvious way to do it” slogan often collides with reality these days too, since the language sprawled into multiple idioms just like C++: for printing you can use print("hi"), f-strings like f"hi {x}", .format(), % formatting, or concatenation with +; for loops you can iterate with for i in range(n), list comprehensions [f(i) for i in seq], generator expressions (f(i) for i in seq), or map/filter/lambda; unpacking can be done with a,b=pair, tuple() casting, slicing, *args capture, or dictionary unpacking with *; conditionals can be written with if/else blocks, one-line ternary x if cond else y, and/or short-circuit hacks, or pattern matching match/case; default values can come from dict.get(k,default), x or default, try/except, or setdefault; swapping variables can be done with a,b=b,a, with a temp var, with tuple packing/unpacking, or with simultaneous assignment; joining strings can be done with "".join(list), concatenation in a loop, reduce(operator.add, seq), or f-strings; reading files can be open().read(), iterating line by line with for line in f, using pathlib.Path.read_text(), or with open(...) as f; building lists can be done with append in a loop, comprehensions, list(map(...)), or unpacking with [*a,*b]; dictionaries can be merged with {*a,*b}, a|b (Python 3.9+), dict(a,*b), update(), or comprehensions; equality and membership checks can be ==, is, in, any(...), all(...), or chained comparisons; function arguments can be passed positionally, by name, unpacked with * and \*, or using functools.partial; iteration with indexes can be for i in range(len(seq)), for i,x in enumerate(seq), zip(range(n),seq), or itertools; multiple return values can be tuples, lists, dicts, namedtuples, dataclasses, or objects; even truthiness tests can be if x:, if bool(x):, if len(x):, or if x != []:. Whew!
But hey at least python forces you to use whitespace properly hinthinthint
> But Dave, what we mean by outdated is that other languages have surpassed C++ and provide a better developer experience.
> Matter of personal taste, I guess, C++ is still one of the most widely used programming languages with a huge ecosystem of libraries and tools. It’s used in a wide range of applications, from game development to high-performance computing to embedded systems. Many of the most popular and widely used software applications in the world are written in C++.
> I don’t think C++ is outdated by any stretch of the imagination;
The second paragraph in this quote has zero connection to the first and the third paragraphs.
> C++ has a large ecosystem built over the span of 40 years or so, with a lot of different libraries and tools available.
Yes, exactly: it's outdated.
> the simple rule of thumb is to use the standard library wherever possible; it’s well-maintained and has a lot of useful features.
That's got to be the funniest joke in this whole article. First of all, no, its API is not really that well thought out and it took several language standards to finally make smart pointers and tuples truly convenient to use; and which implementation of "the standard library" do you even mean, by the way? There are several implementations of it, you know, of very varying quality.
And then there is an argument against using the Boost in this article which hilariously can be well applied to C++ itself. Don't use it unless you have to! There are languages that are more modern and easier to use!
> Fact is, if you wanna get into something like systems programming or game development then starting with Python or JavaScript won’t really help you much. You will eventually need to learn C or C++.
The key word is eventually. You don't start learning to e.g. play guitar on a cheap, half-broken piece of wood because you'll spend more time on fighting the instrument and fiddling with it than actually learning how to play it.
> New standards (C++20, C++23) keep modernizing the language, ensuring it stays competitive with younger alternatives. If you peel back the layers of most large-scale systems we rely on daily, you’ll almost always find C++ humming away under the hood.
Notice the dishonesty of placing these two sentences together: it seems to imply (with plausible deniability) that those "large-scale systems we rely on daily" are written in "modern" C++. No, they are absolutely not.
I think, if one of the most prominent C++ experts in the world(herb sutter), who chaired the C++ standards committee for 20+ years, who has evangelized the language for even longer than that - decides that complexity in the language has gotten out of control and sits down to write a simpler and safer dialect, then that is indicative of a problem with the language.
My viewpoint on the language is that there are certain types of engineers who thrive in the complexity that is easy to arrive at in a C++ code base. These engineers are undoubtedly very smart, but, I think, lack a sense of aesthetics that I can never get past. Basically, the r/atbge of programming languages (Awful Taste But Great Execution).
"Rust shines in new projects where safety is the priority, while C++ continues to dominate legacy systems and performance-critical domains."
the truth
Hasn’t Rust been shown to be very fast, especially since it can elide a lot of safety checks that would otherwise be necessary to prevent bugs?
On legacy code bases, sure. C++ rules in legacy C++ codebases. That’s kind of a given isn’t it? So that’s not a benefit. Just a fact.
> "while C++ continues to dominate ... performance-critical domains"
Why performance-critical domains? Does C++ have a performance edge over Rust?
That is what the article says.
But you stated it was "the truth" and so we might reasonably wonder why you think so, unless it's that you just believe anything you read.
"ABI: Now or never" by Titus Winters addresses some perf leaks C++ had years ago, which it can't fix (if it retains its ABI promise). They're not big but they accumulate over time and the whole point of that document was to explain what the price is if (unlike Rust) you refuse to take steps to address it.
Rust has some places where it can't match C++ perf, but unlike that previous set Rust isn't obliged to keep one hand tied behind its back. So this gently tips the scales further towards Rust over time.
Worse, attempts to improve C++ safety often make its performance worse. There is no equivalent activity in Rust, they already have safety. So these can heap more perf woes on a C++ codebase over time.
The article is not an authority, so if you’re going to appeal to it when someone presses you on the topic, you should probably be ready to back it up.
It seems you agree? I'd love to hear your thoughts on it. I had gotten the impression (second-hand) that they were roughly equally matched in this regard.
I don't think there could be any purer of an expression of the Blub Paradox.
> Just use whatever parts of the language you like without worrying about what's most performant!
It's not about performant. It's about understanding someone else's code six months after they've been fired, and thus restricting what they can possibly have done. And about not being pervasively unsafe.
> "I don’t think C++ is outdated by any stretch of the imagination", "matter of personal taste".
Except of course for header files, forward declarations, Make, the true hell of C++ dependency management (there's an explicit exhortation not to use libraries near the bottom), a thousand little things like string literals actually being byte pointers no matter how thoroughly they're almost compatible with std::string, etc. And of course the pervasive unsafety. Yes, it sure was last updated in 2023, the number of ways of doing the same thing has been expanded from four to five but the module system still doesn't work.
> You can write unsafe code in Python! Rewriting always makes the code more safe whether it's in Rust or not!
No. Nobody who has actually used Rust can reasonably arrive at this opinion. You can write C++ code that is sound; Rust-fluent people often do. The design does not come naturally just because of the process of rewriting, this is an entirely ridiculous thing to claim. You will make the same sorts of mistakes you made writing it fresh, because you are doing the same thing as you were when writing it fresh. The Rust compiler tells you things you were not thinking of, and Rust-fluent people write sound C++ code because they have long since internalized these rules.
And the crack about Python is just stupid. When people say 'unsafe' and Rust in the same sentence, they are obviously talking about UB, which is a class of problem a cut above other kinds of bugs in its pervasiveness, exploitability, and ability to remain hidden from code review. It's 'just' memory safety that you're controlling, which according to Microsoft is 70% of all security related bugs. 70% is a lot! (plus thread safety, if this was not mentioned you know they have not bothered using Rust)
In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
> Just use smart pointers!
Yes, let me spam angle brackets on every single last function. 'Write it the way you want to write it' is the first point in the article, and here is the exact 'write it this way' that was critiquing. And you realistically won't do it on every function so it is just a matter of time until one of the functions you use regular references with creates a problem.
> In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
Yes, this is a serious flaw in the author's argument. Does he think the exact same team that built version 1.0 in C++ is the one writing 2.0 in Rust? Maybe that happens sometimes, I guess, but to draw a general lesson from that seems weird.
If there are as many typos in some code as are in the article, there would be a whole lot of segfaults, too