I compile a lot of C++ code from a lot of places, and the only time I run into code that somehow simply doesn't work on newer versions of C++ and where the developers aren't even sure if they will accept any patches to fix the issue as they claim it "isn't supported" to use a newer version of C++--even for the public headers of a library--is, you guessed it: code from Google.
Meanwhile, most of the C++ code from Google seems to be written in some mishmash of different ideas, always at some halfway point along a migration between something ancient and something passable... but never anything I would ever dare to call "modern", and thereby tends to be riddled with state machines and manual weak pointers that lead to memory corruption.
So... I really am not sure I buy the entire premise of this article? Honestly, I am extremely glad that Google is finally leaving the ecosystem, as I generally do not enjoy it when Google engineers try to force their ridiculous use cases down peoples' throats, as they seem to believe they simply know better than everyone else how to develop software.
Like... I honestly feel bad for the Rust people, as I do not think the increasing attention they are going to get from Google is going to be at all positive for that ecosystem, any more than I think the massive pressure Google has exerted on the web has been positive or any more than the pressure Google even exerted on Python was positive (not that Python caved to much of it, but the pressure was on and the fact that Python refused to play ball with Google was in no small part what caused Go to exist at all).
(FWIW, I do miss Microsoft's being in the space, but they honestly left years ago -- Herb's presence until recently being kind of a token consideration -- as they have been trying to figure out a tactical exit from C++ ever since Visual J++ and, arguably, Visual Basic, having largely managed to pivot to C# and TypeScript for SDKs long ago. That said... Sun kicking Microsoft out of Java might have been really smart, despite the ramifications?)
DanielHB 21 hours ago [-]
> code from Google.
I spilled my coffee. I was just talking to some coworkers the other day about how I don't trust Google open source. Sure, they open their code, but they don't give a damn about contributions or about making it easy for you to use the projects. I feel a lot of this sentiment extends to GCP as well.
So many google projects are better than your average community one, but they never gain traction outside of google because it is just too damn hard to use them outside of google infra.
The only Google project that seems to evade this rule that I know of is Go.
kccqzy 16 hours ago [-]
> but they don't give a damn about contributions
Here is a concrete reason why Google open source sucks when it comes to contributions and I don't think it can be improved unless Google changes things drastically: (1) an external contributor makes a nice change and a PR on GitHub; (2) the change breaks internal use cases and their tests; (3) the team is unwilling to fix the PR or port the internal test (which may be a test several layers down the dependency tree) to open source.
> making it easy for you to use the projects
Google internally uses Blaze, a version of Bazel. It's so ridiculously easy for one team to use another team's project that even just thinking about what the rest of us need to do to use another project is dreadful, unloved work. So people don't make that effort.
I do not see either of these two points changing. Sure there are individuals at Google that really care about open source community, but most don't, and so their project is forever a cathedral not a bazaar.
DanielHB 15 hours ago [-]
It is not only that, but often when google uses an open source project not owned by them they either try to take ownership of the project or fork it instead of trying to contribute to the original.
jsnell 13 hours ago [-]
Which cases did you have in mind? Seems like it should be easy to find half a dozen examples since you claim it happens often.
Create 11 hours ago [-]
KHTML, officially discontinued in 2023. -- "Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate", is a phrase that the U.S. Department of Justice found was used internally by Microsoft to describe its strategy toward widely used standards and competitors. It's also possible that President-elect Donald Trump may interfere with the DOJ's proposed remedies; he said on the campaign trail that a Google break-up may not be desirable since it could "destroy" a company that the US highly values.
jsnell 10 hours ago [-]
The GP's complaint was that Google "took over projects" or "forked them without trying to contribute to the original".
In the case of KHTML, they never used it in the first place, so it seems like a particularly inappropriate example. I assume you actually meant Webkit? In that case, they spent half a decade and thousands of engineer-years contributing to Webkit, so it doesn't fit the original complaint about not "trying to contribute" either.
Create 10 hours ago [-]
November 4, 1998; 26 years ago (KHTML released)
June 7, 2005; 19 years ago (WebKit open-sourced)
* (C) 1999-2003 Lars Knoll (knoll@kde.org)
* (C) 2002-2003 Dirk Mueller (mueller@kde.org)
* Copyright (C) 2002, 2006, 2008, 2012 Apple Inc. All rights reserved.
* Copyright (C) 2006 Samuel Weinig (sam@webkit.org)
"...they never used it in the first place"
rcxdude 10 hours ago [-]
I think the point is that KHTML was already forked into webkit by apple long before google came along (though, they have in fact also now forked webkit into blink).
rahkiin 15 hours ago [-]
One could ask whether Google does ‘open source’ or more ‘source available’; the source is there, but you cannot contribute, if you can build it at all.
kccqzy 14 hours ago [-]
No, "open source" doesn't imply open contribution. The standard terminology is cathedral vs bazaar.
interroboink 14 hours ago [-]
Just to add a different perspective: sometimes people mean Open Source[1] when they say "open source," and sometimes they don't.
Personally, I take the cathedral/bazaar distinction to indicate different development cadences and philosophies, rather than whether contributions are allowed/encouraged.
Various cathedral-style projects (eg: FreeBSD, Emacs) still actively take contributions and encourage involvement.
There's something even further along the spectrum that's "we provide dumps of source code, but don't really want your patches." I'm not sure what the best term is for that, but "source [merely] available" sometimes has that connotation.
The quintessential example for providing source and discouraging contributions is SQLite. Nobody would argue that it's merely source available. It is fully open source.
In fact "source available" usually means you can see the source code, but there are severe restrictions on the source, such as no permission to modify the source even for your own use, or no permission to create forks of the project containing the modifications, or severe restrictions on such modifications. An example is MongoDB's Server Side Public License, which is source-available but not open source.
steve_gh 10 hours ago [-]
I think it depends on the contribution. I sent a bug report with a minimal test case. It was welcomed and quickly fixed. It is not a source code contribution, but I think it is a contribution.
odo1242 9 hours ago [-]
OP is specifically talking about code contributions. You can (I have) make that type of contribution to proprietary software.
palata 9 hours ago [-]
> sometimes people mean Open Source[1] when they say "open source," and sometimes they don't.
And when they don't when talking about source code, they are wrong. If someone says that an RJ45 cable is "a piece of software" because it's "soft" (you can bend it), would you say it's just a different perspective?
Open source, in the context of software, has a particular meaning. And it is the case that many software developers don't know it, so it's worth teaching them.
humanrebar 20 hours ago [-]
Googletest is the most widely used test library for C++. Googlemock is the only mocking library available that's reasonably feature complete.
bluGill 18 hours ago [-]
If you are using googletest, you owe it to yourself to check out catch2, which I find much better and which uses modern C++. There are a few other test frameworks in C++ that look better than google test as well, but catch2 is the one I settled on (and it seems to be the best supported): feel free to check them out.
I've given up on mock frameworks. They make it too easy to make an interface for everything and then test that you are calling functions with the expected parameters instead of testing that the program works as you want. A slight change to how I call some function results in 1000 failed tests, and yet I'm confident that I didn't break anything the user could notice (sometimes I'm wrong in this confidence - but none of the failing tests give me any clue that I'm wrong!)
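A minimal sketch of that failure mode, using GoogleMock; the Logger interface and all names here are invented purely for illustration, not taken from any real codebase:

    // Hypothetical Logger interface: the expectation below pins the exact
    // arguments, so a harmless change to the call site breaks the test even
    // though nothing user-visible changed.
    #include <gmock/gmock.h>
    #include <gtest/gtest.h>
    #include <string>

    class Logger {
    public:
        virtual ~Logger() = default;
        virtual void Log(int level, const std::string& msg) = 0;
    };

    class MockLogger : public Logger {
    public:
        MOCK_METHOD(void, Log, (int level, const std::string& msg), (override));
    };

    void DoWork(Logger& log) {
        log.Log(/*level=*/2, "starting work");   // change 2 to 3 and the test fails
        // ... the work the user actually cares about ...
    }

    TEST(DoWorkTest, OverSpecifiedExpectation) {
        MockLogger log;
        EXPECT_CALL(log, Log(2, "starting work"));   // asserts how, not what
        DoWork(log);
    }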
Maxatar 17 hours ago [-]
catch2 has become fairly bloated. doctest takes all of the best parts of catch2 without all the bloat and the end result is a test framework that is literally over 10x faster than catch2. It's also like 90% compatible with catch2 so porting your tests to it is pretty easy.
Especially if you have a build process that always runs your unit tests, it's nice to have a very fast test/compile/debug loop.
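A minimal sketch, assuming doctest is installed and on the include path, of how close the surface API is to Catch2's (TEST_CASE and CHECK/REQUIRE are shared; SUBCASE is Catch2's SECTION), which is why ports tend to be mechanical:

    #define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
    #include <doctest/doctest.h>

    #include <vector>

    TEST_CASE("vector grows as expected") {
        std::vector<int> v{1, 2, 3};
        REQUIRE(v.size() == 3);

        SUBCASE("push_back adds one element") {   // Catch2 spells this SECTION
            v.push_back(4);
            CHECK(v.size() == 4);
        }
    }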
>catch2 has become fairly bloated. doctest takes all of the best parts of catch2 without all the bloat and the end result is a test framework that is literally over 10x faster than catch2. It's also like 90% compatible with catch2 so porting your tests to it is pretty easy.
I feel like you could make a madlib where you could plug in any two project names and this sentence would make sense.
bee_rider 14 hours ago [-]
Madlibs have become fairly bloated. Copypasta memes take all the best parts of madlibs without all the bloat and the end result is a form of mockery that is literally over 10x faster than a madlib. It's also like 90% compatible with madlibs so porting your gibes is pretty easy.
gary_0 17 hours ago [-]
I was just about to suggest doctest, you beat me to it! I'm all about faster compile times, and it was mostly a drop-in replacement for catch2 in my case.
Also, IMO, both doctest and catch2 are far superior to Google Test.
ehoh 18 hours ago [-]
Sounds like the mocks are overused or used inappropriately in your experience (whether by a colleague or yourself).
Mocks have their place. A prototypical example is at user-visible endpoints (eg: a mock client).
bluGill 17 hours ago [-]
I have found in my world it is easy to setup a test database (we use sqlite!) and the file system is fast enough (I have code to force using a different directory for files). I have been playing with starting a dbus server on a different port in my tests and then starting the real server to test against (with mixed results - I need a better way to know when dbus is running). I have had great success by writing a fake for one service that is painful - the fake tracks the information I really care about and so lets me query on things that matter not what the function signature was.
I'm not arguing that mocks don't have their place. However I have found that by declaring I won't use them at all I overall come up with better solutions and thus better tests.
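A rough sketch of the test-database idea mentioned above, using SQLite's C API; the schema and data are made up for illustration and error handling is kept minimal:

    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3* db = nullptr;
        // ":memory:" gives each test a fresh throwaway database; a small file
        // in a per-test temp directory works the same way and can be copied
        // around as a fixture.
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        char* err = nullptr;
        const char* setup =
            "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);"
            "INSERT INTO users(name) VALUES ('alice');";
        if (sqlite3_exec(db, setup, nullptr, nullptr, &err) != SQLITE_OK) {
            std::fprintf(stderr, "setup failed: %s\n", err);
            sqlite3_free(err);
            return 1;
        }

        // ... run the code under test against `db`, then query and check results ...

        sqlite3_close(db);
        return 0;
    }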
amalcon 16 hours ago [-]
I've found exactly three places where I really want to have a mock available:
1) Databases and other persistent storage. Though in this case, the best mock for a database is generally another (smaller, easily snapshottable) database, not something like googlemock.
2) Network and other places where the hardware really matters. Sometimes, I really want to drop a particular message, to exercise some property of the sender. This is often possible to code around in greenfield projects, but in existing code it can be much simpler to just mock the network out.
3) Cases where I am calling out to some external black-box. Sometimes it's impractical to replicate the entire black-box in my test. This could be e.g. because it is a piece of specialized hardware, or it's non-deterministic in a way that I'd prefer my test not to be. I don't want to actually call out to an external black-box (hygiene), so some kind of a mock is more or less necessary.
eddautomates 16 hours ago [-]
For 1 have you looked at test containers?
amalcon 15 hours ago [-]
Briefly, but frankly: copying small SQLite files around works so well in almost all cases that I don't feel the need for a new abstraction.
physicsguy 16 hours ago [-]
I used to really like Google Test, and then Google decided in its infinite wisdom to make the OSS version depend on their C++ shared library replacement Abseil, and not just that but the live-at-head version.
That makes sense internally for Google because they have their massive monorepo, but it sure as hell makes it a pain in the ass to adopt for everyone else.
jeffbee 16 hours ago [-]
I don't think you're reading those docs correctly. Googletest recommends living at head, but there's no reason you can't pin a release, either a git commit hash or a release label, of which there have been several. Googletest does not depend on the HEAD of abseil-cpp, it actually declares a direct dependency on an older LTS release of absl, but since you are building it from source any later release or commit of absl would work.
Google open source libraries are often a mess when you try to include more than one of them in the same project, but googletest isn't an example of the mess. It's actually pretty straightforward.
james_promoted 14 hours ago [-]
> Google open source libraries are often a mess when you try to include more than one of them in the same project
Completely agree. In isolation all of their libs are great, but inevitably I end up having to build Abseil from source, to then build Protobuf off of that, to then build gRPC off of that. If I can include the sanitizers under Google then that also becomes painful because Abseil (at least) will have ABI issues if it isn't built appropriately.
Thinking about it I'd really just like a flat_hash_map replacement so I can drop Abseil.
Doctor_Fegg 11 hours ago [-]
Protobuf depending on Abseil (which has ongoing macOS build issues) is clinically insane. I tend to use protozero now which trades half a day’s boilerplate for two days’ build heartache.
FWIW the flat hash map in Boost is now faster. I am not sure if integrating Boost is any easier for you.
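For anyone wanting to try that swap, a hedged sketch: boost::unordered_flat_map (Boost 1.81 and later) covers the common absl::flat_hash_map usage, though the exact API and invalidation guarantees differ in places, so check the docs before relying on it.

    #include <boost/unordered/unordered_flat_map.hpp>
    #include <cstdio>
    #include <string>

    int main() {
        // Open-addressing flat map, same broad idea as absl::flat_hash_map.
        boost::unordered_flat_map<std::string, int> counts;
        counts["foo"] += 1;
        counts["bar"] += 2;
        if (auto it = counts.find("foo"); it != counts.end())
            std::printf("%s -> %d\n", it->first.c_str(), it->second);
    }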
james_promoted 9 hours ago [-]
I occasionally reconsider it so I can try a bunch of the FB alternatives (Folly, Thrift, CacheLib, etc.), but... yeah. Still just kind of waiting for a panacea.
physicsguy 13 hours ago [-]
It's been a few years to be fair, I stopped working with C++ in early 2021 or so, so maybe I've just misremembered. I do remember having to take on Abseil where we previously didn't.
gpderetta 18 hours ago [-]
Google test and mock are quite powerful but are a big hit at both compile time and runtime, which matters for quick edit-compile-fix loops.
I still go back and forth on whether google test and mock are worth it.
Google benchmark is also nice.
rangestransform 14 hours ago [-]
> big hit at both compile time and runtime, which matters for quick edit-compile-fix loops
honestly if you write C++ for work, there's no excuse for your company to not give you the beefiest dev machine that money can reasonably buy. given that rust exists, I think "get a faster computer" is a totally valid answer to build times, especially now that the skylake malaise era is over and CPUs are getting faster
badsectoracula 13 hours ago [-]
> given that rust exists, I think "get a faster computer" is a totally valid answer to build times
I find this amusing because one of the main reasons i avoid Rust (in the sense that i prefer to build things written in other languages if possible - i don't mind if someone else uses it and gives me a binary/library i can use - and it never went beyond "i might check this at some point, sometime, maybe" in my mind) is the build times compared to most other compilers :-P.
Also, at least personally, if i get a faster computer i want my workflow to be faster.
jimmaswell 16 hours ago [-]
Does it not support only running some or no tests? I only run the full test suite rarely, close to releases.
__MatrixMan__ 15 hours ago [-]
I blame monorepo culture. If it doesn't grow up in a context where it's expected to stand on its own, it crashes and burns when you kick it out of the nest.
DanielHB 14 hours ago [-]
I heard that Meta also has a monorepo but most of their open source projects are very community driven. I think it is a corporate mandate thing: no resources to be spent on open source and not tracking open source contributions as part of career development.
badpun 20 hours ago [-]
Tensorflow is/was decent. It looked like they made a lot of effort for it to be accessible for outsiders.
th2oi34234234 20 hours ago [-]
Have you tried building the damn thing ?
Nix build is still stuck on the one from 3-4 years back because bazel doesn't play well. Debian too has some issues building the thing...
p_l 15 hours ago [-]
Having tried on other platforms, it's not Bazel, it's not even just Google.
It's Python packaging and the way the only really supported binary distribution method of Tensorflow for many many years was to use pip and hope it doesn't crash. And it's reflected in how the TF build scripts only support building the Python lib as an artefact; everything else at the very least involves dissecting Bazel intermediate targets.
jimmaswell 16 hours ago [-]
As an industry we need to stop treating breaking changes as an acceptable thing. The rate of bit rot has accelerated to an absurd pace. I can't remember the package but I had to spend considerable time fixing a build because a package.. changed names.. for NO REASON. They just liked the new name better. This should be career death. You're wasting your fellow humans' time and energy on your vanity when you make a breaking change that is at all avoidable. I should be able to run a build script made 20 years ago and it should just work. No renamed package hunting, no WARNING WARNING DEPRECATED REWRITE ALL YOUR CODE FOR LEFTPAD 10.3 IMMEDIATELY in the console, no code changes, no fuss, we should expect it to just work. This state of affairs is a stain on our industry.
__MatrixMan__ 6 hours ago [-]
One day we will have bled enough and we'll switch to using cryptographic hashes of package contents (or of some recipe for deterministically building the thing on different architectures) instead of anything so flimsy as a name and version number.
For the humans, we can render the hashes as something friendly, but there's no reason to confuse the machines with our human notions of friendliness.
knome 15 hours ago [-]
this is why you build to a specific version of a library. drop your build script into a container with the versions of software it expects and it should do fine. containerization is the admission that versioned environments are needed for most software. I expect the nix/guix crowds to win in the end.
__MatrixMan__ 6 hours ago [-]
Blindly wrapping a build script in a Dockerfile is not nothing, but it's no replacement for being careful while writing that script in the first place.
Otherwise I agree, because if you must be careful, you might as well use tooling that's built for such care. But if you're doing that, do you need the Dockerfile? And that's how you end up with nix/guix.
groos 13 hours ago [-]
Whatever gave you the idea Microsoft "left" C++ years ago? It has massive code bases in C++ and continues to invest in its compiler teams and actively tracks the C++ standard. Its compiler was the first to implement C++20 mostly completely, including modules, which other compilers have yet to catch up to. Like other mature companies, Microsoft realized decades ago that they can't be a one-tech-dependent company and hence has code in C++ and .NET, and is now exploring Rust.
pjmlp 1 days ago [-]
The issue with Microsoft until recently has been the power of WinDev, who are the ones responsible for anything C++ in Microsoft's dungeons.
Hence the failure of Longhorn, or of any attempt coming out of Microsoft Research.
Ironically, given your Sun remark, Microsoft is back into the Java game, having their own distribution of OpenJDK, and Java is usually the only ecosystem that has day one parity with anything Azure puts out as .NET SDK.
memsom 16 hours ago [-]
I use the Microsoft JDK daily - to develop in Maui for Android. Other than that, I'm not too sure what anyone would use it for over the actual OpenJDK versions. I'm pretty sure the MS OpenJDK is mostly there to support pushing people to Azure (hence your observation) and Android. I don't think it is there for much else outside of that, but I'm happy to stand corrected if anyone has another use case for it.
pjmlp 15 hours ago [-]
It was thanks to Microsoft that you get to enjoy the JVM on ARM for example, or better escape analysis.
Sure, but the first link is surely only benefiting those using Windows on ARM? I do have Windows on ARM on a MacBook under VMWare, but my daily usage of Windows is under x64. Second link - not really knowing much about Java I don't know enough to comment. 99% of my Java use is indirect because it only gets touched by MSBuild when compiling my APK from C#.
quietbritishjim 24 hours ago [-]
What is "WinDev"? A quick search didn't turn up much except a French Wikipedia article.
pjmlp 23 hours ago [-]
Windows Development, as opposed to DevDiv, the Developer Division.
Two quite common names in the Microsoft ecosystem.
asveikau 16 hours ago [-]
As a former MS employee some time ago I don't think I ever heard "windev". It was always referred to as "Windows". Though there were a lot of different groups within that, so sometimes you'd hear an initialism for a specific team. For example during some of my time there was a big organizational split between "core" and more UI oriented teams.
pjmlp 15 hours ago [-]
Here is an example in the press, with an email from Somasegar, leader of developer division in the past.
I was an employee in Windows on the date of that email. I left a few months later. Note that the email itself doesn't say "windev". It says "Windows" a bunch of times.
If we're stretching this "windev" thing: the domain for a lot of employee accounts (including mine) was NTDEV, which had a longer history afaik; nobody called an org that..
pjmlp 14 hours ago [-]
The journalist writes it though, as do many other folks.
I didn't come up with this definition myself.
If I am not mistaken, I can probably even dig up some Sinofsky references using it.
int_19h 10 hours ago [-]
I think it was sort of externally derived based on "DevDiv", but as another former MS employee - albeit from DevDiv - I can confirm that "WinDev" is not something that was routinely used inside the company the way "DevDiv" is. Usually it's just "Windows", or "Windows org" if the context is ambiguous.
loup-vaillant 22 hours ago [-]
For a moment there I thought you were referring to this trademark: https://pcsoft.fr/windev/index.html Which was known at a time for having young women in light clothing in their marketing material.
jcelerier 20 hours ago [-]
aha, that's the windev that comes to mind too. I didn't know they were actually a french company, wild that they're still around... their ads were plastered everywhere in the 2000s.
Apparently they have a programming language for which you can "one-click-switch" between english and french for the keywords??? https://pcsoft.fr/windev/ebook/56/
voidfunc 20 hours ago [-]
That's actually kind of neat, also I love how the brochure uses the American flag for English...
xbar 18 hours ago [-]
Yes. I would have preferred that they had used Canadian flags for both.
fpoling 17 hours ago [-]
I second the observation about the state of Google C++. Just look at Chromium. There is a lot of unfinished refactoring there, as if people lost interest the moment the clean refactoring hit a roadblock requiring effort to communicate with other teams. Only by a sort of direct order from management can things get completed.
WalterBright 13 hours ago [-]
Being smart, well-educated, and knowing how to program isn't good enough for creating great code. It takes experience. I've been programming for 50 years now, and keep finding ways to make code more readable and more maintainable.
ozim 9 hours ago [-]
How do you find gimmicks from Bob Martin like (d + e*g), which in theory are great but in practice would take loads of coaching to use?
WalterBright 8 hours ago [-]
I'm not familiar with that gimmick.
One thing I learned, for example, is do not access global mutable state from within a function. All inputs come through the parameters, all outputs through the parameters or the return value.
pizza-wizard 8 hours ago [-]
As someone without a lot of experience (in my first dev job now), would you care to expand on this? Does this mean that you wouldn’t have a function fn() that manipulates a global variable VAR, but rather you’d pass VAR like fn(VAR)?
maxbond 6 hours ago [-]
You've got the gist of it. By decoupling your function from the state of your application, you can test that function in isolation.
For instance, you might be tempted to write a function that opens an HTTP connection, performs an API call, parses the result, and returns it. But you'll have a really hard time testing that function. If you decompose it into several tiny functions (one that opens a connection, one that accepts an open connection and performs the call, and one that parses the result), you'll have a much easier time testing it.
(This clicked for me when I wrote code as I've described, wrote tests for it, and later found several bugs. I realized my tests did nothing and failed to catch my bugs, because the code I'd written was impossible to test. In general, side effects and global state are the enemies of testability.)
You end up with functions that take a lot of arguments (10+), which can feel wrong at first, but it's worth it, and IDEs help enormously.
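A rough sketch of the decomposition described a couple of comments up; the names and the toy "temp=" format are invented for illustration, and only the thin wrapper touches the network:

    #include <cassert>
    #include <optional>
    #include <string>

    struct Weather { double temp_c = 0; };

    // Pure function: trivially testable with canned strings (a toy "temp=<n>"
    // format stands in for real JSON parsing here).
    std::optional<Weather> ParseWeather(const std::string& body) {
        auto pos = body.find("temp=");
        if (pos == std::string::npos) return std::nullopt;
        return Weather{std::stod(body.substr(pos + 5))};
    }

    // Thin wrapper owning the side effect; `HttpClient` is whatever real or
    // fake client the caller passes in.
    template <typename HttpClient>
    std::optional<Weather> FetchWeather(HttpClient& client, const std::string& url) {
        return ParseWeather(client.Get(url));
    }

    int main() {
        // Test the pure part with no network in sight.
        assert(ParseWeather("temp=12.5")->temp_c == 12.5);
        assert(!ParseWeather("garbage").has_value());
    }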
To expand on the other reply, some related things:
1. don't do console I/O in leaf functions. Instead, pass a parameter that's a "sink" for output, and let the caller decide what to do with it (see the sketch below). This helps a lot when converting a command line program to a gui program. It also makes it practical to unit test the function
2. don't allocate storage in a leaf function if the result is to be returned. Try to have storage allocated and free'd in the same function. It's a lot easier to keep track of it that way. Another use of sinks, output ranges, etc.
3. separate functions that do a read-only gathering of data, from functions that mutate the data
Give these a try. I bet you'll like the results!
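A minimal sketch of point 1, with a hypothetical report function invented for illustration: the leaf function writes to a caller-supplied sink instead of printing directly, so a CLI, a GUI, and a unit test can each supply their own destination.

    #include <cassert>
    #include <iostream>
    #include <sstream>
    #include <string>

    void WriteReport(std::ostream& sink, int errors, int warnings) {
        sink << "errors: " << errors << ", warnings: " << warnings << '\n';
    }

    int main() {
        WriteReport(std::cout, 0, 2);          // CLI caller decides: console

        std::ostringstream captured;           // test caller decides: in-memory buffer
        WriteReport(captured, 1, 0);
        assert(captured.str() == "errors: 1, warnings: 0\n");
    }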
chipdart 4 hours ago [-]
> Give these a try. I bet you'll like the results!
It sounds like too many words to refer to plain old inversion of control and CQRS. They're both tried and true techniques.
ozim 3 hours ago [-]
Cool, I am just confirming my own bias against much of the „clean code” teachings. It might make the order of the operations a bit easier to read - but no one uses it, so it doesn’t matter.
vinkelhake 1 days ago [-]
> Honestly, I am extremely glad that Google is finally leaving the ecosystem, as I generally do not enjoy it when Google engineers try to force their ridiculous use cases down peoples' throats, as they seem to believe they simply know better than everyone else how to develop software.
Well, you may be celebrating a bit prematurely then. Google still has a ton of C++ and they haven't stopped writing it. It's going to take ~forever until Google has left the C++ ecosystem. What did happen was that Google majorly scaled down their efforts in the committee.
When it comes to the current schism on how to improve the safety of C++ there are largely two factions:
* The Bjarne/Herb [1] side that focuses on minimal changes to the code. The idea here is to add different profiles to the language and then [draw the rest of the fucking owl]. The big issue here is that it's entirely unclear on how they will achieve temporal and spatial memory safety.
* The other side is represented by Sean Baxter and his work on Safe C++. This is basically a whole-sale adoption of Rust's semantics. The big issue here is that it's effectively introducing a new language that isn't C++.
Google decided to pursue Carbon and isn't a major player in either of the above efforts. Last time I checked, that language is not not meant to be memory safe.
Carbon is intended to be memory safe! (Not sure whether you intended to write a double negative there.) There are a few reasons that might not be clear:
* Carbon has relatively few people working on it. We currently are prioritizing work on the compiler at the moment, and don't yet have the bandwidth to also work on the safety design.
* As part of our migration-from-C++ story, where we expect code to transition C++ -> unsafe Carbon -> safe Carbon, we plan on supporting unsafe Carbon code with reasonable ergonomics.
* Carbon's original focus was on evolvability, and didn't focus on safety specifically. Since then it has become clear that memory safety is a requirement for Carbon's success, and will be our first test of those evolvability goals. Talks like https://www.youtube.com/watch?v=1ZTJ9omXOQ0 better reflect more recent plans around this topic.
vinkelhake 14 hours ago [-]
Thanks for the correction, I appreciate it!
The double negative was not intended :)
bcoates 11 hours ago [-]
Not super familiar with Carbon but .. what's the elevator pitch for porting my C++ to unsafe Carbon? Can it be done with an automated refactoring tool or something?
I feel like if I'm gonna go through the whole nightmare of a code port I should get something for it as opposed to just relying on interop
pjmlp 19 hours ago [-]
People like to always talk about Carbon like that, yet the team is the first to point out anyone that can use something else, should.
Carbon is an experiment, that they aren't sure how it is going to work out in first place.
> "We want to better understand whether we can build a language that meets our successor language criteria, and whether the resulting language can gather a critical mass of interest within the larger C++ industry and communit"
Carbon isn't currently memory safe, but Chandler Carruth has made it clear that every security expert he talked to says the same thing: memory safety is a requirement for security.
He at least claims that Carbon will have memory safety features such as borrow checking down the line. I guess we'll see.
vlovich123 2 hours ago [-]
It’s worrying to me that Carbon separates data races and memory safety as two distinct things when data races can easily cause both spatial and temporal memory safety issues. Similarly, type confusion can also cause spatial issues (e.g. many kernel exploits in Darwin were a result of causing type confusion for the SLAB allocator, resulting in exploitable memory safety issues).
The entire philosophy errs too much in the direction of “being reasonable” and “pragmatic” while getting fundamental things wrong.
> Over time, safety should evolve using a hybrid compile-time and runtime safety approach to eventually provide a similar level of safety to a language that puts more emphasis on guaranteed safety, such as Rust. However, while Carbon may encourage developers to modify code in support of more efficient safety checks, it will remain important to improve the safety of code for developers who cannot invest into safety-specific code modifications.
That’s really just paying lip service to Rust without recognizing that the key insight is that optional memory safety isn’t memory safety.
It is kind of neat just how much Rust has managed to disrupt the C++ ecosystem and dislodge its position.
IAmLiterallyAB 18 hours ago [-]
> Herb side that proposes minimal changes
Herb is developing a whole second syntax, I wouldn't call that minimal changes. And it's probably the only way to evolve the language at this point, because, like you said, Sean is introducing a different language entirely, so it's not C++ at that point.
I really like some of Herb's ideas, but it seems less and less likely they'll ever be added to C++
darknavi 18 hours ago [-]
Have you seen some of his recent talks? Lots of underpinnings of cppfront have been added or are in committee.
He compares it to the JS/TS relationship.
pjmlp 17 hours ago [-]
Nope, that is mostly sales pitch, the only thing added thus far has been the spaceship operator.
He also sells the language differently from any other language that also compiles to native via C++, like Eiffel and Nim among others, and there is a conflict of interest in having the WG21 chair propose yet another take on C++.
nox101 18 hours ago [-]
It's not really a valid comparison though. cppfront is a different language that just happens to be compatible with C++. ts/js is where ts is just js with types. You can comment out the types and it just runs. With cppfront's language you'll actually have to rewrite the code to get it to compile as C++
typescript
function add(a: number, b: number): number { return a + b };
javascript
function add(a/*: number*/, b/*: number*/)/*: number*/ { return a + b };
cppfront
add: (a: float, b: float): float = { a + b; }
cpp
float add(float a, float b) { return a + b; }
EE84M3i 15 hours ago [-]
> ts/js is were ts is just js with types. You can comment out the types and it just runs.
Is this true in the general case? I thought there were typescript features that didn't have direct JavaScript alternatives, for example enums.
paulddraper 15 hours ago [-]
Enums and namespaces are the only runtime features of TypeScript.
So, yes, you can't just strip types, but it's close.
EE84M3i 14 hours ago [-]
Is there a comprehensive list of such incompatibilities documented somewhere?
That guarantees that the types do not determine the output (e.g. no const enums), not that you can "strip" types to get the same output.
paulddraper 11 hours ago [-]
Not that I'm aware of.
Decorators would be another example. (Though they have always been marked experimental.)
And of course JSX, but that's not a TypeScript invention.
HelloNurse 17 hours ago [-]
Do you realize that the Typescript example contains strictly more information than the Javascript one (namely, declarations for the type of three things) and is therefore more complex to compile, while the two C++ examples are semantically identical (the last expression in the function is returned implicitly without having to write "return") and the new syntax is easier to parse?
Conscat 15 hours ago [-]
There are several semantic differences between Cpp1 and Cpp2. Cpp2 moves from last use, which is the biggest one. In a contrived example, that could result in a "hello world" changing to "goodbye world" or any other arbitrary behavior change you want to demonstrate. Cpp2 also doesn't require you to order functions and types or declare prototypes, which means partial template specializations and function overloads can produce similar changes when migrating from Cpp1 to Cpp2.
You can see where CPPFront inserts a `cpp2::move` call automatically, and how that differs from a superficially equivalent Cpp1 function.
nox101 14 hours ago [-]
yes, of course. That's not my point. My point is TypeScript succeeds because it's just JavaScript with types. It's not a new language. cppfront is an entirely new language so it's arguably going to have a tougher time. Being an entirely new language, it is not analogous to typescript
chipdart 4 hours ago [-]
> He compares it to the JS/TS relationship.
OP is right, TypeScript is a whole new syntax, and its shtick is that it can be transpiled into JavaScript.
andai 19 hours ago [-]
I am out of the loop, what kind of pressure were they putting on Python?
throwaway2037 1 days ago [-]
> riddled with state machines
Why is this bad? Normally, state machines are easy to reason about.
majormajor 1 days ago [-]
The set of developers who say "I want to implement this logic as a state machine" is MUCH larger than the set of developers who say "I should make sure I fully understand every possible state and edge case ahead of time before making a state machine!"
jimmaswell 15 hours ago [-]
> "I should make sure I fully understand every possible state and edge case ahead of time before making a state machine!"
Attempting to understand every state and edge case before writing code is a fool's errand because it would amount to writing the entire program anyway.
State machines are a clear, concise, elegant pattern to encapsulate logic. They're dead simple to read and reason about. And, get this, writing one FORCES YOU to fully understand every possible state and edge case of the problem you're solving.
You either have an explicit state machine, or an implicit one. In my entire career I have never regretted writing one the instant I even smell ambiguity coming on. They're an indefatigable sword to cut through spaghetti that's had poorly interacting logic sprinkled into it by ten devs over ten years, bring it into the light, and make the question and answer of how to fix it instantly articulable and solvable.
I truly don't understand what grudge you could have against the state machine. Of all the patterns in software development I'd go as far as to hold it in the highest regard above all others. If our job is to make computers do what we want them to do in an unambiguous and maintainable manner then our job is to write state machines.
harrall 13 hours ago [-]
The times I’ve bothered to write explicit state machines have created the most solid, confident and bug-free pieces of software I’ve ever built. I would send someone to the moon with them.
throwaway2037 1 days ago [-]
Couldn't this be said about any alternative solution? I fail to see how this is specific to state machines.
What do you suggest instead of a state machine?
bvrmn 22 hours ago [-]
Like properly model a domain in domain terms?
nottorp 21 hours ago [-]
And that won't be a state machine with the states having more fancy names?
InDubioProRubio 20 hours ago [-]
It will be, but the idea of having an overview of the states is gone then. There are just modules/objects, with the transitions being method calls. Nobody will have to know all the things about all the state transitions, resulting in another problem (dys)solved by architecture obscurity.
If needs be the state-machine can be reconstructed on a whiteboard by a team of five.
grumpyprole 1 hours ago [-]
A state machine makes the actual program state first class and easy to reason about. One does not even need mutable state to model one. Whereas you appear to be advocating mutable objects. The state space then becomes a combinatorial explosion of all the hidden mutable state "encapsulated" inside the objects. Object oriented programming is not the only way and often leads to a poor domain model. Some OOP evangelists even model a bank account with a mutable balance field and methods for making deposits. This is absolutely not a faithful model of the domain (ledgers have been used for hundreds/thousands of years). In summary, yes, a state machine can absolutely be a good domain model.
freeone3000 18 hours ago [-]
Implement as a state machine? But. Your program exists as a set of transforms upon memory. Your program is a state machine! You just need to define the proper morphisms to map your problem domain to the computer domain.
marcosdumay 17 hours ago [-]
Transformations are separable by principle; it's a fundamental property of them, one that state machines have only as an afterthought and that is even hard to represent.
It doesn't matter if they have equivalent power. One of those representations fundamentally allows your software to have an architecture, the other doesn't.
freeone3000 16 hours ago [-]
How much of software architecture is required because of the architecture? If your program has types that are the possible states, and functions to transform between those states, what architecture is needed beyond that? A grouping of related types, perhaps?
marcosdumay 14 hours ago [-]
Yeah, just one layer of functions is enough for everybody.
Let's look next at that "compiler" thing and high-level languages. The hardware-native one suffices, no need for all that bloat.
kayo_20211030 18 hours ago [-]
I have a coding problem.
I'll use a state machine!
Now, I have two problems :-(
lmm 5 hours ago [-]
I've never understood this claim. I find state machines very hard to follow because there's no easy way to tell what paths lead to a given state; they're like using goto instead of functions (indeed they're often implemented that way).
risenshinetech 1 days ago [-]
Please describe "normally". State machines can turn into nightmares, just like any design pattern used poorly.
nurettin 1 days ago [-]
State machines don't have syntax for "transition here when event is encountered no matter what state you are in" so the whole diagram becomes a spaghetti mess if you have a lot of those escape hatches.
lelanthran 20 hours ago [-]
> State machines don't have syntax for "transition here when event is encountered no matter what state you are in" so the whole diagram becomes a spaghetti mess if you have a lot of those escape hatches.
I place a note at the top of my diagrams stating what the default state would be on receipt of an unexpected event. There is no such thing as "event silently gets swallowed because no transition exists", because, in implementation, the state machine `switch` statement always has a `default` clause which triggers all the alarm bells.
Works very well in practice; I used to write hard real-time munitions control software for blowing shit up. Never had a problem.
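To make that shape concrete, here is a minimal sketch of the pattern; the states, events, and the alarm path are invented for illustration rather than taken from the comment above. Every expected transition is spelled out, and anything else falls through to the alarm instead of being silently swallowed.

    #include <cstdio>
    #include <cstdlib>

    enum class State { Idle, Armed, Firing };
    enum class Event { Arm, Fire, Abort };

    State step(State s, Event e) {
        switch (s) {
            case State::Idle:
                switch (e) {
                    case Event::Arm: return State::Armed;
                    default: break;                      // unexpected: fall through to the alarm
                }
                break;
            case State::Armed:
                switch (e) {
                    case Event::Fire:  return State::Firing;
                    case Event::Abort: return State::Idle;
                    default: break;
                }
                break;
            case State::Firing:
                switch (e) {
                    case Event::Abort: return State::Idle;
                    default: break;
                }
                break;
        }
        // No transition defined for (s, e): ring all the alarm bells rather
        // than silently swallowing the event.
        std::fprintf(stderr, "unexpected event %d in state %d\n", (int)e, (int)s);
        std::abort();
    }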
rramadass 19 hours ago [-]
> hard real-time munitions control software for blowing shit up. Never had a problem.
Ha, Ha, Ha! The juxtaposition of these two phrases is really funny. I would like to apply for a position on the Testing team :-)
quietbritishjim 23 hours ago [-]
State machines don't have a native syntax in C++ at all, so you can structure them however you want. It's easy to structure a state machine, if needed, so that all (or some) states can handle the same event in the same way.
Downside of course is now you have a dependency on Qt.
alexvitkov 15 hours ago [-]
The downside is that you're now heap allocating at least one object for every state, and I'm willing to bet that each QState has an associated std::vector-style list of actions, and that each action is also its own object on the heap.
If you can afford to do things like this you can most likely use something other than C++ and save yourself a lot of headaches.
dgfitz 15 hours ago [-]
> If you can afford to do things like this you can most likely use something other than C++ and save yourself a lot of headaches.
Surely you can understand that, despite the recent c++ hate, my job doesn't give a fuck and we aren't migrating our massive codebase from c++ to... anything.
garethrowlands 13 hours ago [-]
Switch + goto is very close to being a native syntax for state machines. It's also very efficient.
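A tiny sketch of that style, with a made-up digit scanner invented for illustration: labels are the states, gotos are the transitions, and each switch dispatches on the next input.

    #include <cctype>
    #include <cstdio>
    #include <string>

    // Prints where runs of digits begin and end.
    void scan(const std::string& input) {
        std::size_t i = 0;
        auto next = [&]() -> int {
            return i < input.size() ? static_cast<unsigned char>(input[i++]) : EOF;
        };

    start:                                     // state: outside a number
        switch (int c = next()) {
            case EOF: return;
            default:
                if (std::isdigit(c)) { std::printf("number begins\n"); goto in_number; }
                goto start;                    // skip anything else
        }
    in_number:                                 // state: inside a number
        switch (int c = next()) {
            case EOF: std::printf("number ends\n"); return;
            default:
                if (std::isdigit(c)) goto in_number;
                std::printf("number ends\n");
                goto start;
        }
    }

    int main() { scan("ab12 7x"); }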
liontwist 21 hours ago [-]
goto is exactly this feature.
a_t48 1 days ago [-]
I believe HSMs can model this, but don't quote me. :)
nurettin 1 days ago [-]
Yes, of course in theory nested state machines should be able to model this. I feel like adding more complexity and bending the rules is a bit of a concession.
jeffreygoesto 23 hours ago [-]
Back in the days we implemented HSM helper classes in about 500 LoC and generated them from Enterprise Architect. No need to write a GUI yourself, but better to have a visual for documentation and review. Worked very well until we replaced EA with docs-as-code, now I miss that there is no nice simulator and Modeler for that workflow.
AnimalMuppet 1 days ago [-]
They can be. Or they can be... less easy.
Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
Now add multithreading. Still easy?
Now add locking. Still easy?
Cleanly-done state machines can be the cleanest way to describe a problem, and the simplest way to implement it. But badly-done state machines can be a total mess.
Alas, I think that the last time I waded in such waters, what I left behind was pretty much on the "mess" side of the scale. It worked, it worked mostly solidly, and it did so for more than a decade. But it was still rather messy.
lelanthran 1 days ago [-]
> Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
You think that developers that wrote an informally-specified, undocumented, at-least-somewhat-incomplete state-machine would have written that logic as a non-state-machine in a formally-specified, documented and at-least-somewhat-complete codebase?
State-machines are exceptionally easy to reason about because you can at least reverse-engineer a state-diagram from the state-machine code.
Almost-a-state-machine-but-not-quite are exceptionally difficult to reason about because you can not easily reverse-engineer the state-diagram from the state-machine code.
gpderetta 18 hours ago [-]
In fact state machines are great for documentation even if the code is not explicitly written as a state machine!
_huayra_ 17 hours ago [-]
Yes, and it's much better than having a dozen or more `bool` values that indicate some event occurred and put it into some "mode" (e.g. "unhealthy", "input exhausted", etc) and you have to infer what the "hidden state machine is" based on all of those bool values.
Want to add another "bool state"? Hello exponential growth...
rramadass 1 days ago [-]
But that is just true of any problem-solving/programming technique.
In general, state/event machine transition table and decision table techniques of structuring code are easier to comprehend than ad hoc techniques and, even worse, poorly understood pattern-based ones.
cmrdporcupine 16 hours ago [-]
The C++ from Google that people in the outside world are seeing is not the C++ the article is talking about. Chromium and open sourced libraries from Google are not the same as C++ in Google3. I worked on both back in the day and ... There are slightly different style guides (not hugely different), but most importantly the tooling is not the same.
The kind of mass refactorings / cleanups / static analysis talked about in this article are done on a much more serious and large scale on C++ inside the Google3 monorepo than they are in Chromium. Different build systems, different code review tools, different development culture.
deltaburnt 15 hours ago [-]
Going from g3 to AOSP has been downright painful. It was like suddenly working in a different company the contrast was so stark.
cmrdporcupine 14 hours ago [-]
Interesting. I never worked in Android, but did in Chromium & Chromecast code bases. Biggest difference with Google3 was honestly in the tooling. Style guide was fairly close, maybe a bit more conservative. Also the lack of the core libs that eventually became Abseil.
I work full-time in Rust these days and everytime I go back to working in C++ it's a bit of a cringe. If I look long enough, I almost always find a use-after-free, even from extremely competent developers. Footgun language.
protomolecule 20 hours ago [-]
> riddled with state machines
What's wrong with state machines? Beats the tangled mess of nested ifs and fors.
bluGill 18 hours ago [-]
That depends on your problem. I've seen useful state machines. I've seen someone implement a simple decoder as a complex any-to-any state machine that couldn't be understood - a single switch statement would have been better. That's nothing about state machines as such, but some people have a hammer and are determined to prove it can drive any screw - it works but isn't how you should do it.
jimmaswell 15 hours ago [-]
I've adopted a rule of thumb to have a very low bar to skip straight to writing a state machine. I've never once regretted it, personally. I'm sure they can be misused but I haven't come across that.
garethrowlands 13 hours ago [-]
Switch + goto is the classic way to implement a state machine in C.
taneq 17 hours ago [-]
> I compile a lot of C++ code from a lot of places, and the only time I run into code that somehow simply doesn't work on newer versions of C++
I'm impressed that you even get as far as finding out whether that much C++ from disparate sources works on a newer version of C++. The myriad, often highly customized and correspondingly poorly documented build systems invented for each project, the maze of dependencies, the weird and conflicting source tree layouts and preprocessor tricks that many projects use... it's usually a pain in the neck to get a new library to even attempt to build, let alone integrate it successfully.
Don't get me wrong, we use C++ and ship a product using it, and I occasionally have to integrate new libraries, but it's very much not something I look forward to.
shadowgovt 17 hours ago [-]
This phenomenon is mostly because, as the article notes, Google has one of the largest C++ deployments in the world. And since much of the C++ code needs to be extremely platform-agnostic (any given library might be running in a web service, a piece of Chromium or Android, and an embedded smart home device), they tend to be very conservative about new features because their code always has to compile to the lowest-common-denominator (and, more importantly, they're very, very sensitive to performance regressions; the devil you know is always preferred to risking that the devil you don't know is slower, even if it could be faster).
Google can embrace modern processes, but the language itself had better be compilable on whatever ancient version of gcc works on the one mission-critical architecture they can't upgrade yet...
j-krieger 16 hours ago [-]
> Like... I honestly feel bad for the Rust people, as I do not think the increasing attention they are going to get from Google is going to be at all positive for that ecosystem
We are just now feeling this. Some original contributors left the field, and lately the language has gone in directions I don't agree with.
Conscat 15 hours ago [-]
As an outsider, I'm curious what directions those are. Are you referring to effects or keyword generics or something else?
j-krieger 13 hours ago [-]
Endless bikeshedding about `Pin` would be one example. I'm also not sure keyword generics are the correct way.
zozbot234 10 hours ago [-]
The discussions around 'Pin' are the opposite of bikeshedding. It's not about what color to pick for the shed, it's about reworking the feature to make it hopefully easier to reason about and use.
nicce 15 hours ago [-]
But Google is not even the first. Amazon has had their eyes on Rust for quite some time already.
returningfory2 1 days ago [-]
I think the article is pretty interesting. There are so many more interesting takes than just another boring Hacker News moan about Google.
pif 24 hours ago [-]
[flagged]
pif 21 hours ago [-]
[flagged]
zozbot234 21 hours ago [-]
Golang is a great programming language if your alternative is Java, C# or scripting languages like Python/Ruby/etc. Not everything needs to be written in C++ or Rust from the outset. It's also reasonably possible to rewrite small codebases from Golang to, e.g. Rust for better performance.
guappa 21 hours ago [-]
It really isn't, no. It joins an awkward syntax with bad API design and terrible safeguards.
otabdeveloper4 20 hours ago [-]
Don't be so mean, it's definitely a step up from PHP.
guappa 21 hours ago [-]
And then they were so unproficient that they made a terrible language that has the same amount of safeguards as C (ok a bit more, but not much more).
trmantrl 19 hours ago [-]
The technical pressure exerted on Python (which was resisted) is one thing. The social pressure incubated the most radical culture warriors the Internet has ever seen and its proponents have ruined the Python organization, driven away many people and have established a totalitarian and oppressive regime.
Interestingly, Google has fired the Python team this year. The revolution eats its own?
Anyway, Rust should take note and be extremely careful.
tialaramex 13 hours ago [-]
Based on what an ex-Google developer said in conversation at a party at the weekend (the discussion was about the choice of First Language for a Computer Science degree course, yes, I do go to exciting parties, many of those attending have never even been a CS lecturer):
Some years ago Google decided that Go projects were similar engineering effort, better performance, lower maintenance, and so on that basis there was no reason to authorise new Python software and their existing projects would migrate as-and-when.
bagxrvxpepzn 1 days ago [-]
To the people who work on C++ standards: I approve of the current C++ trajectory and please ignore all of the online noise about "the future of C++." To anyone that disagrees severely with the C++ trajectory as stated, please just consider another language, e.g. Rust. I don't want static lifetime checking in C++ and if you want static lifetime checking, please use Rust. I am not a government contractor, if you are a government contractor who must meet bureaucratic risk-averse government requirements, please use Rust. I have an existing development process that works for me and my customers, I have no significant demand for lifetime checking. If your development process is shiny and new and necessitates lifetime checking, then please use Rust. To Rust advocates, you can have the US government and big tech. You can even have Linux. Just leave my existing C++ process alone. It works and the trade offs we have chosen efficiently accomplish our goals.
aiono 14 hours ago [-]
You frame it like "Rust advocates" are trying to infiltrate C++ language decision making and inject safety features into it. That's not the case at all. For years the C++ committee simply ignored the need for safety and didn't take Rust and lifetime analysis seriously. But now they themselves want it.
AlotOfReading 1 days ago [-]
C++ has lifetime rules just like Rust. They're simply implicit in the code and not enforced by the compiler. Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?
You're already using a language with a strong type system, so it's confusing to me why you would choose to draw the line here.
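A tiny illustration of that point, with an example invented here: this compiles cleanly with mainstream compilers, while the direct Rust equivalent is rejected by the borrow checker.

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> names{"alice"};
        const std::string& first = names[0];  // reference into the vector's buffer
        names.push_back("bob");               // may reallocate and invalidate `first`
        std::cout << first << '\n';           // undefined behavior; often "works" anyway
    }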
bagxrvxpepzn 1 days ago [-]
> Do you prefer the uncertainty of silent miscompilations and undefined behavior to upfront compiler errors?
Yes because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true. Your error is assuming static lifetime checking is free. As an engineer, I use judgement to make context-dependent trade offs.
If you like playing the compiler olympics, or your employer forces you to, please use Rust.
zozbot234 13 hours ago [-]
"Trivially known to be true" until the code evolves making your unstated assumptions not hold and everything breaks, often in complex and unintuitive ways involving interactions across modules. This is why these automated soundness checks are valuable.
restalis 8 hours ago [-]
"until the code evolves [...]"
That is already a desirable place to be, where you managed to get a working implementation ready to evolve. My issue with opinionated languages like Rust is that they make development more expensive. I then afford to pay the necessary work-effort for fewer projects than I otherwise could if I were to focus more on the problem(s) at hand instead of that and other mandatory constraints forced upon me by the compiler. I very much want my development tools to limit themselves to being tools, to assist me on the part of the problem I chose to focus on with little to no cost paid for their usage. I want to be able to focus on prototyping some working solution first, and only then, if the project's needs really warrant it, to switch to paying the development cost for other aspects, be it safety or whatnot.
wiseowise 16 hours ago [-]
> Yes because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true.
And that’s exactly the reason why we need more safety in C++.
I’m terrified at amount of code in real world written with this mindset.
virgilp 58 minutes ago [-]
At the same time, you should recognize that not all real code in the world is used to run planes & thermonuclear power plants. For a lot of the business software, it's actually fine if it's not perfectly safe. So if it's cheaper/ faster to develop it without paying the price of static safety checks, who is to say that this was a bad tradeoff?
I actually love the ideas that Rust brought forth. It definitely has a place in the ecosystem, and I'm glad to hear critical software is being rewritten in Rust! But that doesn't mean that C++ should copy it.
AlotOfReading 2 minutes ago [-]
C++ doesn't permit you to write code that's not perfectly safe. By using a C++ compiler, you're pinky-promising that you will write perfectly safe code even if the compiler can't verify that, lest nasal demons and other misfortunes fall upon you. If your code isn't safe and you expect that to be fine, you're not writing C++. This is a discussion about C++, so the default assumption is that you'll pay the costs of safe code instead of inventing an ill-specified dialect that happens to do what you want when it's shoved into a C++ compiler.
If you think we should instead evolve C++ so that safety isn't mandatory I'm right there with you, but it's not where the language is today and that discussion has also been shut down by the evolution working group. Moreover, Bjarne's policies mean that telling the critical software people to go fuck off to a different language fundamentally isn't part of the plan either.
lmm 2 minutes ago [-]
> For a lot of the business software, it's actually fine if it's not perfectly safe.
Is it fine if it silently gives the wrong answer? If so, why are you bothering with the software at all?
In my experience all nontrivial C++ codebases have silent memory corruption bugs (at least when built with popular compilers).
roland35 1 days ago [-]
I've found that often when I am writing esoteric spaghetti rust code... I need to start thinking about what I am trying to do! Most of the time it's a bad idea :)
HelloNurse 17 hours ago [-]
If one needs to "prove something to the compiler" it is usually something both complex and against the grain; on the other hand, lifetime annotations usually just "promise something to the compiler" to allow it to do a better job.
rramadass 1 days ago [-]
> As an engineer, I use judgement to make context-dependent trade offs.
Well said.
This is why I am firmly in the Stroustrup camp of backward-compatibility/zero-overhead/better-C/etc. goodness of "old C++". I need to extend/maintain/rewrite tons of old C++ codebases, and that needs to be as painless as possible. The current standards trajectory needs to be maintained.
The OP article is a rather poor one with no insights but mere hoopla over nothing.
munchler 9 hours ago [-]
If it's hoopla over nothing, why do you firmly identify with one of the factions defined by the article?
rramadass 2 hours ago [-]
What a silly question! There is no major schism in the C++ community as the article implies; merely a strong difference of opinion on certain proposals. This is normal in any committee. But since people are strongly wedded to their own proposals it might seem more severe than it actually is.
th2oi34234234 20 hours ago [-]
LOL; someone has definitely played with type-systems here.
oxnrtr 11 hours ago [-]
You sound like you can barely code yourself out of a wet paper bag.
lelanthran 20 hours ago [-]
> C++ has lifetime rules just like Rust. They're simply implicit in the code and not enforced by the compiler.
The problem is that the rules enforced by Rust are not restricted to lifetime rules; they form a much, much larger superset that rejects quite a lot of safe, legitimate, and valid code.
AlotOfReading 18 hours ago [-]
Sure, but that's not a design philosophy C++ adheres to. Look at the modern C++ guidelines or profiles. The entire point is to rule out large swathes of safe, legitimate, and valid code in an optional and interoperable way.
C++ isn't beholden to Rust's trade-offs either. There's a whole spectrum of possibilities that don't require broken backwards compatibility. Hence:
"Why draw the line specifically at lifetime annotations?"
PittleyDunkin 19 hours ago [-]
That's what the unsafe keyword is for.
guappa 21 hours ago [-]
> You're already using a language with a strong type system
I'll have you know I made a variable void* just yesterday, to make my compiler shut up about the incorrect type :D
natemcintosh 17 hours ago [-]
And what about, for example, those government contractors who are in the same position as you: they have a large C++ codebase that currently works, and is too big to re-write in rust? Now they're being asked to make it safer. How will they do that with the "existing C++ process"?
jart 15 hours ago [-]
Didn't Project Zero publish a blog post a few months ago, saying that old code isn't your security problem? They said it's new code you have to worry about. Zero also had copious amounts of data to demonstrate their point. In any case, if you really want to rewrite C++ in Rust, LLMs are fantastic at doing that. They're not really good yet at writing a new giant codebase from first principles. But if you give them something that already exists and ask them to translate it into a different language, oftentimes the result works for me on the first try. Even if it's hundreds of lines long.
fulafel 5 hours ago [-]
A link would be helpful, but at face value: of course old code vulnerabilities are still a problem. Vulnerabilities in old code make the headlines all the time.
jart 4 hours ago [-]
It was difficult to dig up, but I found it for you. https://security.googleblog.com/2024/09/eliminating-memory-s... Also headlines do not accurately model reality. The news only reports on things that are newsworthy. It's comparatively rare that we'll discover new vulnerabilities in old code that's commonly used. That's what makes it newsworthy.
fulafel 2 hours ago [-]
Thanks. It's an interesting analysis around the "vulnerabilities decay exponentially" model, discussing how there are more vulnerabilities to be found in new code than old code given equal attention.
SkiFire13 10 hours ago [-]
The issue is that newer code often needs to communicate with older code, and interfacing C++ and Rust is not trivial.
jesse__ 15 hours ago [-]
Yeah I remember reading that post about bugs over time. IIRC 5 years was the time it takes for most bugs to get ferreted out.
moregrist 16 hours ago [-]
The funny thing about government funding is that it may be easier to secure capital for a Rust rewrite than for ongoing maintenance to add static lifetimes and other safety features to an existing C++ codebase.
Legislatures seem a lot more able to allocate large pots of money for major discrete projects than to guarantee an ongoing stream of revenue to a continuing project.
pizlonator 14 hours ago [-]
They can use Fil-C++ and then they get memory safety without any rewrites.
GrantMoyer 1 days ago [-]
While programming in Rust, I've never thought to myself, "man, this would be so much easier to express in C++". I've plenty of times thought the reverse while programming in C++ though.
Edit: except when interfacing with C APIs.
throwawayffffas 8 hours ago [-]
I have had the exact opposite experience.
bowsamic 22 hours ago [-]
Then you must be avoiding situations that traditionally use OOP
zozbot234 21 hours ago [-]
Most kinds of OOP can be expressed idiomatically in Rust. The big exception is implementation inheritance, which is highly discouraged in modern code anyway due to its complex and unintuitive semantics. (Specifically, its reliance on "open recursion", and the related "fragile base class" problem)
galangalalgol 19 hours ago [-]
People often say that modern c++ doesn't have the problems needing a solution like rust. Ironically, that means people who write modern c++ haven't needed any ramp-up time when joining our rust projects. They were already doing things the right way. At least mostly. But now they don't have to worry about that one person who seems to be trying to trick the static analysis tools on purpose.
int_19h 10 hours ago [-]
Anything that involves object graphs (as opposed to trees) is a pain in Rust.
zozbot234 9 hours ago [-]
True, but not in a way that wouldn't be just as painful in C++.
int_19h 9 hours ago [-]
In Rust, the de facto standard advice for such cases seems to be, "just use indices into an array instead of references".
While this is sometimes done in C++ as well for various reasons, it's certainly not the default pattern there. If you have two things that need to point to each other, you just do that.
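For what it's worth, here is a small sketch (invented names) of the "just point at each other" pattern being described; C++ will happily accept it, and keeping the back pointers valid as nodes are added, moved, or destroyed is left entirely to the programmer:

    #include <memory>
    #include <vector>

    struct Node {
        int value = 0;
        Node* parent = nullptr;                       // non-owning back pointer
        std::vector<std::unique_ptr<Node>> children;  // owning forward pointers

        Node* add_child(int v) {
            auto child = std::make_unique<Node>();
            child->value = v;
            child->parent = this;                     // the cycle: child -> parent
            children.push_back(std::move(child));
            return children.back().get();
        }
    };

    int main() {
        Node root;
        Node* child = root.add_child(42);
        // If root is moved or destroyed first, child->parent silently dangles.
        return (child->parent == &root) ? 0 : 1;
    }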
empath75 9 hours ago [-]
> While this is sometimes done in C++ as well for various reasons, it's certainly not the default pattern there. If you have two things that need to point to each other, you just do that.
And then you have to handle all the subtle memory bugs that you've introduced by doing that.
int_19h 3 hours ago [-]
I'm not arguing that there isn't a gain here, but GP's original assertion was that
> While programming in Rust, I've never thought to myself, "man, this would be so much easier to express in C++".
This is a concrete example of something that is much easier to express in C++. And, sure, you do pay the tax for that (although I will also dispute the notion that it is impossible to write C++ without memory bugs; it's just hard).
kkert 1 days ago [-]
This is interesting because i'm writing quite a bit of embedded Rust, and i always run into limitations of very barebones const generics. I always wish they'd have half the expressiveness of C++ constexpr and templates.
Win some, lose some though, as the overall development workflow is lightyears ahead of C++, mostly due to tooling
badmintonbaseba 23 hours ago [-]
The expressiveness of const generics (NTTPs) in C++ wouldn't go away if it adopted lifetime annotations and "safe" scopes. It's entirely orthogonal.
Rust decided to have more restrictive generic programming, with the benefit of early diagnostic of mistakes in generic code. C++ defers that detection to instantiation, which allows the generics to be more expressive, but it's a tradeoff. But this is an entirely different design decision to lifetime tracking.
zozbot234 22 hours ago [-]
Rust generics are not intended as a one-to-one replacement for C++ templates. Most complex cases of template-level programming would be addressed with macros (possibly proc macros) in Rust.
galangalalgol 19 hours ago [-]
Const generic expressions are still being worked on. They are what is blocking portable SIMD. They are also a much cleaner way to implement things like matrix operations, or really anything where a function with two or more arguments of one or more types returns something whose type is a combination or modification of the input types.
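For illustration, a rough C++ sketch (mine) of the kind of thing being described, where the size of the result type is an expression over the input sizes; as far as I know the equivalent [T; N + M] return type in Rust still needs the unstable generic_const_exprs feature:

    #include <array>
    #include <cstddef>

    // The N + M in the return type is an ordinary constant expression over
    // non-type template parameters; mistakes only surface at instantiation.
    template <typename T, std::size_t N, std::size_t M>
    constexpr std::array<T, N + M> concat(const std::array<T, N>& a,
                                          const std::array<T, M>& b) {
        std::array<T, N + M> out{};
        for (std::size_t i = 0; i < N; ++i) out[i] = a[i];
        for (std::size_t i = 0; i < M; ++i) out[N + i] = b[i];
        return out;
    }

    static_assert(concat(std::array<int, 2>{1, 2},
                         std::array<int, 3>{3, 4, 5}).size() == 5);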
zozbot234 18 hours ago [-]
The problem AIUI is that "const generic expressions" in full generality are as powerful as dependent types. It's not clear to me that the Rust folks will want to open that particular can of worms.
galangalalgol 18 hours ago [-]
I thought dependent types were types that depended on a value? What they are proposing are types that depend on types or compile time constants.
zozbot234 18 hours ago [-]
The problem is combining the "const generic" and "expression" part. If your "compile time constants" can actually be complex expressions, you arguably end up with the same kind of generality as dependent types.
This is true even for expressions that are only evaluated in a compile-time context, since dependently-typed languages do "everything" at compile time anyway, they don't have a phase distinction where you can talk about "runtime" being separate.
galangalalgol 18 hours ago [-]
Ah, yeah! I get it now. So c++ is a dependently typed language. That is hilarious. I want lisp syntax in c++29. That said, too many features are blocked on const generic expressions, so I think they are going to have to bite that off. There is already talk about migrating procedural macros to be something more like normal rust; this might fit in with that.
Rusky 15 hours ago [-]
C++ is not a dependently typed language, for the same reason that templates do not emit errors until after they are instantiated. All non-type template parameters get fully evaluated at instantiation time so they can be checked concretely.
A truly dependently typed language performs these checks before instantiation time, by evaluating those expressions abstractly. Code that is polymorphic over values is checked for all possible instantiations, and thus its types can actually depend on values that will not be known until runtime.
The classic example is a dynamic array whose type includes its size- you can write something like `concat(vector<int, N>, vector<int, M>) -> vector<int, N + M>` and call this on e.g. arrays you have read from a file or over the network. The compiler doesn't care what N and M are, exactly- it only cares that `concat` always produces a result with the length `N + M`.
groos 7 hours ago [-]
I'm not sure what "dependently typed" means but in C++20 and beyond, concepts allow templates to constrain their parameters and issue errors for the templates when they're specialized, before the actual instantiation happens. E.g., a function template with constraints can issue errors if the template arguments (either explicit or deduced from the call-site) don't satisfy the constraints, before the template body is compiled. This was not the case before C++20, where some errors could be issued only upon instantiation. With C++20, in theory, no template needs to be instantiated to validate the template arguments if constraints are provided to check them at specialization-time.
Rusky 6 hours ago [-]
This is the wrong side of the API to make C++20 dependently typed. Concepts let the compiler report errors at the instantiation site of a template, but they don't do anything to let the compiler report errors with the template definition itself (again before instantiation time).
To be clear this distinction is not unique to dependent types, either. Most languages with some form of generics or polymorphism check the definition of the generic function/type/etc against the constraints, so the compiler can report errors before it ever sees any instantiations. This just also happens to be a prerequisite to consider something "dependently typed."
zozbot234 13 hours ago [-]
> performs these checks before instantiation time
Notably Rust type-based generics do this, a key difference wrt. C++ templates. (You can use macros if you want checks after instantiation, of course.)
galangalalgol 15 hours ago [-]
In c++ it does care what N and M are at compile time, at least the optimizer does for autovectorization and unrolling. Would that not be the case with const generic expressions?
Rusky 6 hours ago [-]
The question of whether a language is dependently typed only has to do with how type checking is done. The optimizer doesn't come into play until later, so whether it uses the information is unrelated to whether the language is dependently typed.
smilekzs 13 hours ago [-]
> as the overall development workflow is lightyears ahead of C++, mostly due to tooling
My experience has been the other way around. Eclipse-based IDEs from NXP, TI, ST all have out-of-the-box usable tooling integration:
- MCU pinout and configuration codegen
- no need to manually fiddle with linker scripts
- static stack and code size analyzers (very helpful for fitting stuff in low-cost MCUs)
- stable JTAG-based debugging with:
  - peripheral registers view (with bitfield definitions)
  - RTOS threads view (run status, blocked on which resources, ...)
And yes, these are important enough for me to put up with Eclipse and pre-modern C/C++. I really want to write Rust for embedded but struggling with the tooling all the time didn't help.
afdbcreid 1 days ago [-]
That's actually quite interesting, because this is not an inherent limitation of Rust, and it is definitely planned to be improved. And AFAIK, today (as opposed to previous years) it is even being actively worked on!
bluGill 18 hours ago [-]
C++ is on a trajectory to create a future with more safety. Whether we should do profiles or static lifetime checking (or something else) is still an open question (and both may be valid). However, I'm glad C++ is thinking about it. We have real problems around safety in the real world, and people are writing unsafe code even when modern safe code would be easier to write.
Of course, it remains to be seen how this all plays out. Static lifetimes can be done well or badly. Profiles can be done well or badly. And even if whatever we come up with is done well, that doesn't mean people will use it (I know Rust programmers who just put unsafe everywhere).
zozbot234 18 hours ago [-]
Profiles are vaporware. The C++ folks are pushing a fantasy of "full memory safety with no changes to existing code, not even annotations to enable sound static analysis." That's just a non-starter, there is no way to get to full memory safety from there unless you count very silly things like making "delete" and "free()" a no-op - and also running everything in a single thread for "concurrency safety".
bluGill 17 hours ago [-]
The only way to get anywhere is to provide a path forward. I have a lot of C++98 code that has been working just fine for 14+ years (that is, since before C++11). It isn't worth changing unless we discover a bug in the code (after 14+ years, unlikely) or we need to add new features (if we haven't in 14+ years, we probably won't need a new feature there anytime soon). Code I write today is the latest C++. What I really want is a way to say "don't write the bad things" today, but still allow that old code to work. That is what profiles promise me. Sure, we will never get to full memory safety that way, but that isn't my goal; I just want to make my new code better, and when I come back to old code, improve that too.
zozbot234 17 hours ago [-]
The case for "100% Safe C++" is that you might be able to annotate that old C++98 code in ways that don't otherwise alter its semantics, but still ensure safety. That would be a one-time cost that might be well-worth paying if the cost is low enough - Where "cost" depends on developer experience as opposed to mere volume of annotations. A "viral" compiler feature that auto-surfaces all the places that will need annotation for a given level of safety has the potential to be quite easy to learn and use effectively. It's not clear why the C++ folks are rejecting that approach, seemingly out-of-hand.
bluGill 15 hours ago [-]
I have > 10 million lines of C++ that is not annotated. There are many projects much larger than mine. If you cannot automatically annotate the code there is no point in trying as you can't do it manually. If you can automate it why not just build that into the compiler and skip the syntax?
zozbot234 13 hours ago [-]
> If you cannot automatically annotate the code there is no point in trying as you can't do it manually.
How can you know this without a "viral" analysis that tells you how much annotation is needed, and where? Perhaps the code factors out all the low-level, "memory unsafe" hacks to its own module, and that can be feasibly annotated. It's just not something we can know in advance.
usefulcat 12 hours ago [-]
> Perhaps the code factors out all the low-level, "memory unsafe" hacks to its own module, and that can be feasibly annotated.
While it is theoretically not impossible for that scenario to occur, I'd say it sounds wildly unlikely for anything that can be described as 'old' code.
The fantasy is enough to get engagement and once you have engagement you can persuade people to do a "little" extra work to get the full benefits. My mother won't buy the product for $5, but if you tell her that it costs $10 but they're 2-for-1 today, she's going to buy that and feel like she got a bargain.
In terms of actually solving the problem well, it's not even captured in these hypothetical regulatory requirements. What you actually want is a safety culture, Rust has one, C++ does not, and no technology will change that. From what I can tell nobody at WG21 wants that to change anyway.
pjmlp 1 hours ago [-]
If you have access to the WG21 meeting minutes, it appears the safety discussions of the last meeting were quite entertaining.
zozbot234 10 hours ago [-]
> What you actually want is a safety culture, Rust has one
Rust has a safety culture because it involves requirements for Safe Rust that preserve safety while also playing well with modularity and iterative development. If "Safe C++" can enforce similar requirements, we can expect that a safety culture can be sustained there as well.
titzer 13 hours ago [-]
Look, we need more than just promises. C++ is charting a future to the past in the most torturously slow process possible, primarily because of an absolutely intransigent performance obsession that won't even admit the possibility of a 1% performance overhead for bounds checks. The C++ steering committee are the real extremists here, holding back the entire software industry because of a sacred cow and a free pass to externalize that cost onto the rest of us in the form of significantly less secure software.
bagxrvxpepzn 12 hours ago [-]
> The C++ steering committee are the real extremists that are holding back the entire software industry because of a sacred cow and a free pass to externalize that cost onto the rest of us in terms of significantly less secure software.
The C++ leadership serves the C++ community, not the entire software industry. You and everyone who disagrees with them are free to use and write software based on other languages, e.g. Java and Rust.
pjmlp 1 hours ago [-]
Many in the C++ community wouldn't acknowledge that.
Which is why disabling RTTI, disabling exceptions, creating their own standard library replacements, and static analysers forbidding specific language constructs are such a big deal in some C++ circles.
humanrebar 10 hours ago [-]
You can even add nonstandard features to existing compilers!
The neat thing is that once the standard committee learns about this use case, it could get de facto support as existing use!
sumanthvepa 22 hours ago [-]
Thank you for this. C++ should NOT try to be Rust. I find modern C++ really nice to program in for the work I'm doing - 3D graphics. The combination of very powerful abstractions and excellent performance is what I'm looking for. I'm more than willing to endure the perceived lack of safety in the language.
tsimionescu 16 hours ago [-]
The lack of safety is perceived because it is there. There is no proof that anyone can write a C++ program larger than, say, 100k lines of code that doesn't have memory safety issues.
logicchains 15 hours ago [-]
And that memory safety is completely not an issue if you're writing something like a game, trading system, simulation, internal application or science calculation where there's no potentially hostile users who could do real harm by hacking your code. It's just a class of bug that in modern C++ is generally far outnumbered by logic bugs.
tsimionescu 13 hours ago [-]
Games absolutely are a problem for lack of memory safety, because the majority of games played today are explicitly connected to the internet. For trading systems I don't even know what you mean; I can't think of a trading system where you wouldn't care about security.
For simulations and scientific calculations, I do agree, to a vast extent. But in a world that is moving more and more towards zero-trust networking, even many of those will start being looked at as potential attack vectors into other systems.
PaulDavisThe1st 8 hours ago [-]
As a DAW developer, I find myself chuckling over security concerns in other kinds of apps.
You see, it is absolutely expected and required that our applications will load and run arbitrary 3rd party code, generally with the expectation that it lives in the same address space as our application (though this is not formally required).
No sockets, no network, no backdoor hacks. You write code, call it a VST plugin, make it sound desirable ... we are expected to load and run it.
Yes, several DAWs have made the move toward out-of-process execution of plugins, but that doesn't begin to address the myriad problems caused by loosely-written plugin APIs not adequately pinning down threading, thread priority, memory access and more.
Filesystem access? Of course! That code runs as you! Because you want it to!
lmm 5 hours ago [-]
And when someone creates a project file that sends them the personal information of anyone who opens it, is that an issue? Yes, pervasive arbitrary code plugins are game over if you can get anyone to use your plugin, but there's at least some awareness that you need to be careful opening a plugin you don't trust.
PaulDavisThe1st 4 hours ago [-]
Not sure that's true for the majority of DAW users.
Plugins are not associated with attack vectors, even though they are literally just that.
PLG88 12 hours ago [-]
I may be off base, but as the world moves to zero-trust networking, we can always embed a zero-trust network into our C++ app so that it can be distributed across the network while having no listening ports on the underlay network - i.e., my memory-safety exploit cannot just be exploited by anyone on the WAN, LAN, or host OS network. My C++ app is then unattackable via conventional IP-based tooling; all conventional network threats are immediately useless.
This capability exists in completely open source, such as OpenZiti - https://openziti.io/.
AlotOfReading 10 hours ago [-]
The way C and C++ are standardized, you can't rely on the correct functioning of anything in the presence of undefined behavior, including memory unsafety. For what it's worth, I also opened a random file in the OpenZiti C SDK and immediately found safety issues like this: https://github.com/openziti/ziti-tunnel-sdk-c/blob/9993f61e6...
That's why this topic is such a big deal. Even people who really should know better like the OpenZiti authors aren't able to reliably write safe code.
drivebyhooting 3 hours ago [-]
Why is that a safety issue?
AlotOfReading 2 hours ago [-]
Malloc/Calloc can fail even if they typically don't on most Linux systems. You should always check for null pointers before accessing the resulting buffer, which doesn't happen here. The connections() block is also never explicitly freed anywhere I was able to find in a quick search. That's allowed, but definitely bad practice.
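For reference, a minimal sketch of the pattern being described (hypothetical names, not the actual OpenZiti code):

    #include <cstdlib>

    struct connection_list {
        int count;
    };

    connection_list* make_connections() {
        auto* conns = static_cast<connection_list*>(
            std::calloc(1, sizeof(connection_list)));
        if (conns == nullptr) {
            return nullptr;  // allocation failed: report it, don't dereference
        }
        conns->count = 0;
        return conns;  // the caller is responsible for std::free(conns)
    }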
zozbot234 13 hours ago [-]
The issue of memory safety goes well beyond adversaries "hacking your code". Without memory safety, your code doesn't even have any kind of well-defined semantics so it's not feasible to defend against even "logic" bugs by automated means.
If you care about program correctness in any real sense, memory safety is table stakes.
feelamee 1 days ago [-]
Ok.
Please, just use your current C++ standard.
But we will go on using the new one, with all the features we want to use.
blub 1 hours ago [-]
Who’s “we”? The C++ developers that like the “Safe C++” proposal which is tacking Rust on top of C++ are a tiny minority.
It seems very fair to tell them to just use Rust and leave C++ alone.
pjmlp 1 hours ago [-]
Indeed, that is exactly what many FAANG companies are doing. Have you noticed the slowdown in velocity in major compilers regarding ISO C++ compliance?
bobnamob 17 minutes ago [-]
See Apple’s slowdown on clang development and subsequent advances in Swift<->C++ interop (even going as far as merging Swift code into FoundationDB)
And ofc Google’s investment in Carbon
pjmlp 3 minutes ago [-]
Or MSVC slow pace with C++23, after being the first to reach full C++20 support.
Everyone else outside the big three is somewhere between C++14 and C++17.
blub 45 minutes ago [-]
Nope, still using C++17 and not bothered by any slowdown.
C++ has been moving too fast lately.
pjmlp 4 minutes ago [-]
It is currently an open debate what the very last ISO version the world will care about will be; C++17 might be the one, or C++26. Bets are open.
diath 1 days ago [-]
On the contrary, why would I not want these things in C++ if I'm developing every project with -fsanitize=address,undefined to catch these types of errors anyway?
Attrecomet 23 hours ago [-]
What I don't understand is why you demand that C++ evolution be halted in a clearly suboptimal position so you don't need to change your processes. Just use the version of C++ that meets your needs; you clearly don't want or need new developments. You are fine with being locked into bad designs for hash maps and unique_ptr due to the (newly invented, in 2011/13) ABI stability being made inviolable; you clearly need no new developments in usability and security.
So why not be honest and just use C++01, or 11, or whatever it is that works for you, and let the rest of the ecosystem actually evolve and keep the language we invested so much effort into as a viable alternative? There's zero benefit, except to MS who want to sell this year's Visual Studio to all the companies with 80's-era C++...
liontwist 20 hours ago [-]
> evolution be halted in a clearly suboptimal position
It’s clear it’s imperfect. But it’s not clear there is an obvious path to a nearby local maximum.
Design choices have tradeoffs.
And even if that were true, who would take advantage of that “better” language in a purely abstract sense? New language standards primarily exist to benefit existing C++ code bases, and the cohort of engineers who work on them. You have to consider that social reality.
bagxrvxpepzn 14 hours ago [-]
> What I don't understand is why you demand that C++ evolution be halted in a clearly suboptimal position so you don't need to change your processes.
I don't demand that C++ evolution be halted. I support the current trajectory of not adding viral annotations for the sake of implementing static lifetime checking. I want C++ to evolve into a better version of itself, I don't want it to become something it's not. If you want static lifetime checking, please use Rust. It already exists and it's great for people who need static lifetime checking.
chlorion 20 hours ago [-]
Imagine an engineer in any other field acting like this.
"I don't want to install air bags and these shiny safety gadgets into my cars. We have been shipping cars without them for years and it works for us and our customers."
The problem is that it doesn't actually work as well as you think, and you are putting people at risk without realizing it.
andrewflnr 16 hours ago [-]
You're trying to install airbags on a motorcycle, though. The design of the vehicle/language is incompatible with airbags/lifetimes. So if you want airbags... don't use C++.
(Yes, I know about airbag vests. Let's analogize those with external static checkers.)
bookspace 5 hours ago [-]
What if bagxrv is a Rust fan, just playing ya? Everyone knows Rust fans are the most vigorous developers on the internet. Just take a look at https://izzys.casa/2024/11/on-safe-cxx/
downut 19 hours ago [-]
You are making a general statement about the distribution of general consumers of computer languages, complete with a long tail, and the commenter is explaining that he is an expert car driver, way out there on the long tail. This tyranny of the less capable mode is really grating, especially on a site named "Hacker News".
As usual, the answer is quite simple: "please use rust". We promise to never mention when we break out nasm.
Driver anecdote: I have antilock brakes on my Tundra, but they are annoyingly counterproductive in 4WD descending 6" or larger sandy rocky steps. Do antilock brakes work overall best for the less capable mode? Of course! Do they work best for me? No.
ModernMech 18 hours ago [-]
We learned a long time ago as an industry that the expert car drivers are not immune to causing pile ups, which makes it all our problem to solve.
Safety by default with escape hatches when absolutely necessary is the better way to go for all, even if it means some power users have to change their ways.
lubesGordi 14 hours ago [-]
I don't know enough about what it would take to implement static lifetime checking. Is that fundamentally impossible to do in a backwards compatible way?
steveklabnik 14 hours ago [-]
It depends on what you mean by "backwards compatible," and what you mean by "static lifetime checking" :)
The profiles proposal suggests adding static lifetime checking, "without viral annotations." I use quotations because I don't really agree with this framing, but whatever. The paper is here if you'd like to read it yourself: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p30...
The core idea here is that you add annotations to opt in or out of certain checks. And opting in may be a compiler flag, requiring no changes to source code. So that would be "backwards compatibility" in that sense. Of course, code may fail these checks, so you'll have to add annotations to opt out, or re-write the code. We will see in practice how much change is required once implementations exist and are tried out.
But the other part is, these profiles do not attempt to cover all valid cases. And what I mean by that is, there are some lifetime issues that this proposal does not attempt to analyze. And, where the analysis is similar, they offer a subset of what other proposals do. These decisions were made because the authors believe that they'll reduce a significant number of issues, and are easier to adopt. And that's worth it instead of going for more checks.
The competing proposal, Safe C++, has you opt into safety checks on a file-by-file basis. So in that sense, it is also backwards compatible: all existing code compiles as-is. When you opt in to those checks, it adds new syntax, similar to Rust, to do the safety analysis. So you gain this benefit only for new code, but you also get much more power. This syntax is necessary to communicate programmer intent to the checks, but it is the "viral annotations" that the proponents of profiles don't like.
So, basically, that's the thing: both are backwards compatible, but offer very different tradeoffs in the design space.
aiono 14 hours ago [-]
If you want alias tracking and lifetime checking, yes they are backwards incompatible. They need "viral annotations" if we speak with the C++ committee terminology.
616c 1 days ago [-]
[flagged]
umanwizard 1 days ago [-]
Please don't shame people for using pseudonyms on here, regardless of whether you disagree with their concrete point. It's nice to have a place where people don't have to think about how their friends, family or colleagues will react before posting something.
AnimalMuppet 1 days ago [-]
> But why say so under a pseudonym
That's a rather odd complaint, coming from a pseudonym.
mempko 14 hours ago [-]
This! The hardest part of making software is making something that works for people. What I love about C++ is multi-paradigm programming: I can express my ideas directly using the appropriate paradigms. Regarding safety, with modern C++ programming it's not hard to write software that's correct. Safety is rarely the first thing I worry about.
If having strict safety means I can't express my mental models in code, I don't want it. It will slow me down. It will make it harder to write software that's useful.
Remember people, we are here to make things that are useful to people. If safety gets in the way of that, then it's not worth it.
jandrewrogers 1 days ago [-]
The parts of the government that think everything should be written in a memory-safe language (like Rust) are the same parts that already write everything in Java. Most of the high-end systems work is in C++, and that is the type of software where lifetimes and ownership are frequently unknowable at compile-time, obviating Rust's main selling point.
AlotOfReading 1 days ago [-]
It's not a hard dichotomy. Almost all of the rules Rust imposes are also present in C++, enforcement is simply left up to the fallible human programmer. Frankly though, is it that big a deal whether we call it unique_ptr/shared_ptr or Box/Arc if a lifetime is truly unknowable?
Rust shines in the other 95% of code. I spend some time every morning cleaning up the sorts of issues Rust prevents that my coworkers have managed to commit despite tooling safeguards. I try for 3 a day, the list is growing, and I don't have to dig deep to find them. My coworkers aren't stupid people, they're intelligent people making simple mistakes because they aren't computers. It won't matter how often I tell them "you made X mistake on Y line, which violates Z rule" because the issue is not their knowledge, it's the inherent inability of humans to follow onerous technical rules without mistakes.
galangalalgol 18 hours ago [-]
Yeah, I don't end up fighting rust very often, and when I do, it is right. And when I run into a case where it isn't, I have unsafe and the Rustonomicon to help me. You can do anything in rust you can do in c++; it is just that rust defaults to safe instead of unsafe, and there is no single keyword to let you know the c++ you are looking at is safe.
Mike4Online 2 hours ago [-]
C++ is not just C++ but also the C preprocessor, the STL, the linker, the C libraries and SDKs you can't help but depend on, the build system, the build scripts, the package manager, the IDEs and IDE add-ons, the various quirks on various platforms, etc. That's on top of knowing the code base of your application.
Being really good at C++ almost demands that you surrender entire lobes of your brain to mastering the language. It is too demanding and too dehumanizing. Developers need a language and a complete tool chain that is designed as a cohesive whole, with as little implicit behavior, special cases and clever tricks as possible. Simple and straight-forward. Performance tweaks, memory optimizations and anything else that is not straightforward should be done exclusively by the compiler. I.E. we should be leveraging computers to do what they do best, freeing our attention so we can focus on the next nifty feature we're adding.
Zig is trying to do much of this, and it is a huge undertaking. I think an even bigger undertaking than what Zig is attempting is needed. The new "language" would also include a sophisticated IDE/compiler/static-analyzer/AI-advisor/Unit-Test-Generator that could detect and block the vast majority of memory safety errors, data races and other difficult bugs, and reveal such issues as the code is being written. The tool chain would be sophisticated enough to handle the cognitive load rather than force the developer to bear that burden.
pornel 20 hours ago [-]
There will be eventually only one faction left using C++ — the legacy too-big-to-refactor one.
The other faction that has lost faith in WG21, and wants newer, safer, nimble language with powerful tooling is already heading for the exits.
Herb has even directly said that adding lifetime annotations to C++ would create "an off-ramp from C++"[1] to the other languages — and he's right, painful C++ interop is the primary thing slowing down adoption of Rust for new code in mixed codebases.
Unfortunately, an option that is both safer and nimble doesn't appear to exist. I'm still hopeful, but at the moment it looks like rust is our future. A fate only marginally better than C++.
marcosdumay 17 hours ago [-]
Everything out there is nimbler than C++. So you only have to select for safer to get those, and anything with managed memory and Rust are safer. (Not an exclusive set, but you'll need to actually evaluate other options.)
adambatkin 1 days ago [-]
Something that Rust got _really_ right:
Editions. And not just that they exist, but that they are specified per module, and you can mix and match modules with different Editions within a bigger project. This lets a language make backwards incompatible changes, and projects can adopt the new features piecemeal.
If such a thing came to C++, there would obviously be limitations around module boundaries, when different modules used a different Edition. But perhaps this could be a way forward that could allow both camps to have their cake and eat it too.
Imagine a world where the main difference between Python 2 and 3 was the frontend syntax parser, and each module could specify which syntax ("Edition") it used...
CrendKing 1 days ago [-]
But Editions can exist only because Rust intrinsically has the concept of a package, which naturally defines the boundary. C++ has nothing. How do you denote that a.cpp is of the cpp_2017 edition while b.cpp is cpp_2026? Some per-file comment line at the top of each file?
C++ is a mess in that it has too much historic baggage while trying to adapt to a fiercely changing landscape. Like the article says, it has to make drastic changes to keep up, but such changes will probably kill 80% of its target audiences. I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go. It is time to either switch to Rust, or pick one of its successor languages and put effort into it.
umanwizard 1 days ago [-]
Rust doesn't have the concept of package. (Cargo does, but Cargo is a different thing from Rust, and it's entirely possible to use Rust without Cargo).
Rust has the concept of _crate_, which is very close to the concept of compilation unit in C++. You build a crate by invoking `rustc` with a particular set of arguments, just as you build a compilation unit by invoking `g++` or `clang++` with a particular set of arguments.
One of these arguments defines the edition, for Rust, just like it could for C++.
ynik 21 hours ago [-]
That only works for C++ code using C++20 modules (i.e. for approximately nothing).
With textual includes, you need to be able to switch back and forth the edition within a single compilation unit.
humanrebar 10 hours ago [-]
It's not clear that modules alone will solve One Definition Rule issues that you're describing. It's actually more likely that programs will have different object files building against different Built Module Interfaces for the same module interface. Especially for widely used modules like the standard std one.
But! We'll be able to see all the extra parsing happen so in theory you could track down the incompatibilities and do something about them.
bluGill 17 hours ago [-]
Modules are starting to come out. They have some growing pains, but they are now ready for early adopters and are looking like they will be good. I'm still in wait and see mode (I'm not an early adopter), but so far everything just looks like growing pains that will be solved and then they will take off.
calibas 10 hours ago [-]
At the current rate, we'll have full module support for all of the most popular C++ libraries sometime around Apr 7th, 2618.
Mixing editions in a file happens in Rust with the macro system. You write a macro to generate code in your edition, and the generation happens in the caller's crate, no matter what edition it is.
hypeatei 1 days ago [-]
> I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go
I agree but also understand this is absolutely wishful thinking. There is so much inertia and natural resistance to change that C++ will be around for the next century barring nuclear armageddon.
actionfromafar 1 days ago [-]
I don't think even that would suffice. :)
adgjlsfhk1 1 days ago [-]
Cobol's still around. Just because a language exists doesn't mean that we have to keep releasing updated specifications and compiler versions rather than moving all those resources to better languages.
AnimalMuppet 1 days ago [-]
COBOL's most recent standard was released in 2023, which rather ruins your point.
tialaramex 10 hours ago [-]
I think the existence of COBOL-2023 actually suggests that it's not merely possible that in effect C++ 26 is the last C++ but that maybe C++ 17 was (in the same sense) already the last C++ and we just didn't know it.
After all doubtless COBOL's proponents did not regard COBOL-85 as the last COBOL - from their point of view COBOL-2002 was just a somewhat delayed further revision of the language that people had previously overlooked, surely now things were back on track. But in practice yeah, by the time of COBOL-2002 that's a dead language.
pjmlp 1 hours ago [-]
Fully agree, because for the use cases of being a safer C, and keeping stuff like LLVM and GCC running, that is already good enough.
From my point of view C++26 is going to be the last one that actually matters, because too many are looking forward to whatever reflection support it can provide, otherwise that would be C++23.
There is also the whole issue that past C++17, all compilers seem like a swiss cheese in language support for the two following language revisions.
bluGill 18 hours ago [-]
> I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go
That is not possible. Take the following function in C++: std::vector<something> doSomething(std::string); Simple enough, memory safe (at least the interface; who knows what happens inside), performant. But how do you call that function from anything else? If you want to use anything else with C++, it needs to speak C++, and that means vector and string need to interoperate.
zozbot234 18 hours ago [-]
You can interoperate via C ABI and just not use the C++ standard types across modules - which is the sane thing to do. Every other language that supports FFI via C linkage does this, only C++ insists on this craziness.
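A hypothetical sketch of what that looks like in practice (all names invented): the C++ side keeps its std::string/std::vector internals, and the exported symbol only traffics in plain pointers and lengths that any language with a C FFI can call:

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Stand-in for the existing C++ function discussed above.
    static std::vector<int> do_something(const std::string& input) {
        return std::vector<int>(input.begin(), input.end());
    }

    // C-ABI wrapper: no C++ standard types cross the module boundary.
    extern "C" int do_something_c(const char* input, std::size_t input_len,
                                  int* out, std::size_t out_cap) {
        std::vector<int> result = do_something(std::string(input, input_len));
        if (result.size() > out_cap) {
            return -1;  // caller's buffer is too small
        }
        std::copy(result.begin(), result.end(), out);
        return static_cast<int>(result.size());
    }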
galangalalgol 18 hours ago [-]
Also I wouldn't start by rewriting the thing that calls do_something, I'd start by rewriting do_something. Calling into rust from c++ using something like zngur lets you define rust types in c++ and then call idiomatic rust. You can't do it in the opposite direction because you cannot safely represent all c++ types in rust, because some of them aren't safe.
bluGill 17 hours ago [-]
I have millions of lines of C++. do_something exists, is used by a lot of those lines, and works well. I have a new feature that needs to call do_something. I'm not rewriting any code. My current code base was a rewrite of previous code into C++ (started before Rust existed), and it cost nearly a billion dollars! I cannot go to my bosses and say that the expensive rewrite that is only now starting to pay off, because of how much better our code is, needs to be scrapped. Maybe in 20 years we can ask for another billion (adjusted for inflation) to rewrite again, but today either I write C++, or I interoperate with existing C++ with minimal effort.
I'm working on interoperation with existing C++. It is a hard problem, and so far every answer I've found means all of our new features still need to be written in C++, but now I'm putting in a framework where that code could be used by non-C++. I hope in 5 years that framework is in place enough that early adopters can write something other than C++ - only time will tell though.
galangalalgol 17 hours ago [-]
Yeah that use case is harder, but I'm involved in a similar one. Our approach is to split off new work as a separate process when possible and do it entirely in rust. You can call into c++ from rust, it just means more unsafe code in rust wrapping the c++ that has to change when you or your great grandchild finally do get around to writing do_something in rust. I am super aware of how daunting it is, especially if your customer base isn't advocating for the switch. Which most don't care until they get pwned and then they come with lawyers. Autocxx has proven a painful way to go. The chrome team has had some input to stuff and seem to be making it better.
bluGill 17 hours ago [-]
Sure I can do that - but my example C++ function is fully memory safe (other than don't go off the end of the vector which static rules can enforce by banning []). If I make a C wrapper I just lost all the memory safety and now I'm at higher risk. Plus the effort to build that wrapper is not zero (though there are some generators that help)
tsimionescu 16 hours ago [-]
How about going off the end of the vector with an iterator, or modifying the vector while iterating it, or adding to the vector from two different threads or reading from one thread while another is modifying it or [...].
There is nothing memory safe whatsoever about std::vector<something> and std::string. Sure, they give you access to their allocated length, so they're better than something[] and char* (which often also know the size of their allocations, but refuse to tell you).
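As a concrete illustration of one item on that list, a minimal sketch: push_back may reallocate the vector's storage, invalidating both `it` and v.end(); the loop compiles without complaint and its behavior is undefined:

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        for (auto it = v.begin(); it != v.end(); ++it) {
            if (*it == 2) {
                v.push_back(4);  // may invalidate it (and v.end())
            }
        }
        return 0;
    }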
bluGill 15 hours ago [-]
> going off the end of the vector with an iterator,
The point of an iterator is to make it hard to do that. You can, but it is easy to not do that.
> modifying the vector while iterating it
Annoying, but in practice I've not found it hard to avoid.
> adding to the vector from two different threads or reading from one thread while another is modifying it
Rust doesn't help here - they stop you from doing this, but if threads are your answer rust will just say no (or force you into unsafe). Threads are hard, generally it is best to avoid this in the first place, but in the places where you need to modify data from threads Rust won't help.
zozbot234 13 hours ago [-]
> rust will just say no
This is just not accurate, you can use atomic data types, Mutex<> or RwLock<> to ensure thread-safe access. (Or write your own concurrent data structures, and mark them safe for access from a different thread.) C++ has equivalent solutions but doesn't check that you're doing the right thing.
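For contrast, a small sketch of the C++ "equivalent solution": the mutex exists, but the association between the lock and the data it guards lives only in a comment, so nothing stops the unguarded path from compiling (whereas in Rust, wrapping the Vec in a Mutex makes the locked path the only path):

    #include <mutex>
    #include <vector>

    struct SharedList {
        std::mutex mtx;
        std::vector<int> items;  // convention only: touch while holding mtx

        void guarded_push(int v) {
            std::lock_guard<std::mutex> lock(mtx);
            items.push_back(v);
        }

        void unguarded_push(int v) {
            items.push_back(v);  // data race if called from another thread
        }
    };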
pjmlp 17 hours ago [-]
Only if using a hardened runtime with bounds checking enabled, without any calls to c_str().
SkiFire13 22 hours ago [-]
> And not just that they exist, but that they are specified per module
Nitpick: editions are specified per crate, not per module.
---
Also note that editions only allow mostly syntactic changes (adding/removing syntax or changing the meaning of existing syntax); they are greatly limited in what can be changed in the standard library, because ultimately that is a crate dependency shared by all other crates.
steveklabnik 1 days ago [-]
There was a similar proposal for C++, using rust’s original names: epochs. It stalled out.
ykonstant 23 hours ago [-]
They should call them 'eras'. Then they can explain that epochs did not lead to a new era in the language, but eras will mark the next epoch of C++.
wink 20 hours ago [-]
My C++ knowledge is pretty weak in this regard but couldn't you link different compilation units together just like you link shared libraries? I mean it sounds like a nightmare from a layout-my-code perspective, but dumb analogy: foo/a/* is compiled as C++11 code and foo/b/ is compiled as C++20 code and foo/bin/ uses both? (Not fun to use.. but possible?)
Is that an ABI thing? I thought all versions up to and including C++23 were ABI compatible.
zozbot234 20 hours ago [-]
How does foo/bin use both when foo/a/* and foo/b/ use ABI-incompatible versions of stdlib types, perhaps in their public interfaces? This can easily lead to breakage in interop across foo/a/* and foo/b/ .
layer8 11 hours ago [-]
How does Rust do it?
GolDDranks 8 hours ago [-]
By linking both and not allowing mixing of types, i.e. it considers types from a as totally unrelated to types from b.
Also, Rust compiles the whole world at once, so any ABI breakage from mixing code from different compiler versions doesn't happen. (Editions are different thing from compiler versions, a single version of the compiler supports multiple editions.)
bluGill 18 hours ago [-]
What is the point? C++ is mostly ABI compatible (std::string broke between C++98 and C++11 in GNU, but we can ignore something from 13 years ago). There is very little valid C++11 code that won't build as C++23 without changes (I can't think of anything, but if something exists it is probably something really bad where even in C++11 you shouldn't have done that).
Now there is the possibility that someone could come up with a new breaking syntax and want a C++26 marker. However nobody really wants that. In part because C++98 code rebuilt as C++11 often saw a significant runtime improvement. Even today C code built as C++23 probably runs faster than when compiled as C (the exceptions are rare - generally either the code doesn't compile as C++, or it compiles but runs wrong)
Maxatar 17 hours ago [-]
There are plenty of things between C++11 and C++23 that have been removed and hence won't compile:
Implicit capture of this in lambdas by copy.
std::iterator removed.
std::uncaught_exception() removed.
throw () exception specification removed.
std::strstream, std::istrstream, and std::ostrstream removed.
std::random_shuffle removed.
std::mem_fun and std::mem_fun_ref, std::bind1st and std::bind2nd removed.
There are numerous other things as well, but this is just off the top of my head.
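To give a sense of what such a removal means in practice, one example migration (std::random_shuffle was deprecated in C++14 and removed in C++17):

    #include <algorithm>
    #include <random>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3, 4, 5};
        // std::random_shuffle(v.begin(), v.end());  // no longer available
        std::shuffle(v.begin(), v.end(), std::mt19937{std::random_device{}()});
        return 0;
    }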
j16sdiz 8 hours ago [-]
cppreference says strstream is removed in C++26, not C++23.
I know they are bad, but I don't think they should be removed.
bluGill 15 hours ago [-]
I've never used any of those things. Many of them were already bad practice in C++11.
lmm 4 hours ago [-]
Sure. But per your own other posts in this thread, you've got > 10 million lines of "legacy C++". Probably those bad practices are present in that code and not automatically fixable. So switching to compiling everything with a C++23 compiler is every bit as much not an option for you as switching to Rust, no?
wink 18 hours ago [-]
There is no inherent point, I was just wondering, if it's possible, why people don't use such a homegrown module layout like Rust editions in C++.
I only ever worked in a couple of codebases where we had one standard for everything that was compiled and I suppose that's what 90% of people do, or link static libs, or shared libs, so externalize at an earlier step.
So purely a thought experiment.
humanrebar 20 hours ago [-]
The C++ profiles proposal is something like an "editions lite". It could evolve into more fully featured editions some day, though not without some significant tooling work to support prevention of severe memory and type safety issues across different projects linked into the same program.
kccqzy 15 hours ago [-]
That's irrelevant. Look, the C++ committee has decided yet again not to break ABI. That is to say, they have affirmed that they DO NOT want backwards incompatible changes. So suggesting a way to make backwards incompatible changes is of no interest to the C++ committee. They don't want it and they have said so more than once.
fefe23 19 hours ago [-]
Oh no! Herb Sutter is leaving Microsoft?!
That does not bode well for Microsoft. At least from the outside perspective it looks like he was the adult in the room, the driving force behind standards adoption and even trying to steer C++-the-language towards a better vision of the future.
If he is gone, MSVC will again be the unloved bastard child it has long been before Herb's efforts started to pay off. This is very disheartening news.
I'm happy he held out for this long even though he was being stonewalled every step of the way, like when Microsoft proposed std::span and it was adopted but minus the range checking (which was the whole point of std::span).
Now he has been pushing for a C++ preprocessor. Consider how desperate you have to be to even consider that as a potential solution for naysayers blocking your every move.
tux3 18 hours ago [-]
The rumor that has been widely circulating is that the MSVC backend is being reused as a code generator for the Rust compiler (because nobody really understands PDBs anymore, not even Microsoft, and especially LLVM doesn't. So rustc could be a MSVC frontend instead to reuse all the existing arcane logic.)
MSVC will continue to be used for many years, and especially the backend might see renewed effort. But I don't know about the C++ frontend specifically, I've seen complaints about more and more bugs on the cpp subreddit. It's possible MS will be investing a little less in C++.
pjmlp 17 hours ago [-]
Disregarding the rumor, it is quite public information that on Azure side, C and C++ are now only allowed for existing code bases, or scenarios where nothing else is available.
Meanwhile on the Windows side, it was made official at Ignite that a similar decision is now being followed for Windows as well.
Here's the official stuff, so whatever happens to MSVC is secondary.
They've made statements like that for a long time now. But they've never escaped using C++ when performance matters. The game dev roles very clearly ask for C++, for example.
Rather, it seems that as computers have gotten faster, there's been more places where safety is preferable to performance.
pjmlp 2 hours ago [-]
The proof is in the pudding: how performance-critical do you consider Pluton firmware, or the network card firmware supporting Azure workloads?
Two examples of stuff publicly rewritten into Rust.
Games are special; they aren't what Windows security cares about in the first instance, especially when TinyGlade is the first ever commercial success using Rust.
Yet most games are done with Unreal and Unity, and yes, there is lots of C++ there, but it is mostly Blueprints, Verse, and C# on top that the large majority of studios reach for.
_huayra_ 16 hours ago [-]
> in alignment with the Secure Future Initiative, we are adopting safer programming languages, gradually moving functionality from C++ implementation to Rust.
This seems like one hell of an initiative for the Windows OS. That is millions of lines of C++ code, often with parts from waaay back. A friend who works on one of the OS teams told me that his team got a boomerang hire that worked on Windows back in the 90s and he was still finding parts of his code in there!
I hope this corporate interest bodes well for Rust though. For C++ the ABI break issue really seems to have caused a schism, where Chandler et al were basically rebuffed when trying to find some timeline to break it, and then Google dropped all their support on the committee in favor of Carbon, Rust, etc.
pjmlp 15 hours ago [-]
Apple and Google focusing on their own stuff is one of the reasons why clang lost velocity in ISO C++ adoption; most of the C++ compiler vendors that fork clang don't contribute frontend stuff, only LLVM, and with those two out it took some time until new folks jumped in to replace their contributions.
Likewise you will notice MSVC is no longer riding the wave in regards to C++23, after being the first to fully support C++20.
Then there are all those other compilers out there, lost somewhere between C++14 and C++17, and most likely never moving beyond that.
bluGill 18 hours ago [-]
> Now he has been pushing for a C++ preprocessor.
He has been showing it, but not pushing it. The difference is subtle but important. He is showing a lot of "what ifs", trying them, and pushing the useful ones back into the language. Reflection is on track for C++26 in large part because he inspired a lot of people with his metaclasses talk (a long time ago, but doing things right takes time).
chrsig 18 hours ago [-]
It looks like he's staying on the committee and what not, just changing his day job. That's actually one of the benefits of having a committee & iso standardization process -- things aren't so reliant on a single engineer staying employed at a single company.
I'm sure it's never as clean a situation as anyone would like, but hey, world is a rough place sometimes.
torginus 20 hours ago [-]
I have been saying this for more than a decade, but the number one thing that killed C++ as an attractive modern language is (the lack of) modules - the ability to include C++ code and libraries from others (perhaps with transitive dependencies) would allow an actual community of devs and companies to spring up around the language.
Instead we have greybeards and lone warriors, and million-line legacy codebases, half of which have their own idea on what a string or a thread is.
fsloth 19 hours ago [-]
” killed C++ as an attractive modern language”
I’m not sure where you got this perception that it’s dead.
C++ remains the only game in town in many domains.
That said, _unless you work in those domains_ there is no good reason to use C++ IMHO.
Apart from the legacy codebases, there’s lots of C++ greenfield development.
” the ability to include C++ code and libraries from others ”
Libraries in vcpkg - a large number - are compatible enough to be used in this sense. It's possible your specific domain is lacking contributions, or you've been looking in the wrong places?
MichaelZuo 19 hours ago [-]
Yeah, a ‘module’-based system in various languages is so much less efficient that it seems absurd to compare them for anything that actually requires that performance.
torginus 18 hours ago [-]
Honestly not sure what you mean by that - in C#, for example, it doesn't matter to the compiler where the code comes from; it can be JITed/inlined just the same even if it's coming from a different DLL.
I haven't seen any perf impact of splitting stuff between files/js modules in typescript either.
What I'm guessing is that you mean that static compilers, like that of C++, need to be able to 'see' large amounts of code to make clever inlining optimizations.
Which shouldn't be the case if the code is well designed, and/or the compiler can prove invariants necessary for optimization without having to look at the body of the code.
pjmlp 19 hours ago [-]
Modules could be better; the problem is some greybeards and lone warriors (thankfully not all) that insist on using C++ as if it were plain old C.
Basically, it is no different from renaming .js to .ts to take advantage of some stuff in Visual Studio Code, while keeping on writing plain old JavaScript.
eej71 19 hours ago [-]
I think the struggle with modules has much more to do with the complexity of the problem at hand. I think the solution looks very easy should one be willing to dispense with large parts of the ecosystem. But if your goal is to keep the ecosystem together and not break the world (a la Python 2/3 or Perl 5/6) and solve the problem at hand (waves vaguely at modules) - then it's a really hard problem.
I wish I could say modules don't work, but I have yet to understand them. Which is probably a big part of its problem.
pjmlp 18 hours ago [-]
All my hobby coding in C++ makes use of modules.
Visual C++ and clang, alongside MSBuild and CMake/ninja.
As for ecosystem fragmentation, it has been the same old story since WG14 and WG21 exist, each compiler and platform is their own snowflake of what they actually support.
_huayra_ 16 hours ago [-]
> All my hobby coding in C++ makes use of modules.
Do you have an example (of yours or others) that you could link?
I've been trying to get this up and running myself, but can't seem to whisper the right CMake prayers.
Stop using Python 3 as an example. It is really tiring to hear about an extreme case of gross incompetence over and over again, while over in, say, the Grails/Spring ecosystem I don't even bother upgrading Grails 3 or 4 plugins to Grails 6, because they still work as intended. When you upgrade a plugin from one version to the next, you're just swapping out build.gradle, the gradle wrapper and a bunch of ancillary properties files. The build system changes, but everything else stays the same, with only a tiny, tiny minority of plugins being affected, and even then the things that broke are absolute nonissues that can be fixed relatively quickly.
It is kind of interesting how the Python community hasn't learned a thing from Python 2/3. The problem isn't breaking backwards compatibility. Probably the biggest mistake you can make is acting like breaking backwards compatibility is a big deal, and therefore piling up as many breaking changes as possible and releasing them all at once, so as to maximize pushback and upgrade friction.
It is in fact the exact opposite. If you break 10 libraries out of a million, you as the language developer can step in and upgrade them on behalf of the original maintainer. The users increment a library version when they increment the language version and done.
bluGill 17 hours ago [-]
Python 3 is a great example. They looked at what others had done. They carefully thought about the problem. They built tools to migrate. They announced plans. They really thought they had found a better answer that would work out, because they had planned for everything.
Of course we are now looking at things in hindsight and see what didn't work.
earthboundkid 16 hours ago [-]
It's really important to be clear about the lessons to be learned from Python 3.
1. Forward compatibility is more important than backward compatibility.
2. Automated refactoring tools don't help with 1.
The problem wasn't that they broke a lot in Python 3. It was that you couldn't write your Python 2 in such a way as to be compatible with it until well into the transition process, when the six package got popular and the devs fixed needlessly broken things in Python 2.
VyseofArcadia 18 hours ago [-]
I have seen a lot of C++ code that has a lot of "this is clearly just C" in it. None of it is because of "greybeards and lone warriors". All of it was because it started as a C codebase, and sometime in the mid to late 90s when object-oriented fever swept the world they started just adding C++ on top of the existing C codebase.
Given that the general industry approach to technical debt is "yes, more please", it is unsurprising to me that any sufficiently old C++ project still has lots and lots of plain C inside it.
pjmlp 18 hours ago [-]
Except the complaint equally applies to green field projects.
munificent 14 hours ago [-]
I really, really like this article. I think the two camps the author describes very much reflect my experience over the past couple of decades at a dotcom startup, then a game developer, and now at Google.
However, I think the author is a little off on the root cause. They emphasize tooling: the ability to build reliably and cleanly from source. That's a piece of it, but a relatively small piece.
I think the real distinguishing factor between the two camps is automated testing. The author mentions testing a couple of times, but I want to emphasize how critical that is.
If you don't have a comprehensive set of test suites that you are willing to rely on when making code changes, then your source code is a black box. It doesn't matter if you have the world's greatest automated refactoring tools that output the most beautiful looking code changes. If you don't have automated tests to validate that the change doesn't break an app and cost the company money, you won't be able to land it.
Working on a "legacy C++ app" (like, for example, Madden NFL back when I was at EA) was like working on a giant black box. You could fairly confidently add new features and new code onto the side. But if you wanted to touch existing code, you needed a very compelling reason to do so in order to outweigh the risk of breaking something unexpectedly. Without automated tests, there was simply no reliable way to determine if a change caused a regression.
And, because C++ is C++, even entirely harmless seeming code changes can cause regressions. Once you've got things like reinterpret_cast<>, damn near any change can break damn near anything else.
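To make that concrete, here is a minimal sketch (hypothetical types, not from any real codebase) of how a reinterpret_cast quietly couples code to details that a "harmless" change elsewhere can invalidate:

    #include <cstdint>
    #include <cstring>

    // Someone long ago decided a packet "is just" this header followed by payload.
    struct Header { std::uint32_t id; std::uint16_t len; };

    void parse(const char* wire_bytes) {
        // Classic legacy pattern: reinterpret the raw buffer as the struct.
        // This silently depends on Header's exact size, padding and field order,
        // and is formally undefined behaviour (aliasing, alignment).
        const Header* h = reinterpret_cast<const Header*>(wire_bytes);
        (void)h;  // ... use h->id, h->len ...
    }

    // An innocent-looking change far away, say adding a field for a new feature:
    //     struct Header { std::uint32_t id; std::uint16_t len; std::uint16_t flags; };
    // shifts the layout and breaks parse() with no compiler error at all.

    // The explicit version the cast was hiding: copy the bytes out field by field.
    Header parse_safe(const char* wire_bytes) {
        Header h{};
        std::memcpy(&h.id, wire_bytes, sizeof h.id);
        std::memcpy(&h.len, wire_bytes + sizeof h.id, sizeof h.len);
        return h;
    }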
So people working in these codebases behave sort of like surgeons with a "do no harm" philosophy. They touch as little as possible, as non-invasively as possible. Otherwise, the risk of harming the patient is too high.
It's a miserable way to program long-term. But it's really hard to get out of that mess once you're in it. It takes a monumental amount of political capital from engineering leadership to build a strong testing culture, re-architect a codebase to be testable, and write all the tests.
A lot of C++ committee changes aimed at legacy C++ developers are about "how can we help these people that are already in a mess survive?" That's a very different problem than asking, "Given a healthy, tested codebase, how can we make developers working in it go faster?"
badsectoracula 12 hours ago [-]
> A lot of C++ committee changes aimed at legacy C++ developers are about "how can we help these people that are already in a mess survive?" That's a very different problem than asking, "Given a healthy, tested codebase, how can we make developers working in it go faster?"
Having also worked at a few gamedev studios, IME there isn't a real distinction between the two since it is always a matter of time for the former to become the latter.
Sometimes it doesn't even take that long, all it takes is a single innocuous vertical slice with a pointlessly immovable deadline to inject enough harm in a codebase so you spend the next year fighting bugs that shouldn't have existed in the first place while also having to do everything else at the same time (and all planned timeframes made with only the "everything else" in mind, of course).
IMO even if it doesn't sound good, it is much more practical to learn how to deal with the mud than assume pigs do not exist :-P
munificent 11 hours ago [-]
> Having also worked at a few gamedev studios, IME there isn't a real distinction between the two since it is always a matter of time for the former to become the latter.
That was very much my experience at EA, but has definitely not been my experience at Google. While everyone struggles with tech debt, at Google I've worked in many codebases that have been continuously well-maintained with good test coverage for over a decade.
Really, once you build a culture that says, "People not on your team may edit your code without asking and will rely on your tests to make sure they don't break things," teams get highly incentivized to write tests.
mgaunard 23 hours ago [-]
If you're comparing Herb Sutter and the Google people at the standard committee, there is one thing that was clear: Herb was good at getting people to agree on compromises that served everybody, while Google was mostly claiming they knew better than everybody else and pushing their own agenda.
danpalmer 1 days ago [-]
Python similarly has 2-3 factions in my experience: teams doing engineering in Python and using all the modern tooling, linting, packaging, types, testing, etc; teams doing data science and using modern but different tooling (i.e. Anaconda); and teams that don't get onboard in any of the language health initiatives and are on unsupported language versions with no packaging, tooling, linting, etc.
Javascript/Node/Typescript has even more identifiable factions.
I think developing factions around these things is unfortunately normal as languages grow up and get used in different ways. Rust has arguably tried to stay away from this, but the flip side is a higher learning curve because it just doesn't let certain factions exist. Go is probably the best attempt to prevent factions and gain wide adoption, but even then the generics crowd forced the language to adopt them.
dehrmann 1 days ago [-]
When you put it this way, personas might be a better term than factions.
danpalmer 1 days ago [-]
Yeah I think that's a much friendlier term. I do think language ecosystems have a hard time, because on the one hand they should be aiming to be as useful as possible, which means doing more, on the other hand they have to acknowledge that any given user will likely not use all the language and that the rest of it may hinder them, which means doing less.
C++ does a lot, but has a big disengaged crowd, for many reasons, and that crowd will suffer from the push forward. Python and Node are similar.
bogeholm 14 hours ago [-]
The first two factions you describe in Python (types, testing etc. vs. data science and Anaconda) can work together just fine.
Source: I am in both factions, as are my colleagues :)
shultays 23 hours ago [-]
“We must minimize the need to change existing code. For adoption in existing code, decades of experience has consistently shown that most customers with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules, not even for safety reasons unless regulatory requirements compel them to do so.” – Herb Sutter
> with large code bases cannot and will not change even 1% of their lines of code in order to satisfy strictness rules
Do people really say this? Voice this in committee? I have been at a few companies, one of them fairly large, and all were happy to upgrade to newer standards, even looking forward to it, and they already spend a lot of time updating their build systems. Changing 1% of the code on top of that is probably not that much in comparison.
loup-vaillant 22 hours ago [-]
> Changing 1% of code on top of that is probably not really that much compared
Quite a few companies have millions and millions of lines of code. Changing 1% of it would mean changing more than 10K lines of code, perhaps even more than 100K. And in much bigger code bases, changing anything has a risk of breaking something: not just because you might make a mistake, but because your program is full of undefined behaviour, and changing anything might manifest latent bugs.
Given that, I'm not surprised people say that Sutter quote with a straight face.
bregma 21 hours ago [-]
Many of my customers are in an industry with a huge C++ code base and it's all under active development. Safety certification requirements are onerous and lead-times for development are long: many are now experimenting with C++17 and C++20 is on the long-term horizon but not yet a requirement. Because of the safety certification requirements and the fact that the expected lifecycle of the software is the order of decades after their products have been released, changing any lines of their code for any reason is always risky. Lives can be at stake.
But this is a multi-billion-dollar industry. If you're working on scripting a little browser "app" for a phone things may be different.
titanomachy 16 hours ago [-]
“Little browser apps for phones” are a trillion-dollar industry
nicce 18 hours ago [-]
Is there a lot of manual work to get the new certificate? E.g. is a human reviewing the code? If not, someone should build a CI pipeline for the certification process.
bluGill 18 hours ago [-]
Hundreds of hours of manual testing. I don't have to do safety certificates, but my code gets 500 hours of manual testing (I'm not allowed to give real numbers; these are close enough). They find enough critical, can't-ship issues where the fix is risky enough to require starting all over that we typically end up doing 2500 hours of manual testing on every release.
We have a large automated test suite that runs on every build and takes hours. The problem with automated tests is that they only verify that situations you thought of work the way you think they should, while human testers find slight variations of setup that you wouldn't think matter until they do. Human tests also find cases where the way you expect things to work doesn't make sense in the real world.
bregma 12 hours ago [-]
Wait until you find out about the cat test. It found a failure mode no human had thought of. No amount of the developer claiming a test like that was not fair was enough to invalidate the results. No actual cats were harmed but treats may have been given.
ModernMech 9 hours ago [-]
Do you have more context? I'm having trouble googling what you're referencing.
noisy_boy 5 hours ago [-]
Simulate a cat walking on the keyboard to handle weird inputs?
ModernMech 4 hours ago [-]
Isn't that just fuzzing? I thought maybe there was a specific thing called the cat test.
rwmj 21 hours ago [-]
People just don't make mass changes to existing working code. Mostly they cannot. Even if the tooling was available, which it's not, it's also about reeducating their developers, who don't want to or can't change. Plus it'd have to be recertified. It's all cost with no benefit.
Except, allegedly, at Google. But is there any evidence they actually do this, eg. in public code bases? Or is it just hype?
j16sdiz 8 hours ago [-]
Google does this to their internal monorepo.
This is one of the reasons why they are bad at open sourcing - their internal code almost never matches what is released.
Someone 23 hours ago [-]
Could be selection bias. Companies (or departments within companies) who are still actively developing their C++ code probably tend to hire more developers and consultants than companies who are doing minimal maintenance on their code base, and that might correlate well with the “two factions of C++” discussed here.
“Our code is an asset” ⇒ code kept up-to-date
“Our code is a burden, but we need it” ⇒ change averse
otabdeveloper4 19 hours ago [-]
> Changing 1% of code on top of that is probably not really that much compared
Changing 1% across all modules is a nightmare. Changing one module which is 1% of the code is nothing.
MathMonkeyMan 7 hours ago [-]
A company that I worked at had a few very large C++ related migrations, and they were all very very expensive.
The first was removing `long` from the code, since a lot of code assumed its size (is it like `int` or like `long long int`?) and as machines were upgraded it caused problems.
The second was moving to C++11/14/17. Most of the difficulty was toolchains on unixen that did not support the new versions of the language, or for which support was incomplete, or for which upgrading to a version with support broke existing builds.
The third was moving to Linux from big iron unixen. As far as I understand, this initiative is still underway. It was already underway in 2011 when I joined the company.
This is a rich company with a large, healthy engineering department. I imagine that most other companies would not or could not bother.
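On the `long` migration mentioned above, for anyone who hasn't hit it: `long` is 64-bit on typical Linux/macOS (LP64) but 32-bit on 64-bit Windows (LLP64), so code that bakes in either assumption breaks when the platform changes. A hedged, hypothetical illustration of the kind of assumption such a migration replaces with fixed-width types:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // LP64 (Linux/macOS): sizeof(long) == 8; LLP64 (64-bit Windows): == 4.
        std::printf("sizeof(long) = %zu\n", sizeof(long));

        // The classic bug: stuffing a 64-bit offset or count into a long,
        // which silently truncates on LLP64 targets:
        //     long file_offset = some_64_bit_value;
        // The usual fix is explicit fixed-width types:
        std::int64_t file_offset = 0x1'0000'0000;   // always 64 bits
        std::printf("%lld\n", static_cast<long long>(file_offset));
        return 0;
    }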
ModernMech 4 hours ago [-]
That old joke about Stroustrup inventing C++ to keep developers perpetually employed keeps ringing true.
Hilift 23 hours ago [-]
Are you referring to his book written 20 years ago, or 25 years ago? As for "customers with large [C++] code bases": there aren't that many of these. Vendors, government. With code bases that have stewards, not programmers.
hypeatei 1 days ago [-]
One thing I cannot stand about C++ is the fractured nature of everything. Compilers, build tools, package management, etc... It feels like you need to be a wizard just to get a project compiling and start writing some code.
diath 1 days ago [-]
The worst part is when you want to bring along people that are not as much of a wizard as you are. I've been prototyping a multiplayer, online video game with MMO-like sharding for a while now, mostly the backend and core stuff, and I wanted to get two of my friends on the project to develop the gameplay logic, which is largely done through a dynamic scripting language. But some features (that, say, I did not foresee being needed yet) require source changes to expose the APIs to the scripting language. Now, these guys are capable of making those changes, but the onboarding process for a single potential co-developer is such a pain. I basically have to explain to them how to download a compiler, a package manager like vcpkg (which wasn't even that usable for these types of things pre-versioning, and still doesn't work properly - i.e. trying to pin the LuaJIT version to 2.0.5 for VM bytecode compatibility will attempt to build LuaJIT with cl.exe on Linux), a build system like CMake, and so on, then guide them through all the steps to get the compiler, the build system, and the libraries working, and then hope that in the end it will actually work and not force me to spend an entire day over remote desktop software trying to get them to become productive.
tom_ 1 days ago [-]
Include more of your dependencies in the repo and build them as part of the ordinary build process. Now a package manager does not need to get involved.
diath 1 days ago [-]
Manually copy-pasting source trees around sounds like such an outdated idea from decades ago for how to approach dependency management in a modern programming language. Not to mention that you then have to hook them up to the build system you are using, and not all of them will work out of the box with the one you chose for your project. Sure, if you are using CMake and your dependency uses CMake, you can add a subproject, but how do you deal with it when they're mixed and matched, aside from rewriting the builds for every dependency you're pulling in, or writing glue shell scripts to build them independently and put them into a directory? How do you then ensure the said shell script works across different platforms? There are way too many issues with that approach that are solved in other languages through a standardized project management tool.
a_t48 1 days ago [-]
You don't have to actually copy-paste. You can use CMake and FetchContent/CPM. You can specify custom build commands or inline-declare a project for anything small that you pull in that doesn't use CMake (you can call add_library with a glob on the folder FetchContent pulled in, for example - I've done so here https://github.com/basis-robotics/basis/blob/main/cpp/CMakeL... for a header-only lib). For large external dependencies that are either very slow to compile or for some reason aren't CMake, reach for the system package manager or similar. If you want to be really cross-platform and are supporting Qt/wxWidgets/etc, vcpkg+CMake+clang is a solid combo, if a bit slow and a bit disk-space heavy with the build cache.
lenkite 1 days ago [-]
Have you taken a look at CPM? https://github.com/cpm-cmake/CPM.cmake . It makes CMake project management easy - no need for a separate package manager tool.
mgaunard 23 hours ago [-]
And yet that's the right approach. It's not really copying but rather onboarding.
You don't want to depend on a third-party hosting the code, so you need to copy it, and pin it to a specific version. You might also need to patch it since you'll be using it and most likely will run into problems with it.
Using third-party code means taking ownership of all the problems one might encounter when trying to use it in your project, so you might as well just adapt it for it to work with your tools and processes.
If you use a modular system this is essentially just maintaining a fork.
physicsguy 16 hours ago [-]
People vendor dependencies in Go too!
cshokie 15 hours ago [-]
That’s similar to what vcpkg does under the covers. It clones the repo containing the dependency’s source code and then compiles it using the same compiler as the rest of your project. This avoids static libraries and ABI considerations while also avoiding having to copy/paste their entire source tree into your own.
cyclopeanutopia 1 days ago [-]
Can't you just put that into a docker container?
diath 1 days ago [-]
This is more of a workaround than a solution; see my other comment in this thread.
fsckboy 1 days ago [-]
you DO need to be a wizard to launch a large C++ project.
Yes, languages that are beginner friendly are ... friendlier. Yes, languages that stick to one or a small number of programming paradigms are friendlier. But if you want the "flexible efficiency and raw power of C" and "something higher level than C", C++ is your baby.
Maybe it would be better if we all used Java, Rust, and Go, but C++ sings its siren von Neumann song to the wizards, and there will always be wizard musicologists who steer their projects toward those rocks and, when they have just enough wax in their ears, they sail right past the rocks and come out the other side of the straits leading the rest of the fleet.
You can choose to follow them or not, for there's no shame in coming in 4th.
lenkite 24 hours ago [-]
Even the wizards are moving to Rust/Zig, since C++ stdlib performance is becoming terrible thanks to the ABI-frozen-till-heat-death-of-the-universe decision. Even wizards don't want to build a stdlib of their own from scratch.
Feels like the committee was smoking weed that day in la-la land. You can ignore all the safety stuff from Sean Baxter, but saying no to performance on the altar of permanent, unspecified ABI backward compatibility - when such was never mentioned as a design goal of C++ - means it's "Goodbye C++" for a long, long list of orgs and "wizards". The ABI was NEVER specified formally by the C++ standard - so why bother sacrificing the world for its immortal existence?
C++ is no longer the language of choice for greenfield native projects, and the committee takes the full blame.
asyx 24 hours ago [-]
Really looking forward to Zig 1.0. I feel like C++ has become a language where professionals are fine with the historical grime, but for hobbyists and people that need C++ occasionally there is just no motivation in the community to make the language more ergonomic.
physicsguy 13 hours ago [-]
ABI compatibility is one of those things that is necessary with such a long history, especially with commercial libraries that don't really have an equivalent in the newer languages. The issue with C++ that doesn't exist with its competitors is that there is a long tail of commercially used software that isn't source-available and that's incredibly important in certain use cases.
I worked in a previous role on C++ CAD/simulation software that required vendored things like solid modelling kernels, and it was incredibly painful. Occasionally one of the vendors would just not do the work and you'd end up having to spend half a year ripping out a dependency that worked perfectly well. The team working on the software was generally in favour of moving up to modern standards; while I was there we did C++03 -> C++17, for example, but that didn't finish until 4 years after the C++17 standard came out, for all sorts of reasons. When VS2017 came out everyone breathed a sigh of relief because suddenly we didn't have to wait to upgrade the compiler.
panstromek 21 hours ago [-]
So here's the thing. Almost none of the problems I have with C++ are related to the "flexible efficiency and raw power of C". You could easily have a language that is even more flexible and powerful, but much easier to use. Or not even use, just install.
C++ was always by far the most inefficient language to work with for me, because there's just so much chore and nonsense that you have to get through to get anything done, and almost none of it has any reasonable purpose; there's no efficiency tradeoff. I'm pretty sure that the insane build situation, or UB in uninitialized variables, or unspecified argument evaluation order never really benefited anybody. They are just bad decisions in the language, and that's all.
bluGill 18 hours ago [-]
> UB in uninitialized variables
You will be happy to learn that uninitialized variables are no longer UB as of C++26.
quotemstr 10 hours ago [-]
They're just initialized to some unspecified value and cause almost-as-hard-to-diagnose faults.
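For readers who haven't followed this: as I understand the C++26 change (the "erroneous behaviour" paper, P2795), reading an uninitialized local is no longer undefined behaviour but yields a defined, implementation-chosen "erroneous" value that tools may diagnose, and you can opt back into the old semantics explicitly. A minimal sketch:

    int compute() {
        int x;   // not initialized by the programmer
        // Pre-C++26: reading x below is undefined behaviour, and the optimizer may
        // assume it never happens, with arbitrarily surprising results.
        // C++26: x holds some fixed but unspecified "erroneous" value; reading it is
        // diagnosable, but it can no longer be assumed away by the optimizer.
        return x + 1;
    }

    // Opting out, e.g. for a large buffer you are about to overwrite anyway
    // (attribute per the C++26 wording, as I understand it):
    //     int scratch [[indeterminate]];   // reads before writes are still UB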
pjmlp 1 days ago [-]
So much for the theory. Then there is the hard reality of how the standard library is implemented, the variations across implementations, and how the ongoing ABI drama is preventing any performance improvements to it.
chrsig 19 hours ago [-]
> but C++ sings its siren von Neumann song to the wizards, and there will always be wizard musicologists who steer their projects toward those rocks and, when they have just enough wax in their ears, they sail right past the rocks and come out the other side of the straits leading the rest of the fleet.
beautiful, in equal parts true, sad, and endearing.
but also remember the vasa.
AlotOfReading 1 days ago [-]
Profiles aren't a mess because they're intended for legacy codebases instead of big tech monorepos. They're a mess because they're not a serious effort. There's no actual vision of what problems they're trying to solve or what the use cases are, or even what kind of guarantee profiles are going to make.
seanhunter 22 hours ago [-]
Ports of massive legacy codebases are possible and they happen. They can be extremely difficult, and they take will and effort, but they can get done. The idea that you have to slow down the development of the language standard for people who won't port to the new version is weird - those people won't be updating compilers anyway.
How do I know this? I migrated a codebase of about 20m lines of C++ at a major investment bank from pre-ansi compilers to ansi conformance across 3 platforms (Linux, Solaris and Windows). Not all the code ran on all 3 platforms (I'm looking at you, Solaris) but the vast majority did. Some of it was 20 years old before I touched it - we're talking pre-STL not even just pre ansi. The team was me + one other dude for Linux and Solaris and me + one other different dude for windows, and to give you an idea the target for gcc went from gcc 2.7[1] to gcc 4[2], so a pretty massive change. The build tooling was all CMake + a bunch of special custom shell we had developed to set env vars etc and a CI/CD pipeline that was all custom (and years ahead of its time). Version control was CVS. So, single central code repo and if there was a version conflict an expert (of which I was one but it gives me cold sweats) had to go in, edit the RCS files by hand and if they screwed up all version control for everyone was totally hosed until someone restored from backup and redid the fix successfully.
While we were doing the port to make things harder there was a community of 667 developers[3] actively developing features on this codebase and it had to get pushed out come hell or high water every 2 weeks. Also, this being the securities division of a major investment bank, if anything screwed up real money would be lost.
It was a lot of work, but it got done. I did all my work using vim and quickfix lists (not any fancy pants tooling) including on windows but my windows colleague used visual C++ for his work.[4]
[1] Released in 1995
[2] Released in 2005
[3] yes. The CTO once memorably described it to me as "The number of the beast plus Kirat". Referring to one particularly prolific developer who is somewhat of a legend on Wall Street.
[4] This was in the era of "debugging the error novel" so you're talking 70 pages of ascii sometimes for a single error message with a template backtrace, and of course when you're porting you're getting tens of thousands of these errors. I actually wrote FAQs (for myself as much as anything) about when you were supposed to change "class" to "typename", when you needed "typedef typename" and when you just needed "typedef" etc. So glad I don't do that any more.
throwaway2037 21 hours ago [-]
Was it Morgan Stanley? That is the only shop I can think of that is so focused on C++. Hell, they hired Bjarne Stroustrup.
But since you say version control was CVS, then I guess it was Goldman. They still have that sheizen for SecDB/Slang today.
And I assume that "Kirat" is Kirat Singh of Goldman SecDB/JPM Athena/BofA Quartz/Beacon?
seanhunter 20 hours ago [-]
Yes goldman and yes that Kirat. Fun fact, the Windows port colleague was John Madsen who later became CTO of Goldman I think.
I’m not sure I understand the whole ABI argument. Isn’t the raison d’être for namespace versions precisely to evolve the language? Why can’t the existing implementations be copied into a std::v2, but with a changed ABI? Existing ABI issues become non-issues because the old code remains, while new code by default compiles against v2, picking up all the goodies, and can downgrade the types it actually uses across the ABI boundary, where needed, by changing the namespace version used for a given compilation unit via compile-time flags (or something along these lines).
Were namespace versions determined not to solve this problem? That would be the most ironic thing of all if the versioning mechanism introduced in C++11 to deal with std::string is either unused, untrusted, or unworkable for the purpose for which it was intended.
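For what it's worth, the mechanism in question is (I believe) inline namespaces, which libstdc++ already leans on for the C++11 std::string ABI change (std::__cxx11). A minimal sketch of the "std::v2" idea, with made-up names, just to show why the mangled names keep the two ABIs apart:

    namespace mylib {

    // Old layout stays put, so already-compiled code keeps linking against it.
    namespace v1 {
        struct widget { int id; };
        void process(const widget&);
    }

    // New code picks this up by default because the namespace is inline; the
    // mangled names still contain "v2", so the two never collide at link time.
    inline namespace v2 {
        struct widget { long long id; bool cached; };  // changed layout, changed ABI
        void process(const widget&);
    }

    }  // namespace mylib

    // Callers write mylib::widget / mylib::process and get v2. A translation unit
    // that must keep the old ABI spells out mylib::v1::widget explicitly.

As I understand it, the usual objection is less about the mechanism than about everything it doesn't cover: library types that leak into users' own public structs, duplicated template instantiations, and the cost of maintaining two stdlib flavours indefinitely.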
ramshanker 1 hours ago [-]
I am working on a new C++ project in 2024 for my part time project. And this article provided me enough information to battle future "Why not use XYZ instead" discussion. ;)
My rationale for using C++ in 2024: (A) Extreme computational performance desired. (B) I learned C++ 20 years back. (C) C++ has good enough cross-platform (OS) compatibility.
liontwist 21 hours ago [-]
“Governments are telling you to stop using C++”.
This invokes the imagery of a 1950s Apollo era scientist saying something serious. But I promise you there is no visionary low level language authority in the background. It’s just a staffer being influenced by the circle of blogs prominent on programming Reddit and twitter.
> no overhead principle
It’s actually nice to hear they are asserting a more conservative outlook and have some guiding design principle.
Bjarne is more of a super-bureaucrat than a designer. In the early days he pulled C++ into whatever language movements were popular. For a while it looked like Rust was having that influence.
But the outcome has been a refinement of C++ library safety features which are moderate and easy to adopt.
PittleyDunkin 1 days ago [-]
> Nimble, modern, highly capable tech corporations that understand that their code is an asset. (This isn’t strictly big tech. Any sane greenfield C++ startup will also fall into this category.)
Oh I see, this is a fantasy.
badmintonbaseba 23 hours ago [-]
Keyword is "sane". You can probably count all "sane greenfield C++ startups" on one hand.
mkoubaa 17 hours ago [-]
It's also just plain wrong. Even the cleanest, most beautiful and efficient code is a liability. You sell software, not code.
It's all about the magnitude of the liability, not the direction.
fwip 15 hours ago [-]
Code is an asset in the same way that any process documents in your organization are. They represent codified solutions to problems.
You do not need to re-solve this problem, and when a similar problem occurs, you can adapt the existing solution to the new problem.
Another way to think about it: if code was not an asset, we would delete it immediately after compilation.
mkoubaa 8 hours ago [-]
Having no code corresponding to the software in service is a bigger liability than having it.
tucnak 1 days ago [-]
The Rust people pursue "solidarity" as a virtue. They don't understand that factions are a way of life, so any sufficiently impactful technology will be "fractured" to some extent. This is a good thing. Unitarity, solidarity, homogeneous philosophies are not, but they will have to learn that the hard way like everybody else.
omoikane 16 hours ago [-]
> Stories of people trying their best to participate in the C++-standard committee process across multiple years
It was a fascinating story, particularly about how people finally came to terms with accepting that a seemingly ugly way of doing things really is the best way (you just can't "parse better").
It's fascinating how much more complicated this ends up being to deliver in the C and C++ ecosystem.
#embed has to pretend - in principle - that we're going to conjure all these byte values into existence, as actual numbers, and then by the "as if" rule the compiler is not really going to do that because it would be crazy slow. The reality that we're just going to shove the data into the program as if it was an array is an (obviously, implemented everywhere) optimisation, rather than part of the language specification.
The analogous Rust `include_bytes!` just gets you a &'static [u8; N] -- an immutable reference to an array of N bytes which lives forever.
At first I thought OK, well, maybe the C approach lets you do some clever compile stuff that Rust can't do. Nope. If I have a compile time function checksum which calculates a 64-bit checksum of the slice passed by immutable reference - and a file of 128MB of data called firmware.bin, Rust is completely fine with let sum = checksum(include_bytes!("firmware.bin")); and that results in a 64-bit value, the 128MB file evaporated after being checksummed.
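To make the contrast concrete, the C side looks roughly like this (C23's #embed, which as far as I know has also been adopted for C++26; this assumes a firmware.bin sitting next to the source). The "as if" part is that the standard describes the expansion as a brace-enclosed list of integer values, and every real compiler then skips materializing them:

    #include <stddef.h>

    /* Expands "as if" the file were spelled out as a comma-separated list of
       byte values; in practice the compiler just shoves the data in directly. */
    static const unsigned char firmware[] = {
    #embed "firmware.bin"
    };

    unsigned long long checksum(void) {
        unsigned long long sum = 0;
        for (size_t i = 0; i < sizeof firmware; ++i)
            sum = sum * 31 + firmware[i];
        return sum;
    }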
zamalek 9 hours ago [-]
> Relatively modern, capable tech corporations that understand that their code is an asset.
I strongly disagree with this. The more code you have, the more resources you have to spend maintaining it. There is a very relevant example close by in the post: the bit about Google having a clang-based tool that can refactor the entire codebase. Great! Problem is, an engineer had to spend their time writing that, and you had to pay that engineer money - all because you have an unmanageable amount of code.
The real tech asset is processes: the things you have figured out in order to manage such an ungodly amount of code. Your most senior engineers, specifically what's in their heads, are an asset too.
serjts 17 hours ago [-]
The real, ever-present and probably future nail in the coffin of C++ is the lack of a standard package manager and build system. The rest is just what happened to be picked up by social media/news, as it is easier and flashier to talk about.
matt3210 17 hours ago [-]
Conan and CMake, problem solved.
serjts 16 hours ago [-]
Ah, we are from the same tribe! Let's go talk to the Bazel and vcpkg tribes. But what about the fact that CMake isn't a build system? Also, Conan 2.0 was a bit rough the last I saw... maybe that's why CLion/IntelliJ dropped support for it out of the box and now uses vcpkg?
Conscat 10 hours ago [-]
I've been contributing some C++ packages to xrepo, which personally imo is the best of all worlds. c:
einpoklum 12 hours ago [-]
There is a Bazel tribe? I've heard it mentioned a couple of times, but I have yet to encounter a C++ project I need to build which supports Bazel but not CMake. In fact, just _any_ Bazel support seems quite rare to me. Am I living in a bubble?
As for vcpkg - yeah, that's popular, for sure.
Conscat 10 hours ago [-]
Daisy Hollman says she has "drunk the Bazel kool-aid" and is a big proponent of its usage outside Google.
einpoklum 9 hours ago [-]
Bazel is a project created by Google, and Hollman works for Google. So - perhaps the Bazel tribe is people working at Google? There _are_ quite a few of those....
up2isomorphism 4 hours ago [-]
It does not require particularly careful inspection to see that, with all these zillions of features coming into C++20, C++ still does not have a straightforward string split function. And I still feel printf is more reliable and easier to use than all this "modern" fmt.
There must be some extremely ideological reason behind these horrible “modern” C++ standards.
There were some good trends happening around C++11, but it is completely out of control now.
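On the string-split point above: the closest the standard library gets is C++20 ranges (with the later tweaks that made the pieces usable as contiguous chunks), and a hedged sketch of it rather proves the ergonomics complaint, since what many languages ship as a one-liner still needs a helper:

    #include <iostream>
    #include <ranges>
    #include <string>
    #include <string_view>
    #include <vector>

    // What many languages expose directly as split("a,b,c", ",").
    std::vector<std::string> split(std::string_view text, char delim) {
        std::vector<std::string> out;
        for (auto&& piece : text | std::views::split(delim))
            out.emplace_back(piece.begin(), piece.end());  // each piece is a subrange
        return out;
    }

    int main() {
        for (const auto& part : split("a,b,c", ','))
            std::cout << part << '\n';
    }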
uluyol 8 hours ago [-]
I think the discussions in these threads show how accurate the framing of this article is. You have some people celebrating Google and friends (slowly) leaving the C++ ecosystem, and those who continue to emphasize the flaws that have driven companies away from it in recent history (safety being #1 on the list).
__d 1 days ago [-]
The author doesn’t appear to consider the use of binary-only (typically commercial licensed) libraries. There’s still a need for an ABI and “modern tooling” isn’t a magic wand.
kkert 1 days ago [-]
I'd guess that majority of such binary-only libraries use C ABI interfaces. The entire Windows ecosystem of COM modules works on top of C ABI's.
rfoo 24 hours ago [-]
Until the moment when you are forced to use a third-party SDK with std:: and boost:: (yeah, WTF?) types in the interface.
Oh, and you can't avoid that; say you are working on a trading bot and that's the only "supported" way to connect to an exchange.
In the end people usually just reverse engineer and reimplement to get rid of such cursed blob. Fortunately, it works - the vendor can't effectively push all clients to update their SDK too, so all wire protocols are infinitely backward compatible.
gary_0 20 hours ago [-]
The last time I was forced to deal with such a proprietary SDK (that required an ancient Windows C++ runtime, and segfaulted like crazy, natch), rather than waste months reverse-engineering it, I wrapped it in a separate process and talked to it via IPC. That got the job done, and every time their shitty code locked up or crashed, I just restarted the wrapper process from the main application.
marcosdumay 16 hours ago [-]
Serialized data over stdin/stdout is becoming my favorite protocol for ABI compatibility.
The amount of problems this solves is incredible, and it creates none of the ops issues with configuring and launching some new kind of Docker image.
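A minimal sketch of the wrapper-process side of that idea (the vendor::* names are hypothetical stand-ins for whatever proprietary SDK is being quarantined): it reads newline-delimited requests on stdin and answers on stdout, so the main application never links the vendor code at all and can simply respawn the wrapper when it crashes.

    #include <iostream>
    #include <string>

    // Hypothetical stand-in for the proprietary SDK living only in this process.
    namespace vendor {
        std::string send_order(const std::string& order) { return "ack:" + order; }
    }

    int main() {
        std::string line;
        // One request per line in, one response per line out; the parent process
        // talks to us through pipes and restarts us if we lock up or segfault.
        while (std::getline(std::cin, line)) {
            std::cout << vendor::send_order(line) << '\n';
            std::cout.flush();
        }
        return 0;
    }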
HelloNurse 19 hours ago [-]
For mummified binary dependencies, C# allows tediously fine control over stack frames in DLL function calls, and similar FFI systems are likely to be equally malleable; there's probably a blind spot towards reverse engineering in C++, due to the expectation that a random ABI should "just work".
rfoo 18 hours ago [-]
The problem is actually not ABI, it's ODR violation. You can make it work, just make your own wrapper in C ABI, link it with whatever dependency (and version) that your vendor insists on, then `-fvisibility=hidden` and partial link the entire shit to avoid ODR violation.
People reverse these SDK partly because it makes the codebase saner, and partly because, well, this is trading, a saner implementation is almost guaranteed to be faster than vendor's bullshit one, and guess who cares about being a little bit faster than everyone else?
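A minimal sketch of the wrapper approach being described, with hypothetical names: expose a plain C ABI, keep the vendor's std::/boost:: types behind it, compile this one translation unit against whatever library versions the vendor insists on, and build it with -fvisibility=hidden so only the extern "C" symbols escape and nothing collides with the rest of the program.

    // vendor_shim.cpp - built with the vendor's required toolchain and libraries,
    // then linked with hidden visibility so only the C entry points are exported.
    #include <string>
    // #include <vendor_sdk.hpp>   // hypothetical SDK leaking std::/boost:: types

    extern "C" {
        // Only C types cross this boundary, so the caller's standard library,
        // compiler and flags no longer have to match the vendor's.
        int shim_connect(const char* endpoint);
        int shim_send_order(const char* symbol, double price, long quantity);
    }

    int shim_connect(const char* endpoint) {
        std::string ep(endpoint);   // vendor-facing C++ types live only on this side
        // return vendor::Session::connect(ep) ? 0 : -1;   // hypothetical SDK call
        (void)ep;
        return 0;
    }

    int shim_send_order(const char* symbol, double price, long quantity) {
        (void)symbol; (void)price; (void)quantity;
        return 0;   // forward to the SDK here
    }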
Mond_ 14 hours ago [-]
Woah, my post made it to the front page and I'm late. Hi!
In hindsight I would've probably written a few things differently, but I really didn't want to fall into a trap of getting stuck editing.
Having skimmed it, I hope more people read that article.
imp0cat 12 hours ago [-]
> Several months later, I learned I had experienced slight brain damage due to hypoxia and I’ve been slowly recovering ever since. The worst part of all of this is that I said in that post that I was enjoying golang. In other words, I had brain damage and suddenly found writing Go to be fun. Take from that what you will
OMG. ;) It's an interesting rant nonetheless.
ModernMech 5 hours ago [-]
I read it.
To save people the trouble: it seemed like a manic rant intended to pick several bones (at least the author is self-aware enough to admit as much). It's heavy on "trust me, I have sources" and light on actual content. It's got plenty of drama and insinuations, from calling people liars and narcissists to, finally, Nazis. It veers from committee drama, to Trump, to feminism, to AI... very hard to follow.
Worthy of a daytime soap opera but other than that there's nothing notable there. Except it does make me want to avoid all these people, on both sides of whatever drama this is.
imp0cat 11 hours ago [-]
> One example of this is [...] the new proposed (but not yet approved) Boost website. This is located at boost.io and I’m not going to turn that into a clickable link, and that’s because this proposed website brings with it a new logo. [...] This logo features a Nazi dog whistle. The Nazi SS lightning bolts.
> The thing about dog whistles like this is that you can feign ignorance or act like someone is seeing something that isn’t there, but for something egregious it’s very hard to defend it in this case.
> Of course, there’s other political dog whistles out there in the tech world right now. Justine Tunney named her C library, cosmopolitan5, which I personally believe is named after the term Rootless Cosmopolitan. This is a pejorative Soviet epithet which was used primarily during Joseph Stalin’s antisemitic campaign in the late 40s and early 50s. This is obviously much harder to prove6 as Justine has done a very good job of deleting some very eyebrow raising tweets over the years, even having them scrubbed from The Internet Archive’s Wayback Machine [...]
> Justine, unfortunately, doesn’t appear to have made any amends either, at least publicly, or even acknowledged her past behavior, though she is more than happy to reference her time in the Occupy Wall Street movement. These days however, she’s busy working on llamafiles for Mozilla. For those of you not in the know, a llamafile is basically for turning an LLM’s weights into an executable.
And then he makes (yet another!) detour to AI and C++ which I am going to follow.
It's a massive post though. Right now I am an hour in and probably about 75% done and I am skipping most of the linked articles. Except for the Ender's game parts. I highly recommend those.
nottorp 21 hours ago [-]
Two factions? Considering C++ has everything, I'd assume there are tens of factions.
humanrebar 19 hours ago [-]
This is true. That is why there is no leadership committee for the C++ ecosystem. There is no way to select one.
bayindirh 20 hours ago [-]
I personally like these discussions about C++. Yes, I think C++ should continue to be C++. I also like it that way.
On the other hand, having a bit more transparency into the working groups and their way of doing things might allow the process to become a bit more efficient and approachable, and maybe allow shedding some of the problems which have accumulated due to being so isolated from the world.
Some of the alleged events really leave a bad taste in the mouth, and really cast a shadow of doubt over the future of C++.
Lastly, alienating people by shredding their work and bullying them emotionally is not the best way to build a next generation of caretakers for one of the biggest languages in the world. It might not fall overnight, but it'll certainly rot from its core if not tended properly. Nothing is too big to fail.
29athrowaway 2 hours ago [-]
C++ is dead by entropy. So complex nobody can truly learn it anymore.
Languages should not have a package management system. They all have an "all the world is my language" blindspot and fail hard when you have anything else. Sometimes you can build plugins in a different language, but they still assume the one true language is all you want.
Package management belongs to the OS - or at least something else.
Don't get me wrong, package management is a real problem and needs to be solved. I'm arguing against a language package manager; we need a language-agnostic package manager.
diath 1 days ago [-]
I think C++ is living proof that not having standard tooling around the language makes it a complete pain in the ass to use. With any other language that ships standard package management/tooling out of the box, I can just pin the versions, commit a file to the repository, and on any computer that I'm working on I issue a single command and everything is handled for me. Meanwhile, on one of the C++ projects I've been working on, it turned out that I cannot build it on my server, because one of the libraries I'm using only worked with clang 17, which my desktop OS provides, but the Debian on my server ships clang 16, and the library was not compatible with the earlier version's C++ implementation; meanwhile Arch on my desktop updated to clang 18, which also broke the library in some fashion. So now I'm sitting here with two systems, one where I want to deploy my software and one where I want to develop it, both of which are completely defunct and unable to build my project anymore. Now I have to figure out how to build the specific version of clang on both systems and ensure I override a bunch of environment variables when configuring the builds on both of them, and then do the same on every new computer I'm developing or deploying on. With a proper tool I could just tell the project file to "use this compiler with this version with this standard" and things would just work. Some people will tell you "yeah bro, just use Docker with this and that and you will have a reproducible build system everywhere", but the thing is, I do not want to learn a completely unrelated tool and spend hours writing scripts just to be able to continue working on my project, when in any other programming language (like Go, Rust, JS) I can just install the runtime, clone the repo, run a command, and everything is handled for me seamlessly, like it should be in 2024.
beeflet 1 days ago [-]
The problem for me is a "political" one, not a matter of convenience: When I choose a linux distro I implicitly trust the distro maintainers to not backdoor the liveCD, so I might as well trust them to maintain packages transparently. If something happens upstream, we expect the distro maintainers to patch out undesirable behavior, integrate changes into the system as a whole or warn us of changes. Most distros are the same in functionality: the choice of a certain distro is mostly a choice of which political institution (such as a business or non-profit) that we trust to maintain the interoperability of the OS.
Languages need to be more agnostic than a package manager requires because I should not have to rope another organization into my trust model.
Cargo already goes too far in encouraging a single repository (crates.io) for everything through its default behavior. Who maintains crates.io? Where is the transparency? This is the most important information the user should know when deciding to use crates.io, which is whether or not they can trust the maintainers not to backdoor code, and it is rarely discussed or even mentioned!
The default cargo crate (template?) encourages people to use permissive licensing for their code. So that is an example where you are already making implicit political decisions on behalf of the ecosystem and developers. That is alarming and should not be for the language maintainers to decide at all.
In C/C++ you have a separation of the standard from the implementation. This is really what makes C/C++ code long-lived, because you do not have to worry about the standard being hijacked by a single group. You have a standard and multiple competing implementations, like the WWW. I cannot encourage the use of Rust while there is only a single widely-accepted implementation.
diath 1 days ago [-]
The problem with that is that no Linux distro maintainer will ever put effort into maintaining every version of every library and compiler perpetually for a specific, seemingly random programming language (or at least, reasonably, within a few major versions including all minor releases in between). But with a tool that versions dependencies and allows for, say, a git-based upstream with tag-versioned releases, you can pick any specific version and expect things to just work. Managing library code for a specific programming language, whatever the language, does not seem like the responsibility of an operating system; if anything, the package manager from your OS should just supply the tool that manages the said language (as you currently can with npm, cargo or go). That also does not touch the topic of making things work across different platforms. Sure, maybe you found a way to solve this issue in your imaginary Linux distro, but how do you solve the problem for a co-developer who uses Windows or macOS?
Additionally, you do not have to necessarily enforce these things on the language level, the standard and the tooling could live as two independent projects coming from the same entity. You could still use the compiler and the libraries from your OS, and build the code like that, or you could just reach out to an optional standardized tool that serves as a glue for all the external tools in a standardized way.
Yes, there are a lot of valid concerns with this approach as well, but personally for me, as a frustrated C++ developer, who is most likely going to still use the language for a decade to come, I feel like all the other languages I had mentioned in my previous post had addressed what is my biggest point of frustration with C++, so it's definitely an issue that could be solved. Many tried to do it independently, but due to personal differences, no funding, and different ideas of what should be the scope of such tooling, we ended up with a very fragmented ecosystem of tools, none of which have yet to date been able to fully address an issue that other languages solved.
Measter 18 hours ago [-]
> The default cargo crate (template?) encourages people to use permissive licensing for their code. So that is an example where you are already making implicit political decisions on behalf of the ecosystem and developers. That is alarming and should not be for the language maintainers to decide at all.
You and I must be using two very different versions of Cargo, because on mine the default template doesn't specify a license.
bluGill 19 hours ago [-]
What you are asking for is standard command line flags for the compiler. Which probably cannot happen though it would be nice.
That, and a better package manager so your wrong-clang-version problem cannot happen. Which is what I was trying to get at.
jcelerier 1 days ago [-]
I'd recommend using upstream apt llvm repos if you are using Debian or debian-derivatives like Ubuntu, to make sure you have the same compiler everywhere.
biorach 21 hours ago [-]
> Some people will tell you "yeah bro just use docker with this and that and you will have a reproducible build system everywhere", but the thing is - I do not want to learn a completely unrelated tool and spend hours writing some scripts just to be able to continue working on my project
You're working with some seriously hairy technologies, dealing with very knotty compatibility issues, and you don't want to learn... Docker?
I find this odd because it's relatively simple (certainly much simpler that a lot of what you're currently dealing with), well documented, has a very small, simple syntax and would probably solve your problems with much less effort than setting up a third development machine.
bluGill 19 hours ago [-]
Docker solves the problem in some cases. However it forces you to ignore those knotty compatibility issues, which is limiting. (You can't run on *BSD, Mac, Windows... if you use Docker.) As such, for many, Docker is not in the list of acceptable answers - in particular, any open source project should consider Docker not an option to solve their problems.
biorach 16 hours ago [-]
My understanding of the post I was replying to was that the compatibility issues were due to different versions of Linux having different clang versions. If I've understood correctly then Docker is highly likely to be a good solution.
> any open source project should consider docker not an option to solve their problem
That's generalising far too much.
nickelpro 1 days ago [-]
Specifications for package interchange are absolutely essential, which is distinct from language endorsed package managers.
Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
C++ is only starting to address this problem with efforts like CPS. We have a plethora of packaging formats, Debian, pkg-config, conan, CMake configs, but they cannot speak fluently to one another so the package ecosystem is fractured, presenting an immense obstacle to any integration effort.
physicsguy 2 hours ago [-]
Python polyglot code bases are not a solved problem at all. There have been difficulties installing TensorFlow and PyTorch with poetry for some time, and the installs still regularly break. This is the reason so many people use Conda. In HPC people are increasingly using Spack and EasyBuild to stop you having 10 versions of BLAS installed with all your Python dependencies.
Comparing it to other languages isn’t really fair since they don’t have polyglot code bases in the same way, and where native packages exist, e.g. in npm, you run into the same problems anyway.
howtofly 1 days ago [-]
> Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
This is a long-standing pain point. LWN has a series of reports covering this, one of which is: https://lwn.net/Articles/920832/
the__alchemist 1 days ago [-]
Interesting point, and I'm inclined to agree with your main point. I don't think the OS level is preferable, however:
Point 1: I do not want my program to only run on only one OS, or to require custom code to make it multi-platform.
Point 2: What if there's no OS?
beeflet 1 days ago [-]
>Point 1: I do not want my program to only run on only one OS, or to require custom code to make it multi-platform.
To run on only one OS at build time? I usually just set up cross-compilers from linux if I am making cross-platform C/C++ code.
>Point 2: What if there's no OS?
You can use a system like bitbake I think.
pornel 21 hours ago [-]
Which Linux distribution has packages for macOS, Windows, and Android?
pjmlp 17 hours ago [-]
And that list isn't even exhaustive regarding OSes in production.
mrkeen 24 hours ago [-]
This sets up an untenable N*M explosion:
Will the GhostBSD maintainers pin the right version of Haskell's aeson package?
Will the Fedora Asahi devs stay on top of the latest Ocaml TLS developments?
Will MS package PureScript's code for DOM manipulation?
rileymat2 1 days ago [-]
I think the term "package management system" is a bit over broad a term to talk about.
If we are talking about global shared dependencies, sure it may belong in the OS.
If we are talking about directly shared code, it may as well belong in the language layer.
If we are talking about combining independent opaque libraries, then it might belong in a different "pseudo os" level like NPM.
gwervc 1 days ago [-]
> package management belongs to the os
It clearly doesn't except if you're a fan of dll hell and outdated packages.
bluGill 19 hours ago [-]
Windows' package management is famously bad. However bugs in their implementation cannot be used to shoot down the concept.
pie_flavor 16 hours ago [-]
If your solution fails on the large majority of computers, can it really be called a solution? 'All the world is my language' blindspots are nothing compared to 'all the world is GNU/Linux' blindspots.
beeflet 1 days ago [-]
the solution to DLL hell is to patch the applications to all use the same version of the library.
FridgeSeal 1 days ago [-]
Oh but of course!
The solution to…a problem created directly by a specific approach is to…do even more work ourselves to try and untangle ourselves? And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
Alternatively, we could realise that this isn’t really feasible at the scale that the ecosystem operates at now, and that instead of taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
beeflet 1 days ago [-]
I don't think it's unreasonable to have a system where every program uses the same version of a library.
>And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
It requires some foresight in designing the application, and whether or not you even choose to use that application in the first place. We should strive to decrease the complexity of the system as a whole. The fact that packages are using different versions of the same library in the first place is a canary and the system should disincentivize that use case to some extent. Using static libraries or a chroot or a sandbox for everything is sweeping the problems under the carpet.
>taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
I would prefer a system that allows you to link every application to the same library as a default, but also allows for some per-application override, perhaps by using symlinks. That would cover the majority of use cases. But I do not think that dynamic linking is generally in vain.
In my own projects, I try to rely on static linking as much as possible, so I understand your perspective as a developer. But as a user I do not want programs to have their own dependencies separate from the rest of the system.
jcelerier 1 days ago [-]
> I don't think it's unreasonable to have a system where every program uses the same version of a library.
I really think it is. Even at the scale of a single app it may sometimes make sense to have multiple versions of a same library, if for instance it implements a given algorithm in two different ways and both ways have useful properties
uecker 1 days ago [-]
Then shouldn't these APIs be exposed as different libraries?
jcelerier 15 hours ago [-]
maybe ? in the end it's up to the person developing said library
lodovic 23 hours ago [-]
I have seen this (linking with multiple versions of the same library) for maintaining backwards compatibility, for example to support parsing a data file from a previous version, but never for selecting different algorithms.
SAI_Peregrinus 11 hours ago [-]
> I don't think it's unreasonable to have a system where every program uses the same version of a library.
If there were guarantees that every library would always be both forwards and backwards compatible, that would be reasonable. Sadly, that's not the case.
FridgeSeal 1 days ago [-]
Could a more streamlined “conception” of something like Gentoo fix this?
Applications ship their lock files + version constraints. Gets merged into a user/os level set of packages. You update one package, OS can figure out what it has to rebuild and goes off and does that.
Still shit-out-of-luck for anything proprietary, and it’s still super possible for users to end up looking at compile failures, but technically fits the bill?
palata 22 hours ago [-]
> The solution to…a problem created directly by a specific approach is to…do even more work ourselves to try and untangle ourselves?
The solution is to be more professional. DLL hell comes from libraries that break compatibility: serious libraries should not break compatibility, or at least not often. Then when they do and you happen to have the issue, it's totally fair to go patch the library you depend on that depends on the breaking lib. Even in proprietary software.
The modern way is to use ZeroVer [1] and language package managers that pull hundreds of dependencies in the blink of an eye. Then asking that people compile everything themselves or use the one system deemed worthy of support (usually Windows and the very latest Ubuntu). And of course not caring about security one bit.
This only works in the context of a single distribution. The moment you have two competing distributions, you're going to have to fork and end up with distro specific applications. Package maintainers won't be able to keep up and software becomes outdated.
exDM69 19 hours ago [-]
> package management belongs to the os
Os package managers do a fundamentally different task than dependency management tools used in development.
They ship a bunch of applications and the libraries you need to run the applications.
If you need different version of libfoo than e.g. Firefox does, you're out of luck.
Need to support a customer with an older release which needs a different version of libfoo? Not gonna happen.
Unless you're talking about Nix or Guix, your OS package manager is not a substitute for a dependency management tools.
bluGill 19 hours ago [-]
Fair enough. The world needs a package manager that is language agnostic and provides input to OS package managers as well as build tools (which should also be language agnostic).
pjmlp 17 hours ago [-]
And works even across UNIX and mainframe OSes without package managers....
FridgeSeal 1 days ago [-]
But like…why?
Let’s say we make a “thing” which contains packages for all participating languages.
98% of the time, aren’t users just going to go “filter down to my language” and just continue what they’re doing, except with a somewhat worse overall experience, depending on whatever the “lowest common denominator” API + semantics we use for this shared package management solution.
Multi-language build systems already exist, which happily serve the needs of those projects which find themselves needing cross-language (+distributed) builds. Could there be some easier versions of these? Sure, but I don’t feel like “throw everyone in the same big box” is the solution here.
bluGill 19 hours ago [-]
> I don’t feel like “throw everyone in the same big box” is the solution
It has to be - while nobody needs more than a subset of that big box, the union of what everyone needs turns out to be the whole big box. If you have anything less than that one big box you end up with many standards, and then everyone chooses a standard, and in turn something important you need chose the other standard and you can't use it (ie the situation we are in now)
Of course making that "one standard to rule them all" easy enough to use is a hard problem. It may be itself impossible and thus everyone drops back to the current mess.
RcouF1uZ4gsC 1 days ago [-]
Disagree completely. OS package managers are one of the biggest sources of problems.
Basically, once you have an OS level package manager, you have issues of versioning and ABI. You have people writing to the lowest common denominator - see for example being limited to the compiler and libraries available on an old Red Hat version. This need to maintain ABI compatibility has been one of the hugest issues with evolving C++.
The OS package manager ends up being a Procrustean bed forcing everything into its mold whether or not it actually fits.
Also, this doesn't even get into the issue of multiple operating systems, and even distros, which have different package managers.
Rust and Go having their own package managers has helped greatly with real world usage and evolution.
beeflet 1 days ago [-]
This is a weird opinion, but I think that the OS package manager's complexity is largely owing to the unix directory structure, which just dumps all binaries in /bin, all configuration files in /etc, all libraries in /lib. It comes from a time where everything on the OS was developed by the same group of people.
By dumping all the same file types in massive top-level directories, you need a separate program (the package manager) to keep track of which files belong to which packages and dealing with their versions and ABI and stuff. Each package represents code developed by a specific group with a certain model of the system's interoperability.
GoboLinux has an interesting play on the problem by changing the directory structure so that the filesystem does most of the heavy lifting.
kccqzy 1 days ago [-]
Agreed. At least, languages should not require their own package management system to be used. There should be a way to invoke the compiler or interpreter without involving that language's own package management system, so that something else (like Bazel) can build on top. Fortunately, most common languages are all like that. You can invoke rustc without cargo. You can use python without pip. You can use javac without maven.
poulpy123 10 hours ago [-]
ideally yes. In practice, if they want to provide a reasonably good experience they have to do it
moomin 1 days ago [-]
Honestly I don’t know why more languages don’t just adopt e.g. npm, maven or NuGet. They’re largely language independent at the binary level anyway.
beeflet 1 days ago [-]
npm, maven, and NuGet have caused me far more problems in trying to reproduce builds than the OS package manager ever will.
mgaunard 23 hours ago [-]
The main problem with bad C++ tooling is often the same, it's having a modular system that relies on importing/exporting binaries, then tracking binary versions when combining applications.
You should only track source versions and build things from source as needed.
xdmr 7 hours ago [-]
A plague o' both your houses!
titzer 13 hours ago [-]
Replace C++ with asbestos (no, I'm serious, not just stark), and we're basically having exactly the same conversation that's gone on over decades in the meatspace world, with analogous players, sunk cost/investment calculus, and migration consternation. The only part of the conversation that is missing is the liability conversation and damages.
And I do take asbestos as a serious example. Asbestos is still manufactured and used! Believe it or not, there are "safe" uses of asbestos and there are protocols around using them. Nevermind the fact that there is a lot of FUD and dishonesty about where exactly the line cuts on what is safe versus not safe...for example we are finding out how brake dust affects the wider environment as we crawl out from under the tent of utter misinformation of a highly motivated entrenched industry.
I feel like this is not a new human phenomenon. We made particularly poor choices in what tech we became dependent on, and lo and behold, the entrenched interests keep telling us it's not that bad and we should keep doing it because...reasons.
It will eventually play out the way it must; C++ might seem a lot more innocuous than asbestos, and in some ways that's true, but it resists all effort to reform it and will probably end up needing to just be phased out.
einpoklum 12 hours ago [-]
Is asbestos superior to other established solutions in terms of "performance", and only lacking on safety?
summerlight 8 hours ago [-]
Asbestos was once considered one of the best material for the industry with many desirable properties like high durability, insulation, flexibility, cheap cost etc etc. I don't think human has found a drop-in replacement for asbestos yet.
KerrAvon 16 hours ago [-]
I feel the need to point out that `const` is a viral annotation in C++
cryptonector 16 hours ago [-]
> Speaking of big tech, did you notice that Herb Sutter is leaving Microsoft, and that it seems like MSVC is slow to implement C++23 features, and asking the community for prioritization.
Uh, they took decades to implement a bunch of C99 features. Is that predictive? I suspect it is.
pphysch 12 hours ago [-]
The "standardization" of C++, SQL, et al. are some of the most catastrophic examples of premature abstraction in software development.
Programming languages benefit far more from a robust implementation, tooling, and good technical documentation (which may read like a standard) than from a prescriptive standard. The latter generates enormous waste, for what?
chris_wot 1 days ago [-]
I think he has this about right. The project I contribute to (and no, I'm not a massive contributor) is LibreOffice and it is a C++ codebase. It has a decent build system that is easy for anyone to run out of the box. It uses modern C++17+ code, and though it has a lot of legacy code, it is being constantly modified by people like Noel Grandin via clang plugins (along with a lot of manual effort).
This code was originally developed in the late 1980s.
A good packaging tool would have helped a lot.
umanwizard 1 days ago [-]
I'm stoked to hear they're on C++17 now.
When I contributed to LibreOffice (GSoC 2012) they were still on C++03 !
badmintonbaseba 23 hours ago [-]
Well, can't really blame them in 2012. Especially since C++11 did bring an ABI break. Looks like they keep it fresh, although C++17 is getting a bit dated. Migration from C++17 to 20 or even 23 is probably a breeze, though, compared to migrating 03 to 11.
umanwizard 19 hours ago [-]
IIRC it wasn't just the ABI break that was a problem, it was the fact that they wanted to build on systems that didn't have a C++11-compliant compiler available yet.
bluGill 18 hours ago [-]
In 2012 that was reasonable. In 2024 that would be unreasonable, but they are not stuck on C++03 in 2024. C++17 today with serious plans to force upgrade to C++20 in the near future is a reasonable place to be today.
einpoklum 12 hours ago [-]
> It uses modern C++17+ code
Ha ha ha, that's funny. It uses pre-98 C++ code, that's set in stone because of extension/UNO APIs. Yes, you can use C++17 in a bunch of places, but not for the basic structures, classes, idioms etc.
And - that's coming from a huge LibreOffice supporter. Speak at conventions, got the T-shirts, everything.
chris_wot 11 hours ago [-]
You are referring to the UNO API. The internal code is most definitely not stuck in "pre-98 C++ code".
moralestapia 1 days ago [-]
[flagged]
klempner 1 days ago [-]
The idea that RAII covers "99% of your ass" is "the low-IQ level statement".
Temporal safety is the primary hard problem from a memory safety standpoint, and RAII does nothing to solve it, at least not once a memory allocation crosses abstraction boundaries.
wigglyartichoke 1 days ago [-]
What's an example? I'm just a hobbyist when it comes to c++
dbremner 14 hours ago [-]
Here is a real safety issue that I found and fixed a couple weeks ago. This is an ancient language runtime which originally ran on MS-DOS, Amiga, Atari, and at least a dozen now-discontinued commercial Unices. I've been porting it to 64-bit OSes as a weekend hack. While this particular issue is unlikely to appear in modern applications, a similar pattern might manifest today as a use-after-free error with std::string_view or an invalid iterator.
Background:
typedef int Text;
The Text type is used to encode different kinds of string-like values. Negative values represent a generated dictionary. Valid indexes in the string intern table (https://en.wikipedia.org/wiki/String_interning) represent a stored string. Other values represent generated variable names.
const char *textToStr(Text t) - This takes a Text value and returns a pointer to a null-terminated string. If t is a string intern index, then it returns a pointer to the stored string. If t represents either a generated dictionary or generated variable name, then it calls snprintf on a static buffer and returns the buffer's address.
Problem:
The use of a static buffer in textToStr introduces a temporal safety issue when multiple calls are made in the same statement. Here’s an excerpt from a diagnostic error message, simplified for clarity:
    fprintf(stderr, "Invalid use of \"%s\" with \"%s\"",
            textToStr(e),
            textToStr(s));
If both e and s are generated dictionaries or variables, then each call to textToStr overwrites the static buffer used by the other. Since the order in which function arguments are evaluated is unspecified in C++, the result is unpredictable and depends on the compiler and runtime.
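For concreteness, a minimal sketch of the pattern described above (the Text typedef and function name mirror the comment; the body is illustrative, not the runtime's actual code):

    #include <cstdio>

    typedef int Text;

    // Illustrative only: generated names are formatted into one shared static
    // buffer, so the pointer a caller gets is only valid until the next call.
    const char *textToStr(Text t) {
        static char buffer[32];
        if (t < 0) {                                          // generated dictionary
            std::snprintf(buffer, sizeof buffer, "dict_%d", -t);
            return buffer;
        }
        // (the real function first checks the string intern table and returns
        //  the stored string; omitted here)
        std::snprintf(buffer, sizeof buffer, "var_%d", t);    // generated variable name
        return buffer;
    }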
scintill76 5 hours ago [-]
If you're saying C++ won't automatically save you from design mistakes, then I agree. This is a poorly-specified function. The caller can't know exactly how they need to handle the returned pointer.
Potential solutions:
* Return std::string and eat the cost of sometimes copying interned strings
* std::string with your own allocator could probably deal with the two cases fairly cheaply and transparently to the caller
* overload an operator<< or similar and put your conversion result directly into a stream, rather than a buffer that then goes into a stream
* put your generated values in a container that can keep pointers into it valid under all the operations you do on it (I think STL sets would work), keep them there for the rest of the lifetime of the program, and return your pointers to them (or to the interned constant strings)
I think many of these could be termed RAII, so I lean toward the idea in this subthread that RAII and other C++ idioms will help you stay safe, if you use them.
P.S. The function is also not safe if called by multiple threads concurrently. Maybe thread-local storage could be an easy fix for that part, but the other solutions should fix it too. [But if you have any shared storage that caches the generated values such as the last solution, it needs to be protected from concurrency.]
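A sketch of the first option above (returning std::string); the names mirror the comment and the body is illustrative only:

    #include <string>

    typedef int Text;

    // Returning by value removes the shared static buffer, so two calls in one
    // statement no longer interfere (at the cost of a copy for interned strings,
    // as noted above), and the concurrency issue in the P.S. goes away too.
    std::string textToStr(Text t) {
        if (t < 0)
            return "dict_" + std::to_string(-t);   // generated dictionary
        // (real code would consult the intern table here)
        return "var_" + std::to_string(t);         // generated variable name
    }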
moralestapia 1 days ago [-]
No, if you have temporal safety issues you didn't understand RAII. That is pretty much the whole point of RAII.
AnimalMuppet 1 days ago [-]
If you want anyone to believe you, you're going to have to give more than just a blank assertion. Can you give at least a sketch of your reason for your claim?
moralestapia 1 days ago [-]
Reasoning is, if your objects outlive the scope of your class, then they most likely belong to a class that's higher in the hierarchy (they already do, de facto).
SoothingSorbet 1 days ago [-]
Please explain how you would solve the iterator invalidation problem using only C++ and RAII. Thanks.
moralestapia 21 hours ago [-]
This small thread is about temporal safety so you're out of luck.
genrilz 18 hours ago [-]
One, iterator invalidation can be a temporal safety problem. Specifically, if you have an iterator into a vector and you insert into the same vector, the vector might reallocate, resulting in the iterator pointing into invalid memory.
Two, consider the unique_ptr example then. Perhaps a library takes a reference to the contents of the unique_ptr to do some calculation. (by passing the reference into a function) Later, someone comes along and sticks a call to std::async inside the calculation function for better latency. Depending on the launch policy, this will result in a use after free either if the future is first used after the unique_ptr is dead, or if the thread the future is running on does not run until after the unique_ptr is dead.
EDIT: Originally I was just thinking of the person who inserted the std::async as being an idiot, but consider that the function might have been templated, with the implicit assumption that you would pass by value.
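A minimal sketch of the vector case in the first point: this compiles cleanly, RAII is not involved, and it is still undefined behavior:

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        auto it = v.begin();       // iterator into v's current allocation
        v.push_back(4);            // may reallocate, invalidating `it`
        std::cout << *it << '\n';  // undefined behavior if reallocation happened
    }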
sirwhinesalot 24 hours ago [-]
Every study on security vulnerabilities has shown that "just don't screw up bro" doesn't scale.
Even if we ignore the absolute clown move of having no bounds checks by default (and std::span doesn't have them at all), it's very easy to get into trouble with anything involving C++ iterators and references.
moralestapia 21 hours ago [-]
>Every study on security vulnerabilities has shown that "just don't screw up bro" doesn't scale.
Let's see them.
sirwhinesalot 16 hours ago [-]
Well, ignoring all the reports from google, microsoft and mozilla, all of whom are part of a cabal spreading misinformation on the percentage of vulnerabilities caused by memory unsafety in C++ (all 3 arrived at around 70% so it's clearly a made up number they colluded to spread), and ignoring the reports from the United States government (probably infiltrated by rust cultists), I can recommend the paper Memory Errors: The Past, the Present, and the Future
ModernMech 12 hours ago [-]
Are you being sarcastic? I genuinely cannot tell.
sirwhinesalot 12 hours ago [-]
Yes, except for the legitimate paper recommendation.
adgjlsfhk1 1 days ago [-]
RAII only helps with 1 of 4 primary cases of safety. RAII deals (badly) with temporal safety, but not spatial safety (bounds errors etc), safe initialization (use before initialization), or undefined behavior (overflow/underflow, aliasing, etc).
badmintonbaseba 23 hours ago [-]
Use-after-free (or reference/iterator invalidation in general) is the main issue. RAII doesn't help there at all. RAII helps with deterministically cleaning up resources, which is important, but barely related to safety.
AnimalMuppet 1 days ago [-]
How does RAII not help with safe initialization? It's right in the name.
moralestapia 1 days ago [-]
>RAII deals (badly) with temporal safety
>safe initialization (use before initialization)
These two are solved by proper use of RAII.
But you have a point with UB. That's always been an issue, though, it's part of the idiosyncrasies of C/C++; all languages have their equivalent of UB.
sapiogram 1 days ago [-]
> equivalent to people unable to grasp type coercion on JS and thus blaming the language for it (literally just use '===' and stop bitching about it).
They're not even remotely equivalent. A single eslint rule has immediately and permanently fixed this in every Javascript project I've worked on, both for me and my coworkers' code. RAII helps, but in C++, no amount of linters and language features can fully protect you.
charleslmunger 1 days ago [-]
As a relative newcomer to C++, I have found RAII to be fine for writing in object-oriented style. But I would only subject myself to the suffering and complexity of C++ if I really wanted excellent performance, and while RAII does not generally have a runtime cost by itself, engineering for full performance tends to exclude the things that RAII makes easy. If you manage memory via arenas, you want to make your types trivially destructible. If you don't use exceptions, then RAII is not necessary to ensure cleanup. In addition, use of destructors tends towards template APIs that generate lots of duplicate code, when an API that used a C-style function pointer for generic disposal would have produced much smaller code.
And C++'s object model can add additional complexity and sources of UB. In C++20 previously valid code that reads a trivially destructible thread_local after it has been destroyed became UB, even though nothing has actually happened to the backing storage yet.
rerdavies 23 hours ago [-]
As an old-timer, I think you have some serious misconception about how RAII works, and what it does for you.
> Arena management
There's nothing that stops you from using arena allocators in C++. (See pmr allocators in C++17 for handling complex non-POD types).
> The cost of RAII_
you're going to have to clean up one way or another. RAII can be zero-overhead, and usually generates less code than the C idiom of "goto Cleanup".
> Use of destructors leads toward template APIs.
Not getting that. Use of destructors leads to use of destructors. Not much else.
> If you don't use exceptions....
Why on earth would you not use exceptions? Proper error handling in C is a complete nightmare.
But even if you don't, lifetime management is a huge problem in C. Not at all trivial to clean things up when you're done. Debugging memory leaks in C code was always a nightmare. The only thing worse was debugging wild memory writes. C++ RAII: very difficult to leak things (impossible, if you're doing it right, which isn't hard), and if it ever does happen almost always related to using C apis that should have been properly wrapped with RAII in the first place.
Granted, wrapping C handles in RAII was a bit tedious in C++98; but C++17 now allows you to write a really tidy AutoClose template for doing RAII close/free of C library pointers. Not in the standard library, but really easy to roll your own:
    // call snd_pcm_close when the variable goes out of scope.
    using snd_pcm_T = pipedal::AutoClose<snd_pcm_t*, snd_pcm_close>;

    snd_pcm_T pcm_handle = snd_pcm_open(....);
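pipedal::AutoClose is the commenter's own code; purely as an illustration of the idea, a rough sketch of what such a C++17 wrapper might look like:

    #include <utility>

    // Illustrative RAII wrapper: owns a C handle and calls CloseFn on it when
    // the wrapper goes out of scope. Move-only, like unique_ptr.
    template <typename Handle, auto CloseFn>
    class AutoClose {
    public:
        AutoClose() = default;
        explicit AutoClose(Handle h) : h_(h) {}
        AutoClose(const AutoClose&) = delete;
        AutoClose& operator=(const AutoClose&) = delete;
        AutoClose(AutoClose&& other) noexcept : h_(std::exchange(other.h_, Handle{})) {}
        AutoClose& operator=(AutoClose&& other) noexcept {
            if (this != &other) { reset(); h_ = std::exchange(other.h_, Handle{}); }
            return *this;
        }
        ~AutoClose() { reset(); }

        void reset() {
            if (h_ != Handle{}) { CloseFn(h_); h_ = Handle{}; }
        }
        Handle get() const { return h_; }

    private:
        Handle h_{};
    };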
> C++ 20 undefined behavior of a read-after-free problem.
That's not UB; that's a serious bug. And C's behavior would also be "UB" if you read after freeing a pointer.
charleslmunger 16 hours ago [-]
>As an old-timer, I think you have some serious misconception about how RAII works, and what it does for you.
I appreciate the education :-)
>There's nothing that stops you from using arena allocators in C++.
This is true, but arenas have two wonderful properties - if your only disposable resource is memory, you don't need to implement disposal at all; and you can free large numbers of individual allocations in constant time for immediate reuse. RAII doesn't help for either of these cases, right?
>Use of destructors leads to use of destructors
I guess what I mean is... It's totally possible and common to have a zillion copies of std::vector in your binary, even though the basic functionality for trivially copyable trivially destructible types is identical and could be serviced from the same implementation, parameterized only on size and alignment. Destruction could be handled with a function pointer. But part of the reason templates are used so heavily seems to be that there's an expectation that libraries should handle types with destructors as the common case.
>lifetime management is a huge problem in C. Not at all trivial to clean things up when you're done.
Absolutely true if you're linking pairs of many malloc and free calls. But if you have a model where a per-frame or per-request or per-operation arena is used for all allocations with the same lifetime, you don't have this problem.
>And C's behavior would also be "UB" if you read after freeing a pointer.
The specific issue I ran into was the destructor of one thread_local reading the value of another thread_local. In C++17 the way to do this was to make one of them trivially destructible, as storage for thread locals is released after all code on that thread has finished, and the lifetime for trivially destructible types ends when storage is freed. In C++20 this was changed, such that the lifetime of a thread local ends when destroyed (rather than when storage is freed) if it's trivially destructible. C thread local lifetimes are tied to storage only and don't have this problem.
badmintonbaseba 18 hours ago [-]
You can use `unique_ptr` with a custom deleter for wrapping C libraries.
    using snd_pcm_T = std::unique_ptr<
        snd_pcm_t,
        decltype([](snd_pcm_t* ptr) { snd_pcm_close(ptr); })
    >;

    auto pcm_handle = snd_pcm_T(snd_pcm_open(...));
The lambda in decltype is C++20, otherwise the deleter type could be declared out-of-line.
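For pre-C++20 code, a sketch of the out-of-line deleter alternative mentioned above (the ALSA header name is an assumption here; the deleter takes snd_pcm_t* to match what snd_pcm_close expects):

    #include <memory>
    #include <alsa/asoundlib.h>   // assumed include for snd_pcm_t / snd_pcm_close

    // Out-of-line deleter: works in C++11 and later, no lambda-in-decltype needed.
    struct SndPcmCloser {
        void operator()(snd_pcm_t* ptr) const noexcept { snd_pcm_close(ptr); }
    };
    using snd_pcm_T = std::unique_ptr<snd_pcm_t, SndPcmCloser>;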
binary132 1 days ago [-]
Just give up, they’ll never get it.
throw16180339 1 days ago [-]
What's to get? It's an unsupported claim with substantial counterexamples in the form of every large C++ project. If everyone in the world gets RAII wrong then it doesn't matter what it's theoretically capable of.
moralestapia 21 hours ago [-]
Sure pal, in an imaginary world where:
Apache/nginx don't exist,
Chrome/V8 don't exist,
Firefox doesn't exist,
GCC/Clang don't exist,
MySQL doesn't exist,
TensorFlow doesn't exist,
VLC doesn't exist,
and the list goes on and on ...
Philpax 19 hours ago [-]
every single one of those has had exploits
moralestapia 19 hours ago [-]
Did I say they don't?
All software has bugs.
Is this supposed to be news?
genrilz 18 hours ago [-]
Problem with memory corruption as a bug is that unlike most classes of bug, memory corruption allows remote code execution. (see return oriented programming for the basic version, block oriented programming for the more complex version that bypasses most (all?) mitigation strategies) There are other types of bug that allow remote code execution like this, such as SQL or command-line injection, but those can be solved with better libraries.* However, memory management requires a strong enough type system in the language.
* Sorta, for command-line injection you have to know the way the command you are using processes flags and environmental variables in order to know that the filtering you are doing will work. It is absolutely better to use a library instead if you can get away with it.
paulddraper 15 hours ago [-]
> We’re basically seeing a conflict between two starkly different camps of C++-users:
> * Relatively modern, capable tech corporations that understand that their code is an asset. (This isn’t strictly big tech. Any sane greenfield C++ startup will also fall into this category.)
> * Everyone else. Every ancient corporation where people are still fighting over how to indent their code, and some young engineer is begging management to allow him to set up a linter.
Well said.
And because of this, a lot of the first is leaving for greener pastures.
glitchc 14 hours ago [-]
So the gist of the article is this: The C++ committee should take charge of tooling and implement standardized tooling that matches the standards. Okay, but that won't stop the existence of other tooling, including old tooling, and it won't fix the problem of legacy code. So what's the point? Why bother? Plus unsafe memory calls are mainly found in libraries and applications, not the core language. How will standardized tooling fix that or any of the existing problems for that matter?
nuancebydefault 13 hours ago [-]
When I comment on HN topics that are related to C++, there's a very high chance that I get downvoted. Anyways I can't help it, I will comment here...
I feel it would be best for the C++ language that its development would stop. There's no way to fix its current problems. The fact that it stayed compatible with previous iterations over so many years is an accomplishment, almost a miracle, it should be cherished. Deviating from that direction doesn't make sense. Keeping that does not make sense either.
physicsguy 16 hours ago [-]
C++ is still important in domains where performance is really critical.
I also think there's a place where it can easily be supplanted, but currently cross-platform native software has Qt, and bindings for it in other languages are mixed.
In performance-critical things, Rust still doesn't feel like the final answer since you end up cloning a lot and refactors are very painful. Go obviously has its issues since SIMD support is non-existent and there is limited control over garbage collection, though it works well for web APIs.
Conscat 15 hours ago [-]
You can write SIMD in Go asm, and wrap it up in a normal Go API. It's not great though.
physicsguy 14 hours ago [-]
I'm well aware, but in practice there needs to be some way of at least autovectorising loops built into the compiler; even JIT/GC'd languages like C# will do this for you.
neonsunset 13 hours ago [-]
.NET's compiler does not perform loop autovectorization as it has not been as profitable of a compiler throughput investment as other optimizations (but it does many small optimizations that employ SIMD operations otherwise like unrolling string and span comparisons, copies, moving large structs, zeroing, etc., it also optimizes the SIMD operations themselves ala LLVM).
.NET does however offer best-in-class portable SIMD API and large API surface of platform intrinsics both of which are heavily used by CoreLib and many performance-oriented libraries. You can usually port intrinsified implementations hand-written in C++ to C# while making the code more readable and portable and not losing any performance (though sometimes you have to make different choices to make the compiler happy).
Autovectorization is usually very fragile and in areas where you care about it hand-written implementation always provides much better results that will not randomly break on minor changes to compiler version or the code, that must be carefully guarded against.
It would be still nice to have it eventually, and I was told that JIT team actively discusses this but there are just many more lower hanging fruits that will light up in disproportionately more instances of user code.
If it's any consolation, Clang/LLVM is not a silver bullet either and you will find situations where .NET's compiler output is competitive or even better: https://godbolt.org/z/3aKnePaez
EVa5I7bHFq9mnYK 17 hours ago [-]
What about performance? The appeal of C was that it translated nicely to pdp-11 instructions with virtually no overhead. Then the appeal of C++ was that it translated nicely to C code (in fact, first versions of C++ were just a preprocessor, passing the job down to the actual C compiler), and you could still insert ASM code if needed.
All these new features introduce some run-time overhead, it seems.
gollum999 14 hours ago [-]
Which features in particular?
One of C++'s core tenets is (and has been since the 90's) zero-cost abstractions. Or really, "zero-runtime-cost abstractions"; compile times tend to increase.
Obviously some abstractions necessarily require more computation (e.g. raw pointers vs reference-counted smart pointers). But in many cases new features (if implemented correctly!) give better semantics and additional compile-time safety while still compiling down to equivalent binary.
einpoklum 12 hours ago [-]
So, here's the thing: Officially, C++ is committed to "What you don’t use, you don’t pay for (zero-overhead rule)”. This is item 2.4 in the reaffirmed design goals:
in practice, compilers support it for some contexts.
(Another, minor issue is the discrepancy of "No viral annotation" and "no heavy annotation" with the need to mark things noexcept to avoid exception handling overhead.)
nickelpro 10 hours ago [-]
For unique_ptr: This is not a problem that can be solved by the standards committee, they don't control the SysV / Itanium / Win64 standards. You can still use raw pointers if you want to, nothing has been lost from C.
For restrict: Universally supported as `__restrict`, thus not a priority for anyone to "officially" solve. Most major performance complaints fall into this category. Eg, std::regex is bad, sure, but nobody uses std::regex so fixing it doesn't matter.
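For reference, a small sketch of the __restrict spelling being discussed; it is a compiler extension rather than standard C++, but GCC, Clang, and MSVC accept it on function parameters:

    // Telling the compiler dst and src never alias lets it vectorize and
    // reorder the loop more aggressively.
    void scale(float* __restrict dst, const float* __restrict src, int n, float k) {
        for (int i = 0; i < n; ++i)
            dst[i] = k * src[i];
    }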
einpoklum 9 hours ago [-]
> This is not a problem that can be solved by the standards committee
SysV/Itanium/Win64 know nothing about the abstract difference between a 64-bit value and a 64-bit value inside a class instance. I don't see what prevents the solution from the language side.
> Universally supported as `__restrict`
1. That's not C++. If we're talking about what compilers can offer outside the language standard - that's a different discussion. We don't have to standardize, then, just get a working implementation somewhere and lobby other compiler-makers to adopt it. A compiler might implement Baxter's "safe mode" idea as a non-standard extension, for example.
2. Even the compilers supporting `__restrict` only support it for parameters of functions - nowhere else.
nickelpro 8 hours ago [-]
> Don't see what prevents the solution from the language side.
The standard has nothing to say about calling convention. Calling conventions are defined by the ABI standards. unique_ptr is a class template, how a class is passed between routines is defined by ABI standards, ergo how unique_ptr is passed between routines is defined by the ABI standards. I don't know what else you're implying here. Unless you're saying we should have language-level smart pointers, not class templates, in which case yes that's an awful idea.
> That's not C++...
If you're arguing about some abstract, Platonic ideal of C++ that is divorced from the actual implementations that exist in the world, I don't know what your point is. We write code to be compiled by compilers that exist, not printed out and contemplated in a museum. The compilers support __restrict, people use it all the time, so its not a problem.
> Even the compilers...
Where else do you want it? What would its meaning even be to something like a member variable?
Restrict is pointless in any scenario that doesn't involve a potentially aliasing ABI boundary, ie, function parameters. In every other scenario it is ignored.
dillon 10 hours ago [-]
My naive opinion is a commitment to not break the ABI is a good thing not just for everyone else but for C++ as well. Languages like C#, Swift and Python (maybe even Rust?) have tools to integrate with C++ fairly deeply and cleanly. If C++ commits to being stable enough then there won’t be a reason to rewrite some amount of C++ into something else. It’s not a surprise that big tech is trying to move away from C++ and that’s not necessarily bad and remaining stable means the transition isn’t rushed. In the meantime people who enjoy and excel at writing C++ still can. Just seems like an overall positive thing to commit to.
nickelpro 10 hours ago [-]
This isn't about language ABI, which is the realm of the various implementations which have their own stability guarantees.
ABI stability in the context of the standards committee is about library ABI, specifically the standard library. When the committee updated the wording about C++'s std::string in C++11, it meant implementers needed to change the layout of a std::string, making this "new" std::string incompatible with the "old" std::string. Any libraries passing std::string across API boundaries needed to be recompiled with the "new" std::string.
This has no effect on FFIs for interop with other languages, which are not passing STL types across language boundaries to begin with (a std::string has no meaning in Python).
ABI stability for the standard library is motivated by large, old, corporate codebases which had poor API practices, passed STL types across ABI boundaries, and subsequently lost access to the source code of those libraries and applications or otherwise cannot recompile them for some reason. Many people question the wisdom of catering to such users.
In the case of KHTML, they never used it in the first place, so it seems like a particularly inappropriate example. I assume you actually meant Webkit? In that case, they spent half a decade and thousands of engineer-years contributing to Webkit, so it doesn't fit the original complaint about not "trying to contribute" either.
Personally, I take the cathedral/bazaar distinction to indicate different development cadences and philosophies, rather than whether contributions are allowed/encouraged.
Various cathedral-style projects (eg: FreeBSD, Emacs) still actively take contributions and encourage involvement.
There's something even further along the spectrum that's "we provide dumps of source code, but don't really want your patches." I'm not sure what the best term is for that, but "source [merely] available" sometimes has that connotation.
[1] https://opensource.org/osd
In fact "source available" usually means you can see the source code, but there are severe restrictions on the source, such as no permission to modify the source even for your own use, or no permission to create forks of the project containing the modifications, or severe restrictions on such modifications. An example is MongoDB's Server Side Public License, which is source-available but not open source.
And when they don't when talking about source code, they are wrong. If someone says that an RJ45 cable is "a piece of software" because it's "soft" (you can bend it), would you say it's just a different perspective?
Open source, in the context of software, has a particular meaning. And it is the case that many software developers don't know it, so it's worth teaching them.
I've given up on mock frameworks. They make it too easy to make an interface for everything and then test that you are calling functions with the expected parameters, instead of testing that the program works as you want. A slight change to how I call some function results in 1000 failed tests, and yet I'm confident that I didn't break anything the user could notice (sometimes I'm wrong in this confidence - but none of the failing tests give me any clue that I'm wrong!)
Especially if you have a build process that always runs your unit tests, it's nice to have a very fast test/compile/debug loop.
https://github.com/doctest/doctest
I feel like you could make a madlib where you could plug in any two project names and this sentence would make sense.
Also, IMO, both doctest and catch2 are far superior to Google Test.
Mocks have their place. A prototypical example is at user-visible endpoints (eg: a mock client).
I'm not arguing that mocks don't have their place. However I have found that by declaring I won't use them at all I overall come up with better solutions and thus better tests.
1) Databases and other persistent storage. Though in this case, the best mock for a database is generally another (smaller, easily snapshottable) database, not something like googlemock.
2) Network and other places where the hardware really matters. Sometimes, I really want to drop a particular message, to exercise some property of the sender. This is often possible to code around in greenfield projects, but in existing code it can be much simpler to just mock the network out.
3) Cases where I am calling out to some external black-box. Sometimes it's impractical to replicate the entire black-box in my test. This could be e.g. because it is a piece of specialized hardware, or it's non-deterministic in a way that I'd prefer my test not to be. I don't want to actually call out to an external black-box (hygiene), so some kind of a mock is more or less necessary.
That makes sense internally for Google because they have their massive monorepo, but it sure as hell makes it a pain in the ass to adopt for everyone else.
Google open source libraries are often a mess when you try to include more than one of them in the same project, but googletest isn't an example of the mess. It's actually pretty straightforward.
Completely agree. In isolation all of their libs are great, but inevitably I end up having to build Abseil from source, to then build Protobuf off of that, to then build gRPC off of that. If I can include the sanitizers under Google then that also becomes painful because Abseil (at least) will have ABI issues if it isn't built appropriately. Thinking about it I'd really just like a flat_hash_map replacement so I can drop Abseil.
https://github.com/mapbox/protozero
boost has a flat_hash_map implementation for quite a few versions now, which from what I could see generally beat or is competitive with the absl implementation: https://www.reddit.com/r/cpp/comments/yikfi4/boost_181_will_...
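If that route is taken, the Boost type referred to above is boost::unordered_flat_map (added in Boost 1.81); a minimal usage sketch, assuming Boost is available:

    #include <boost/unordered/unordered_flat_map.hpp>
    #include <iostream>
    #include <string>

    int main() {
        // Open-addressing hash map; broadly a drop-in for absl::flat_hash_map usage.
        boost::unordered_flat_map<std::string, int> counts;
        counts["foo"] += 1;
        counts["bar"] += 2;
        for (const auto& [key, value] : counts)
            std::cout << key << " -> " << value << '\n';
    }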
I still go back and forth on whether google test and mock are worth it.
Google benchmark is also nice.
honestly if you write C++ for work, there's no excuse for your company to not give you the beefiest dev machine that money can reasonably buy. given that rust exists, I think "get a faster computer" is a totally valid answer to build times, especially now that skylake malaise era is over and CPUs are getting faster
I find this amusing because one of the main reasons I avoid Rust (in the sense that I prefer to build things written in other languages if possible - I don't mind if someone else uses it and gives me a binary/library I can use - and it never went beyond "I might check this at some point, sometime, maybe" in my mind) is the build times compared to most other compilers :-P.
Also, at least personally, if I get a faster computer I want my workflow to be faster.
The Nix build is still stuck on the one from 3-4 years back because Bazel doesn't play well. Debian too has some issues building the thing...
It's Python packaging, and the fact that the only really supported binary distribution method of TensorFlow for many, many years was to use pip and hope it doesn't crash. And it's reflected in how the TF build scripts only support building the Python lib as an artefact; everything else at the very least involved dissecting Bazel intermediate targets.
For the humans, we can render the hashes as something friendly, but there's no reason to confuse the machines with our human notions of friendliness.
Otherwise I agree, because if you must be careful, you might as well use tooling that's built for such care. But if you're doing that, do you need the Dockerfile? And that's how you end up with nix/guix.
Hence the failure of Longhorn, or any attempt coming out from Microsoft Research.
Ironically, given your Sun remark, Microsoft is back into the Java game, having their own distribution of OpenJDK, and Java is usually the only ecosystem that has day one parity with anything Azure puts out as .NET SDK.
https://github.com/microsoft/openjdk-aarch64
https://www.infoq.com/news/2023/02/microsoft-openjdk-feature...
Two quite common names in the Microsoft ecosystem.
https://www.zdnet.com/article/microsoft-splits-up-its-xaml-t...
If I'm stretching this "windev" thing: the domain for a lot of employee accounts (including mine) was NTDEV, which had a longer history afaik, but nobody called an org that.
I didn't come up with this definition myself.
If I am not mistaken, I can probably even dig some Sinofsky references using it.
Apparently they have a programming language for which you can "one-click-switch" between english and french for the keywords??? https://pcsoft.fr/windev/ebook/56/
One thing I learned, for example, is do not access global immutable state from within a function. All inputs come through the parameters, all outputs through the parameters or the return value.
For instance, you might be tempted to write a function that opens an HTTP connection, performs an API call, parses the result, and returns it. But you'll have a really hard time testing that function. If you decompose it into several tiny functions (one that opens a connection, one that accepts an open connection and performs the call, and one that parses the result), you'll have a much easier time testing it.
(This clicked for me when I wrote code as I've described, wrote tests for it, and later found several bugs. I realized my tests did nothing and failed to catch my bugs, because the code I'd written was impossible to test. In general, side effects and global state are the enemies of testability.)
You end up with functions that take a lot of arguments (10+), which can feel wrong at first, but it's worth it, and IDEs help enormously.
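A hedged sketch of that decomposition (all types and names are invented for illustration): only the thin composition function touches the network, and the parsing step is a pure function that can be tested with hand-written payloads:

    #include <iostream>
    #include <stdexcept>
    #include <string>

    struct Connection { /* would hold an open HTTP connection */ };
    struct Result     { int value = 0; };

    // The two I/O steps are stubbed out; in real code they would talk to the network.
    Connection openConnection(const std::string& /*host*/) { return Connection{}; }
    std::string performCall(Connection&, const std::string& /*request*/) { return "42"; }

    // Pure function: trivially unit-testable, no network involved.
    Result parseResponse(const std::string& body) {
        if (body.empty()) throw std::runtime_error("empty response");
        return Result{std::stoi(body)};
    }

    // Thin composition layer; most tests target parseResponse directly.
    Result fetchResult(const std::string& host, const std::string& request) {
        Connection conn = openConnection(host);
        return parseResponse(performCall(conn, request));
    }

    int main() {
        std::cout << fetchResult("example.com", "/answer").value << '\n';  // prints 42
    }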
This pattern is called dependency injection.
https://en.wikipedia.org/wiki/Dependency_injection
See also, the "functional core, imperative shell" pattern.
https://www.youtube.com/watch?v=yTkzNHF6rMs
1. don't do console I/O in leaf functions. Instead, pass a parameter that's a "sink" for output, and let the caller decide what to do with it. This helps a lot when converting a command line program to a gui program. It also makes it practical to unit test the function (a minimal sketch follows this list).
2. don't allocate storage in a leaf function if the result is to be returned. Try to have storage allocated and free'd in the same function. It's a lot easier to keep track of it that way. Another use of sinks, output ranges, etc.
3. separate functions that do a read-only gathering of data, from functions that mutate the data
Give these a try. I bet you'll like the results!
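A minimal sketch of point 1 above (the names are invented for illustration): the leaf function writes to a caller-supplied sink, so a CLI passes std::cout, a test captures the output, and a GUI can redirect it:

    #include <iostream>
    #include <sstream>
    #include <string>

    // Leaf function: no direct console I/O, output goes to the caller-supplied sink.
    void reportProgress(std::ostream& sink, int done, int total) {
        sink << "processed " << done << " of " << total << " items\n";
    }

    int main() {
        reportProgress(std::cout, 3, 10);   // CLI: write to the console

        std::ostringstream captured;        // test/GUI: capture the output instead
        reportProgress(captured, 3, 10);
        std::string text = captured.str();  // inspect or display `text`
    }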
It sounds like too many words to refer to plain old inversion of control and CQRS. They’re both tried and true techniques.
Well, you may be celebrating a bit prematurely then. Google still has a ton of C++ and they haven't stopped writing it. It's going to take ~forever until Google has left the C++ ecosystem. What did happen was that Google majorly scaled down their efforts in the committee.
When it comes to the current schism on how to improve the safety of C++ there are largely two factions:
* The Bjarne/Herb [1] side that focuses on minimal changes to the code. The idea here is to add different profiles to the language and then [draw the rest of the fucking owl]. The big issue here is that it's entirely unclear on how they will achieve temporal and spatial memory safety.
* The other side is represented by Sean Baxter and his work on Safe C++. This is basically a whole-sale adoption of Rust's semantics. The big issue here is that it's effectively introducing a new language that isn't C++.
Google decided to pursue Carbon and isn’t a major player in either of the above efforts. Last time I checked, that language is not not meant to be memory safe.
[1] https://github.com/BjarneStroustrup/profiles [2] https://safecpp.org/draft.html
Carbon is intended to be memory safe! (Not sure whether you intended to write a double negative there.) There are a few reasons that might not be clear:
* Carbon has relatively few people working on it. We currently are prioritizing work on the compiler at the moment, and don't yet have the bandwidth to also work on the safety design.
* As part of our migration-from-C++ story, where we expect code to transition C++ -> unsafe Carbon -> safe Carbon, we plan on supporting unsafe Carbon code with reasonable ergonomics.
* Carbon's original focus was on evolvability, and didn't focus on safety specifically. Since then it has become clear that memory safety is a requirement for Carbon's success, and will be our first test of those evolvability goals. Talks like https://www.youtube.com/watch?v=1ZTJ9omXOQ0 better reflect more recent plans around this topic.
The double negative was not intended :)
I feel like if I'm gonna go through the whole nightmare of a code port I should get something for it as opposed to just relying on interop
Carbon is an experiment, that they aren't sure how it is going to work out in first place.
> "If you can use Rust, ignore Carbon"
https://github.com/carbon-language/carbon-lang/blob/e09bf82d...
> "We want to better understand whether we can build a language that meets our successor language criteria, and whether the resulting language can gather a critical mass of interest within the larger C++ industry and communit"
https://github.com/carbon-language/carbon-lang/blob/e09bf82d...
He at least claims that Carbon will have memory safety features such as borrow checking down the line. I guess we'll see.
The entire philosophy errs too much in the direction of “being reasonable” and “pragmatic” while getting fundamental things wrong.
> Over time, safety should evolve using a hybrid compile-time and runtime safety approach to eventually provide a similar level of safety to a language that puts more emphasis on guaranteed safety, such as Rust. However, while Carbon may encourage developers to modify code in support of more efficient safety checks, it will remain important to improve the safety of code for developers who cannot invest into safety-specific code modifications.
That’s really just paying lip service to Rust without recognizing that the key insight is that optional memory safety isn’t memory safety.
It is kind of neat just how much Rust has managed to disrupt the C++ ecosystem and dislodge its position.
Herb is developing a whole second syntax, I wouldn't call that minimal changes. And probably the only way to evolve the language at this point, because like you said Sean is introducing a different language entirely, so its not C++ at that point.
I really like some of Herb's ideas, but it seems less and less likely they'll ever be added to C++.
He compares it to the JS/TS relationship.
He also has to sell the language differently from any other language that compiles to native code via C++, like Eiffel and Nim among others, due to the conflict of interest of having the WG21 chair propose yet another take on C++.
typescript : javascript
cppfront : cpp
Is this true in the general case? I thought there were typescript features that didn't have direct JavaScript alternatives, for example enums.
So, yes, you can't just strip types, but it's close.
That guarantees that the types do not determine the output (e.g. no const enums), not that you can "strip" types to get the same output.
Decorators would be another example. (Though they have always been marked experimental.)
And of course JSX, but that's not a TypeScript invention.
You can see where CPPFront inserts a `cpp2::move` call automatically, and how that differs from a superficially equivalent Cpp1 function.
OP is right, TypeScript is a whole new syntax, and its shtick is that it can be transpiled into JavaScript.
Attempting to understand every state and edge case before writing code is a fool's errand because it would amount to writing the entire program anyway.
State machines are a clear, concise, elegant pattern to encapsulate logic. They're dead simple to read and reason about. And, get this, writing one FORCES YOU to fully understand every possible state and edge case of the problem you're solving.
You either have an explicit state machine, or an implicit one. In my entire career I have never regretted writing one the instant I even smell ambiguity coming on. They're an indefatigable sword to cut through spaghetti that's had poorly interacting logic sprinkled into it by ten devs over ten years, bring it into the light, and make the question and answer of how to fix it instantly articulable and solvable.
I truly don't understand what grudge you could have against the state machine. Of all the patterns in software development I'd go as far as to hold it in the highest regard above all others. If our job is to make computers do what we want them to do in an unambiguous and maintainable manner then our job is to write state machines.
What do you suggest instead of a state machine?
If needs be the state-machine can be reconstructed on a whiteboard by a team of five.
It doesn't matter if they have equivalent power. One of those representations fundamentally allows your software to have an architecture, the other doesn't.
Let's look next at that "compiler" thing and high-level languages. The hardware-native one suffices, no need for all that bloat.
I'll use a state machine!
Now, I have two problems :-(
I place a note at the top of my diagrams stating what the default state would be on receipt of an unexpected event. There is no such thing as "event silently gets swallowed because no transition exists", because, in implementation, the state machine `switch` statement always has a `default` clause which triggers all the alarm bells.
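Something like this minimal sketch (states and events made up for illustration; the code after the switch plays the role of the alarm-bell default):

    #include <cassert>
    #include <cstdio>

    enum class State { Idle, Armed, Firing };
    enum class Event { Arm, Fire, Abort };

    State transition(State s, Event e) {
        switch (s) {
            case State::Idle:   if (e == Event::Arm)   return State::Armed;  break;
            case State::Armed:  if (e == Event::Fire)  return State::Firing;
                                if (e == Event::Abort) return State::Idle;   break;
            case State::Firing: if (e == Event::Abort) return State::Idle;   break;
            default:            break;  // unknown state falls through to the alarm too
        }
        // No transition defined: ring the alarm bells instead of silently swallowing the event.
        std::fprintf(stderr, "unexpected event %d in state %d\n", (int)e, (int)s);
        assert(false && "unhandled state/event transition");
        return State::Idle;  // the documented default state on unexpected events
    }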
Works very well in practice; I used to write hard real-time munitions control software for blowing shit up. Never had a problem.
Ha, Ha, Ha! The juxtaposition of these two phrases is really funny. I would like to apply for a position on the Testing team :-)
Downside of course is that now you have a dependency on Qt.
If you can afford to do things like this you can most likely use something other than C++ and save yourself a lot of headaches.
Surely you can understand that, despite the recent c++ hate, my job doesn't give a fuck and we aren't migrating our massive codebase from c++ to... anything.
Imagine you have an informally-specified, undocumented, at-least-somewhat-incomplete state machine. Imagine that it interacts with several other similar state machines. Still easy to reason about?
Now add multithreading. Still easy?
Now add locking. Still easy?
Cleanly-done state machines can be the cleanest way to describe a problem, and the simplest way to implement it. But badly-done state machines can be a total mess.
Alas, I think that the last time I waded in such waters, what I left behind was pretty much on the "mess" side of the scale. It worked, it worked mostly solidly, and it did so for more than a decade. But it was still rather messy.
You think that developers that wrote an informally-specified, undocumented, at-least-somewhat-incomplete state-machine would have written that logic as a non-state-machine in a formally-specified, documented and at-least-somewhat-complete codebase?
State-machines are exceptionally easy to reason about because you can at least reverse-engineer a state-diagram from the state-machine code.
Almost-a-state-machine-but-not-quite are exceptionally difficult to reason about because you can not easily reverse-engineer the state-diagram from the state-machine code.
Want to add another "bool state"? Hello exponential growth...
In general, state/event machine transition tables and decision tables are easier-to-comprehend ways of structuring code than ad-hoc techniques or, even worse, poorly understood pattern-based techniques.
The kind of mass refactorings / cleanups / static analysis talked about in this article are done on a much more serious and large scale on C++ inside the Google3 monorepo than they are in Chromium. Different build systems, different code review tools, different development culture.
I work full-time in Rust these days and every time I go back to working in C++ it's a bit of a cringe. If I look long enough, I almost always find a use-after-free, even from extremely competent developers. Footgun language.
What's wrong with state machines? Beats the tangled mess of nested ifs and fors.
I'm impressed that you even get as far as finding out whether that much C++ from disparate sources works on a newer version of C++. The myriad, often highly customized and correspondingly poorly documented build systems invented for each project, the maze of dependencies, the weird and conflicting source tree layouts and preprocessor tricks that many projects use... it's usually a pain in the neck to get a new library to even attempt to build, let alone integrate it successfully.
Don't get me wrong, we use C++ and ship a product using it, and I occasionally have to integrate new libraries, but it's very much not something I look forward to.
Google can embrace modern processes, but the language itself had better be compilable on whatever ancient version of gcc works on the one mission-critical architecture they can't upgrade yet...
We are just now feeling this. Some original contributors left the field, and lately the language has gone in directions I don't agree with.
Interestingly, Google has fired the Python team this year. The revolution eats its own?
Anyway, Rust should take note and be extremely careful.
Some years ago Google decided that Go projects were similar engineering effort, better performance, lower maintenance, and so, on that basis, there was no reason to authorise new Python software and their existing projects would migrate as-and-when.
You're already using a language with a strong type system, so it's confusing to me why you would choose to draw the line here.
Yes because then I don't have to spend hours writing esoteric spaghetti code to prove something to the compiler that is trivially known to be true. Your error is assuming static lifetime checking is free. As an engineer, I use judgement to make context-dependent trade offs.
If you like playing the compiler olympics, or your employer forces you to, please use Rust.
That is already a desirable place to be, where you managed to get a working implementation ready to evolve. My issue with opinionated languages like Rust is that they make development more expensive. I then afford to pay the necessary work-effort for fewer projects than I otherwise could if I was to focus more on the problem(s) at hand instead of that and other mandatory constraints forced upon me by the compiler. I very much want my development tools to limit themselves on being tools, to assist me on the part of the problem I chose to focus on with little to no cost paid for their usage. I want to be able to focus on prototyping some working solution first, and only then, if the project's needs really warrant it, to switch on paying the development cost for other aspects, be it safety or whatnot.
And that’s exactly the reason why we need more safety in C++.
I’m terrified at amount of code in real world written with this mindset.
I actually love the ideas that Rust brought forth. It definitely has a place in the ecosystem, and I'm glad to hear critical software is being rewritten in Rust! But that doesn't mean that C++ should copy it.
If you think we should instead evolve C++ so that safety isn't mandatory I'm right there with you, but it's not where the language is today and that discussion has also been shut down by the evolution working group. Moreover, Bjarne's policies mean that telling the critical software people to go fuck off to a different language fundamentally isn't part of the plan either.
Is it fine if it silently gives the wrong answer? If so, why are you bothering with the software at all?
In my experience all nontrivial C++ codebases have silent memory corruption bugs (at least when built with popular compilers).
Well said.
This is why i am firmly in the Stroustrup camp of backward compatibility/zero overhead/better-C/etc. goodness of "old C++". I need to extend/maintain/rewrite tons of them and that needs to be as painless as possible. The current standards trajectory needs to be maintained.
The OP article is a rather poor one with no insights but mere hoopla over nothing.
The problem is that the rules enforced by Rust are not restricted to lifetime rules; they are a much, much larger set of restrictions that also rules out quite a lot of safe, legitimate and valid code.
C++ isn't beholden to Rust's trade-offs either. There's a whole spectrum of possibilities that don't require broken backwards compatibility. Hence: "Why draw the line specifically at lifetime annotations?"
I'll have you know I made a variable void* just yesterday, to make my compiler shut up about the incorrect type :D
Legislatures seem a lot more able to allocate large pots of money for major discrete projects than to guarantee an ongoing stream of revenue to a continuing project.
Edit: except when interfacing with C APIs.
While this is sometimes done in C++ as well for various reasons, it's certainly not the default pattern there. If you have two things that need to point to each other, you just do that.
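Concretely, "just do that" in C++ looks something like this minimal sketch; keeping the two lifetimes in sync is entirely up to you:

    struct Node {
        Node* other = nullptr;
    };

    int main() {
        Node a, b;
        a.other = &b;   // mutual raw pointers, no ceremony
        b.other = &a;   // nothing stops either pointer from dangling if the lifetimes diverge
    }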
And then you have to handle all the subtle memory bugs that you've introduced by doing that.
> While programming in Rust, I've never thought to myself, "man, this would be so much easier to express in C++".
This is a concrete example of something that is much easier to express in C++. And, sure, you do pay the tax for that (although I will also dispute the notion that it is impossible to write C++ without memory bugs; it's just hard).
Win some, lose some though, as the overall development workflow is lightyears ahead of C++, mostly due to tooling
Rust decided to have more restrictive generic programming, with the benefit of early diagnostic of mistakes in generic code. C++ defers that detection to instantiation, which allows the generics to be more expressive, but it's a tradeoff. But this is an entirely different design decision to lifetime tracking.
This is true even for expressions that are only evaluated in a compile-time context, since dependently-typed languages do "everything" at compile time anyway, they don't have a phase distinction where you can talk about "runtime" being separate.
A truly dependently typed language performs these checks before instantiation time, by evaluating those expressions abstractly. Code that is polymorphic over values is checked for all possible instantiations, and thus its types can actually depend on values that will not be known until runtime.
The classic example is a dynamic array whose type includes its size- you can write something like `concat(vector<int, N>, vector<int, M>) -> vector<int, N + M>` and call this on e.g. arrays you have read from a file or over the network. The compiler doesn't care what N and M are, exactly- it only cares that `concat` always produces a result with the length `N + M`.
To be clear this distinction is not unique to dependent types, either. Most languages with some form of generics or polymorphism check the definition of the generic function/type/etc against the constraints, so the compiler can report errors before it ever sees any instantiations. This just also happens to be a prerequisite to consider something "dependently typed."
Notably Rust type-based generics do this, a key difference wrt. C++ templates. (You can use macros if you want checks after instantiation, of course.)
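For comparison, C++ can spell the size-in-the-type version of concat with non-type template parameters, but the body is only fully checked at instantiation and N and M must be compile-time constants, so it stops short of the dependent-types story above (minimal sketch):

    #include <array>
    #include <cstddef>

    template <std::size_t N, std::size_t M>
    constexpr std::array<int, N + M> concat(const std::array<int, N>& a,
                                            const std::array<int, M>& b) {
        std::array<int, N + M> out{};
        for (std::size_t i = 0; i < N; ++i) out[i] = a[i];
        for (std::size_t i = 0; i < M; ++i) out[N + i] = b[i];
        return out;
    }

    // Usage: the result type carries the combined length.
    auto five = concat(std::array<int, 2>{1, 2}, std::array<int, 3>{3, 4, 5});  // std::array<int, 5>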
My experience has been the other way around. Eclipse-based IDEs from NXP, TI, ST all have out-of-the-box usable tooling integration:
- MCU pinout and configuration codegen
- no need to manually fiddle with linker scripts
- static stack and code size analyzers (very helpful for fitting stuff in low-cost MCUs)
- stable JTAG-based debugging with:
And yes, these are important enough for me to put up with Eclipse and pre-modern C/C++. I really want to write Rust for embedded, but struggling with the tooling all the time didn't help.
Of course it remains to be seen how this all plays out. Static lifetimes can be done well or badly. Profiles can be good or bad. Even if whatever we come up with is done well, that doesn't mean people will use it well (I know Rust programmers who just put unsafe everywhere).
How can you know this without a "viral" analysis that tells you how much annotation is needed, and where? Perhaps the code factors out all the low-level, "memory unsafe" hacks to its own module, and that can be feasibly annotated. It's just not something we can know in advance.
While it is theoretically not impossible for that scenario to occur, I'd say it sounds wildly unlikely for anything that can be described as 'old' code.
The fantasy is enough to get engagement and once you have engagement you can persuade people to do a "little" extra work to get the full benefits. My mother won't buy the product for $5, but if you tell her that it costs $10 but they're 2-for-1 today, she's going to buy that and feel like she got a bargain.
In terms of actually solving the problem well, it's not even captured in these hypothetical regulatory requirements. What you actually want is a safety culture, Rust has one, C++ does not, and no technology will change that. From what I can tell nobody at WG21 wants that to change anyway.
Rust has a safety culture because it involves requirements for Safe Rust that preserve safety while also playing well with modularity and iterative development. If "Safe C++" can enforce similar requirements, we can expect that a safety culture can be sustained there as well.
The C++ leadership serves the C++ community, not the entire software industry. You and everyone who disagrees with them are free to use and write software based on other languages, e.g. Java and Rust.
Which is why disabling RTTI, disabling exceptions, creating their own standard library replacements, and static analysers forbidding specific language constructs are such a big deal in some C++ circles.
The neat thing is that once the standard committee learns about this use case, it could get de facto support as existing use!
For simulations and scientific calculations, I do agree, to a vast extent. But in a world that is moving more and more towards zero-trust networking, even many of those will start being looked at as potential attack vectors into other systems.
You see, it is absolutely expected and required that our applications will load and run arbitrary 3rd party code, generally with the expectation that it lives in the same address space as our application (though this is not formally required).
No sockets, no network, no backdoor hacks. You write code, call it a VST plugin, make it sound desirable ... we are expected to load and run it.
Yes, several DAWs have made the move toward out-of-process execution of plugins, but that doesn't begin to address the myriad problems caused by loosely-written plugin APIs not adequately pinning down threading, thread priority, memory access and more.
Filesystem access? Of course! That code runs as you! Because you want it to!
Plugins are not associated with attack vectors, even though they are literally just that.
This capability exists in completely open source, such as OpenZiti - https://openziti.io/.
That's why this topic is such a big deal. Even people who really should know better like the OpenZiti authors aren't able to reliably write safe code.
If you care about program correctness in any real sense, memory safety is table stakes.
It seems very fair to tell them to just use Rust and leave C++ alone.
And ofc Google’s investment in Carbon
Everyone else outside the big three, is somewhere between C++14 and C++17.
So why not be honest and just use C++03, or 11, or whatever it is that works for you, and let the rest of the ecosystem actually evolve and keep the language we invested so much effort into as a viable alternative? There's zero benefit, except to MS who want to sell this year's Visual Studio to all the companies with 80's-era C++...
It’s clear it’s imperfect. But it’s not clear there is an obvious path to a nearby local maximum.
Design choices have tradeoffs.
And even if that were true, who would take advantage of that “better” language in a purely abstract sense? New language standards primarily exist to benefit existing C++ code bases, and the cohort of engineers who work on them. You have to consider that social reality.
I don't demand that C++ evolution be halted. I support the current trajectory of not adding viral annotations for the sake of implementing static lifetime checking. I want C++ to evolve into a better version of itself, I don't want it to become something it's not. If you want static lifetime checking, please use Rust. It already exists and it's great for people who need static lifetime checking.
"I don't want to install air bags and these shiny safety gadgets into my cars. We have been shipping cars without them for years and it works for us and our customers."
The problem is that it doesn't actually work as well as you think, and you are putting people at risk without realizing it.
(Yes, I know about airbag vests. Let's analogize those with external static checkers.)
As usual, the answer is quite simple: "please use rust". We promise to never mention when we break out nasm.
Driver anecdote: I have antilock brakes on my Tundra, but they are annoyingly counterproductive in 4WD descending 6" or larger sandy rocky steps. Do antilock brakes work overall best for the less capable mode? Of course! Do they work best for me? No.
Safety by default with escape hatches when absolutely necessary is the better way to go for all, even if it means some power users have to change their ways.
The profiles proposal suggests adding static lifetime checking, "without viral annotations." I use quotations because I don't really agree with this framing, but whatever. The paper is here if you'd like to read it yourself: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p30...
The core idea here is that you add annotations to opt in or out of certain checks. And opting in may be a compiler flag, requiring no changes to source code. So that would be "backwards compatibility" in that sense. Of course, code may fail these checks, so you'll have to add annotations to opt out, or re-write the code. We will see in practice how much change is required once implementations exist and are tried out.
But the other part is, these profiles do not attempt to cover all valid cases. And what I mean by that is, there are some lifetime issues that this proposal does not attempt to analyze. And, where the analysis is similar, they offer a subset of what other proposals do. These decisions were made because the authors believe that they'll reduce a significant number of issues, and are easier to adopt. And that's worth it instead of going for more checks.
The competing proposal, Safe C++, has you opt into safety checks on a file-per-file basis. So in that sense, it is also backwards compatible: all existing code compiles as-is. When you opt in to those checks, it adds new syntax, similar to Rust, to do the safety analysis checks. So you gain this benefit only for new code, but you also get much more power. This syntax is necessary to communicate programmer intent to the checks, but it is the "viral annotations" that the proponents of profiles don't like.
So, basically, that's the thing: both are backwards compatible, but offer very different tradeoffs in the design space.
That's a rather odd complaint, coming from a pseudonym.
If having strict safety means I can't express my mental models in code, I don't want it. It will slow me down. It will make it harder to write software that's useful.
Remember people, we are here to make things that are useful to people. If safety gets in the way of that, then it's not worth it.
Rust shines in the other 95% of code. I spend some time every morning cleaning up the sorts of issues Rust prevents that my coworkers have managed to commit despite tooling safeguards. I try for 3 a day, the list is growing, and I don't have to dig deep to find them. My coworkers aren't stupid people, they're intelligent people making simple mistakes because they aren't computers. It won't matter how often I tell them "you made X mistake on Y line, which violates Z rule" because the issue is not their knowledge, it's the inherent inability of humans to follow onerous technical rules without mistakes.
Being really good at C++ almost demands that you surrender entire lobes of your brain to mastering the language. It is too demanding and too dehumanizing. Developers need a language and a complete tool chain that is designed as a cohesive whole, with as little implicit behavior, special cases and clever tricks as possible. Simple and straight-forward. Performance tweaks, memory optimizations and anything else that is not straightforward should be done exclusively by the compiler. I.E. we should be leveraging computers to do what they do best, freeing our attention so we can focus on the next nifty feature we're adding.
Zig is trying to do much of this, and it is a huge undertaking. I think an even bigger undertaking than what Zig is attempting is needed. The new "language" would also include a sophisticated IDE/compiler/static-analyzer/AI-advisor/Unit-Test-Generator that could detect and block the vast majority of memory safety errors, data races and other difficult bugs, and reveal such issues as the code is being written. The tool chain would be sophisticated enough to handle the cognitive load rather than force the developer to bear that burden.
The other faction that has lost faith in WG21, and wants newer, safer, nimble language with powerful tooling is already heading for the exits.
Herb has even directly said that adding lifetime annotations to C++ would create "an off-ramp from C++"[1] to the other languages — and he's right, painful C++ interop is the primary thing slowing down adoption of Rust for new code in mixed codebases.
[1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p34...
"newer" is hopefully a non-goal.
Unfortunately, an option that is both safer and nimble doesn't appear to exist. I'm still hopeful, but at the moment it looks like rust is our future. A fate only marginally better than C++.
If such a thing came to C++, there would obviously be limitations around module boundaries, when different modules used a different Edition. But perhaps this could be a way forward that could allow both camps to have their cake and eat it too.
Imagine a world where the main difference between Python 2 and 3 was the frontend syntax parser, and each module could specify which syntax ("Edition") it used...
C++ is a mess in that it has too much historic baggage while trying to adapt to a fiercely changing landscape. Like the article says, it has to make drastic changes to keep up, but such changes will probably kill 80% of its target audiences. I think putting C++ in maintenance mode and keep it as a "legacy" language is the way to go. It is time to either switch to Rust, or pick one of its successor languages and put effort into it.
Rust has the concept of _crate_, which is very close to the concept of compilation unit in C++. You build a crate by invoking `rustc` with a particular set of arguments, just as you build a compilation unit by invoking `g++` or `clang++` with a particular set of arguments.
One of these arguments defines the edition, for Rust, just like it could for C++.
But! We'll be able to see all the extra parsing happen so in theory you could track down the incompatibilities and do something about them.
https://arewemodulesyet.org/
I agree but also understand this is absolutely wishful thinking. There is so much inertia and natural resistance to change that C++ will be around for the next century barring nuclear armageddon.
After all doubtless COBOL's proponents did not regard COBOL-85 as the last COBOL - from their point of view COBOL-2002 was just a somewhat delayed further revision of the language that people had previously overlooked, surely now things were back on track. But in practice yeah, by the time of COBOL-2002 that's a dead language.
From my point of view C++26 is going to be the last one that actually matters, because too many are looking forward to whatever reflection support it can provide, otherwise that would be C++23.
There is also the whole issue that past C++17, all compilers seem like a swiss cheese in language support for the two following language revisions.
That is not possible. Take the following function in C++:

    std::vector<something> doSomething(std::string);

Simple enough, memory safe (at least the interface, who knows what happens inside), performant, but how do you call that function from anything else? If you want to use anything else with C++ it needs to speak C++, and that means vector and string need to interoperate.
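In practice the usual answer is a flattened C-style shim at the boundary, which is exactly the interop cost being described (hypothetical names, with `something` narrowed to int purely for illustration):

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <vector>

    std::vector<int> doSomething(std::string);  // the C++ interface in question

    extern "C" int* doSomething_c(const char* s, std::size_t len, std::size_t* out_len) {
        std::vector<int> result = doSomething(std::string(s, len));
        *out_len = result.size();
        int* buf = new int[result.size()];       // flat buffer a C FFI can hold on to
        std::copy(result.begin(), result.end(), buf);
        return buf;                               // caller must release via the shim below
    }

    extern "C" void doSomething_free(int* p) { delete[] p; }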
I'm working on interoperation with existing C++. It is a hard problem and so far every answer I've found means all of our new features still needs to be written in C++ but now I'm putting in a framework where that code could be used by non-C++. I hope in 5 years that framework is in place by enough that early adopters can write something other than C++ - only time will tell though.
There is nothing memory safe whatsoever about std::vector<something> and std::string. Sure, they give you access to their allocated length, so they're better than something[] and char* (which often also know the size of their allocations, but refuse to tell you).
The point of an iterator is to make it hard to do that. You can, but it is easy to not do that.
> modifying the vector while iterating it
Annoying, but in practice I've not found it hard to avoid.
> adding to the vector from two different threads or reading from one thread while another is modifying it
Rust doesn't help here - they stop you from doing this, but if threads are your answer rust will just say no (or force you into unsafe). Threads are hard, generally it is best to avoid this in the first place, but in the places where you need to modify data from threads Rust won't help.
This is just not accurate, you can use atomic data types, Mutex<> or RwLock<> to ensure thread-safe access. (Or write your own concurrent data structures, and mark them safe for access from a different thread.) C++ has equivalent solutions but doesn't check that you're doing the right thing.
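The C++ "equivalent" really is just a convention, which is the difference being pointed at: nothing ties the lock to the data it protects (minimal sketch):

    #include <mutex>
    #include <vector>

    std::mutex m;
    std::vector<int> shared;

    void add_locked(int v) {
        std::lock_guard<std::mutex> lock(m);
        shared.push_back(v);   // fine: lock held
    }

    void add_racy(int v) {
        shared.push_back(v);   // compiles just as happily: silent data race
    }

In Rust the vector would live inside the Mutex, so the second function simply wouldn't compile.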
Nitpick: editions are specified per crate, not per module.
---
Also note that editions mostly allow syntactic changes (adding/removing syntax or changing the meaning of existing syntax); however, they are greatly limited in what can be changed in the standard library, because ultimately that is a crate dependency shared by all other crates.
Is that an ABI thing? I thought all versions up to and including C++23 were ABI compatible.
Also, Rust compiles the whole world at once, so any ABI breakage from mixing code from different compiler versions doesn't happen. (Editions are different thing from compiler versions, a single version of the compiler supports multiple editions.)
Now there is the possibility that someone could come up with a new breaking syntax and want a C++26 marker. However nobody really wants that. In part because C++98 code rebuilt as C++11 often saw a significant runtime improvement. Even today C code built as C++23 probably runs faster than when compiled as C (the exceptions are rare - generally either the code doesn't compile as C++, or it compiles but runs wrong)
Implicit capture of this in lambdas by copy.
std::iterator removed.
std::uncaught_exception() removed.
throw () exception specification removed.
std::strstream, std::istrstream, and std::ostrstream removed.
std::random_shuffle removed.
std::mem_fun and std::mem_fun_ref, std::bind1st and std::bind2nd removed.
There are numerous other things as well, but this is just off the top of my head.
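As an illustration of how mechanical most of these fixes are, the std::random_shuffle removal (gone in C++17) typically turns into:

    #include <algorithm>
    #include <random>
    #include <vector>

    void shuffle_it(std::vector<int>& v) {
        // was: std::random_shuffle(v.begin(), v.end());
        std::mt19937 rng{std::random_device{}()};
        std::shuffle(v.begin(), v.end(), rng);
    }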
I know they are bad, but I don't think they should be removed.
I only ever worked in a couple of codebases where we had one standard for everything that was compiled and I suppose that's what 90% of people do, or link static libs, or shared libs, so externalize at an earlier step.
So purely a thought experiment.
That does not bode well for Microsoft. At least from the outside perspective it looks like he was the adult in the room, the driving force behind standards adoption and even trying to steer C++-the-language towards a better vision of the future.
If he is gone, MSVC will again be the unloved bastard child it has long been before Herb's efforts started to pay off. This is very disheartening news.
I'm happy he held out for this long even though he was being stonewalled every step of the way, like when Microsoft proposed std::span and it was adopted but minus the range checking (which was the whole point of std::span).
Now he has been pushing for a C++ preprocessor. Consider how desperate you have to be to even consider that as a potential solution for naysayers blocking your every move.
MSVC will continue to be used for many years, and especially the backend might see renewed effort. But I don't know about the C++ frontend specifically, I've seen complaints about more and more bugs on the cpp subreddit. It's possible MS will be investing a little less in C++.
Meanwhile on the Windows side, it was made official at Ignite that a similar decision is now to be followed for Windows as well.
Here is the official stuff, so whatever happens to MSVC is secondary:
https://azure.microsoft.com/fr-fr/blog/microsoft-azure-secur...
https://blogs.windows.com/windowsexperience/2024/11/19/windo...
Rather, it seems that as computers have gotten faster, there's been more places where safety is preferable to performance.
Two examples of stuff publicly rewritten into Rust.
Games are a special case; they aren't what Windows security cares about in the first instance, especially when TinyGlade is the first-ever commercial success using Rust.
Yet most games are done with Unreal and Unity, and yes there is lots of C++ there, but is mostly Blueprints, Verse, C# on top, that large majority of studios reach for.
This seems like one hell of an initiative for the Windows OS. That is millions of lines of C++ code, often with parts from waaay back. A friend who works on one of the OS teams told me that his team got a boomerang hire that worked on Windows back in the 90s and he was still finding parts of his code in there!
I hope this corporate interest bodes well for Rust though. It seems like for C++ it really caused a schism over the ABI break issue where Chandler et al were basically rebuffed finding some timeline to break it, and then Google dropped all their support on the committee in favor of Carbon, Rust, etc.
Likewise you will notice MSVC is no longer riding the wave in regards to C++23, after being the first to fully support C++20.
Then there are all those other compilers out there, lost somewhere between C++14 and C++17, and most likely never moving beyond that.
He has been showing it, but not pushing it. The difference is subtle but important. He is showing a lot of "what ifs", trying them, and pushing the useful ones back into the language. Reflection is on track for C++26 in large part because he inspired a lot of people with his metaclasses talk (a long time ago, but doing things right takes time).
I'm sure it's never as clean a situation as anyone would like, but hey, world is a rough place sometimes.
Instead we have greybeards and lone warriors, and million-line legacy codebases, half of which have their own idea on what a string or a thread is.
I’m not sure where you got this perception that it’s dead.
C++ remains the only game in town in many domains.
That said, _unless you work in those domains_ there is no good reason to use C++ IMHO.
Apart from the legacy codebases, there’s lots of C++ greenfield development.
> "the ability to include C++ code and libraries from others"
Libraries in vcpkg - a large number - are compatible enough to be used in this sense. It’s possible your specific domain is lacking contributions, or you’ve been looking in the wrong places?
I haven't seen any perf impact of splitting stuff between files/js modules in typescript either.
What I'm guessing is that you mean that static compilers, like that of C++, need to be able to 'see' large amounts of code to make clever inlining optimizations.
Which shouldn't be the case if the code is well designed, and/or the compiler can prove invariants necessary for optimization without having to look at the body of the code.
Basically, it is no different than renaming .js to .ts to take advantage of some stuff in Visual Studio Code, while keep writing plain old JavaScript.
I wish I could say modules don't work, but I have yet to understand them. Which is probably a big part of its problem.
Visual C++ and clang, alongside MSBuild and CMake/ninja.
As for ecosystem fragmentation, it has been the same old story since WG14 and WG21 exist, each compiler and platform is their own snowflake of what they actually support.
Do you have an example (of yours or others) that you could link?
I've been trying to get this up and running myself, but can't seem to whisper the right CMake prayers.
It is kind of interesting how the python community hasn't learned a thing from python 2/3. The problem isn't breaking backwards compatibility. Probably the biggest mistake you can make is acting like breaking backwards compatibility is a big deal, and therefore piling up as many breaking changes as possible and releasing them all at once, so as to maximize pushback and upgrade friction.
It is in fact the exact opposite. If you break 10 libraries out of a million, you as the language developer can step in and upgrade them on behalf of the original maintainer. The users increment a library version when they increment the language version and done.
Of course we are now looking at things in hindsight and see what didn't work.
1. Forward compatibility is more important than backward compatibility. 2. Automated refactoring tools don't help with 1.
The problem wasn't that they broke a lot in Python 3. It was that you couldn't write your Python 2 in such a way as to be compatible with it until well into the transition process as the six package got popular and the devs fixed needlessly broken things in Python 2.
Given that the general industry approach to technical debt is "yes, more please", it is unsurprising to me that any sufficiently old C++ project still has lots and lots of plain C inside it.
However, I think the author is a little off on the root cause. They emphasize tooling: the ability to build reliably and cleanly from source. That's a piece of it, but a relatively small piece.
I think the real distinguishing factor between the two camps is automated testing. The author mentions testing a couple of times, but I want to emphasize how critical that is.
If you don't have a comprehensive set of test suites that you are willing to rely on when making code changes, then your source code is a black box. It doesn't matter if you have the world's greatest automated refactoring tools that output the most beautiful looking code changes. If you don't have automated tests to validate that the change doesn't break an app and cost the company money, you won't be able to land it.
Working on a "legacy C++ app" (like, for example, Madden NFL back when I was at EA) was like working on a giant black box. You could fairly confidently add new features and new code onto the side. But if you wanted to touch existing code, you needed a very compelling reason to do so in order to outweigh the risk of breaking something unexpectedly. Without automated tests, there was simply no reliable way to determine if a change caused a regression.
And, because C++ is C++, even entirely harmless seeming code changes can cause regressions. Once you've got things like reinterpret_cast<>, damn near any change can break damn near anything else.
So people working in these codebases behave sort of like surgeons with a "do no harm" philosophy. They touch as little as possible, as non-invasively as possible. Otherwise, the risk of harming the patient is too high.
It's a miserable way to program long-term. But it's really hard to get out of that mess once you're in it. It takes a monumental amount of political capital from engineering leadership to build a strong testing culture, re-architect a codebase to be testable, and write all the tests.
A lot of C++ committee changes aimed at legacy C++ developers are about "how can we help these people that are already in a mess survive?" That's a very different problem than asking, "Given a healthy, tested codebase, how can we make developers working in it go faster?"
Having also worked at a few gamedev studios, IME there isn't a real distinction between the two since it is always a matter of time for the former to become the latter.
Sometimes it doesn't even take that long, all it takes is a single innocuous vertical slice with a pointlessly immovable deadline to inject enough harm in a codebase so you spend the next year fighting bugs that shouldn't have existed in the first place while also having to do everything else at the same time (and all planned timeframes made with only the "everything else" in mind, of course).
IMO even if it doesn't sound good, it is much more practical to learn how to deal with the mud than assume pigs do not exist :-P
That was very much my experience at EA, but has definitely not been my experience at Google. While everyone struggles with tech debt, at Google I've worked in many codebases that have been continuously well-maintained with good test coverage for over a decade.
Really, once you build a culture that says, "People not on your team may edit your code without asking and will rely on your tests to make sure they don't break things,", teams get highly incentivized to write tests.
Javascript/Node/Typescript has even more identifiable factions.
I think developing factions around these things is unfortunately normal as languages grow up and get used in different ways. Rust has arguably tried to stay away from this, but the flip side is a higher learning curve because it just doesn't let certain factions exist. Go is probably the best attempt to prevent factions and gain wide adoption, but even then the generics crowd forced the language to adopt them.
C++ does a lot, but has a big disengaged crowd, for many reasons, and that crowd will suffer from the push forward. Python and Node are similar.
Source: I am in both factions, as are my colleagues :)
Quite a few companies have millions and millions of lines of code. Changing 1% of it would mean changing more than 10K lines of code, perhaps even more than 100K. In much bigger code bases, where changing anything has a risk of breaking something — not just because you might make a mistake, but because your program is full of Undefined Behaviour, and changing anything might manifest latent bugs.
Given that, I'm not surprised people say that Sutter quote with a straight face.
But this is a multi-billion-dollar industry. If you're working on scripting a little browser "app" for a phone things may be different.
We have a large automated test suite that runs on every build and takes hours. The problem with automated tests is they only verify situations you thought of work the way you think they should, while human testers find slight variations of setup that you wouldn't think matter until they do. Human tests also find cases where the way you expect things to work don't make sense in the real world.
Except, allegedly, at Google. But is there any evidence they actually do this, eg. in public code bases? Or is it just hype?
This is one of the reason why they are bad at open sourcing - their internal code almost never match what is released
“Our code is an asset” ⇒ code kept up-to-date
“Our code is a burden, but we need it” ⇒ change averse
Changing 1% across all modules is a nightmare. Changing one module which is 1% of the code is nothing.
The first was removing `long` from the code, since a lot of code assumed its size (is it like `int` or like `long long int`?) and as machines were upgraded it caused problems.
The second was moving to C++11/14/17. Most of the difficulty was toolchains on unixen that did not support the new versions of the language, or for which support was incomplete, or for which upgrading to a version with support broke existing builds.
The third was moving to Linux from big iron unixen. As far as I understand, this initiative is still underway. It was already underway in 2011 when I joined the company.
This is a rich company with a large, healthy engineering department. I imagine that most other companies would not or could not bother.
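For reference, the first of those migrations usually boils down to replacing platform-dependent widths with <cstdint> types (rough sketch):

    #include <cstdint>

    // was: long offset;        // 32 bits on Windows and ILP32, 64 bits on LP64 unixen
    std::int64_t offset = 0;    // the same width everywhere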
You don't want to depend on a third-party hosting the code, so you need to copy it, and pin it to a specific version. You might also need to patch it since you'll be using it and most likely will run into problems with it.
Using third-party code means taking ownership of all the problems one might encounter when trying to use it in your project, so you might as well just adapt it for it to work with your tools and processes.
If you use a modular system this is essentially just maintaining a fork.
Yes, languages that are beginner friendly are ... friendlier. Yes, languages that stick to one or a small number of programming paradigms are friendlier. But if you want the "flexible efficiency and raw power of C" and "something higher level than C", C++ is your baby.
Maybe it would be better if we all used Java, Rust, and Go, but C++ sings its siren von Neumann song to the wizards, and there will always be wizard musicologists who steer their projects toward those rocks and, when they have just enough wax in their ears, they sail right past the rocks and come out the other side of the straits leading the rest of the fleet.
You can choose to follow them or not, for there's no shame in coming in 4th.
Feels like the committee was smoking weed that day in la-la land. You can ignore all the safety stuff from Sean Baxter, but saying no to performance on the altar of permanent, unspecified ABI backward compatibility - when such was never mentioned as a design goal of C++ - means it's "Goodbye C++" for a long, long list of orgs and "wizards". The ABI was NEVER specified formally by the C++ standard - so why sacrifice the world for its immortal existence?
C++ is no longer the language of choice for greenfield native projects, and the committee takes the full blame.
I worked in a previous role on C++ CAD/simulation software that required vendored things like solid modelling kernels and it was incredibly painful. Occasionally one of the vendors would just not do the work and you'd end up having to spend half a year ripping out the dependency that worked perfectly well. The team working on the software were generally in favour of moving up through to modern standards, while I was there we did 03 -> 17 for e.g. but that didn't finish til 4 years after the C++17 standard came out for all sorts of reasons. When VS2017 came out everyone breathed a sigh of relief because suddenly we didn't have to wait to upgrade the compiler.
C++ was always by far the most inefficient language to work with for me, because there's just so much chore and nonsense that you have to get through to get anything done, and almost none of it has any reasonable purpose; there's no efficiency tradeoff. I'm pretty sure that the insane build situation, or UB in uninitialized variables, or unspecified argument evaluation order never really benefited anybody; they are just bad decisions in the language, and that's all.
You will be happy to learn that uninitialized variables are not UB as of C++26.
beautiful, in equal parts true, sad, and endearing.
but also remember the Vasa.
How do I know this? I migrated a codebase of about 20m lines of C++ at a major investment bank from pre-ansi compilers to ansi conformance across 3 platforms (Linux, Solaris and Windows). Not all the code ran on all 3 platforms (I'm looking at you, Solaris) but the vast majority did. Some of it was 20 years old before I touched it - we're talking pre-STL not even just pre ansi. The team was me + one other dude for Linux and Solaris and me + one other different dude for windows, and to give you an idea the target for gcc went from gcc 2.7[1] to gcc 4[2], so a pretty massive change. The build tooling was all CMake + a bunch of special custom shell we had developed to set env vars etc and a CI/CD pipeline that was all custom (and years ahead of its time). Version control was CVS. So, single central code repo and if there was a version conflict an expert (of which I was one but it gives me cold sweats) had to go in, edit the RCS files by hand and if they screwed up all version control for everyone was totally hosed until someone restored from backup and redid the fix successfully.
While we were doing the port to make things harder there was a community of 667 developers[3] actively developing features on this codebase and it had to get pushed out come hell or high water every 2 weeks. Also, this being the securities division of a major investment bank, if anything screwed up real money would be lost.
It was a lot of work, but it got done. I did all my work using vim and quickfix lists (not any fancy pants tooling) including on windows but my windows colleague used visual C++ for his work.[4]
[1] Released in 1995
[2] Released in 2005
[3] yes. The CTO once memorably described it to me as "The number of the beast plus Kirat". Referring to one particularly prolific developer who is somewhat of a legend on Wall Street.
[4] This was in the era of "debugging the error novel" so you're talking 70 pages of ascii sometimes for a single error message with a template backtrace, and of course when you're porting you're getting tens of thousands of these errors. I actually wrote FAQs (for myself as much as anything) about when you were supposed to change "class" to "typename", when you needed "typedef typename" and when you just needed "typedef" etc. So glad I don't do that any more.
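For anyone who never had the pleasure, the sort of thing those FAQs covered looks like this (made-up example):

    template <typename Container>              // "class Container" also still works here
    void front_demo(const Container& c) {
        typedef typename Container::value_type value_type;  // dependent name: typename required
        value_type v = *c.begin();
        (void)v;                               // silence unused-variable warnings
    }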
But since you say version control was CVS, then I guess it was Goldman. They still have that sheizen for SecDB/Slang today.
And I assume that "Kirat" is Kirat Singh of Goldman SecDB/JPM Athena/BofA Quartz/Beacon?
Very impressive indeed.
Were namespace versions determined not to solve this problem? That would be the most ironic thing, after all, if the change management mechanism introduced in C++11 to deal with exactly this kind of std::string problem is either unused, untrusted, or unworkable for the purpose it was intended for.
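For reference, the mechanism being asked about is inline namespaces (C++11); the library name below is made up, and whether any vendor actually shipped std::string this way is a separate question:

    namespace mylib {
        inline namespace v2 {                   // new code resolves to v2 by default
            class widget { /* new layout */ };
        }
        namespace v1 {
            class widget { /* old layout */ };  // old binaries keep linking against v1 symbols
        }
    }

    mylib::widget w;   // mylib::v2::widget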
My Rationale for Using C++ in 2024: (A) Extreme computational performance desired. (B) I learned C++ 20 years back. (C) C++ has good enough Cross-Platform (OS) compatibility.
This invokes the imagery of a 1950s Apollo era scientist saying something serious. But I promise you there is no visionary low level language authority in the background. It’s just a staffer being influenced by the circle of blogs prominent on programming Reddit and twitter.
> no overhead principle
It’s actually nice to hear they are asserting a more conservative outlook and have some guiding design principle.
Bjarne is more of a super-bureaucrat than a designer. In the early days he pulled C++ into whatever language movements were popular. For a while it looked like Rust was having that influence.
But the outcome has been a refinement of C++ library safety features which are moderate and easy to adopt.
Oh I see, this is a fantasy.
It's all about the magnitude of the liability, not the direction
You do not need to re-solve this problem, and when a similar problem occurs, you can adapt the existing solution to the new problem.
Another way to think about it: if code was not an asset, we would delete it immediately after compilation.
This links to:
https://thephd.dev/finally-embed-in-c23
It was a fascinating story, particularly about how people finally came to terms with accepting that a seemingly ugly way of doing things really is the best way (you just can't "parse better").
The feature itself is interesting too.
https://gcc.godbolt.org/z/jGajc6xd5
#embed has to pretend - in principle - that we're going to conjure all these byte values into existence, as actual numbers, and then by the "as if" rule the compiler is not really going to do that because it would be crazy slow. The reality that we're just going to shove the data into the program as if it was an array is an (obviously, implemented everywhere) optimisation, rather than part of the language specification.
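For reference, usage looks like this (C23, and as far as I know also adopted for C++26; the file name is the same hypothetical firmware blob used below):

    // The array ends up with one element per byte of the file.
    static const unsigned char firmware[] = {
        #embed "firmware.bin"
    };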
The analogous Rust `include_bytes!` just gets you a &'static [u8; N] -- an immutable reference to an array of N bytes which lives forever.
At first I thought OK, well, maybe the C approach lets you do some clever compile stuff that Rust can't do. Nope. If I have a compile time function checksum which calculates a 64-bit checksum of the slice passed by immutable reference - and a file of 128MB of data called firmware.bin, Rust is completely fine with let sum = checksum(include_bytes!("firmware.bin")); and that results in a 64-bit value, the 128MB file evaporated after being checksummed.
I strongly disagree with this. The more code you have, the more resources you have to spend maintaining it. There is a very relevant example close by in the post: the bit about Google having a clang-based tool that can refactor the entire codebase. Great! Problem is, an engineer had to spend their time writing that, and you had to pay that engineer money - all because you have an unmanageable amount of code.
The real tech asset is processes: the things you have figured out in order to manage such an ungodly amount of code. Your most senior engineers, specifically what's in their heads, are an asset too.
As for vcpkg - yeah, that's popular, for sure.
There must be some extremely ideological reason behind these horrible “modern” C++ standards.
There were some good trends happening around C++11, but it is completely out of control now.
Oh, and you can't avoid that, say, you are working on a trading bot and that's the only "supported" way to connect to an exchange.
In the end people usually just reverse engineer and reimplement to get rid of such cursed blob. Fortunately, it works - the vendor can't effectively push all clients to update their SDK too, so all wire protocols are infinitely backward compatible.
The amount of problems this solves is incredible, and it creates none of the ops issues with configuring and launching some new kind of Docker image.
People reverse these SDK partly because it makes the codebase saner, and partly because, well, this is trading, a saner implementation is almost guaranteed to be faster than vendor's bullshit one, and guess who cares about being a little bit faster than everyone else?
In hindsight I would've probably written a few things differently, but I really didn't want to fall into a trap of getting stuck editing.
To save people the trouble: it seemed like a manic rant intended to pick several bones (at least the author is self-aware enough to admit as much). It's heavy on the "trust me, I have sources" and light on actual content. It's got plenty of drama and insinuations, from calling people liars and narcissists to, finally, nazis. It veers from committee drama, to Trump, to feminism, to AI... very hard to follow.
Worthy of a daytime soap opera but other than that there's nothing notable there. Except it does make me want to avoid all these people, on both sides of whatever drama this is.
It's a massive post though. Right now I am an hour in and probably about 75% done and I am skipping most of the linked articles. Except for the Ender's game parts. I highly recommend those.
On the other hand, having a bit more transparency into the workgroups and their way of doing things may allow the process to become a bit more efficient and approachable, and maybe would allow shedding some of the problems which have accumulated from being so isolated from the world.
Some of the alleged events really leave a bad taste in the mouth, and really casts a shade of doubt for the future of C++.
Lastly, alienating people by shredding their work and bullying them emotionally is not the best way to build a next generation of caretakers for one of the biggest languages in the world. It might not fall overnight, but it'll certainly rot from its core if not tended properly. Nothing is too big to fail.
EDIT: found one on wayback: https://web.archive.org/web/20241124225457/https://herecomes...
package management belongs to the os - or at least something else.
don't get me wrong, package management is a real problem and needs to be solved. I'm arguing against a language package manager we need a language agnostic package manager.
Languages need to be more agnostic than a package manager requires because I should not have to rope another organization into my trust model.
Cargo already goes too far in encouraging a single repository (crates.io) for everything through its default behavior. Who maintains crates.io? Where is the transparency? This is the most important information the user should know when deciding to use crates.io, which is whether or not they can trust the maintainers not to backdoor code, and it is rarely discussed or even mentioned!
The default cargo crate (template?) encourages people to use permissive licensing for their code. So that is an example where you are already making implicit political decisions on behalf of the ecosystem and developers. That is alarming and should not be for the language maintainers to decide at all.
In C/C++ you have a separation of the standard from the implementation. This is really what makes C/C++ code long-lived, because you do not have to worry about the standard being hijacked by a single group. You have a standard and multiple competing implementations, like the WWW. I cannot encourage the use of Rust while there is only a single widely-accepted implementation.
Additionally, you do not have to necessarily enforce these things on the language level, the standard and the tooling could live as two independent projects coming from the same entity. You could still use the compiler and the libraries from your OS, and build the code like that, or you could just reach out to an optional standardized tool that serves as a glue for all the external tools in a standardized way.
Yes, there are a lot of valid concerns with this approach as well, but personally, as a frustrated C++ developer who is most likely going to keep using the language for a decade to come, I feel like all the other languages I mentioned in my previous post have addressed my biggest point of frustration with C++, so it's definitely an issue that can be solved. Many have tried to do it independently, but due to personal differences, no funding, and different ideas of what the scope of such tooling should be, we ended up with a very fragmented ecosystem of tools, none of which has to date been able to fully address an issue that other languages have solved.
You and I must be using two very different versions of Cargo, because on mine the default template doesn't specify a license.
That, and a better package manager, so your wrong-clang-version problem can't happen in the first place. Which is what I was trying to get at.
You're working with some seriously hairy technologies, dealing with very knotty compatibility issues, and you don't want to learn... Docker?
I find this odd because it's relatively simple (certainly much simpler than a lot of what you're currently dealing with), well documented, has a very small, simple syntax, and would probably solve your problems with much less effort than setting up a third development machine.
> any open source project should consider docker not an option to solve their problem
That's generalising far too much.
Python doesn't have a language package manager, you're free to use pip or poetry or uv or whatever, but it does have PEP 517/518, which allow all Python package managers to interact with a common package ecosystem which encompasses polyglot codebases.
C++ is only starting to address this problem with efforts like CPS. We have a plethora of packaging formats, Debian, pkg-config, conan, CMake configs, but they cannot speak fluently to one another so the package ecosystem is fractured, presenting an immense obstacle to any integration effort.
Comparing it to other languages isn't really fair since they don't have polyglot code bases in the same way, and where native packages exist in e.g. npm, you run into the same problems anyway.
This is a long-standing pain point. LWN has a series of reports covering this, one of which is: https://lwn.net/Articles/920832/
Point 1: I do not want my program to run on only one OS, or to require custom code to make it multi-platform.
Point 2: What if there's no OS?
You only need one OS at build time: I usually just set up cross-compilers from Linux if I am making cross-platform C/C++ code.
>Point 2: What if there's no OS?
You can use a system like bitbake I think.
Will the GhostBSD maintainers pin the right version of Haskell's aeson package?
Will the Fedora Asahi devs stay on top of the latest OCaml TLS developments?
Will MS package PureScript's code for DOM manipulation?
If we are talking about global shared dependencies, sure it may belong in the OS.
If we are talking about directly shared code, it may as well belong in the language layer.
If we are talking about combining independent opaque libraries, then it might belong in a different "pseudo os" level like NPM.
It clearly doesn't, unless you're a fan of DLL hell and outdated packages.
The solution to…a problem created directly by a specific approach is to…do even more work ourselves to try and untangle ourselves? And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
Alternatively, we could realise that this isn’t really feasible at the scale that the ecosystem operates at now, and that instead of taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
>And just cross our fingers and just _hope_ that every app/library is fully amenable to being patched this way?
It requires some foresight in designing the application, and whether or not you even choose to use that application in the first place. We should strive to decrease the complexity of the system as a whole. The fact that packages are using different versions of the same library in the first place is a canary and the system should disincentivize that use case to some extent. Using static libraries or a chroot or a sandbox for everything is sweeping the problems under the carpet.
>taking an approach that requires us to “do extra work to untangle ourselves” we should try and…not have that problem in the first place.
I would prefer a system that links every application to the same library by default, but also allows for some per-application override, perhaps by using symlinks. That would cover the majority of use cases. But I do not think that dynamic linking is a lost cause in general.
In my own projects, I try to rely on static linking as much as possible, so I understand your perspective as a developer. But as a user I do not want programs to have their own dependencies separate from the rest of the system.
I really think it is. Even at the scale of a single app it may sometimes make sense to have multiple versions of the same library, if for instance it implements a given algorithm in two different ways and both ways have useful properties.
If there were guarantees that every library would always be both forwards and backwards compatible, that would be reasonable. Sadly, that's not the case.
Applications ship their lock files + version constraints. These get merged into a user/OS-level set of packages. You update one package, and the OS can figure out what it has to rebuild and goes off and does that.
Still shit-out-of-luck for anything proprietary, and it’s still super possible for users to end up looking at compile failures, but technically fits the bill?
The solution is to be more professional. DLL hell comes from libraries that break compatibility: serious libraries should not break compatibility, or at least not often. Then when they do and you happen to have the issue, it's totally fair to go patch the library you depend on that depends on the breaking lib. Even in proprietary software.
The modern way is to use ZeroVer [1] and language package managers that pull hundreds of dependencies in the blink of an eye, then to ask that people compile everything themselves or use the one system deemed worthy of support (usually Windows and the very latest Ubuntu). And, of course, not to care about security one bit.
[1]: https://0ver.org/
OS package managers do a fundamentally different task from the dependency-management tools used in development.
They ship a bunch of applications and the libraries you need to run the applications.
If you need a different version of libfoo than e.g. Firefox does, you're out of luck.
Need to support a customer with an older release which needs a different version of libfoo? Not gonna happen.
Unless you're talking about Nix or Guix, your OS package manager is not a substitute for a dependency-management tool.
Let’s say we make a “thing” which contains packages for all participating languages.
98% of the time, aren't users just going to go "filter down to my language" and continue what they're doing, except with a somewhat worse overall experience, depending on whatever "lowest common denominator" API + semantics we use for this shared package-management solution?
Multi-language build systems already exist, which happily serve the needs of those projects which find themselves needing cross-language (+distributed) builds. Could there be some easier versions of these? Sure, but I don’t feel like “throw everyone in the same big box” is the solution here.
It has to be - while nobody needs more than a subset of that big box, the union of everyone's needs turns out to be the whole big box. If you have anything less than that one big box, you end up with many standards; everyone picks one, and in turn something important you need picked the other standard and you can't use it (i.e. the situation we are in now).
Of course making that "one standard to rule them all" easy enough to use is a hard problem. It may be itself impossible and thus everyone drops back to the current mess.
Basically, once you have an OS level package manager, you have issues of versioning and ABI. You have people writing to the lowest common denominator - see for example being limited to the compiler and libraries available on an old Red Hat version. This need to maintain ABI compatibility has been one of the hugest issues with evolving C++.
The OS package manager ends up being a Procrustean bed forcing everything into its mold whether or not it actually fits.
Also, this doesn't even get into the issue of multiple operating systems, and even distros, which have different package managers.
Rust and Go having their own package managers has helped greatly with real world usage and evolution.
By dumping files of the same type into massive shared top-level directories, you need a separate program (the package manager) to keep track of which files belong to which packages and to deal with their versions and ABI and so on. Each package represents code developed by a specific group with a certain model of the system's interoperability.
GoboLinux has an interesting play on the problem by changing the directory structure so that the filesystem does most of the heavy lifting.
You should only track source versions and build things from source as needed.
And I do take asbestos as a serious example. Asbestos is still manufactured and used! Believe it or not, there are "safe" uses of asbestos and there are protocols around using them. Never mind the fact that there is a lot of FUD and dishonesty about where exactly the line cuts on what is safe versus not safe... for example, we are finding out how brake dust affects the wider environment as we crawl out from under the tent of utter misinformation of a highly motivated entrenched industry.
I feel like this is not a new human phenomenon. We made particularly poor choices in what tech we became dependent on, and lo and behold, the entrenched interests keep telling us it's not that bad and we should keep doing it because...reasons.
It will eventually play out the way it must; C++ might seem a lot more innocuous than asbestos, and in some ways that's true, but it resists all effort to reform it and will probably end up needing to just be phased out.
Uh, they took decades to implement a bunch of C99 features. Is that predictive? I suspect it is.
Programming languages benefit far more from a robust implementation, tooling, and good technical documentation (which may read like a standard) than from a prescriptive standard. The latter generates enormous waste, for what?
This code was originally developed in the late 1980s.
A good packaging tool would have helped a lot.
When I contributed to LibreOffice (GSoC 2012) they were still on C++03 !
Ha ha ha, that's funny. It uses pre-98 C++ code that's set in stone because of the extension/UNO APIs. Yes, you can use C++17 in a bunch of places, but not for the basic structures, classes, idioms, etc.
And - that's coming from a huge LibreOffice supporter. Speak at conventions, got the T-shirts, everything.
Temporal safety is the primary hard problem from a memory-safety standpoint, and RAII does nothing to solve it, at least the moment a memory allocation crosses abstraction boundaries.
Background:
The Text type is used to encode different kinds of string-like values. Negative values represent a generated dictionary. Valid indexes into the string intern table (https://en.wikipedia.org/wiki/String_interning) represent a stored string. Other values represent generated variable names.

const char *textToStr(Text t) - This takes a Text value and returns a pointer to a null-terminated string. If t is a string intern index, then it returns a pointer to the stored string. If t represents either a generated dictionary or a generated variable name, then it calls snprintf on a static buffer and returns the buffer's address.
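Roughly, the shape of the problem looks like this (the Text representation, buffer size, and helper names below are made up for illustration; they are not the actual code):

    // Illustrative sketch only -- representation and helpers are assumptions.
    #include <cstdio>

    struct Text { int v; };                  // negative: generated dictionary;
                                             // intern index: stored string; other: generated variable
    extern const char *internTable[];        // hypothetical intern table
    bool isInternIndex(int v);               // hypothetical helper

    const char *textToStr(Text t) {
        static char buf[64];                 // one buffer shared by every call
        if (t.v < 0) {
            std::snprintf(buf, sizeof buf, "dict_%d", -t.v);
            return buf;
        }
        if (isInternIndex(t.v))
            return internTable[t.v];         // stable storage, safe to hand out
        std::snprintf(buf, sizeof buf, "var_%d", t.v);
        return buf;                          // a later call in the same statement overwrites this
    }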
Problem:
The use of a static buffer in textToStr introduces a temporal safety issue when multiple calls are made in the same statement. Here’s an excerpt from a diagnostic error message, simplified for clarity:
If both e and s are generated dictionaries or variables, then each call to textToStr overwrites the static buffer used by the other. Since the evaluation order of function arguments in C++ is unspecified, the result is unpredictable and depends on the compiler and runtime.

Potential solutions:
* Return std::string and eat the cost of sometimes copying interned strings
* std::string with your own allocator could probably deal with the two cases fairly cheaply and transparently to the caller
* overload an operator<< or similar and put your conversion result directly into a stream, rather than a buffer that then goes into a stream (see the sketch after this list)
* put your generated values in a container that can keep pointers into it valid under all the operations you do on it (I think STL sets would work), keep them there for the rest of the lifetime of the program, and return your pointers to them (or to the interned constant strings)
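For the operator<< option, a rough sketch reusing the made-up Text from the earlier snippet (again just an illustration, not the real code):

    // Stream the conversion directly; no shared buffer, so no ordering hazard.
    #include <ostream>

    std::ostream &operator<<(std::ostream &os, Text t) {
        if (t.v < 0)            return os << "dict_" << -t.v;
        if (isInternIndex(t.v)) return os << internTable[t.v];
        return os << "var_" << t.v;
    }
    // Usage: err << e << " = " << s;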
I think many of these could be termed RAII, so I lean toward the idea in this subthread that RAII and other C++ idioms will help you stay safe, if you use them.
P.S. The function is also not safe if called by multiple threads concurrently. Maybe thread-local storage could be an easy fix for that part, but the other solutions should fix it too. [But if you have any shared storage that caches the generated values such as the last solution, it needs to be protected from concurrency.]
Two, consider the unique_ptr example then. Perhaps a library takes a reference to the contents of the unique_ptr to do some calculation (by passing the reference into a function). Later, someone comes along and sticks a call to std::async inside the calculation function for better latency. Depending on the launch policy, this will result in a use-after-free either if the future is first used after the unique_ptr is dead, or if the thread the future is running on does not run until after the unique_ptr is dead.
EDIT: Originally I was just thinking of the person who inserted the std::async as being an idiot, but consider that the function might have been templated, with the implicit assumption that you would pass by value.
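A minimal sketch of that failure mode, with made-up names (the deferred launch policy makes the dangling reference easy to see):

    #include <future>
    #include <memory>

    struct Data { int value = 42; };

    // Originally synchronous; later made asynchronous "for better latency".
    std::future<int> calculate(const Data &d) {
        return std::async(std::launch::deferred, [&d] { return d.value * 2; });
    }

    std::future<int> makeWork() {
        auto p = std::make_unique<Data>();
        return calculate(*p);
    }   // p dies here, but the deferred lambda still holds a reference to *p

    int main() {
        auto f = makeWork();
        return f.get();      // use-after-free: the Data died with its unique_ptr
    }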
Even if we ignore the absolute clown move of having no bounds checks by default (and std::span doesn't have them at all), it's very easy to get into trouble with anything involving C++ iterators and references.
Let's see them.
>safe initialization (use before initialization)
These two are solved by proper use of RAII.
But you have a point with UB. That's always been an issue, though, it's part of the idiosyncrasies of C/C++; all languages have their equivalent of UB.
They're not even remotely equivalent. A single eslint rule has immediately and permanently fixed this in every Javascript project I've worked on, both for me and my coworkers' code. RAII helps, but in C++, no amount of linters and language features can fully protect you.
And C++'s object model can add additional complexity and sources of UB. In C++20 previously valid code that reads a trivially destructible thread_local after it has been destroyed became UB, even though nothing has actually happened to the backing storage yet.
> Arena management
There's nothing that stops you from using arena allocators in C++. (See pmr allocators in C++17 for handling complex non-POD types).
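For instance, std::pmr::monotonic_buffer_resource gives you arena-style allocation with ordinary containers (small illustrative example):

    #include <memory_resource>
    #include <string>
    #include <vector>

    int main() {
        // The resource acts as the arena: allocations are bump-pointer style,
        // and everything is released at once when the resource is destroyed.
        std::pmr::monotonic_buffer_resource arena(64 * 1024);

        std::pmr::vector<std::pmr::string> names(&arena);
        names.emplace_back("allocated from the arena");
        names.emplace_back("so is this, including the string's storage");
    }   // no per-element frees; the arena releases its buffer here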
> The cost of RAII
you're going to have to clean up one way or another. RAII can be zero-overhead, and usually generates less code than the C idiom of "goto Cleanup".
> Use of destructors leads toward template APIs.
Not getting that. Use of destructors leads to use of destructors. Not much else.
> If you don't use exceptions....
Why on earth would you not use exceptions? Proper error handling in C is a complete nightmare.
But even if you don't, lifetime management is a huge problem in C. Not at all trivial to clean things up when you're done. Debugging memory leaks in C code was always a nightmare. The only thing worse was debugging wild memory writes. C++ RAII: very difficult to leak things (impossible, if you're doing it right, which isn't hard), and if it ever does happen it's almost always related to using C APIs that should have been properly wrapped with RAII in the first place.
Granted, wrapping C handles in RAII was a bit tedious in C++98; but C++17 now allows you to write a really tidy AutoClose template for doing RAII close/free of C library pointers. Not in the standard library, but really easy to roll your own:
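Something along these lines, for instance (one possible sketch, not anyone's actual production code):

    #include <stdio.h>
    #include <memory>

    // Generic deleter: calls the given C cleanup function when the handle dies.
    template <auto CloseFn>
    struct AutoClose {
        template <class T>
        void operator()(T *p) const { if (p) CloseFn(p); }
    };

    template <class T, auto CloseFn>
    using CHandle = std::unique_ptr<T, AutoClose<CloseFn>>;

    int main() {
        CHandle<FILE, fclose> f{fopen("log.txt", "w")};
        if (f) fputs("closed automatically\n", f.get());
    }   // fclose runs here, even on early return or exception

The template <auto> part is what C++17 adds; before that you'd have to pass the function pointer's type and value as separate template parameters.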
> C++ 20 undefined behavior of a read-after-free problem.

That's not UB; that's a serious bug. And C's behavior would also be "UB" if you read after freeing a pointer.
I appreciate the education :-)
>There's nothing that stops you from using arena allocators in C++.
This is true, but arenas have two wonderful properties - if your only disposable resource is memory, you don't need to implement disposal at all; and you can free large numbers of individual allocations in constant time for immediate reuse. RAII doesn't help for either of these cases, right?
>Use of destructors leads to use of destructors
I guess what I mean is... It's totally possible and common to have a zillion copies of std::vector in your binary, even though the basic functionality for trivially copyable trivially destructible types is identical and could be serviced from the same implementation, parameterized only on size and alignment. Destruction could be handled with a function pointer. But part of the reason templates are used so heavily seems to be that there's an expectation that libraries should handle types with destructors as the common case.
>lifetime management is a huge problem in C. Not at all trivial to clean things up when you're done.
Absolutely true if you're linking pairs of many malloc and free calls. But if you have a model where a per-frame or per-request or per-operation arena is used for all allocations with the same lifetime, you don't have this problem.
>And C's behavior would also be "UB" if you read after freeing a pointer.
The specific issue I ran into was the destructor of one thread_local reading the value of another thread_local. In C++17 the way to do this was to make one of them trivially destructible, as storage for thread locals is released after all code on that thread has finished, and the lifetime for trivially destructible types ends when storage is freed. In C++20 this was changed, such that the lifetime of a thread local ends when destroyed (rather than when storage is freed) if it's trivially destructible. C thread local lifetimes are tied to storage only and don't have this problem.
Apache/nginx don't exist,
Chrome/V8 don't exist,
Firefox doesn't exist,
GCC/Clang don't exist,
MySQL doesn't exist,
TensorFlow doesn't exist,
VLC doesn't exist,
and the list goes on and on ...
All software has bugs.
Is this supposed to be news?
* Sorta, for command-line injection you have to know the way the command you are using processes flags and environmental variables in order to know that the filtering you are doing will work. It is absolutely better to use a library instead if you can get away with it.
> * Relatively modern, capable tech corporations that understand that their code is an asset. (This isn’t strictly big tech. Any sane greenfield C++ startup will also fall into this category.)
> * Everyone else. Every ancient corporation where people are still fighting over how to indent their code, and some young engineer is begging management to allow him to set up a linter.
Well said.
And because of this, a lot of the first is leaving for greener pastures.
I feel it would be best for the C++ language that its development would stop. There's no way to fix its current problems. The fact that it stayed compatible with previous iterations over so many years is an accomplishment, almost a miracle, it should be cherished. Deviating from that direction doesn't make sense. Keeping that does not make sense either.
I also think there's a place where it can easily be supplanted, but currently cross platform native software has Qt and bindings for it in other languages are mixed.
In performance-critical things, Rust still doesn't feel like the final answer since you end up cloning a lot and refactors are very painful. Go obviously has its issues since SIMD support is non-existent and there is limited control over garbage collection, though it works well for web APIs.
.NET does, however, offer a best-in-class portable SIMD API and a large API surface of platform intrinsics, both of which are heavily used by CoreLib and many performance-oriented libraries. You can usually port intrinsified implementations hand-written in C++ to C# while making the code more readable and portable and not losing any performance (though sometimes you have to make different choices to make the compiler happy).
https://github.com/dotnet/runtime/blob/main/docs/coding-guid...
Autovectorization is usually very fragile, and in areas where you care about it a hand-written implementation always provides much better results that will not randomly break on minor changes to the compiler version or the code, something that otherwise must be carefully guarded against.
It would still be nice to have eventually, and I was told that the JIT team actively discusses this, but there are just many more lower-hanging fruits that will light up in disproportionately more instances of user code.
If it's any consolation, Clang/LLVM is not a silver bullet either and you will find situations where .NET's compiler output is competitive or even better: https://godbolt.org/z/3aKnePaez
All these new features introduce some run-time overhead, it seems.
One of C++'s core tenets is (and has been since the '90s) zero-cost abstractions. Or really, "zero-runtime-cost abstractions"; compile times tend to increase.
Obviously some abstractions necessarily require more computation (e.g. raw pointers vs reference-counted smart pointers). But in many cases new features (if implemented correctly!) give better semantics and additional compile-time safety while still compiling down to equivalent binary.
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p34...
but the current ABI _forces_ some abstractions to have unnecessary cost. For example:
"Why can a T* be passed in register, but a unique_ptr<T> cannot?" https://stackoverflow.com/q/58339165/1593077
another example is improvements in the implementation of parts of the standard library.
And that is not the only thing that prevents zero-cost abstraction. C++ does not support pointer restriction, see:
https://stackoverflow.com/tags/restrict-qualifier/info
In practice, compilers support it in some contexts.
(Another, minor issue is the discrepancy of "No viral annotation" and "no heavy annotation" with the need to mark things noexcept to avoid exception handling overhead.)
For restrict: Universally supported as `__restrict`, thus not a priority for anyone to "officially" solve. Most major performance complaints fall into this category. Eg, std::regex is bad, sure, but nobody uses std::regex so fixing it doesn't matter.
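Typical use looks something like this (__restrict is a compiler extension, also spelled __restrict__ on GCC/Clang, not standard C++):

    // Promising that out, a and b never alias lets the compiler vectorize freely.
    void saxpy(float *__restrict out,
               const float *__restrict a,
               const float *__restrict b,
               float k, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = k * a[i] + b[i];
    }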
The SysV/Itanium/Win64 ABIs know nothing about the abstract difference between a 64-bit value and a 64-bit value inside a class instance. I don't see what prevents a solution from the language side.
> Universally supported as `__restrict`
1. That's not C++. If we're talking about what compilers can offer outside the language standard - that's a different discussion. We don't have to standardize, then, just get a working implementation somewhere and lobby other compiler-makers to adopt it. A compiler might implement Baxter's "safe mode" idea as a non-standard extension, for example.
2. Even the compilers supporting `__restrict` only support it for parameters of functions - nowhere else.
The standard has nothing to say about calling convention. Calling conventions are defined by the ABI standards. unique_ptr is a class template, how a class is passed between routines is defined by ABI standards, ergo how unique_ptr is passed between routines is defined by the ABI standards. I don't know what else you're implying here. Unless you're saying we should have language-level smart pointers, not class templates, in which case yes that's an awful idea.
> That's not C++...
If you're arguing about some abstract, Platonic ideal of C++ that is divorced from the actual implementations that exist in the world, I don't know what your point is. We write code to be compiled by compilers that exist, not printed out and contemplated in a museum. The compilers support __restrict, people use it all the time, so it's not a problem.
> Even the compilers...
Where else do you want it? What would its meaning even be to something like a member variable?
Restrict is pointless in any scenario that doesn't involve a potentially aliasing ABI boundary, ie, function parameters. In every other scenario it is ignored.
ABI stability in the context of the standards committee is about library ABI, specifically the standard library. When the committee updated the wording about C++'s std::string in C++11, it meant implementers needed to change the layout of a std::string, making this "new" std::string incompatible with the "old" std::string. Any libraries passing std::string across API boundaries needed to be recompiled with the "new" std::string.
This has no effect on FFIs for interop with other languages, which are not passing STL types across language boundaries to begin with (a std::string has no meaning in Python).
ABI stability for the standard library is motivated by large, old, corporate codebases which had poor API practices, passed STL types across ABI boundaries, and subsequently lost access to the source code of those libraries and applications or otherwise cannot recompile them for some reason. Many people question the wisdom of catering to such users.