Understanding Modern C++ Memory Management: A Conversation with Patrice Roy
Inside the evolving standards, trade-offs, and tooling that shape safe and efficient memory use in modern C++.
Memory safety in C++ remains a critical challenge—and a defining strength. In this in-depth conversation, we speak with Patrice Roy, author of C++ Memory Management, about the evolving landscape of manual and automatic memory handling, the role of language features like smart pointers and allocators, and what developers should expect from upcoming C++ standards.
Patrice Roy has been teaching computer science and software engineering for nearly three decades. He is a long-standing member of the ISO C++ Standards Committee and the Programming Language Vulnerabilities Working Group. Though he teaches game programmers, his own focus lies in building performant systems and sharing deep technical insights through talks, training sessions, and now, his latest book.
You can watch the full interview below—or read on for the complete transcript.
1. In your book, you chose to address C++ memory management challenges across a wide range of domains like real-time systems, embedded devices, games, and desktop applications. What inspired you to tackle such a broad scope for memory management?
Patrice Roy: Well, it's the fault of my editor!
I've been giving classes at CppCon—and if you haven't gone to a CppCon class in your life, you should. It's really cool. There are lots of very proficient experts there. I gave a class on low-latency systems, another on how to make your software smaller (which is connected to memory management in a sense), and a memory management class in C++. I gave that one a few times, and it went very well. It was a cool class to give.
Then, about two years ago, Packt reached out saying, “We know about that class you’ve been giving—have you thought about making a book out of it?” I had never written a book myself. I’ve written my PhD thesis, lots of articles, and, of course, course notes—plenty of them, thousands and thousands of pages. But a book? That could be an interesting thing. So I tried to transform the class experience into a fun reading experience. And that was enlightening, because it’s really different.
So, that’s what led to the book.
2. In your book, how did you approach balancing foundational topics with advanced, domain-specific techniques? What kind of structure did you choose to follow?
Patrice Roy: That’s a cool question, really—because that’s one of the problems you face.
When you have a classroom in front of you, which I’ve been doing for 28 years now, teaching becomes something that comes naturally to you. It doesn’t start that way, but eventually it does. I see my people reacting in the class—I can see their eyes, I know what’s going on, and I can make a joke at the right time.
But when they're reading the book, you're not there. They have to be able to function without you. That was a challenge here, but I hope people who read the book will find that it's working. We'll see.
So, I don't know if I did it well, but my thoughts were: let’s start with a few chapters where we make sure we’re all on the same page. I tried to make sure we have a common vocabulary—we understand what we mean by things such as objects and pointers, because everyone thinks they understand these, but it’s not as easy as people think. There’s a specific meaning to “object” in C++. Working in the language myself helped bring these things into words. I hope I did that well.
And I took a big risk—I’m still wondering if that was the right thing to do—but readers will tell me. My second chapter is about things you shouldn’t do. Not spelled out like “don’t do this” and “don’t do that,” but more like: “There are times we’ll be far from the machine, and times we’ll be very close to the machine—low level. And sometimes we’re going to play dirty tricks, because that’s how we’re going to meet our objectives.”
So, we have to do those things right. There’s not much protection when you’re very low level. You’re getting things out of the way to control every little detail—and you can really hurt yourself if you’re not careful. I wanted to make sure that when we get to that point—where it’s kind of dangerous—we’re doing it right. So I put a chapter very early on that focuses on the dangerous stuff. I’m hoping it’s going to work, but again, the readers will tell me.
I tried to make it fun, though. I tried to explain why you don’t do these things—because the right way of doing them comes back again and again in a more cohesive framework after that.
So, I have a chapter with no example code that shows best practices—only one like that. The rest of the book does contain working examples. But in that chapter, it’s all stuff you shouldn’t do. The code in that chapter might work, but if you use it, you’re going to get yourself in trouble.
Starting from this common framework, I was hoping we could then move on to advanced topics. I tried to be gradual there—starting with stuff that’s more common knowledge, and moving towards more advanced and exotic material after that.
3. What would you say is the most important takeaway for developers who complete the book?
Patrice Roy: That one is difficult, because we have a variety of potential readers here.
I’m hoping that some people—maybe in the first third of the book—will come away with a good grasp of some sound practices. For example, there’s a chapter—around Chapter 5 or 6, I don’t remember exactly—where we discuss how to use standard smart pointers. I think everyone should know how to do that in today’s C++, because it’s a really good way of working. You understand the pros and cons, the costs—where there are costs, and where there are none—and you just write code that works well.
Then I go one step further: we actually write academic versions of these, but reasonable ones, to understand what’s going on under the covers. So, if you want to write good code that manages resources yourself—or using standard tools—the first half of the book is going to get you there, I think.
If you’re a beginner to intermediate programmer who just wants to write smart code, it’s going to be good. If you want to learn how things work and see the nuts and springs and everything, the further you go, the more you’ll get that.
Towards the end, I have three chapters where we use containers that store stuff. The first one uses a naïve approach with two different containers—you write the kind of code that would probably come to mind naturally. But then we go deeper. If we want to compete with the speed of standard containers, which are really well done, then you have to know more. That’s where the memory management knowledge from earlier in the book comes together and helps you do a better job—avoiding unnecessary work. But of course, it’s more involved.
So, there’s a gradual path. Beginners, intermediate, and experienced programmers will get different benefits, and will probably use some chapters more than others. The more experienced ones will probably like the last part of the book more than the first. At least, that’s how I tried to build it.
4. According to you, which modern C++ features have most improved memory safety and management?
Patrice Roy: Oh, that’s a huge question. The safety trend these days is very important to everyone working on the language.
With C++26, for example, we introduced something called erroneous behavior, which makes it not just undefined but actually an error to do things like read from an uninitialized variable. Before that—prior to a proposal from my friend Thomas Köppe (I hope I’m pronouncing his name right)—reading from a variable that you hadn’t written to was what we called undefined behavior. So, the compiler could do anything around your code, because you broke the rules.
Now, we’ve made that kind of stuff into an error. We’re encouraging compilers or programs to stop execution at that point, because something’s wrong. It’s better to have a hard error in such cases than to just go on and hope for the best—even though you just broke the rules.
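As a minimal illustration of the kind of code this targets:

```cpp
int main() {
    int x;        // never written to
    int y = x;    // previously undefined behavior: the compiler could assume it never happens;
                  // with erroneous behavior (C++26), the read yields a well-defined "erroneous"
                  // value, and implementations are encouraged to diagnose it or stop the program
    return y;
}
```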
From a memory management perspective, the first thing you should do is avoid manual memory handling if you can. The automatic mechanisms work really well in C++. But when you do need to allocate or deallocate—or control how your containers behave with memory—we have very good tools.
You can use the type system to do that for you, in a sense. If you use smart pointers and say, “That object is the sole owner of the resource,” it’s written in your source code. Lots of bugs just go away. You don’t have to ask who will free the memory—it’s that guy. He’s responsible. It’s his job.
If you’re in a rarer case where you have co-owners of the resource and don’t know who’ll be the last to use it, we have shared pointers. You write that into your code as a type, and that removes a whole class of bugs you’d get with raw pointers, like “Who’s responsible for freeing this?”
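A minimal sketch of what writing ownership into the type system looks like, using a hypothetical Texture type:

```cpp
#include <memory>
#include <string>

struct Texture { std::string name; };

std::unique_ptr<Texture> load_texture(std::string name) {
    auto tex = std::make_unique<Texture>();
    tex->name = std::move(name);
    return tex;   // ownership is transferred to the caller, visibly, in the type
}

int main() {
    // Sole owner: when 'tex' goes out of scope, the Texture is freed.
    // Nobody has to ask "whose job is it to delete this?"
    std::unique_ptr<Texture> tex = load_texture("bricks");

    // Co-owners: the last shared_ptr to be destroyed frees the resource.
    auto a = std::make_shared<Texture>();
    auto b = a;   // both refer to the same object
}                 // everything is released here, automatically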
We also have containers—like vectors—that manage memory for you and grow as needed. If you look at how those are written when you want to be efficient, you start to really respect the people who built them. It gets very complicated.
So, the book brings you to the point where, if you need to encode memory management strategies into your type system beyond the defaults, you’ll know how. For example, you can customize unique_ptr to free your resource in a different way than just calling delete. Of course you can—here’s how. Here’s the cost. Here’s how to do it without cost—because my customers care about cost.
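For instance, a stateless deleter type (a hypothetical FileCloser here) customizes how the resource is released without making the unique_ptr any bigger, which is one way to read that "without cost" remark:

```cpp
#include <cstdio>
#include <memory>

// Empty deleter type: closes a FILE* instead of calling delete. Because it is
// stateless, it typically adds no storage overhead to the unique_ptr, whereas a
// function pointer used as a deleter would make the unique_ptr one pointer bigger.
struct FileCloser {
    void operator()(std::FILE* f) const noexcept {
        if (f) std::fclose(f);
    }
};

using unique_file = std::unique_ptr<std::FILE, FileCloser>;

int main() {
    unique_file f{std::fopen("data.txt", "r")};
    // use the file through f.get() ...
}   // fclose runs here automatically if the file was opened
```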
If you have different semantics—like “I’m the sole owner” or “we’re co-owners”—there’s a way to express that too. We even write our own smart pointers that aren’t the standard ones but have different semantics you might need.
So, if you move from a manual approach to one that uses the type system to your advantage, your code will be safer, I think.
Another thing is exception safety. A lot of users don’t use exceptions in C++. My game programmers and embedded systems programmers are among them. But a lot of people do.
If you have exceptions in your code, you need to make sure your resources get deallocated or freed properly. The book covers that. It was suggested that I write a whole chapter on that, but instead, we used it as a common thread throughout the book. All the chapters worry about safety, to make sure the message comes through clearly.
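A small sketch of that common thread, with a hypothetical parse() step that may throw:

```cpp
#include <cstddef>
#include <memory>
#include <stdexcept>
#include <vector>

// Hypothetical parsing step that may throw.
std::vector<int> parse(const unsigned char* data, std::size_t n) {
    if (n == 0) throw std::runtime_error("empty input");
    return std::vector<int>(data, data + n);
}

std::vector<int> load(std::size_t n) {
    auto buffer = std::make_unique<unsigned char[]>(n);
    // ... fill the buffer from a file or a socket ...
    return parse(buffer.get(), n);   // if this throws, the buffer is still freed
                                     // as the stack unwinds; with a raw new[]/delete[]
                                     // pair, the delete[] would simply never run
}

int main() {
    try { load(0); } catch (const std::exception&) { /* nothing leaked */ }
}
```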
5. What memory-related mistakes are still common in current C++ codebases? What tools or practices help reduce these errors?
Patrice Roy: That's a beautiful question because it's an actual problem.
Over time, we're reducing the number of ways you can get into trouble with C++. If you use C++ as C++, you’ll have far fewer bugs than if you use it otherwise. Again, C is a fine language—but if you're writing C++, it's better to think in C++.
The main problem we still have is probably what we call dangling references. There are ways to write code where you're referring to objects that have already gone away. Lifetime is at the core of object and resource management in C++. A lot of what we do in the language is tied to object lifetime.
There's an idiom tied to that called RAII—Resource Acquisition Is Initialization, or Responsibility Acquisition Is Initialization, depending on who you ask. We actually have a chapter on that, because it's so characteristic of C++.
I think it was Roger Orr who said, “The most beautiful line of code in C++ is the closing brace,” because so much stuff happens there—and it’s true. But yes, there are still ways to get in trouble—for example, returning a reference to a temporary or to something that’s now dead, and using it by accident. Even with very recent tools like string_view, you can hit those issues.
We have an ongoing effort in the language and community to reduce these cases. Languages where you don’t manage memory manually don’t suffer from this particular issue—because as long as someone refers to an object, it stays alive. But in C++, nothing's free, so you have to think about that.
We reason a lot about what happens when you copy an object, when you move it, and when it goes out of existence. In C++, you control all that. It's one of the language's strengths. But if you write code that returns a temporary and then tries to refer to it, you're in trouble.
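A classic example of the trap, using string_view:

```cpp
#include <string>
#include <string_view>

std::string make_name() { return "temporary"; }

int main() {
    // The std::string returned by make_name() dies at the end of this statement,
    // but the string_view keeps pointing into its now-destroyed buffer.
    std::string_view sv = make_name();   // compiles, dangles immediately
    // reading through sv here would be undefined behavior

    // Safe version: keep the owning string alive as long as the view is used.
    std::string owned = make_name();
    std::string_view ok = owned;
    (void)sv; (void)ok;
}
```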
Advances in compiler technology help us find these bugs. There’s also support in the language itself. I have a proposal in flight right now for the standard, where a function can declare: “When you pass this argument to me, you shouldn’t use it anymore afterwards—because I might have done something to it.” I hope that gets into C++29.
There’s also a very big effort underway around lifetimes. Herb Sutter and Gašper Ažman—I hope I’m pronouncing that right—are working on that. They’re trying to reduce undefined behavior and make lifetime bugs less likely. It’s a strong, ongoing effort, and lots of us are working on it together.
As for tools, we have sanitizers of all sorts. These are tools you can run at compile time or runtime that tell you if you're accessing something out of bounds or using memory that's already been freed. These are external to the compiler, but very helpful.
There’s also an effort from Louis Dionne—a fellow Québécois—on hardening the standard library that ships with Clang. They did measurements and found that by enforcing some rules for things like out-of-bounds access, they were able to get significantly safer code with very low overhead—like 0.03%. That’s really encouraging, because in C++ we're very cost-conscious. If the cost is low, it's interesting. Otherwise, we probably wouldn't do it.
So yes—there’s effort coming from the compiler side, the language side, the library side, and external tools. It’s a concerted effort by many people to make the language safer.
Also, I should mention profiles. Bjarne Stroustrup has introduced something called profiles—where you recompile your code and ask the compiler to enforce certain rules. It will then check for bugs that are important in your codebase. You can activate it if you want—or deactivate it if you don’t. A lot of my users will do that.
And we have contracts coming into C++26. These let you mark preconditions and postconditions in your functions, and assertions about what should be true at certain points. They’re written into the code—not just as prose—and depending on how you compile, they might be checked, observed, or ignored. But at least you wrote down what should be true. These are all ways to make your code better, safer, more correct.
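A minimal sketch of what that can look like with the contracts syntax adopted for C++26 (compiler support is still emerging, so treat this as illustrative):

```cpp
// Preconditions, postconditions, and in-body assertions written into the code.
int clamp_to_positive(int x)
    pre (x > -1000)     // precondition, checked on entry depending on build mode
    post (r : r >= 0)   // postcondition: 'r' names the returned value
{
    contract_assert(x > -1000);   // an assertion at a specific point in the body
    return x < 0 ? 0 : x;
}

int main() {
    return clamp_to_positive(-42);
}
```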
6. When is it appropriate to avoid STL containers or smart pointers for performance reasons? What criteria justify switching to manual memory handling?
Patrice Roy: So—I tried to raise that question in the book at some point.
Take vector, for example. It's an interesting use case. We always tell people: use vector by default. It’s fast, efficient, and safe for the most part. And indeed, it is a great tool.
If you’re using a vector and want it to be efficient, it has to be larger than what you really need—because it pre-allocates memory so that when you add stuff, the memory is already there. You don’t have to reallocate. That’s why it’s quick. But that also means it uses more memory.
When you add something to a full vector, you have to reallocate, and that addition will cost you something. If you see it coming, you can reserve the memory ahead of time and avoid paying that cost at a critical moment. So we discuss that.
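In code, that is a single reserve() call ahead of the critical phase:

```cpp
#include <vector>

int main() {
    std::vector<int> samples;
    samples.reserve(10'000);       // pay for the allocation now, outside the hot path

    for (int i = 0; i != 10'000; ++i)
        samples.push_back(i);      // no reallocation here: the capacity is already there
}
```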
If you have a fixed-size array whose size is known at compile time, and it’s a reasonable size—not too big—just use an array. If it doesn’t need to grow, why pay for a vector? A vector needs to know the capacity, the number of elements, and the starting point—so there's a bit of overhead. A fixed-size array doesn’t need all that.
Now, if you're allocating memory dynamically and you know the size at runtime, but it won’t grow, you can use a unique_ptr to an array. But then you’ll need to track the size separately, because the pointer won’t know it. So you can write an in-between container. We do that too in the book.
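Here is a hypothetical sketch of such an in-between container (not the book's version), where the size is fixed at run time and stored next to the pointer:

```cpp
#include <cstddef>
#include <memory>

template <class T>
class FixedBuffer {
    std::unique_ptr<T[]> data_;
    std::size_t size_;
public:
    explicit FixedBuffer(std::size_t n)
        : data_(std::make_unique<T[]>(n)), size_(n) {}

    std::size_t size() const noexcept { return size_; }
    T&       operator[](std::size_t i)       { return data_[i]; }
    const T& operator[](std::size_t i) const { return data_[i]; }
};

int main() {
    FixedBuffer<double> samples(256);   // size decided at run time, never grows
    samples[0] = 3.14;
}
```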
The point is: if your use case doesn’t require growth, then vector might be overkill. You can build something leaner—especially if you care about memory footprint or object size.
As for smart pointers—they’re cool, but if the responsibility for the resource is clearly defined, and you just want to use it, you don’t always need one. Sometimes a raw pointer is fine, especially if the system is small.
And once you have a unique_ptr in your hand, there’s nothing wrong with passing the raw pointer around to functions—as long as the moral contract is clear: if it’s a raw pointer, you’re not the owner, you’re just using it. The owner is the smart pointer—that’s why it’s there.
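That moral contract, in code:

```cpp
#include <memory>

struct Widget { int value = 0; };

// Raw pointer parameter means "I'm just using it, I don't own it"; no delete in here.
void inspect(const Widget* w) {
    if (w) { /* read w->value */ }
}

int main() {
    auto owner = std::make_unique<Widget>();  // the unique_ptr owns the Widget
    inspect(owner.get());                     // hand out a raw, non-owning pointer
}                                             // the owner frees the Widget here
```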
Also, sometimes even if it’s not slower at runtime, using smart pointers can slow you down during debugging. If you’re stepping through a unique_ptr, the debugger might take you through all the member functions, and that slows you down as a human. If you don’t need the overhead—not even that—just use a raw pointer. It’s fine.
So yes—there are moments when smart pointers and containers are great, but sometimes they’re more than you need. And when the responsibility is clear, raw pointers are awesome. They're fast, easy to copy, easy to move—awesome.
7. How useful are tools like AddressSanitizer or Valgrind in detecting memory issues? Do you have a relevant example?
Patrice Roy: They’re awesome—but I’m going to be the bad guy: I don’t use them much.
But they really are very good tools. Of course, there are costs to everything, but sanitizers are very effective. I think everyone should use them once in a while, just to make sure they haven’t made a mistake recently. They should be part of everyone’s test process. Sometimes you’re convinced you did something right—and you realize you did something wrong.
In the same sense, you should try to raise your warning levels when you compile. If you're using Visual Studio, try /W4—something like that. Maybe not /Wall, because it's too noisy—and do the same with GCC or Clang—but raise the warning levels a bit, just to make sure. By default, they don’t necessarily tell you as much as they should. And when you do that, sometimes you discover things about your code.
I’m not a kid. I’m very experienced, so even when I raise the warning levels, I don’t get much noise, to be honest—but I think every team should try these tools. Try your sanitizers, see what they tell you. It probably won’t cost you much. Maybe it won’t cost you anything, depending on how they’re configured. But it will make you feel more confident in what you did—and that’s already something.
8. When should a developer consider implementing a custom allocator or memory pool? What measurable benefits can these techniques offer?
Patrice Roy: That’s interesting. Well, the first thing you should do is measure. Make sure the allocator or memory pool you already have doesn’t already do the job. If you're spending time on something, it has to pay off.
Your implementation might already be fast enough—and if it is, then put your effort somewhere else. We have good libraries.
But if you discover a bottleneck—say you’re working in the finance domain, and you have nanosecond-level constraints because you need to buy and sell very fast, especially if there’s a crisis—then yes, sometimes you’ll want more control over what’s going on.
Maybe you don’t want any allocation to occur during a critical phase—you want to use a buffer on the stack. That’s a pertinent case.
There are two big allocator models in C++. The traditional one associates an allocator with the container’s type—it’s type-based. That’s extremely efficient. But if you have many allocator types, you end up with many container types, and sometimes it gets a bit complicated. It's efficient, but if we were designing the language from scratch, I don’t know if we’d do it that way.
The traditional allocator model also required you to write a lot of code on top of what you really wanted. That’s been simplified—mostly thanks to the folks at Bloomberg. Since C++11, allocator support has become traits-based. If you don’t provide boilerplate code, it’s done for you automatically. If you write a function, it takes yours. Otherwise, the sensible defaults are used. It’s all compile-time—so there’s no cost to that.
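As a sketch of how little is required since then, here is a hypothetical minimal allocator; std::allocator_traits supplies every member not written here:

```cpp
#include <cstddef>
#include <new>
#include <vector>

template <class T>
struct TracingAllocator {              // hypothetical name, for illustration only
    using value_type = T;

    TracingAllocator() = default;
    template <class U>
    TracingAllocator(const TracingAllocator<U>&) noexcept {}

    T* allocate(std::size_t n) {
        // A real allocator could log, count, or route this to a pool.
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) noexcept { ::operator delete(p); }
};

template <class T, class U>
bool operator==(const TracingAllocator<T>&, const TracingAllocator<U>&) noexcept { return true; }
template <class T, class U>
bool operator!=(const TracingAllocator<T>&, const TracingAllocator<U>&) noexcept { return false; }

int main() {
    std::vector<int, TracingAllocator<int>> v;   // the allocator is part of the type
    v.push_back(42);
}
```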
We cover the transition from one model to the other in the book. It was an interesting thing to write about.
Then, since C++17, we’ve had the PMR (Polymorphic Memory Resource) model—also from Bloomberg, if I remember right. It’s value-based instead of type-based. So a PMR vector has a member—a pointer to its allocator—instead of having it baked into the type. And when you're allocating or deallocating, there are only three functions to write for your custom allocator. They're virtual function calls.
Now, normally, allocation is a costly operation anyway. So the indirection of a virtual function call isn’t much of a cost—it’s already there in the background. PMR is very efficient for that. But if you're in a domain where nanoseconds matter, even that indirection might be too much. In that case, the traditional model—where there's no indirection—may be a better choice, even if you have to write more code.
So it’s a balancing act. You have to measure and pick the right tool for the job.
For example, let’s say you want to use a buffer on the stack with PMR—it’s easy. There’s something built in. You just say, “I’m using that; here’s my buffer,” and it’s done. Very easy for the user. If the cost of a virtual function call is acceptable for you, problem solved. If not, write it yourself using the traditional model—it’ll be faster, but it’s more work.
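That built-in piece is std::pmr::monotonic_buffer_resource; a minimal sketch:

```cpp
#include <array>
#include <cstddef>
#include <memory_resource>
#include <vector>

int main() {
    std::array<std::byte, 4096> storage;   // a buffer on the stack
    std::pmr::monotonic_buffer_resource pool{storage.data(), storage.size()};

    std::pmr::vector<int> v{&pool};        // the allocator is a value (a pointer member),
                                           // not part of the container's type
    for (int i = 0; i != 100; ++i)
        v.push_back(i);                    // allocations are served from the stack buffer
}                                          // the monotonic resource releases everything at once
```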
Again: measure, and then do what you need to do.
9. Are there any recent or upcoming C++ features that improve memory safety or diagnostics?
Patrice Roy: Yes. If we’re talking about diagnostics, we’ve had a number of recent additions—and more are coming.
For example, in the past, when you wrote new or delete, people thought new constructed the object. That’s not really true. new allocates memory. The constructor then creates the object in that memory. Likewise, when you use delete, the destructor destroys the object, and then the memory is freed.
There’s a concept called destroying delete, which Richard Smith and others have worked on. It lets a function handle both destroying the object and freeing the memory—knowing the full context. That can allow you to do some cool stuff. We touch on that in the book because it’s quite interesting.
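A minimal sketch of the mechanism (not from the book): with std::destroying_delete_t, one function sees the whole picture and does both jobs:

```cpp
#include <new>

struct Widget {
    // C++20 destroying delete: called instead of the usual "run the destructor,
    // then call operator delete" sequence, so it handles both steps itself.
    void operator delete(Widget* p, std::destroying_delete_t) {
        p->~Widget();           // destroy the object...
        ::operator delete(p);   // ...then free the memory
    }
};

int main() {
    Widget* w = new Widget;
    delete w;   // dispatches to the destroying-delete overload above
}
```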
For C++26, we’ve discussed something similar for new, specifically for something called “new with type.” It’s a way to inform new about the type that’s going to be constructed in the memory. That comes in handy when building hardened libraries—like the one Louis Dionne is working on—because knowing what type is being constructed helps improve safety.
Another recent change: we talked earlier about erroneous behavior. That’s now in the standard—so doing things like reading from an uninitialized variable is no longer just undefined behavior; it’s a runtime error. I think that’s a very good thing.
We also added trivial relocation in C++26, which is very cool. I finished writing the book in February 2025, and two weeks later we voted that feature into the language. There were two competing approaches, and one won—so I mentioned both in the book in a general sense.
Trivial relocation means that for certain objects, we can just copy the bits instead of running constructors and destructors, and still do it safely—if we manage the lifetime correctly. It’s a bit technical, but for people who care about performance, it’s a big win.
Sometimes, just recompiling your code with a newer standard will make it faster—because the standard library takes advantage of these improvements. The same thing happened with C++11 and move semantics. People recompiled and suddenly their code was 80% faster. Trivial relocation will do the same in certain cases. I love when that happens.
We’ve also had things like std::bit_cast in C++20, which solves some low-level problems when you want to copy memory into an object and declare it valid. That’s very useful for networking or file I/O. It’s better than what we used to have.
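For example (assuming a platform where float is 32 bits wide):

```cpp
#include <bit>
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t bits = 0x40490FDBu;         // the IEEE-754 bit pattern of pi, roughly
    float f = std::bit_cast<float>(bits);     // copies the bits; no pointer casts,
                                              // no aliasing tricks, a valid object results
    std::cout << f << '\n';

    std::uint32_t back = std::bit_cast<std::uint32_t>(f);   // and back again
    std::cout << std::hex << back << '\n';
}
```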
The language keeps evolving. I’m hoping we get better standard allocators that do common tasks for us. That would make it easier for people to use allocators without needing to write them.
We just added a new container—hive—in C++26, from Matthew Bentley in New Zealand. Game programmers will love it. There’s also a lot of benchmarking being done on it. It does what it’s supposed to do very well.
We’re also getting flat maps and flat sets—containers with different trade-offs from traditional map and set. They give you more predictable performance and control over layout, but they’re not ideal if you have lots of exceptions. Again, game developers will appreciate these.
So yes—many recent features and proposals are focused on improving safety, diagnostics, and performance. The language is evolving in a very exciting way.
10. What practical advice would you give experienced C++ developers on managing memory safely and efficiently?
Patrice Roy: It’s going to be the same advice for everyone.
First: you need to have a clear view of what you want to do and what your problem is. Think before you code. If you don’t know what you’re doing and you just start hacking away—sometimes it’s fun, but it’s not efficient. You might put a lot of effort into the wrong place.
Second: measure. Measure to see if the default solutions already do the job. I’ve seen too many people—myself included—spend hours, days, or even weeks optimizing something, only to realize later that it’s not called very often. So the effort was wasted. Use profilers. Use real measurements. Instrument your code. Figure out where your effort will actually pay off.
Third: test. It’s one thing to have a theoretical idea, but you need to test to see if it works. Make sure you didn’t break anything. Make sure the work brings results. I don’t test enough myself. Everyone should do that—especially regression tests.
For example, we added the [[likely]] and [[unlikely]] attributes in C++20. If you have an if branch where you know one path is taken more often, you can tell the compiler to optimize for that. Compilers are good at figuring it out themselves—but sometimes you know better, because you know the data.
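For instance, a hypothetical hot path where the data says the error case is rare:

```cpp
#include <cstdint>

std::int64_t accumulate(std::int64_t total, std::int64_t value) {
    if (value >= 0) [[likely]] {     // the common case, according to our data
        return total + value;
    } else [[unlikely]] {            // the rare error path
        return 0;
    }
}

int main() {
    return static_cast<int>(accumulate(40, 2) == 42 ? 0 : 1);
}
```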
But even then, you need to test your assumptions often. Compiler technology evolves. Sometimes, the naïve code will outperform your optimized code, just because the compiler learned new tricks. So keep the simple version around, and recheck it from time to time.
A friend of mine writes real-time audio code and sometimes writes assembly to optimize. But he always keeps the naïve C++ version for comparison. Regularly, he re-tests it. One day, the C++ version might beat the assembly version—and then you retire the old code.
So yes:
Think clearly.
Measure what matters.
Test and re-test often.
As we get older, we learn that we’re not as good as we thought at predicting what will be fast or slow.
To explore the topics covered in this conversation in greater depth—including practical examples, performance tips, and real-world patterns—check out Patrice Roy’s book, C++ Memory Management, available from Packt.