Deep Engineering #8: Gabriel Baptista and Francesco Abbruzzese on Architecting Resilience with DevOps
How culture, CI/CD, and cloud-native thinking enable speed without sacrificing stability
Welcome to the eighth issue of Deep Engineering.
The 2024 DORA Accelerate State of DevOps report, published last October, showed a sharp divide in engineering performance: just 19% of teams qualify as elite, while 25% lag far behind. The difference lies in how teams approach DevOps—not as tooling, but as a foundation for resilient architecture and continuous delivery.
To understand what this means for software architects today, we spoke with Gabriel Baptista and Francesco Abbruzzese, authors of Software Architecture with C# 12 and .NET 8. Baptista is an Azure Platform-as-a-Service (PaaS) specialist, university instructor, and advisor to early-stage startups. Abbruzzese is the creator of the MVC and Blazor Controls Toolkits and has worked across domains—from early AI decision support systems in finance to top 10 video game titles.
Together, they emphasize that DevOps is foundational to architectural resilience. “Applications can no longer afford downtime,” Baptista tells us. Abbruzzese adds: “DevOps is designed specifically to align technical outcomes with business goals.”
You can watch the full interview and read the transcript here—or scroll down for our take on how resilient delivery is being redefined at the intersection of DevOps, AI, and cloud.
Why DevOps Is Key to Architecting Resilience in the Age of AI and the Shift to Cloud-Native with Gabriel Baptista and Francesco Abbruzzese
The 2024 DORA report showed that only 19% of teams qualify as elite performers. These teams:
• Deploy multiple times per day
• Have a lead time of under 1 day
• Have a change failure rate of around 5%
• Have a recovery time of under 1 hour
In contrast, 25% of teams sit in the lowest performance tier, deploying as infrequently as once every six months, with failure rates nearing 40% and recovery times stretching to a month.
Teams that perform well on DORA’s Four Keys metrics also exhibit greater adaptability and lower burnout. These findings cut across industries and geographies.
The architecture landscape, meanwhile, has shifted. AI-assisted development is now widespread (75.9% of developers use AI tools for at least one task), but DORA found that higher AI usage correlates with a 1.5% drop in throughput and a 7.2% decline in release stability. Similarly, platform engineering is nearly ubiquitous (89% adoption) yet often introduces latency and fragility if developer autonomy is not preserved.
But as Baptista states:
“Applications can no longer afford downtime, especially enterprise applications that need to run 24/7.
To achieve that, you need to write good code—code that provides visibility into what’s happening, that integrates with retries, that enables better performance. A software architect has to consider these things from the very beginning—right when they start analyzing the application’s requirements.”
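As an illustration of “code that integrates with retries,” here is a minimal sketch using Polly, a common .NET resilience library (our choice of example, not one Baptista prescribes); the health-check URL is a placeholder:

```csharp
// Minimal sketch: a transient HTTP failure is retried with exponential
// backoff instead of surfacing as downtime. Polly is assumed as the
// resilience library; the endpoint is hypothetical.
using System;
using System.Net.Http;
using Polly;

using var client = new HttpClient();

var retry = Policy
    .Handle<HttpRequestException>() // retry only transient HTTP faults
    .WaitAndRetryAsync(
        retryCount: 3,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)),
        onRetry: (ex, delay, attempt, _) =>
            // Surface each retry so operators have visibility into transient failures.
            Console.WriteLine($"Retry {attempt} in {delay}: {ex.Message}"));

var response = await retry.ExecuteAsync(
    () => client.GetAsync("https://example.com/health")); // hypothetical endpoint
Console.WriteLine(response.StatusCode);
```

The logging callback matters as much as the retry itself: it gives the “visibility into what’s happening” that Baptista calls for, rather than hiding failures behind silent recovery.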
But how can architects ensure a level of stability and resilience that meets business needs? According to Abbruzzese:
“The best tool for this is DevOps. DevOps is designed specifically to align technical outcomes with business goals.”
Microsoft defines DevOps as:
“The integration of development, quality assurance, and IT operations into a unified culture and set of processes for delivering software.”
In their book, Baptista and Abbruzzese concur:
“Although many people define DevOps as a process, the more you work with it, the better you understand it as a philosophy.”
DevOps as Philosophy and Culture: Not Just a Tool
Considering DevOps as a philosophy, the authors say, helps architects focus on service design thinking, which means:
“Keeping in mind that the software you design is a service offered to an organization or part of an organization. …The highest priority (for you as an architect) is the value your software gives to the target organization. … you are not just offering working code and an agreement to fix bugs but also a solution for all the needs that your software was conceived for. In other words, your job includes everything …to satisfy those needs, such as monitoring users’ satisfaction and quickly adapting the software when the user needs change, due to issues or new requirements.”
To explain the software architect’s role in this context, they add:
“DevOps is a term that is derived from the combination of the words Development and Operations, and the DevOps process simply unifies actions in these two areas. However, when you start to study a little bit more about it, you will realize that just connecting these two areas is not enough to achieve the true goals of this philosophy.
We can also say that DevOps is the process that answers the current needs of people regarding software delivery.
Donovan Brown has a spectacular definition of what DevOps is: DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.
A way to deliver value continuously to our end users, using processes, people, and products: this is the best description of the DevOps philosophy. We need to develop and deliver customer-oriented software. …your task as a software architect is to present the technology that will facilitate the process of delivery.”
DORA’s recommendation to “Be relentlessly user-centric” also supports this framing of DevOps:
“teams that have a deep desire to understand and align to their users’ needs and the mechanisms to collect, track, and respond to user feedback have the highest levels of organizational performance. In fact, organizations can be successful even without high levels of software velocity and stability, as long as they are user focused.
… Teams that focus on the user make better products. Not only do products improve, but employees are more satisfied with their jobs and less likely to experience burnout.
Fast, stable software delivery gives organizations more frequent opportunities to experiment and learn. Ideally, these experiments and iterations are based on user feedback. Fast and stable software delivery allows you to experiment, better understand user needs, and quickly respond if those needs are not being met.”
Seth Eliot, an experienced Amazon technologist and Principal Resilience Architect at Arpio, enriches the framing of DevOps as a culture, arguing that DevOps must not be seen as a toolchain or process stack but as a shift in mindset rooted in ownership, autonomy, and tight integration between historically siloed roles.
The canonical DevOps problem, he says, is:
“The wall that traditionally has existed between development and operations,” a wall that “prevented these roles... from having shared goals.”
He urges architects to remember that:
“DevOps is all about culture and the tools are based on that. The tools come after that.”
If you are wondering how such a culture can be fostered, Gene Kim, author of The Phoenix Project and The DevOps Handbook, offers foundational principles in his “Three Ways” framework:
The First Way – Flow/Systems Thinking: This principle focuses on optimizing the entire system rather than individual silos. It stresses the importance of ensuring that value flows smoothly from development to IT operations, with the goal of preventing defects from passing downstream. The emphasis is on improving flow across all business value streams and gaining a deep understanding of the system to avoid local optimizations that could cause global degradation.
The Second Way – Amplify Feedback Loops: This principle focuses on shortening and amplifying feedback loops throughout the process. Continuous, real-time feedback is essential for making quick corrections, improving customer satisfaction, and embedding knowledge where it is most needed. The principle encourages responding to feedback from both customers and internal stakeholders to enhance the development and operations process.
The Third Way – Culture of Continual Experimentation and Learning: This principle advocates creating a culture that encourages experimentation, risk-taking, and learning from failure. It emphasizes mastering skills through repetition and practice, as well as deliberately introducing faults into the system to enhance resilience. Continuous improvement is fostered by allocating time for improvement work and creating rituals that reward taking risks.
These principles also emphasize a cultural shift toward continuous improvement which naturally supports the adoption of engineering practices that enable resilience and stability in software delivery.
How DevOps Practices Enable Resilience by Design
Returning to the 2024 DORA report: elite teams achieve both speed and stability through five disciplined engineering practices:
Small batch development
Automated testing
Trunk-based workflows
Continuous integration (CI)
Real-time monitoring
Baptista and Abbruzzese demonstrate these using a case study (the WWTravelClub platform), showing how to implement multi-stage pipelines, enforce quality gates, and build visibility into the delivery process. Here is a breakdown:
Small Batch Changes and Change Isolation: DORA states that “small batch sizes and robust testing mechanisms” are essential for high performance. Baptista and Abbruzzese echo this by warning about the risks of incomplete features and unstable merges. They advise using feature flags and pull requests to ensure that “only entire features will appear to your end users.” Pull requests enable peer review, while flags control feature exposure at runtime; both are critical for keeping systems stable in a CD environment.
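To make the flag half of this concrete, here is a minimal sketch using Microsoft.FeatureManagement, one common .NET flag provider (our choice of library; the authors recommend feature flags in general, not this package). The /search endpoint and the “NewSearch” flag name are hypothetical:

```csharp
// Minimal sketch of runtime feature exposure in an ASP.NET Core minimal API.
// Flags are read from the FeatureManagement section of configuration.
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.FeatureManagement;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddFeatureManagement();

var app = builder.Build();

app.MapGet("/search", async (IFeatureManager features) =>
    // Unfinished code can be merged to trunk but stays dark until the
    // flag is switched on, so only entire features reach end users.
    (await features.IsEnabledAsync("NewSearch")) // hypothetical flag name
        ? Results.Ok("new search experience")
        : Results.Ok("classic search experience"));

app.Run();
```

Because the flag is evaluated at runtime, deployment and release are decoupled: the code ships in small batches, while exposure is a separate, reversible decision.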
Trunk-Based Workflows and Pull Request Gates: Trunk-based development refers to a workflow where all developers commit to a shared mainline branch. But gatekeeping is essential to ensure the safety of this workflow. Baptista and Abbruzzese show how to integrate pull request reviews and automated build validations to ensure code quality. They recommend:
Static analysis
Pre-merge tests
Consistent peer review
They also note that many teams believe they’ve implemented CI/CD simply by enabling a build pipeline, “but this is far from saying you have CI/CD available in your solution.”
Baptista adds that adopting DevSecOps practices can further strengthen the review process.
“Instead of just implementing DevOps, why not implement DevSecOps? With DevSecOps, you can include static analysis tools that identify security issues early. These tools help architects and senior developers review the code produced by the team and ensure security practices are being followed.”
Multi-Stage Pipelines and Controlled Deployments: DORA’s report cautions that indiscriminate deployment automation, especially with large change sets, often leads to stability regressions. Baptista and Abbruzzese demonstrate a multi-stage pipeline structure to reduce deployment risk:
Development/Testing
Staging
Production
They note that “you need to create a process and a pipeline that guarantees that only good and approved software (reaches) the production stage.”
Automated Testing as Early Fault Detection: DORA finds that elite teams use automation not for speed alone, but for stability. Baptista and Abbruzzese emphasize the importance of unit and functional tests integrated into CI pipelines. Failing tests prevent bad commits from reaching staging, and the presence of automated feedback loops improves developer confidence.
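As a sketch of what such a gate looks like in practice, consider a small xUnit test that runs in the CI build; BookingPriceCalculator and its discount rule are hypothetical stand-ins for WWTravelClub domain logic:

```csharp
// Minimal sketch of a CI-gating unit test. If the assertion fails, the
// pipeline fails and the commit never reaches the staging stage.
using System;
using Xunit;

public class BookingPriceCalculator
{
    // Hypothetical business rule: 2% off per night, capped at 20%.
    public decimal GetDiscount(int nights) => Math.Min(nights * 0.02m, 0.20m);
}

public class BookingPriceCalculatorTests
{
    [Theory]
    [InlineData(0)]
    [InlineData(5)]
    [InlineData(100)]
    public void Discount_StaysWithinAgreedBounds(int nights)
    {
        var discount = new BookingPriceCalculator().GetDiscount(nights);
        Assert.InRange(discount, 0m, 0.20m);
    }
}
```

The point is not the arithmetic but the placement: the test encodes a business invariant and runs on every commit, so a violation is caught minutes after it is introduced rather than weeks later in production.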
Continuous Feedback and Observability: DORA claims that resilience is enhanced not only by automation, but by visibility and rapid iteration. Baptista and Abbruzzese recommend integrating Application Insights and Microsoft’s Test and Feedback browser extension to close the feedback loop, capture live production behavior, and turn user feedback into structured work items.
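Wiring in that telemetry is typically only a few lines. Here is a minimal ASP.NET Core sketch using the Microsoft.ApplicationInsights.AspNetCore package, under the assumption that an Application Insights resource exists and its connection string is supplied via configuration:

```csharp
// Minimal sketch: once registered, requests, dependencies, and exceptions
// are traced automatically and appear as live telemetry in Azure.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
// Reads the connection string from configuration, e.g. the
// APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.MapGet("/", () => "WWTravelClub"); // every hit now shows up as telemetry
app.Run();
```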
When these practices are implemented with discipline, architecture becomes both adaptable and durable. As Abbruzzese puts it, “You don’t have to rewrite large portions of the application.” You just need to be able to change the right parts, safely, at speed.
Balancing AI-Driven Velocity and Resilience in DevOps
The 2024 DORA report confirms that roughly 75% of teams now use AI tools daily, with over one-third of respondents reporting “moderate” to “extreme” productivity gains from AI-assisted coding. Higher AI adoption correlates with modest improvements (e.g., ~7.5% better documentation and 3.4% better code quality) and faster code reviews. However, these gains come with trade-offs: AI use was accompanied by an estimated 1.5% reduction in delivery throughput and a 7.2% reduction in change stability.
As the report cautions: “Improving the development process does not automatically improve software delivery.”
To balance the trade-offs between AI’s benefits and challenges, DORA makes three recommendations:
Focus AI on empowering developers (e.g. automating boilerplate and documentation) rather than blindly pushing code
Establish clear guidelines and feedback loops for AI use, fostering open discussion about when AI help is appropriate
Allocate dedicated exploration time so teams can build trust in AI tools (rather than hastily deploying suggestions)
Speaking about how AI will impact architects, Baptista says:
“As architects, we’ll be impacted by AI—positively or negatively—depending on how we work with it. Let me give two examples.
Today, it's possible to upload an architecture diagram into an AI tool like ChatGPT and discuss with it whether you’re creating a good or bad design. That’s already possible. In some cases, I’ve used AI to give me feedback or suggest changes to my designs. It can do that.
But as a software architect, you still need to be a good analyst. You need to evaluate whether the output from the AI is actually correct. Especially in enterprise systems, that’s not always easy to do. So, yes, AI will change the world, but we—as individuals—need to use our intelligence to critically analyze whether the AI output is good or not, in any context.”
Regarding how software architects can prepare for this impact, he says:
“We, as architects, need to understand that a good AI solution first requires a good software architecture—because AI only works with good data. Without good data, you cannot have a good AI.
As software architects, we need to understand that we have to build architectures that will support good AI—because if you don’t provide quality data, you won’t get quality AI.”
Abbruzzese adds:
“I think AI is a valuable tool, but at least for now, it can’t completely replace the experience of a professional.
It helps save time. It can suggest choices—but sometimes those suggestions are wrong. Other times, those suggestions can be useful as a starting point for further investigation. AI can write some code, some diagrams, some designs that you might use as a base. That saves time.
Sometimes it suggests something you hadn’t thought of, or reminds you of a possibility you forgot to consider. That doesn’t mean it’s the best solution—but it’s a helpful suggestion. It’s a tool to avoid missing things and to save time.
At the moment, AI can’t replace the experience of a real professional—whether it’s an architect, a programmer, or someone else. For instance, I’ve never seen AI come up with a completely new algorithm. If you have to invent a new one, it’s not capable of doing that.
…And I think this won’t change much over time—at least not until we reach actual artificial general intelligence, something human-like.”
But AI is not the only shift architects need to prepare for. Baptista states:
“In the near future, I believe most applications will be cloud-native… This is something that everyone working in software development today needs to think about.”
The Shift to Cloud-Native
Making the case for the shift to cloud-native architecture, Baptista says:
“We’re discovering new ways to build solutions every single day. We can’t always keep up that same pace on the architecture side, which is why we need to think carefully about how to design a software architecture that can be both adaptable and resilient.”
DORA’s report also identifies a link between team success and leveraging flexible architecture:
“We see that successful teams are more likely to take advantage of flexible infrastructure than less successful teams.”
Abbruzzese adds:
“In my opinion, it’s quite simple—cloud computing basically means distributed computing, with added adaptability. It allows you to change your hardware dynamically.
Cloud computing is really about reliability and adaptability. But you have to use the right architecture—that means applying the theory behind modern microservices and cloud-native systems.
I’m talking about reliable communication, orchestrators like Kubernetes, and automatic scaling—these are all provided by cloud platforms and also by Kubernetes itself. You also have tools for collecting metrics and adjusting the software’s behavior automatically based on those metrics. This is the essence of the theory we’re dealing with.
For example, in microservices architectures, reliable communication is essential. These applications are often structured like assembly lines—processing and transferring data step by step. That means it’s unacceptable to lose data. Communication must at least eventually succeed. It can be delayed, but it has to succeed.”
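A minimal sketch of that “delayed, but it has to succeed” requirement is at-least-once delivery with backoff; IMessageEndpoint and Message below are hypothetical abstractions standing in for a real queue client:

```csharp
// Minimal sketch of at-least-once delivery: keep retrying with backoff
// until the broker acknowledges the message, so data is never silently
// dropped. The endpoint abstraction is hypothetical.
using System;
using System.Threading;
using System.Threading.Tasks;

public record Message(Guid Id, string Payload);

public interface IMessageEndpoint
{
    Task SendAsync(Message message, CancellationToken ct); // throws on failure
}

public static class ReliableSender
{
    public static async Task SendAtLeastOnceAsync(
        IMessageEndpoint endpoint, Message message, CancellationToken ct)
    {
        var delay = TimeSpan.FromSeconds(1);
        while (true)
        {
            try
            {
                await endpoint.SendAsync(message, ct);
                return; // acknowledged: delivery eventually succeeded
            }
            catch (Exception) when (!ct.IsCancellationRequested)
            {
                // Delivery may be delayed but must not be lost: back off, retry.
                await Task.Delay(delay, ct);
                delay = TimeSpan.FromSeconds(Math.Min(delay.TotalSeconds * 2, 60));
            }
        }
    }
}
```

Because a send may be repeated after a lost acknowledgment, the same message can arrive more than once; consumers in such a scheme are assumed to deduplicate, for example by Message.Id.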
However, it is important to first ascertain whether you and your team are willing to invest fully in the shift; otherwise, the move can decrease organizational performance. DORA warns:
“Cloud enables infrastructure flexibility. Flexible infrastructure can increase organizational performance. However, moving to the cloud without adopting the flexibility that cloud has to offer may be more harmful than remaining in the data center. Transforming approaches, processes, and technologies is required for a successful migration.”
This is because, very much in line with adopting DevOps as a culture, accomplishing the shift to cloud-native does not simply require:
"Tools or technologies, but often an entire new paradigm in designing, building, deploying, and running applications.”
DORA recommends:
“Making large-scale changes (because they are) easier when starting with a small number of services, (and)… an iterative approach that helps teams and organizations to learn and improve as they move forward.”
The architect’s role in navigating these shifts is more crucial than ever. Adopting DevOps as a culture, along with DevOps engineering best practices, serves businesses and architects alike: it equips architects to better serve business needs and to remain relevant.
If Baptista and Abbruzzese’s perspective on DevOps as a foundation for resilient architecture resonated with you, their book Software Architecture with C# 12 and .NET 8 goes further—connecting high-level design principles with hands-on implementation across the .NET stack.
The following excerpt—Chapter 8: Understanding DevOps Principles and CI/CD—breaks down the architectural mindset behind effective delivery pipelines. It covers core DevOps concepts, the role of CI/CD in aligning code with business needs, and how to embed quality, automation, and visibility throughout the deployment process.
Expert Insight: Understanding DevOps Principles and CI/CD by Gabriel Baptista and Francesco Abbruzzese
The complete “Chapter 8: Understanding DevOps Principles and CI/CD” from the book Software Architecture with C# 12 and .NET 8 by Gabriel Baptista and Francesco Abbruzzese (Packt, February 2024).
Although many people define DevOps as a process, the more you work with it, the better you understand it as a philosophy. This chapter will cover the main concepts, principles, and tools you need to develop and deliver your software with DevOps.
...
The following topics will be covered in this chapter:
Understanding DevOps principles: CI, CD, and continuous feedback
Understanding how to implement DevOps using Azure DevOps and GitHub
Understanding the risks and challenges when using CI/CD
Now in its fourth edition, Software Architecture with C# 12 and .NET 8 combines design fundamentals with hands-on .NET practices, covering everything from EF Core and DevOps pipelines to Blazor, OpenTelemetry, and a complete case study centered on a real-world travel agency system.
Use code DEEPENGINEER for an exclusive subscriber discount—20% off print and 30% off eBook, valid until 17th July, 2025.
Tool of the Week
OpenTofu — Terraform-Compatible Cloud Infrastructure-as-Code
OpenTofu is an open-source, community-driven IaC tool created as a fork of Terraform. It enables teams to define, manage, and deploy cloud infrastructure declaratively, ensuring reproducibility and resilience.
Highlights:
Multi-Cloud IaC: Works across AWS, Azure, GCP, and Kubernetes for consistent, reproducible environments.
Modular & Scalable: Reusable modules simplify complex architectures such as multi-region deployments.
Collaborative: Infrastructure changes are versioned and peer-reviewed, keeping them aligned with business goals.
Fault-Tolerant: Built for robust error handling, so infrastructure changes apply predictably.
Active Development: Recently updated (v1.10.0 in June 2025) and hosted under the Linux Foundation for long-term stability.
Tech Briefs
DevOps in the Cloud: Case Studies of Amazon.com teams and their resilient architectures - Seth Eliot: Introduces DevOps in the cloud through case studies from Eliot’s experience at Amazon, where services are owned end to end by “two pizza teams,” from design through deployment. Emphasizes culture over tools and shows how this approach enables resilience through continuous experimentation, chaos engineering, and disaster recovery.
Why Is My Docker Image So Big? A Deep Dive with ‘dive’ to Find the Bloat by Chirag Agrawal, Senior Software Engineer | Alexa+ AI Agent Engineering: Explores how to diagnose and reduce Docker image bloat using tools like docker history and dive to pinpoint inefficiencies in image layers, with a special emphasis on AI Docker images that often become oversized due to large AI libraries and base OS components.
Azure AI Foundry Agent Service Gains Model Context Protocol (MCP) Support in Preview: Microsoft has announced the preview release of MCP support in its Azure AI Foundry Agent Service, enhancing interoperability for AI agents by simplifying integration with internal services and external APIs, and enabling seamless access to tools and resources from any compliant MCP server.
The DevOps engineer’s handbook: Covers key DevOps practices, including automation, continuous integration, continuous delivery, and the importance of evolving team culture to improve software delivery and collaboration across the development lifecycle.
That’s all for today. Thank you for reading this issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey we run monthly—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor-in-Chief, Deep Engineering