ProgrammingPro #27: Python 3.12, Terrible Programming Tips, and Monolith vs Microservices
Bite-sized actionable content, practical tutorials, and resources for programmers
“Professional developers strive to be the good kind of lazy. This laziness is based on putting extra care into the code so that it's not so hard to write up front, and it's easier to work with later. Writing clean code doesn't take any longer. And once you grasp the principles you can actually code more quickly because terse, expressive code that does one thing is easier to manage.”
– Cory House (2013), 7 reasons clean code matters
Welcome to this week’s issue of our programmer-focused newsletter.
Python 3.12 is here! So, today, we are obviously talking about what is shiny and new but also about some things you need to watch out for with the new update.
Have you been dreaming about AI that truly understands you (i.e., your existing codebase)? Well, it’s finally here. We are also reflecting on the pressure on developers to deploy quickly, which is leading to untested code being published. That is especially bad news if untested AI-generated code is being pushed out as well, because apparently ChatGPT is only as good (or bad) a coder as a college freshman.
In our tutorials and learning resources this week, we are focusing on best practices for writing secure, low-latency, and scalable code, along with software architecture and more. Here are my top 5 picks:
🔨Storage challenges in the evolution of database architecture
💊Chronicle Services: Low Latency Java Microservices Without Pain
We also have an exclusive excerpt for you on “Which architecture to choose” from the book Python Architecture Patterns. So, scroll down and dive right in!
Stay awesome!
Divya Anne Selvaraj
Editor-in-Chief
🗞️News, 💡Opinions, and 🔎Analysis
🐍Python 3.12 — what didn't make the headlines: While some exciting updates, like better error messages and formalizing f-string parsing, have gained attention, there are some underwhelming aspects in the new Python update. Read to learn about deprecations and removals, like telnetlib and distutils, which signal the need for careful migration planning.
❌Are developers pressured to push code to production without testing, circumvent security protocols, and rely on ChatGPT?: In a recent Sauce Labs survey of 500 developers, two-thirds admitted to pushing untested code to production, while 60% said they relied on unverified code generated by ChatGPT. Read for further findings and to understand why organizational reforms are critical to balance speed and security in software development.
🛡️The path to security champions: How Workday utilized agile learning to upskill developers: Workday’s adoption of agile learning with Secure Code Warrior enabled their developers to tackle security issues earlier in the development cycle, reducing security issues from 4,662 to zero in 18 months. Read for a detailed case study on what led to these results.
🎓ChattyG takes a college freshman C/C++ programming exam: While taking the test, ChatGPT outperformed most students in solution quality but had occasional difficulties comprehending certain logical and mathematical aspects of problems. The final verdict is that it did as well as a freshman might, but not much better. Read to learn more about what ChatGPT struggled with the most.
🍏Apple releases macOS 14, introduces global “meet with experts” developer sessions: While Apple’s focus is on mobile apps and the upcoming Vision Pro VR headset, macOS is gaining ground, particularly in business, due to its growing market share and Apple's more power-efficient processors. Read to learn more about the "Meet with Apple Experts" program for developers.
⚒️Sisense releases new toolkit to help developers embed analytics capabilities into their apps: Sisense’s Compose SDK for Fusion will enable easy integration of analytics into apps, with components for querying and charting. Read to learn more about this developer-friendly SDK, how it works with existing tools, and how it aligns with modern development practices.
💎The TLDR on Ruby's new TLDR testing framework: OK, this is quite a long article, but it is very informative. TLDR is a new Ruby testing framework that enforces a 1.8-second time limit on tests, encouraging frequent testing during development. Read to learn more about this framework and understand how frequent testing can prevent slow tests from hindering progress.
🤝Bito Launches AI that Understands Developers’ Codebase: This first-of-its-kind solution can aid with coding, debugging, and more. It can boost productivity and enable precise, meaning-based searches. Read to learn more about how you can use Bito to access a secure, highly personalized overview of your codebase and enhance team collaboration.
🎓 Tutorials and Guides🤓
Python 3.12: Cool New Features for You to Try: This tutorial takes you through all the major enhancements, such as error messages with helpful suggestions, more expressive f-strings powered by Python's PEG parser, and more, with downloadable code examples. Read if you are wondering whether to upgrade and how.
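To give you a quick taste of the f-string change (PEP 701), here is a minimal sketch; the variable names are made up purely for illustration:

```python
# Python 3.12 (PEP 701): the expression inside an f-string may now reuse the
# same quote character as the enclosing string.
names = ["Ada", "Grace", "Barbara"]

# Valid in 3.12, but a SyntaxError in 3.11 and earlier because of the inner double quotes.
print(f"Speakers: {", ".join(names)}")

# Nesting f-strings with the same quotes now also works.
width = 20
print(f"{f"{names[0]:^{width}}"}")
```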
An Essential Guide to Pointers in C Programming: Pointers are integral to C and also what sets the language apart from Python. Read to discover the importance of pointers in dynamic memory allocation, and how function pointers enable dynamic function execution and passing by reference.
C++ Weekly - Ep 396 - emplace vs emplace_hint! What's the difference?: This short video demonstrates the difference between "emplace" and "emplace_hint" in the context of sequence containers, associative containers, and more. Watch to understand the performance implications of using hints with emplace.
How to Use Monadic Operations for `std::optional` in C++23: Monadic operations streamline code by eliminating nested checks. Read if you want to say goodbye to verbose if/else blocks and embrace concise, expressive code.
Asynchronous programming with async and await in C#: This almost delicious 🍳🥓 article draws an analogy between using async code and cooking a full breakfast of bacon, eggs, toast, coffee, and juice asynchronously. Read if you are looking for tasty ways to make applications responsive for user interfaces and server programs.
Essential Python Code Optimization Tips and Tricks: This article will help you learn how to intern strings, use peephole optimization, profile your code, leverage generators, avoid globals, utilize C libraries, and more. Read to optimize your Python code for better performance and productivity.
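As a quick, illustrative sketch of two of those tips (the function names and sizes below are arbitrary), here is a generator expression keeping peak memory flat compared with building a full list, plus sys.intern() for deduplicating repeated strings:

```python
import sys
import timeit

def sum_squares_list(n):
    # Builds the entire list in memory before summing it.
    return sum([x * x for x in range(n)])

def sum_squares_gen(n):
    # A generator expression yields values lazily, so peak memory stays flat.
    return sum(x * x for x in range(n))

if __name__ == "__main__":
    n = 1_000_000
    print("list:", timeit.timeit(lambda: sum_squares_list(n), number=5))
    print("gen: ", timeit.timeit(lambda: sum_squares_gen(n), number=5))

    # sys.intern() guarantees a single shared copy of a string, which can speed
    # up comparisons and dictionary lookups on frequently repeated keys.
    key = sys.intern("user_id")
    print(key is sys.intern("user_id"))  # True
```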
Chronicle Services: Low Latency Java Microservices Without Pain: Chronicle Services by Chronicle Software minimizes complexity, uses Chronicle Queue for efficient event transmission, and handles events in a single thread. Read to learn how you can achieve this straightforward low-latency performance, while bypassing traditional microservice framework complexity.
Writing Object Shape friendly code in Ruby: Learn how to make your code Object Shape friendly and thereby enhance the performance of your Ruby applications. Read to know more about how Ruby 3.2's Object Shapes optimize variable storage, caching, and lookup.
How to Integrate AI into Your Serverless App With Amazon Bedrock: This article teaches you how to explore your AI API options, while building a Holiday Planning API using the Serverless Framework and the National Park Service API. Read to learn how to enhance your AWS apps with AI capabilities.
Bridging the Gap: Understanding Adapter and Composite Patterns in Rust: Dive into real-world examples from Hyperswitch, to gain practical insights for resilient, scalable software using two essential design patterns. Read to bridge the gap between theory and practice in Rust.
🔑 Secret Knowledge: Learning Resources🔬
60 terrible tips for a C++ developer: This free-to-access mini-book by Andrey Karpov dissects anti-tips using actual code examples, providing a mix of education and entertainment. Read to understand why it is not a good idea to recommend "Only C++," "Tab character in string literals," "Disable warnings," and more.
Arrow function expressions in JavaScript: This article explains the limitations and benefits of arrow function expressions and highlights their concise nature. Read to understand when and how to use them correctly.
10 JavaScript concepts every Node developer must master: This article is a guide for writing efficient and scalable Node.js code and for leveraging JavaScript effectively on the server side. Read to also understand the importance of error handling and build robust, high-performance Node.js applications.
Dangers of Cross-Site Scripting in React: This article emphasizes the importance of sanitizing user input, using Content Security Policy (CSP), and taking preventive measures to protect against these vulnerabilities in React applications. Read to learn about the two types of XSS attacks, Reflected and Stored, and best practices to deal with them.
Protecting Your Software Supply Chain: Understanding Typosquatting and Dependency Confusion Attacks: Typosquatting exploits naming errors, while dependency confusion leverages public and private repositories. Read to learn how to prevent these attacks, verify package sources, and enhance supply chain security.
Storage challenges in the evolution of database architecture: This article discusses how the Postman team tackled a storage challenge in their database architecture using a three-step strategy. Read to learn more about how they reclaimed 60TB of space and reduced ingestion rates, ensuring seamless operation for over 25 million users.
The Saga of the Closure Compiler, and Why TypeScript Won: Did you know Closure Compiler aimed to produce the tiniest JavaScript possible? It minified code dramatically, but its approach had limitations for the modern JavaScript ecosystem. Read to learn why and how TypeScript's "JavaScript + Types" concept won by focusing on developer tools, community engagement, and adaptability to JavaScript's evolution.
Azure Databricks for R developers: This guide will teach you how to import your code, run it on clusters, and explore topics like big data processing, visualizations, job automation, and machine learning. Read to discover two R interfaces to Apache Spark and learn how to choose the right cluster type for your workload, whether it's single-node for smaller tasks or distributed clusters for massive data.
🧠 Expert insights from the Packt Community📚
An exclusive excerpt from Chapter 11, Microservices vs Monolith in the book Python Architecture Patterns by Jaime Buelta.
Which architecture to choose
There's a tendency to think that a more evolved architecture, like the microservices architecture, is better, but that's an oversimplification. Each one (architecture type) has its own set of strengths and weaknesses.
… almost every small application will start as a monolithic application. This is because it is the most natural way to start a system. Everything is at hand, the number of modules is reduced, and it's an easy starting point.
Microservices, on the other hand, require the creation of a plan to divide the functionality carefully into different modules. This task may be complicated, as some designs may prove inadequate later on.
Pro Tip:
Keep in mind that no design can be totally future-proof. Any perfectly valid architectural decision may prove incorrect a year or two later when changes in the system require adjustments. While it is a good question to think about the future, trying to cover every possibility is futile. The proper balance between designing for the current feature and designing for the future vision of the system is a constant challenge in software architecture.
This requires quite a lot of work to be done beforehand, which makes the microservices architecture an upfront investment.
That said, as monoliths grow, they can start presenting problems through the sheer size of their code. The main characteristic of a monolithic architecture is that all the code lives together, and over time it can develop so many interconnections that developers become confused. Complexity can be kept in check by good practices and constant vigilance over the internal structure, but that requires a lot of ongoing work from the existing developers to enforce. When dealing with a big and complex system, it may be easier to establish clear and strict boundaries simply by dividing different areas into different processes.
The modules can also require different specialist knowledge, making it natural to assign different team members to different areas. To create a proper sense of ownership of the modules, each team can have its own opinions on code standards, the most adequate programming language for the job, ways of performing tasks, and so on. Take, for example, a photo system that has an interface for uploading photos and an AI system for categorizing them. While the first module will work as a web service, the skills required for training and handling an AI model to categorize the data are very different, making the module separation natural and productive. Keeping both in the same code base may generate problems as the two teams try to work on it at the same time.
Another problem of monolithic applications is the inefficient utilization of resources, as each deployment of the monolith carries a full copy of every module. For example, the RAM required will be determined by the worst-case scenario across all modules. When there are multiple copies of the monolith, that will waste a lot of RAM preparing for worst-case scenarios that will likely be rare. Another example: if any module requires a connection to the database, a new connection will be created whether it's used or not.
In comparison, with microservices each service can be sized according to its own worst-case use case, and the number of replicas for each can be controlled independently. Viewed as a whole, this can lead to big resource savings in large deployments.
Figure 9.3: Notice that using different microservices allows us to reduce RAM usage by dividing requests into different microservices, while in a monolithic application, the worst-case scenario drives RAM utilization
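To make the sizing argument concrete, here is a back-of-the-envelope calculation in Python; the module names, RAM figures, and replica counts are hypothetical, chosen only to illustrate the reasoning above:

```python
# Hypothetical worst-case RAM per module, in GB.
worst_case_gb = {"web": 2, "reports": 8, "uploads": 4}

# Monolith: every replica must be sized for the combined worst case.
monolith_replicas = 4
monolith_total = monolith_replicas * sum(worst_case_gb.values())  # 4 * 14 = 56 GB

# Microservices: each service is sized and replicated independently;
# here only the web front end needs four replicas.
micro_replicas = {"web": 4, "reports": 1, "uploads": 2}
micro_total = sum(micro_replicas[m] * worst_case_gb[m] for m in worst_case_gb)  # 8 + 8 + 8 = 24 GB

print(monolith_total, micro_total)  # 56 24
```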
Deployments also work very differently between monoliths and microservices. As the monolithic application needs to be deployed in a single go, every deployment is, effectively, a task for the whole team. If the team is small, creating a new deployment and ensuring that the new features are properly coordinated between modules without interfering with each other is not very complicated. However, as teams grow bigger, this can present a serious challenge if the code is not strictly structured. In particular, a bug in a small part of the system may bring the whole system down completely, as any critical error in the monolith affects the whole of the code.
Monolith deployments require coordination between modules, which normally means teams working closely together until the feature is ready to be released, plus some sort of supervision until the deployment is done. This is especially noticeable when several teams, with competing goals, are working on the same code base, as it blurs the ownership of and responsibility for deployments.
By comparison, different microservices are deployed independently. The API should be stable and backward compatible with older releases; that's one of the strong requirements that needs to be enforced. However, the boundaries are very clear, and in the event of a critical bug, the worst that can happen is that the particular microservice goes down while other, unrelated microservices continue unaffected.
This makes the system work in a "degraded state," as compared to the "all-or-none" approach of the monolith. It limits the scope of a catastrophic failure.
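As a rough sketch of what a "degraded state" can look like from the calling side, consider a page that asks a hypothetical recommendations microservice for extra content; the URL, endpoint, and fallback are assumptions made for illustration:

```python
import requests

# Hypothetical internal service; not a real endpoint.
RECOMMENDER_URL = "http://recommender.internal/api/v1/recommendations"

def get_recommendations(user_id: str) -> list:
    """Ask the recommendations microservice, but degrade gracefully if it is down."""
    try:
        resp = requests.get(RECOMMENDER_URL, params={"user": user_id}, timeout=2)
        resp.raise_for_status()
        return resp.json()["items"]
    except requests.RequestException:
        # A crash or timeout in this one microservice should not take the whole
        # page down with it: fall back to an empty list and render the rest normally.
        return []
```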
Note:
Of course, certain microservices may be more critical than others, making them worthy of extra attention and care regarding their stability. But, in that case, they can be defined as critical in advance, with stricter stability rules enforced.
Of course, in both cases, solid testing techniques can be used to increase the quality of the software released.
In comparison with the monolith, microservices can be deployed independently, without coordinating closely with other services. This brings independence to the teams working on them and allows for faster, continuous deployments that require less central coordination.
Pro Tip:
The keyword here is less coordination. Coordination is still required, but the objective of a microservices architecture is precisely that each microservice can be independently deployed and owned by a team, so the majority of changes can be dictated exclusively by the owner without needing to warn other teams first.
Because modules in a monolithic application communicate with each other through internal operations, they can typically perform these operations much faster than through external APIs. This allows a very high level of interaction between modules without paying a significant performance price.
There is an overhead related to the usage of external APIs and communication through a network that can produce a noticeable delay, especially if there are too many internal requests made to different microservices. Careful consideration is required to try to avoid repeating external calls and to limit the number of services that can be contacted in a single task.
Pro Tip
In some cases, the usage of tools that abstract the contact with other microservices may produce extra calls that are not strictly necessary. For example, a task to process a document needs to obtain some user information, which requires calling a different microservice. The name is required at the start of the document, and the email at the end of it. A naïve implementation may produce two requests to obtain the information instead of requesting it all in a single go.
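A small sketch of that pitfall, using a hypothetical users microservice (the URL, fields, and function names are invented for illustration); the fix is simply to fetch the record once and reuse it:

```python
import requests

USERS_URL = "http://users.internal/api/v1/users"  # hypothetical microservice

# Naïve: two network round trips for data that lives in the same user record.
def fetch_name(user_id):
    return requests.get(f"{USERS_URL}/{user_id}", timeout=2).json()["name"]

def fetch_email(user_id):
    return requests.get(f"{USERS_URL}/{user_id}", timeout=2).json()["email"]

# Better: a single call up front, reused for both the header and the footer.
def render_document(user_id):
    user = requests.get(f"{USERS_URL}/{user_id}", timeout=2).json()
    header = f"Prepared for {user['name']}"
    footer = f"Contact: {user['email']}"
    return header, footer
```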
Another interesting advantage of microservices is the independence of technical requirements. In a monolithic application, problems may arise as a result of requiring different versions of libraries for different modules. For example, updating the version of Python requires the whole code base to be prepared for it. These library updates can be complicated as different modules may have different requirements, and one module can effectively interfere with another by forcing an upgrade of a library that's used by both.
Microservices, on the other hand, contain their own set of technical requirements, so this limitation disappears. Because communication happens through external APIs, different microservices can even be programmed in different programming languages. This allows the use of specialized tools for different microservices, tailoring each one to its purpose and thereby avoiding conflicts.
Pro Tip
Just because different microservices can be programmed in different languages doesn't mean that they should. Avoid the temptation of using too many programming languages in a microservices architecture, as this will complicate maintenance and make it difficult for members of other teams to help, thereby creating more isolated teams.
Having one or two default languages and frameworks available and then allowing special justified cases is a sensible way to proceed.
As we have seen, most of the characteristics of microservices make them more suited to a bigger operation, where the number of developers is high enough that they need to be split into different teams and coordination needs to be more explicit. The high pace of change in a big application also requires better ways to deploy and, in general, to work independently.
A small team can self-coordinate very well and will be able to work quickly and efficiently in a monolith.
This is not to say that a monolith cannot be very big. Some are. But, in a general sense, a microservices architecture only makes sense once there are enough developers that different teams are working on the same system and need a good level of independence from one another.
A side note about similar designs
While the decision of monolith versus microservices is normally discussed in the context of web services, it's not exactly a new idea and it's not the only environment where there are similar ideas and structures.
The kernel of an OS can also be monolithic. In this case, a kernel structure is called monolithic if it all operates within kernel space. A program running in kernel space can access the whole memory and the hardware directly, something that is critical for an OS to function but at the same time dangerous, as it has big security and safety implications. Because the code in kernel space works so closely with the hardware, any failure here can result in the total failure of the system (a kernel panic). The alternative is to run in user space, which is the area where a program only has access to its own data and has to interact explicitly with the OS to retrieve other information.
For example, a program in user space that wants to read from a file needs to make a call to the OS, and the OS, in kernel space, will access the file, retrieve the information, and return it to the requesting program, copying it to a part of memory that the program can access.
The idea of the monolithic kernel is that it minimizes this movement and the context switching between different kernel elements, such as libraries or hardware drivers.
The alternative to a monolithic kernel is called a microkernel. In a microkernel structure, the kernel part is greatly reduced and elements such as filesystems, hardware drivers, and network stacks are executed in user space instead of in kernel space. This requires these elements to communicate by passing messages through the microkernel, which is less efficient.
At the same time, it can improve the modularity and security of the elements, as any crash in user space can be restarted easily.
There was a famous argument between Andrew S. Tanenbaum and Linus Torvalds about which architecture is better, given that Linux was created as a monolithic kernel. In the long run, kernels have evolved toward hybrid models that take aspects of both, incorporating microkernel ideas into existing monolithic kernels for flexibility.
Discovering and analyzing related architectural ideas can expand the tools at a good architect's disposal and deepen their architectural understanding and knowledge.
To get a more comprehensive preview of the book's contents, read the first chapter available for free and buy the book here, or sign up for a Packt subscription to access the complete book and the entire Packt digital library. To explore more, click on the button below.
🛠️ HackerHub: Tools from GitHub⚒️
novu: An open-source, multi-channel notification infrastructure for developers with a unified API, Hacktoberfest participation, and more.
build-your-own-x: a repository of step-by-step tutorials on topics like 3D rendering, Augmented Reality, Blockchain, and more, offering valuable hands-on learning opportunities.
black: a Python code formatter that offers speed, determinism, and freedom from formatting issues, ensuring that your code looks consistent regardless of the project and making code reviews faster by generating minimal diffs.
dragonfly: an efficient in-memory data store compatible with Redis and Memcached APIs, boasting 25X higher throughput, improved cache hit rates, and lower latency compared to legacy data stores, all without requiring code changes, offering a robust solution for modern application workloads.
wrongsecrets: an OWASP Wrong Secrets game with 37 challenges to improve your secrets management skills and avoid common mistakes in software development.