Deep Engineering #3: Designing for AI and Humans with MoonBit Core Contributor Zihang YE
From CLI design to AI ergonomics—MoonBit offers patterns worth borrowing
Welcome to the third issue of Deep Engineering.
AI agents are no longer just code generators; they are becoming active users of codebases, APIs, and developer tools. From semantic documentation protocols to agent-readable APIs, the systems we design must increasingly expose structure, context, and intent. Software now needs to serve two audiences: humans and machines.
This issue explores what that means in practice, through the lens of MoonBit—a new language built from the ground up for WebAssembly (Wasm)-native performance and AI-first tooling.
Our feature article examines how MoonBit responds to this dual-audience challenge: not with flashy syntax, but with a tightly integrated toolchain and a runtime model designed to be both fast and machine-consumable. And in a companion tutorial, MoonBit core contributor Zihang YE walks us through building a diff algorithm as a Wasm-ready CLI, an instructive example of the language’s design philosophy in action.
Beyond Syntax: MoonBit and the Future of Language, Tooling, and AI Workflows
The mainstream dominance of Python, JavaScript, and Rust might suggest the age of new programming languages is over. A new breed of languages, including MoonBit, proves otherwise, not by reinventing syntax but by responding to two tectonic shifts in software development: AI-assisted workflows and the rise of Wasm-native deployment in cloud and edge environments.
In edge computing and micro-runtime environments, developers need tools that start instantly, consume minimal memory, and run predictably across platforms. MoonBit’s design responds directly to this: it produces compact Wasm binaries optimized for streaming data, making it suitable for CLI tools, embedded components, and other low-overhead tasks.
At the same time, AI workloads are exposing the limitations of dynamic languages like Python in large-scale systems. MoonBit’s founders note that Python’s “easy to learn” nature can become a double-edged sword for complex tasks. Even with optional annotations, its dynamic type system can hinder static analysis, complicating maintainability and scalability as codebases grow. In response, MoonBit introduces a statically typed, AI-aware language model with built-in tooling—formatter, package manager, VSCode integration—designed to support both human and machine agents.
Rather than replacing Python, MoonBit takes a pragmatic approach. It explicitly embraces an “ecosystem reuse” model: it uses AI-powered encapsulation to lower the barrier for cross-language calls, avoiding reinvention of existing Python tools, and it aims to “democratize” static typing by coupling a strict type system with AI code generation.
A Language is Not Enough
MoonBit is a toolchain-native language, designed from the start to work smoothly with modern build, editing, and AI workflows. Unlike older languages that were retrofitted with new tools, MoonBit bundles its compiler, package manager, IDE, language server, and even an AI assistant into a cohesive whole. As the MoonBit team puts it, they “integrate a comprehensive toolchain from the start” to provide a streamlined coding experience.
This stands in contrast to older systems languages like C and C++, and even to modern ones like Rust, which, despite its safety guarantees, still requires extra configuration to target Wasm. MoonBit treats Wasm as its primary compilation target by design: it is “Wasm-first,” built to be “as easy as Golang” while generating very compact Wasm output.
Similarly, MoonBit was conceived to work hand-in-hand with AI tools. It offers built-in hooks for AI code assistance (more on this below) and even considers AI protocols like Anthropic’s Model Context Protocol (MCP) as first-class integration points. In MoonBit, the language + toolchain combo is now a single product, not an afterthought.
MoonBit is not alone. Other new languages like Grain, Roc, and Hylo (formerly Val) each explore different priorities—from functional programming for the web to safe systems-level design and simplified developer experience.
Grain prioritizes JS interop and functional ergonomics; Roc favors simplicity and speed, though it is still pre-release; and Hylo experiments with value semantics and low-level control. Together, MoonBit and these languages make it clear that language design is becoming inseparable from runtime, developer experience, and AI integration.
Architecture and Developer Experience
MoonBit’s architecture reflects a deliberate focus on toolchain integration and cross-platform performance. It is a statically typed, multi-paradigm language influenced by Go and Rust, supporting generics, structural interfaces, and static memory management. The compiler is designed for whole-program optimization, producing Wasm or native binaries with minimal overhead. According to benchmarks cited by the team, MoonBit compiled 626 packages in 1.06 seconds—approximately 9x faster than Rust in the same test set. Its default Wasm output is compact: a basic HTTP service compiles to ~27 KB, which compares favorably to similar Rust (~100 KB) and JavaScript (~8.7 MB) implementations. This is partly due to MoonBit’s support for Wasm GC, allowing it to omit runtime components that Rust must include.
The syntax and structure are also optimized for machine parsing. All top-level definitions require explicit types, and interface methods are defined at the top level rather than nested. This flatter structure reportedly helps LLM code generation by improving key-value (KV) cache reuse. The language includes built-in support for JSON, streaming data processing via iterators, and compile-time error tracking through control-flow analysis.
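As a rough illustration of this flat, explicitly typed style, here is a minimal MoonBit sketch of our own (the names clamp and Describable are made up for this example, not drawn from MoonBit’s standard library): a top-level function with spelled-out parameter and return types, and a trait whose method signatures sit at the top level rather than inside nested scopes.

```moonbit
// Illustrative sketch: top-level definitions carry explicit types,
// keeping the structure flat and easy for compilers and LLMs to scan.
pub fn clamp(value : Int, low : Int, high : Int) -> Int {
  if value < low {
    low
  } else if value > high {
    high
  } else {
    value
  }
}

// Trait (interface) methods are declared at the top level as well.
pub trait Describable {
  describe(Self) -> String
}
```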
Tooling is tightly coupled with the language. The moon CLI handles compilation, formatting, testing, and dependency management via the Mooncakes registry. The build system, written in Rust, supports parallel, incremental builds. A dedicated LSP server (distributed via npm) integrates MoonBit with IDEs, enabling features like real-time code analysis and completions. Debugging is supported via the CLI with commands like moon run --target js --debug, which link into source-level tools.
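Testing follows the same integrated pattern: test blocks sit alongside regular code and are executed by moon test. Below is a minimal, hypothetical example reusing the clamp sketch above (assert_eq is used here as the standard in-test assertion; exact assertion helpers can vary across toolchain versions).

```moonbit
// Hypothetical inline test for the clamp sketch above; run with moon test.
test "clamp keeps values inside the range" {
  assert_eq(clamp(-5, 0, 10), 0)
  assert_eq(clamp(5, 0, 10), 5)
  assert_eq(clamp(50, 0, 10), 10)
}
```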
A browser-based IDE preview is also available. It avoids containers in favor of a parallelized backend and includes an embedded AI assistant capable of generating documentation, suggesting tests, and offering inline explanations. According to the team, this setup is designed to support both developer productivity and AI agent interaction.
MoonBit’s performance profile extends beyond Wasm. A recent release introduced an LLVM backend for native compilation. In one example published by the team, MoonBit outperformed Java by up to 15x in a numeric loop benchmark. The language also supports JavaScript as a compilation target, expanding deployment options across web and server contexts.
AI Systems as Language Consumers
LLMs are no longer just helping developers write code—they’re starting to read, run, and interact with it. This shift requires rethinking what it means for a language to be “usable.”
MoonBit anticipates this by treating AI systems as first-class consumers of code and tooling. Its team has adopted the MCP, an emerging open standard developed by Anthropic to enable LLMs to interface with external tools and data sources. MCP defines a JSON-RPC server architecture, allowing programs to expose structured endpoints that LLMs can query or invoke. MoonBit’s ecosystem includes a work-in-progress MCP server SDK written in MoonBit and compiled to Wasm, enabling MoonBit components to act as MCP-capable endpoints callable by models such as Claude.
This integration reflects a broader shift in tooling. Modern documentation tools like Mintlify now expose semantically indexed content explicitly for AI retrieval. UIs and APIs are being annotated with machine-readable metadata. Even version control is evolving: newer workflows track units of change like (prompt + schema + tests), not just line diffs, enabling intent-aware versioning usable by humans and machines alike.
MoonBit’s example agent on GitHub demonstrates this in practice, combining Wasm components (e.g. via Fermyon Spin), LLMs (such as DeepSeek), and MoonBit logic to automate development tasks. Under this model, protocols like MCP enable developers to publish AI-accessible functions directly from their codebases. MoonBit’s support for this workflow—via Wasm and first-party libraries—illustrates a growing view in language design: that AI systems are not just tools for writing code, but active consumers of it.
Wasm’s Impact on Performance and Portability
Three years ago, William Overton, a Senior Serverless Solutions Architect, observed that Wasm “starts incredibly quickly and is incredibly light to run,” making it well suited to executing code across CDNs, edge nodes, and lightweight VMs with low startup latency and near-native speed. Today, the growing adoption of Wasm is reshaping expectations for both performance and cross-platform deployment.
For MoonBit, Wasm is the default compilation target—not an optional backend. Its tooling is built around producing compact, portable Wasm modules. A simple web server in MoonBit compiles to a 27 KB Wasm binary—significantly smaller than equivalent builds in Rust or JavaScript. This reduction in size translates directly to faster load times and reduced memory usage, making MoonBit viable for constrained environments like embedded systems, CLI tools, and edge deployments.
Standardized but still-emerging features like Wasm GC—and experimental ones like the Component Model—further reinforce this model. MoonBit has adopted both: its use of interface types and Wasm GC helps minimize runtime footprint. In a published comparison, MoonBit’s Wasm output was roughly an order of magnitude smaller than that of Rust, largely due to differences in memory management.
Taken together, these developments suggest that Wasm is becoming a practical universal format for lightweight applications. For teams building portable utilities or latency-sensitive services, languages with Wasm-native support—such as MoonBit—offer tangible advantages over traditional container- or VM-based approaches.
💡 What This Means for You
MoonBit offers concrete lessons even if you never write MoonBit code. Key takeaways include:
Ecosystem Continuity: Instead of building isolated ecosystems, consider bridging existing ones. MoonBit demonstrates that Python libraries can be reused as external modules—wrapped, if needed, by AI-generated shims. This reduces rewrites and enables gradual migration to safer or more performant languages.
Integrated Tooling: Treat your language platform as a cohesive whole. MoonBit’s CLI (moon) unifies compilation, testing, debugging, and package management, minimizing context switches. Its build system exposes project metadata to IDEs via LSP integration. In your own tooling, aim for end-to-end flows powered by a single interface that integrates with the editor.
Wasm and Runtime Strategy: For cross-platform deployment, prioritize Wasm as a primary target. MoonBit emits Wasm, JavaScript, or native binaries from a single compiler, and leverages Wasm GC for smaller outputs. Adopt language/toolchain combinations that support compact binaries and multiple backends without sacrificing performance.
Data-Oriented Design: MoonBit’s JSON type, Iter abstraction, and pattern matching illustrate a clean model for streaming data. Architect utilities and pipelines to minimize allocations and intermediate state: use iterators, stream transforms, and statically analyzable data access patterns where possible (see the iterator sketch after this list).
AI-Friendliness: MoonBit enforces top-level type annotations and flattens scope structures to support linear token generation. If you expect AI tooling to generate, refactor, or analyze your code, avoid deep nesting and implicit state—prefer clarity and structure that LLMs can parse efficiently.
Static Checking + AI: MoonBit combines a strict type and error system with AI assistance to ease onboarding and boilerplate generation. This model lets developers write in a safe language without sacrificing velocity. For your own teams, consider pairing statically typed languages (or gradually typed ones like Python with type hints) with copilots that bridge ergonomics and enforcement.
CLI Extensibility: The moon CLI supports modular growth—commands like moon new, moon run, and moon add are extensible by design. It can even serve as an LSP or MCP server. Treat your own CLIs as platform interfaces: design for plugin support, programmatic inspection, and long-term integration with AI and editor tooling.
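To make the data-oriented point above concrete, here is a small iterator pipeline of our own in MoonBit (an illustrative sketch, not code from the MoonBit documentation): values flow lazily through filter and map stages, and the only state is a single accumulator, with no intermediate collections allocated.

```moonbit
// Illustrative sketch: a lazy Iter pipeline that filters, transforms,
// and accumulates values without building intermediate collections.
fn sum_of_even_squares(xs : Array[Int]) -> Int {
  let mut total = 0
  xs.iter()
    .filter(fn(x) { x % 2 == 0 })
    .map(fn(x) { x * x })
    .each(fn(x) { total = total + x })
  total
}
```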
To see these ideas in practice—especially MoonBit’s type system, performance model, and Wasm-native tooling—Zihang YE, one of MoonBit’s core contributors, offers a hands-on walkthrough: implementing a diff algorithm in MoonBit and building a CLI tool that is usable both by developers and by AI systems via the MCP.
Expert Insight: Implementing a Diff Algorithm in MoonBit by Zihang YE
A hands-on introduction to MoonBit through the implementation of a version-control-grade diff tool.
MoonBit is an emerging programming language with a robust toolchain and a relatively low learning curve. As a modern language, it ships with a formatter, a VSCode extension, an LSP server, a central package registry, and more. It offers the friendly features of functional programming languages with manageable complexity.
To demonstrate MoonBit’s capabilities, we’ll implement a core software development tool: a diff algorithm. Diff algorithms are essential in software development, helping identify changes between different versions of text or code. They power critical tools in version control systems, collaborative editing platforms, and code review workflows, allowing developers to track modifications efficiently. If you have ever used git diff, you are already familiar with the output of such an algorithm.
The most widely used approach is Eugene W. Myers’ diff algorithm, proposed in the paper “An O(ND) Difference Algorithm and Its Variations”. Its optimal time complexity, space-efficient implementation, and ability to find a shortest edit script have made it the standard in version control systems like Git and in text comparison tools such as Meld, ahead of alternatives like patience diff and histogram diff.
In this tutorial, we’ll implement a version of the Myers Diff algorithm in MoonBit. This hands-on project is ideal for beginners exploring MoonBit, offering insight into version control fundamentals while building a tool usable by both humans and AI through a standard API.
We will start by developing the algorithm itself, then build a command-line application that integrates the Component Model and the MCP, leveraging MoonBit’s WebAssembly (Wasm) backend. Wasm is a fast-maturing technology that provides sandboxed execution, portability, and near-native performance by running assembly-like code in virtual machines across platforms. MoonBit supports these qualities natively, making the language well suited to building efficient cross-platform tools.
By the end of this tutorial, you’ll have a functional diff tool that demonstrates these capabilities in action.
Project Setup
Let’s first create a new MoonBit project by running:
moon new --lib diff
The following is the resulting project structure. moon.mod.json contains the configuration for the project, while each moon.pkg.json contains the configuration for its package. top.mbt is the file we’ll be editing throughout this post.
├── LICENSE
├── moon.mod.json
├── README.md
└── src
    ├── lib
    │   ├── hello.mbt
    │   ├── hello_test.mbt
    │   └── moon.pkg.json
    ├── moon.pkg.json
    └── top.mbt
We will compare two pieces of text, each divided into lines. Each line includes its content and a line number. The line number helps track the exact position of changes, providing important context about where differences between the original and modified files occur.
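A minimal sketch of how that might look in MoonBit follows (illustrative only: the names Line and lines are ours, and we assume a standard-library String::split helper; the full implementation may differ).

```moonbit
// A line of text paired with its 1-based line number.
struct Line {
  number : Int
  text : String
} derive(Eq, Show)

// Split the input into numbered lines (assumes String::split is available).
fn lines(str : String) -> Array[Line] {
  let result : Array[Line] = []
  let mut number = 0
  str.split("\n").each(fn(content) {
    number = number + 1
    result.push({ number: number, text: content.to_string() })
  })
  result
}
```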
🛠️ Tool of the Week
MCP Python SDK 1.9.2 — Structured Interfaces for AI-Native Applications
The MCP is a standard for exposing structured data, tools, and prompts to language models. The MCP Python SDK brings this to production-ready Python environments, with a lightweight, FastAPI-compatible server model and first-class support for LLM interaction patterns. The latest release, v1.9.2 (May 2025), introduces:
Streamable HTTP Support: Improved transport layer for scalable, resumable agent communication.
Lifespan Contexts: Type-safe initialization for managing resources like databases or auth providers.
Authentication: Built-in OAuth2 flows for securing agent-accessible endpoints.
Claude Desktop Integration: Direct install into Anthropic’s desktop agent environment via mcp install.
Async Tooling: Tools, resources, and prompts can now be async functions with full lifecycle hooks.
Ideal for teams designing LLM-facing APIs, building autonomous AI agents, or integrating prompt-based tools directly into Python services. It’s the protocol MoonBit already supports, and the interface LLMs increasingly expect.
📰 Tech Briefs
Architectural Patterns for AI Software Engineering Agents by Nati Shalom, Fellow at Dell NativeEdge: Examines how modern coding agents are being structured like real-world dev teams—using patterns such as code search, AST analysis, and version-controlled prompt templates to enable disciplined, multi-agent collaboration.
A survey of agent interoperability protocols: Model Context Protocol (MCP), Agent Communication Protocol (ACP), Agent-to-Agent Protocol (A2A), and Agent Network Protocol (ANP) by Ehtesham et al.: Offers an in-depth analysis of four emerging protocols designed to enhance interoperability among AI agents, examining their architectures, communication patterns, and security models.
When the Agents Go Marching In: Five Design Paradigms Reshaping Our Digital Future by Adrian Levy, Senior UX Expert at CyberArk: Discusses how agentic UX is reshaping everything from collaboration to trust. If MoonBit is what languages might look like in this new world, Levy’s article shows how interfaces and systems are evolving to meet the same challenge, articulating the Agent Experience (AX) paradigm.
Beyond augmentation: Agentic AI for software development by Khare et al., Infosys Knowledge Institute: A practice-oriented report on how autonomous agents are moving from coding assistants to pipeline-integrated actors—handling complex dev tasks end-to-end and delivering measurable productivity gains in database and API generation.
Emerging Developer Patterns for the AI Era by Yoko Li, Engineer: Explores how core concepts like version control, documentation, dashboards, and scaffolding are being reimagined to support AI agents as first-class participants in the software loop—not just code generators, but consumers, collaborators, and operators.
That’s all for today. Thank you for reading this issue of Deep Engineering. We’re just getting started, and your feedback will help shape what comes next.
Take a moment to fill out this short survey—as a thank-you, we’ll add one Packt credit to your account, redeemable for any book of your choice.
We’ll be back next week with more expert-led content.
Stay awesome,
Divya Anne Selvaraj
Editor-in-Chief, Deep Engineering