Extra: WebAssembly (Wasm)
While HTML, CSS, and JavaScript form the core of web development, WebAssembly (Wasm) is changing the game for performance-critical applications. Think of it as a low-level, portable compilation target for the web: it lets you run code compiled from languages like C, C++, and Rust directly in the browser at near-native speed.
Here is a simple example to illustrate how it works:
Step 1: The C code
Let's say you have a computationally intensive function in a C file, fib.c, that you want to run efficiently in your web page.
// fib.c
int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}
Step 2: Compiling to Wasm
You can use the Emscripten toolchain to compile this C code into a Wasm module. For this simple example, we'll ask Emscripten for a standalone .wasm file that we can load ourselves:
emcc fib.c --no-entry -s EXPORTED_FUNCTIONS='["_fibonacci"]' -o fib.wasm
This command generates fib.wasm, the binary Wasm module; the --no-entry flag tells Emscripten there is no main() function. (If you compile with -o fib.js instead, Emscripten also emits fib.js, a small JavaScript "glue" script that loads the module for you, which is handy for larger projects.)
Step 3: Using Wasm in your HTML
Now you can load and run the compiled Wasm function directly from your HTML file with a bit of JavaScript. One caveat: WebAssembly.instantiateStreaming requires fib.wasm to be served with the application/wasm MIME type, so serve the page from a local web server rather than opening the HTML file directly from disk.
<!DOCTYPE html>
<html>
  <head>
    <title>WebAssembly Example</title>
  </head>
  <body>
    <h1>Fibonacci with WebAssembly</h1>
    <p>The result for Fibonacci(10) is: <span id="result"></span></p>
    <script>
      const resultElement = document.getElementById('result');

      // Fetch and instantiate the Wasm module
      WebAssembly.instantiateStreaming(fetch('fib.wasm')).then(result => {
        const exports = result.instance.exports;
        // Call the fibonacci() function compiled from C
        const fibResult = exports.fibonacci(10);
        // Display the result on the page
        resultElement.textContent = fibResult;
      });
    </script>
  </body>
</html>
This simple example demonstrates a key application of Wasm: running high-performance code on the client side, which is perfect for tasks like gaming, scientific simulations, or video encoding directly in the browser. It complements JavaScript by providing a pathway for code that requires maximum performance.
WebGPU and the future of ML in the browser
While WebAssembly (Wasm) excels at performance-critical tasks, it's designed to complement, not replace, JavaScript. Think of Wasm as a co-processor for the web: it handles the heavy lifting (physics engines, video codecs, complex data processing) while JavaScript manages the user interface and overall page logic. The two sides call each other through the Wasm module's imported and exported functions, working together to create powerful, highly responsive web applications.
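To make that division of labor concrete, here is a minimal sketch of the two call directions. The physics.wasm module, its env.logProgress import, and its step export are hypothetical names used only for illustration.
// Hypothetical example: JavaScript and Wasm calling each other.
// The Wasm module declares an import, env.logProgress, which we supply here.
const importObject = {
  env: {
    // A JavaScript function the Wasm module can call back into.
    logProgress: (percent) => console.log(`simulation ${percent}% complete`),
  },
};

WebAssembly.instantiateStreaming(fetch('physics.wasm'), importObject)
  .then(({ instance }) => {
    // JavaScript hands the heavy numerical work to Wasm...
    instance.exports.step(0.016);
    // ...and keeps the page logic (DOM updates, event handling) for itself.
    document.getElementById('status').textContent = 'Simulation step finished';
  });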
A key limitation for heavy computational tasks on the web, such as running machine learning (ML) models, has been the lack of direct, general-purpose access to the user's GPU. The web has traditionally relied on older graphics APIs like WebGL, which were designed for rendering rather than general-purpose computing. However, this is rapidly changing with the advent of WebGPU.
WebGPU is a new web standard that provides a modern, low-level API for the user's GPU, abstracting over native graphics APIs such as Vulkan, Apple's Metal, and Microsoft's Direct3D 12. Because compute shaders are a first-class part of its design, it is a natural fit for the parallel workloads of ML.
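To give a feel for the API, here is a minimal sketch of a WebGPU compute pass that doubles an array of numbers on the GPU. It assumes a browser where navigator.gpu is available; the function name, buffer layout, and workgroup size are arbitrary choices for illustration.
// Minimal WebGPU compute sketch: double every element of a Float32Array on the GPU.
async function doubleOnGpu(input) {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Upload the input data into a storage buffer on the GPU.
  const dataBuffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(dataBuffer.getMappedRange()).set(input);
  dataBuffer.unmap();

  // A WGSL compute shader: each invocation doubles one element.
  const shaderModule = device.createShaderModule({
    code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;

      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) {
          data[id.x] = data[id.x] * 2.0;
        }
      }`,
  });

  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: { module: shaderModule, entryPoint: 'main' },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: dataBuffer } }],
  });

  // Record the compute pass, then copy the result into a CPU-readable buffer.
  const readbackBuffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(dataBuffer, 0, readbackBuffer, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  // Map the result back to the CPU and return it.
  await readbackBuffer.mapAsync(GPUMapMode.READ);
  return new Float32Array(readbackBuffer.getMappedRange()).slice();
}

doubleOnGpu(new Float32Array([1, 2, 3, 4])).then(console.log); // Float32Array [2, 4, 6, 8]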
This means that while you can't run raw CUDA code directly in the browser, toolchains like HipScript are emerging to compile CUDA kernels into a format compatible with WebGPU. This allows developers to load pre-trained ML models (e.g., from TensorFlow.js or Hugging Face Transformers.js) and run them entirely on the client side, using the user's GPU for acceleration. This shift from server-side to client-side inference can drastically reduce latency, lower server costs, and enhance user privacy by keeping data on the user's device. With major browser support now in Chrome, Edge, and Firefox, and recent support in Safari, this technology is bringing a revolution in AI and ML to the web.
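As one concrete path, here is a sketch of client-side inference with Hugging Face Transformers.js, which (as of version 3) can target WebGPU via its device option. The package name, model ID, and option shown here reflect that library at the time of writing; treat them as assumptions to verify against the current documentation.
// Sketch: run a sentiment-analysis model entirely in the browser on WebGPU.
// Requires Transformers.js v3+ and a WebGPU-capable browser; run inside an ES module.
import { pipeline } from '@huggingface/transformers';

const classifier = await pipeline(
  'sentiment-analysis',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english', // example model ID
  { device: 'webgpu' }, // ask the library to run the model on the GPU via WebGPU
);

const result = await classifier('Client-side inference keeps my data on my device!');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99 }]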