Announcing Irontools
Posted by Yaroslav Tkachenko
Hello world, everyone! 👋 Today, I’m excited to introduce Irontools - a suite of Apache Flink® extensions that make your Flink pipelines more efficient and unlock new use cases.
The Journey
At the end of last year, I left my full-time job to pursue solopreneurship. Since then, I’ve been doing part-time consulting in the data streaming space while also dedicating time to research and experimentation.
Over time, one insight became clear: to move the data streaming industry forward, we don’t necessarily need to invent a new stream processing framework. Flink is already the most advanced and widely adopted tool in this space, and it’s only getting better. So instead of reinventing the wheel, I decided to double down on Flink and help improve what already works - making it faster, more ergonomic, and more accessible.
To that end, I’m building a set of extensions that work with any Flink deployment, no matter the environment or scale.
Project Goals
I recently wrote about how efficiency and developer experience are still the challenges that people face when building data streaming systems. Irontools won’t fix everything overnight, but it’s a step toward reducing friction, one targeted improvement at a time.
Extensions
Irontools currently offers two core extensions: Iron Serde and Iron Functions.
Iron Serde
Iron Serde is a set of drop-in Kafka serde libraries, currently focused on Avro and JSON. (De)serialization isn’t always your bottleneck, but when it is, your options are limited - and performance matters.
Iron Serde is built with performance at its core. It leverages schema information to generate highly optimized parsing logic at runtime. After extensive benchmarking, I can confidently say it provides at least a 50% speed boost over Flink’s built-in serdes, and in some scenarios much more.
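Iron Serde’s internals aren’t public, but the core idea - using a known schema to precompute parsing decisions once instead of rediscovering field types per record - can be sketched in plain Java. The `SchemaDrivenParser` class and its toy binary layout below are illustrative assumptions, not Iron Serde’s actual format or API.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch: when the schema is known up front, a serde can resolve
// field names and types once, leaving only tight per-record reads.
public class SchemaDrivenParser {
    enum FieldType { INT, STRING }

    private final String[] names;
    private final FieldType[] types;

    SchemaDrivenParser(Map<String, FieldType> schema) {
        // Precompute once; the per-record hot path below does no lookups.
        names = schema.keySet().toArray(new String[0]);
        types = schema.values().toArray(new FieldType[0]);
    }

    Map<String, Object> parse(byte[] record) {
        ByteBuffer buf = ByteBuffer.wrap(record);
        Map<String, Object> row = new LinkedHashMap<>();
        for (int i = 0; i < names.length; i++) {
            if (types[i] == FieldType.INT) {
                row.put(names[i], buf.getInt());
            } else { // STRING: length-prefixed UTF-8
                int len = buf.getInt();
                byte[] bytes = new byte[len];
                buf.get(bytes);
                row.put(names[i], new String(bytes, StandardCharsets.UTF_8));
            }
        }
        return row;
    }

    public static void main(String[] args) {
        Map<String, FieldType> schema = new LinkedHashMap<>();
        schema.put("id", FieldType.INT);
        schema.put("name", FieldType.STRING);
        SchemaDrivenParser parser = new SchemaDrivenParser(schema);

        // Encode one record in the toy layout: int, then length-prefixed string.
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.putInt(42);
        byte[] name = "flink".getBytes(StandardCharsets.UTF_8);
        buf.putInt(name.length).put(name);
        byte[] record = new byte[buf.position()];
        buf.flip();
        buf.get(record);

        System.out.println(parser.parse(record)); // prints {id=42, name=flink}
    }
}
```

A real implementation would go further - generating bytecode per schema rather than looping over a field table - but the principle is the same: pay the schema cost once, not per record.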
Iron Functions
I’ve long believed that data streaming will hit mainstream adoption only when you can author pipelines in familiar “backend” languages, not just Java. Think: TypeScript, Python, Go, Ruby. Iron Functions brings that future closer by embedding a WebAssembly runtime inside Flink.
With Iron Functions, you can write transformation logic or User-Defined Functions (UDFs) in your language of choice, compile them to WebAssembly, and run them securely and efficiently inside a Flink pipeline.
Iron Functions supports both the DataStream and Table/SQL APIs:
- With the DataStream API, you add Iron Functions to your pipeline as a `ProcessFunction` operator. You still need a few lines of Java to wire it into the topology, but here’s a demo to show how simple it is.
- With the Table/SQL API, Iron Functions lets you package your project as a UDF JAR. You can even package it as an Uber JAR with all dependencies included, which makes it portable. Check out this demo to see how I package a UDF implemented in TypeScript and execute it in Confluent Cloud.
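To give a feel for the Table/SQL path: Flink SQL (1.16+) can register a function straight from a JAR via `CREATE FUNCTION ... USING JAR`, which is how a packaged Uber JAR like the one above would typically be wired in. The function name, implementing class, and path below are placeholders, not Iron Functions’ actual artifacts.

```sql
-- Register a UDF packaged as an Uber JAR (names and paths are placeholders).
CREATE FUNCTION my_wasm_udf
  AS 'com.example.MyWasmUdf'
  USING JAR 'file:///opt/udfs/my-udf-uber.jar';

-- Once registered, it behaves like any other scalar function.
SELECT my_wasm_udf(payload) FROM events;
```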
Share Your Pain
These first two extensions are just the beginning.
I have many more ideas in the pipeline, but I’d love to hear from you. If you’re still reading, you probably have some thoughts or frustrations about Flink or stream processing in general.
👉 Please share them in this quick survey. It helps shape what comes next.
Get in Touch
I’m looking for more design partners to help create the best possible developer experience.
Feel free to reach out at hello@irontools.dev or subscribe below for updates. Stay tuned!