Training
A practical guide to the Language of Things, based on our Coreflux Cracked training session.
At Coreflux Cracked in January 2026, we ran a morning training session that started with MQTT fundamentals and ended with building actual integrations. The room had a mix of experience levels: some people knew MQTT cold, others were seeing wildcards for the first time.
We recorded it. This is the written companion to that session.
Part 1 covers everything from the protocol basics to creating actions that react to topics, transform data, and route information to multiple destinations. If you prefer watching, the full video is embedded below. If you prefer reading (or want both), keep going.
[Video: Coreflux Training Part 1: From MQTT Basics to LoT Actions]
MQTT wasn't designed for the Internet of Things. It was designed for an actual pipeline.
In 1999, two engineers, Andy Stanford-Clark at IBM and Arlen Nipper at Arcom, needed to move sensor data from electric valves across 500 miles of oil pipeline. Physical cabling wasn't practical. They had telephone lines and limited bandwidth.
So they built a protocol optimized for exactly that: low bandwidth, unreliable connections, many nodes reporting to one central point.
Decades later, that same protocol became the standard for industrial IoT. The constraints of a 90s oil pipeline turn out to be pretty similar to the constraints of a modern factory floor.
MQTT organizes data into topics. Topics are hierarchical, separated by forward slashes:
factory/line1/machine1/status
factory/line1/machine2/status
factory/line2/machine1/temperature
You design the hierarchy based on your use case. A factory might use
factory/line/machine/variable
A building might use
building/floor/room/sensor
The structure is yours to define. This hierarchy matters because of wildcards.
Two wildcards let you subscribe to multiple topics at once.
Single-Level Wildcard (+)
The + matches exactly one level:
factory/+/machine1/status
This subscribes to machine 1 status on every line. Line 1, line 2, line 47. One subscription, many topics.
One topic this pattern matches:
factory/line1/machine1/status
Multi-Level Wildcard (#)
The # matches everything at and below that level:
factory/#
This catches everything under factory: all lines, all machines, all variables. Use it carefully, but when you need everything, this is how you get it.
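Outside the broker, any MQTT client library can subscribe with these same patterns. Here's a minimal sketch using Python's paho-mqtt (version 2.x); the broker address and topics are illustrative assumptions, not from the session:

import paho.mqtt.client as mqtt

# Print every message that arrives on the wildcard subscriptions.
def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_message = on_message
client.connect("localhost", 1883)  # assumed local broker

client.subscribe("factory/+/machine1/status")  # + : one level, any line
client.subscribe("factory/line2/#")            # # : everything under line2

client.loop_forever()

Both subscriptions stay active on one connection; the broker fans every matching message into the same callback.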
These wildcards become powerful when combined with LoT actions (more on that below).
MQTT offers three levels of delivery guarantee:
QoS 0: Fire and Forget
Publish and move on. No confirmation. Use this for high-frequency PLC data where you don't want to block the machine's cycle waiting for acknowledgment.
QoS 1: At Least Once
Publisher waits for confirmation that the message arrived. Use this for dashboards, cloud connections, anything where you need to know the data made it.
QoS 2: Exactly Once
Full handshake ensuring no duplicates. Use this for broker bridges across geographies where duplicate messages would cause problems.
Pick based on what breaks if a message gets lost or doubled.
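As a client-side sketch of choosing a QoS per message (Python's paho-mqtt again; the broker address and topics are assumptions):

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.connect("localhost", 1883)  # assumed local broker
client.loop_start()  # background network loop handles the QoS handshakes

# QoS 0: fire and forget -- high-frequency PLC values
client.publish("factory/line1/machine1/speed", "1450", qos=0)

# QoS 1: at least once -- dashboard and cloud data
client.publish("factory/line1/machine1/status", "running", qos=1)

# QoS 2: exactly once -- cross-site bridge traffic
info = client.publish("bridge/site-a/alarms", "overtemp", qos=2)
info.wait_for_publish()  # block until the full handshake completes

client.loop_stop()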
A standard MQTT broker moves messages from publishers to subscribers. That's it.
Coreflux adds four components that turn the broker into a complete integration platform:
Actions
Logic that runs inside the broker. Triggered by time (every 5 seconds) or topics (when this value changes). Actions transform data, route messages, and coordinate processes.
Models
Schemas that structure raw data. When a part is manufactured, a model collects the timestamp, recipe, machine ID, and any other relevant data into one coherent message.
Routes
Connectors to external systems. Routes handle Modbus, OPC UA, Siemens S7, databases, REST APIs. Each route type has its own configuration requirements, but they all work the same way from the broker's perspective.
Rules
Permission controls. Rules define which clients can publish or subscribe to which topics. If only one robot should write to a specific topic, a rule enforces that.
The goal: replace the typical architecture sprawl (agent + Node-RED + broker + Kafka + database + dashboard) with one coordinated platform.
Language of Things code lives in LoT Notebooks, which run as a VS Code extension.
A notebook combines Markdown documentation with executable LoT code. You describe what the system does in plain text, then include the actual code that does it. The documentation and implementation stay together.
This matters for maintenance. When four departments (production, engineering, IT, integrations) need to understand the same system, they can open one file and each find what they need. No separate PowerPoint that gets outdated. No hunting through repositories.
Setting Up
Install the LoT Notebooks extension in VS Code and create a .lotnb file. The notebook syncs with your broker. Upload code to deploy it. Download to pull what's currently running. Version control through Git like any other code project.
The simplest actions run on a schedule:
DEFINE ACTION Heartbeat
ON EVERY 5 sec
PUBLISH TOPIC "system/heartbeat" WITH "alive"This publishes "alive" to the heartbeat topic every 5 seconds. You can use seconds, minutes, hours, days. One customer asked about yearly actions. It works (though validation was tricky).
Time-based actions handle scheduled tasks: periodic status updates, data aggregation, cleanup routines.
More interesting actions react to data:
DEFINE ACTION ProcessInput
ON TOPIC "input/message"
PUBLISH TOPIC "output/message" WITH PAYLOAD + 1
When anything publishes to input/message, this action takes the payload, adds 1, and publishes the result.
The payload keyword gives you the message content. If someone publishes 4, you get 5. If someone publishes "test", you get "test1".
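You can exercise this action from any MQTT client. A minimal sketch with Python's paho-mqtt (the broker address is an assumption; the topics match the action above):

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # The broker-side action added 1, so publishing 4 prints: output/message: 5
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("output/message")
client.publish("input/message", "4")
client.loop_forever()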
Here's where topics and actions combine:
DEFINE ACTION ProcessSensor
ON TOPIC "client/+/temperature"
SET sensorID WITH TOPIC POSITION 2
PUBLISH TOPIC "processed/temperature/" + {sensorID} WITH PAYLOAD
This single action handles every sensor. When client/sensor1/temperature publishes 33, the action extracts "sensor1" from position 2 of the topic, then publishes to processed/temperature/sensor1.
Add sensor 2? It works. Add sensor 44? It works. No code changes. The wildcard pattern already covers any sensor that matches.
This is how you build systems that scale without constant maintenance.
Actions can transform data on the fly:
DEFINE ACTION CelsiusToFahrenheit
ON TOPIC "sensors/+/celsius"
SET sensorID WITH TOPIC POSITION 2
SET fahrenheit WITH (PAYLOAD * 9 / 5) + 32
PUBLISH TOPIC "sensors/" + {sensorID} + "/fahrenheit" WITH {fahrenheit}Raw Celsius data comes in, Fahrenheit data goes out. The broker handles the conversion. No external transformation layer needed.
Real integrations often send data to multiple systems:
DEFINE ACTION RouteAlarm
ON TOPIC "alarms/factory/+/+"
SET lineID WITH TOPIC POSITION 3
SET machineID WITH TOPIC POSITION 4
SET context WITH "Line " + {lineID} + " Machine " + {machineID}
PUBLISH TOPIC "dashboard/alarms/latest" WITH {context} + ": " + PAYLOAD
PUBLISH TOPIC "database/alarms/factory" WITH PAYLOADOne alarm triggers two publications: a formatted message for the dashboard and raw data for the database. The action handles the routing. The broker coordinates everything.
When an action triggers, the broker captures all inputs at that exact moment.
This matters for synchronization. If you collect data from outside the broker (connecting, reading variable A, then B, then C), your "snapshot" might span multiple moments. A might have changed by the time you read C.
When the broker runs the logic internally, all GET TOPIC calls return values from the same instant. True synchronization.
DEFINE ACTION CaptureState
ON TOPIC "trigger/capture"
SET temp WITH GET TOPIC "sensors/temperature"
SET pressure WITH GET TOPIC "sensors/pressure"
SET status WITH GET TOPIC "machine/status"
PUBLISH TOPIC "snapshots/state" WITH {temp} + "," + {pressure} + "," + {status}
When the trigger fires, temperature, pressure, and status are captured simultaneously. The resulting snapshot reflects one moment in time, not three.
Part 1 covers the fundamentals. Part 2 (coming soon) goes deeper into Models, Routes, and Rules: how to structure data for databases, connect to PLCs and external services, and control who can access what.
Try it yourself.
The concepts here translate directly to real integrations. Start simple, add complexity as needed, and let the broker do the coordination.
Based on Hugo Vaz's training session at Coreflux Cracked, January 26, 2026.