
Estimated Time: 30-45 minutes | Difficulty Level: Intermediate
What if connecting your MQTT broker to a database took 15 minutes instead of 15 hours?
We put this to the test. Tiago Abreu from our team recorded the entire process of connecting Coreflux to MongoDB, from empty Docker containers to data flowing into the database. The result? A surprisingly quick setup that actually scales.
Watch the full 15 minute Flash Demo:
https://youtu.be/h554ybSD_oc
Now, let's dive into the complete tutorial.
MQTT brokers are essential for modern IoT infrastructure and automation systems, where a centralized, unified, and fast data hub is key to system interoperability and data exchange. Coreflux is a powerful, low code MQTT broker that extends the traditional MQTT broker with advanced features for real time data processing, transformation, and seamless integration with databases including MongoDB, PostgreSQL, MySQL, and OpenSearch.
In this comprehensive tutorial, you will deploy a complete IoT automation pipeline using Coreflux MQTT broker integrated with MongoDB using Docker containers. This scalable storage and processing solution enables you to collect, transform, and store IoT data efficiently while maintaining enterprise grade reliability and performance.
By the end of this automation guide, you will have deployed a Coreflux MQTT broker and a MongoDB instance running in Docker, plus a complete LoT pipeline (Action, Model, and Route) that simulates device data, transforms it, and stores it in MongoDB.
Need Help? Throughout this tutorial, you can get instant assistance at docs.coreflux.org using the Ask AI feature. Simply describe what you are trying to do, and get personalized guidance on LoT syntax, troubleshooting, and best practices.
MQTT (Message Queuing Telemetry Transport) is a lightweight, publish subscribe network protocol widely adopted in IoT ecosystems. Designed for constrained devices and low bandwidth, high latency, or unreliable networks, MQTT enables efficient, real time messaging in bandwidth constrained environments.
Coreflux offers a lightweight MQTT broker to facilitate efficient, real time communication between IoT devices and applications, including real time data transformation capabilities necessary for each use case. Built for scalability and reliability, Coreflux is tailored for environments where low latency and high throughput are critical.
With Coreflux, you get real time data transformation, low code automation through the Language of Things (LoT), and built in routes to databases such as MongoDB, PostgreSQL, MySQL, and OpenSearch.
Total: approximately 35 minutes
Before you begin this MQTT broker deployment tutorial, you will need Docker and Docker Compose installed, Visual Studio Code with the LoT Notebook extension, and optionally an MQTT client such as MQTT Explorer and MongoDB Compass for verification.
Docker Compose simplifies the deployment by managing both MongoDB and Coreflux containers together with proper networking.
Create a docker-compose.yml file in your project directory:
services:
  mongodb:
    image: mongo:latest
    container_name: mongodb
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: coreflux_password
      MONGO_INITDB_DATABASE: coreflux_data
    volumes:
      - mongodb_data:/data/db
    networks:
      - iot_network

  coreflux:
    image: coreflux/coreflux-mqtt-broker:latest
    container_name: coreflux
    ports:
      - "1883:1883"
      - "1884:1884"
      - "5000:5000"
      - "443:443"
    depends_on:
      - mongodb
    networks:
      - iot_network

volumes:
  mongodb_data:

networks:
  iot_network:
    driver: bridge
This Docker Compose configuration defines two services (mongodb and coreflux), gives MongoDB persistent storage through a named volume, sets the MongoDB credentials and an initial coreflux_data database, exposes the necessary ports for both services, places the containers on a shared bridge network, and ensures Coreflux starts after MongoDB is ready.
Start both containers with a single command:
docker-compose up -d
The -d flag runs containers in detached mode (in the background).
When you need to stop the services:
docker-compose down
To stop and remove all data (including the MongoDB volume):
docker-compose down -v
Check that both containers are running correctly:
docker ps
You should see both MongoDB and Coreflux containers in the list of running containers.
You can test the MongoDB connection using MongoDB Compass or the MongoDB shell. The connection string will be:
mongodb://admin:coreflux_password@localhost:27017
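As a quick sanity check, the pieces of this connection string can be inspected with Python's standard library alone (no MongoDB driver required):

```python
from urllib.parse import urlparse

# Break the MongoDB connection string from above into its components.
uri = "mongodb://admin:coreflux_password@localhost:27017"
parts = urlparse(uri)

print(parts.scheme)    # mongodb
print(parts.username)  # admin
print(parts.password)  # coreflux_password
print(parts.hostname)  # localhost
print(parts.port)      # 27017
```

If any component prints differently from what you expect, the connection string was mistyped, which is a common cause of authentication failures in Compass or the MongoDB shell.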
You can access the MQTT broker through an MQTT client such as MQTT Explorer to validate access with the broker's default credentials.
Security Note: These are default credentials for development. In production, use strong passwords and enable authentication and encryption.
The LoT (Language of Things) Notebook extension for Visual Studio Code provides an integrated low code development environment for MQTT broker programming and IoT automation.
Configure the connection to your Coreflux MQTT broker using the default credentials.
Assuming no errors, you will see the status of the MQTT connectivity to the broker in the bottom bar on the left.
At this point you should have both Docker containers running, a verified MongoDB connection, a reachable MQTT broker, and the LoT Notebook extension connected to Coreflux.
If any of these are not working, see the Troubleshooting section at the end.
For this use case, we will build an integration of raw data through a transformation pipeline into MongoDB. Since we are not connected to actual MQTT devices, we will use LoT capabilities to simulate device data.
In LoT, an Action is executable logic triggered by specific events such as timed intervals, topic updates, or explicit calls from other actions or system components. Actions allow dynamic interaction with MQTT topics, internal variables, and payloads, facilitating complex IoT automation workflows.
You can create an Action in a LoT Notebook file (.lotnb) inside VS Code and then deploy it. You can also add Markdown cells to document your code.
Learning LoT Syntax: If you are new to the Language of Things (LoT), the Coreflux Documentation provides comprehensive guides and examples. Use the Ask AI feature to get instant answers about syntax, see working examples, or troubleshoot any issues you encounter.
To run your LoT Action, Model, and Routes, you will need to create a LoT Notebook. Similar to Jupyter Notebooks, you will be able to create LoT code and add Markdown sections to document and organize your work.
Create a yourfile.lotnb file and add your first code cell.
Create an Action to generate simulated sensor data using the LoT interface:
DEFINE ACTION RANDOMIZEMachineData
ON EVERY 10 SECONDS DO
PUBLISH TOPIC "raw_data/machine1" WITH RANDOM BETWEEN 0 AND 10
PUBLISH TOPIC "raw_data/station2" WITH RANDOM BETWEEN 0 AND 60
When you run this Action, it will deploy automatically to the MQTT broker, generate simulated IoT sensor data every 10 seconds, publish real time data to specific MQTT topics, and show sync status in the LoT Notebook interface.
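To reason about what the Action produces, here is a minimal Python sketch of the same simulation logic. The topic names and value ranges are taken from the Action above; the actual MQTT publishing is handled by Coreflux and is omitted here:

```python
import random

# Ranges mirror the RANDOM BETWEEN clauses in the LoT Action above.
TOPIC_RANGES = {
    "raw_data/machine1": (0, 10),
    "raw_data/station2": (0, 60),
}

def simulate_readings():
    """Return one round of simulated payloads, keyed by MQTT topic."""
    return {topic: random.uniform(lo, hi)
            for topic, (lo, hi) in TOPIC_RANGES.items()}

readings = simulate_readings()
for topic, value in readings.items():
    print(f"{topic}: {value:.2f}")
```

Running this repeatedly approximates what the broker sees every 10 seconds: one numeric payload per topic, each within its configured range.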
Models in Coreflux are used to transform, aggregate, and compute values from input MQTT topics, publishing the results to new topics. They serve as the foundation for creating the UNS (Unified Namespace) of your system.
A Model allows you to define how raw IoT data should be structured and transformed, either for a single device or for multiple devices simultaneously (using the + wildcard). A Model also serves as the key data schema used when storing to the database.
DEFINE MODEL MachineData WITH TOPIC "Simulator/Machine/+/Data"
ADD "energy" WITH TOPIC "raw_data/+" AS TRIGGER
ADD "energy_wh" WITH (energy * 1000)
ADD "production_status" WITH (IF energy > 5 THEN "active" ELSE "inactive")
ADD "stoppage" WITH (IF production_status EQUALS "inactive" THEN 1 ELSE 0)
ADD "maintenance_alert" WITH (IF energy > 50 THEN TRUE ELSE FALSE)
ADD "timestamp" WITH TIMESTAMP "UTC"
This standard model uses wildcard + to apply to all machines automatically, converts energy to Wh units, calculates production status based on energy levels, adds timestamps to all real time data points, and publishes structured data to the Simulator/Machine/+/Data topics.
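For intuition, the Model's per-update computation can be mirrored in plain Python. This is only a sketch of the field logic (Coreflux evaluates the Model inside the broker; the helper name transform is ours):

```python
from datetime import datetime, timezone

def transform(energy: float) -> dict:
    """Mirror the MachineData Model's derived fields for one energy reading."""
    production_status = "active" if energy > 5 else "inactive"
    return {
        "energy": energy,
        "energy_wh": energy * 1000,   # same scaling as the Model's energy * 1000
        "production_status": production_status,
        "stoppage": 1 if production_status == "inactive" else 0,
        "maintenance_alert": energy > 50,
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S"),
    }

print(transform(3))   # inactive reading, stoppage = 1
print(transform(55))  # active reading, maintenance_alert = True
```

Note that with the simulated ranges above, only station2 (0 to 60) can ever trip the maintenance_alert threshold of 50.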
As we generated two simulated sensors with the Action, we can see the Model structure being applied automatically to both, generating both a JSON object and individual topics.
Routes define how processed real time data flows to external systems like databases. They are defined with the following format:
DEFINE ROUTE mongo_route WITH TYPE MONGODB
ADD MONGODB_CONFIG
WITH CONNECTION_STRING "mongodb://admin:coreflux_password@mongodb:27017/coreflux_data?authSource=admin"
WITH DATABASE "coreflux_data"
Note: Replace with your MongoDB information and credentials. If you are running Coreflux outside of Docker or without container linking, replace mongodb in the connection string with localhost or your server IP address.
Modify your LoT model to use the database route for storage by adding this to the end of the Model:
STORE IN "mongo_route"
WITH TABLE "MachineProductionData"
Additionally, add a parameter named device_name with the device name to have a unique identifier for each entry. This line extracts the device name from the MQTT topic structure. For example, with topic raw_data/machine1, it extracts machine1 as the device name.
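The same extraction is easy to express in Python, which can help when validating your topic layout (the function name here is ours, for illustration):

```python
def device_name_from_topic(topic: str) -> str:
    """Extract the device segment (position 2) from a topic like 'raw_data/machine1'."""
    return topic.split("/")[1]

print(device_name_from_topic("raw_data/machine1"))  # machine1
print(device_name_from_topic("raw_data/station2"))  # station2
```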
Complete Model with storage:
DEFINE MODEL MachineData WITH TOPIC "Simulator/Machine/+/Data"
ADD "energy" WITH TOPIC "raw_data/+" AS TRIGGER
ADD "device_name" WITH REPLACE "+" WITH TOPIC POSITION 2 IN "+"
ADD "energy_wh" WITH (energy * 1000)
ADD "production_status" WITH (IF energy > 5 THEN "active" ELSE "inactive")
ADD "stoppage" WITH (IF production_status EQUALS "inactive" THEN 1 ELSE 0)
ADD "maintenance_alert" WITH (IF energy > 50 THEN TRUE ELSE FALSE)
ADD "timestamp" WITH TIMESTAMP "UTC"
STORE IN "mongo_route"
WITH TABLE "MachineProductionData"
After you deploy this updated model, all data should be automatically stored in MongoDB when updated.
Connect to your MongoDB database using MongoDB Compass to verify storage:
1. Connect using the connection string mongodb://admin:coreflux_password@localhost:27017/coreflux_data?authSource=admin
2. Navigate to the coreflux_data database
3. Check the MachineProductionData collection for stored documents
You should see real time data documents with a structure similar to:
{
"_id": {
"$oid": "695eb4e25c3f70134a8452cf"
},
"energy": 3,
"energy_wh": 3000,
"production_status": "inactive",
"stoppage": 1,
"maintenance_alert": false,
"timestamp": "2026-01-07 19:32:50",
"device_name": "station2"
}
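A quick client-side sanity check of a stored document can be done with Python's standard json module. The sample payload below is the one shown above (with the _id field omitted for brevity):

```python
import json

# Sample document as stored by the mongo_route (the _id field omitted).
sample = """{
  "energy": 3,
  "energy_wh": 3000,
  "production_status": "inactive",
  "stoppage": 1,
  "maintenance_alert": false,
  "timestamp": "2026-01-07 19:32:50",
  "device_name": "station2"
}"""

doc = json.loads(sample)

# Fields the Model is expected to produce on every update.
expected = {"energy", "energy_wh", "production_status", "stoppage",
            "maintenance_alert", "timestamp", "device_name"}
missing = expected - doc.keys()
print("missing fields:", missing or "none")
print("energy_wh consistent:", doc["energy_wh"] == doc["energy"] * 1000)
```

Checks like these are handy when verifying that a Model change actually reached the database.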
All of the data is also available in the MQTT Broker for other uses and integrations.
Error: "connection refused on port 27017"
Solution: Confirm the mongodb container is running (docker ps), that port 27017 is mapped in docker-compose.yml, and that no other local MongoDB instance is already using that port.
Error: "cannot connect to MQTT broker"
Solution: Confirm the coreflux container is running and port 1883 is exposed, and double check the broker address and credentials in your MQTT client or LoT Notebook connection settings.
Error: Code runs but nothing appears in MongoDB
Solution: Verify that the mongo_route is deployed, that the connection string uses the mongodb hostname (when Coreflux runs inside Docker) with authSource=admin, and that the Model ends with the STORE IN clause.
Integrating Coreflux MQTT broker with MongoDB provides a powerful solution for real time IoT data processing and scalable storage. Following this tutorial, you have set up a seamless automation pipeline that allows you to collect, process, and store IoT data efficiently using low code development practices.
With Coreflux's scalable architecture and MongoDB's robust document storage capabilities, you can handle large volumes of real time data and gain valuable insights instantly. Whether you are monitoring industrial systems, tracking environmental sensors, or managing smart infrastructure, this IoT automation integration empowers you to make data driven decisions quickly and effectively.
The Language of Things (LoT) approach creates an IoT ready foundation that scales with your needs. Your MQTT broker deployment is now ready for production workloads and can be extended to support PostgreSQL, MySQL, OpenSearch, and other database technologies as your requirements evolve.
You've got a working pipeline. Now what?
Have questions? Use the Ask AI feature at docs.coreflux.org or drop a comment below.