Doris Manager Unified Deployment and Multi-Cluster Management
With business expansion and the diversification of product forms, we need to deploy multiple offline and real-time data warehouse clusters. The previous Ansible Playbook approach to deploying Doris FE/BE was largely sufficient, but it fell short for subsequent Doris version upgrades, scaling, visual management, and onboarding junior operations staff. This document was therefore written for quick future reference.
1. Overview
1.1 Introduction to Doris Manager
Doris Manager is a graphical operation and maintenance tool launched by the Apache Doris community for unified deployment, management, and monitoring of Doris clusters. It simplifies the entire process from operating system initialization and component deployment to daily operation and maintenance management, making it particularly suitable for rapidly setting up and maintaining large-scale Doris clusters in production environments.
As of this writing, the official open-source repository is no longer maintained. SelectDB, as a commercial company, has evolved Doris Manager into a free but closed-source operation and maintenance tool; it can still be downloaded and used.
1.2 Main Features
- Cluster Deployment: Automated deployment of FE, BE, Broker, and other components.
- Node Management: Unified management of all nodes, supporting scaling out and scaling in.
- Configuration Management: Centralized configuration management and parameter adjustment.
- Monitoring and Alerting: Cluster status monitoring and alert notifications.
- Log Viewing: Centralized log viewing and analysis.
2. Prerequisites
2.1 Environment Planning
Before deployment, please confirm the following information:
| Planning Item | Description | Example |
|---|---|---|
| Node Planning | Determine FE and BE node IPs and quantities | FE: 3 nodes, BE: 10 nodes |
| Storage Planning | Plan data storage directories and disk types | /data/doris (ext4, noatime) |
| Network Planning | Confirm network connectivity between nodes and open ports | 9030, 9031, 8060, 9000+ |
| Timezone Planning | Configure timezone based on the business country | Africa/Abidjan (Côte d’Ivoire) |
| Domain Planning | Manager access domain | doris-manager.example.com |
2.2 Software Preparation
Official download address: https://selectdb.com/download/enterprise#manager
Download the packages to the company software repository in advance (they must first be downloaded from the SelectDB official website):
Repository Address: https://repo.test.com/bigdata/
| Software Package | Version | Description |
|---|---|---|
| doris-manager | 24.1.5-x64-bin | Doris Manager Management Tool |
| apache-doris | 2.1.8.1-bin-x64 | Apache Doris Community Edition |
Download Commands:
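A sketch of the download step, assuming the internal repository serves the files under the path shown above (file names taken from the version table; verify against the actual repository listing):

```shell
# Download Doris Manager and Apache Doris from the internal repository.
# URLs assume the repository layout above; adjust paths as needed.
wget https://repo.test.com/bigdata/doris-manager-24.1.5-x64-bin.tar.gz
wget https://repo.test.com/bigdata/apache-doris-2.1.8.1-bin-x64.tar.gz
```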
2.3 System Requirements
| Component | Minimum Configuration | Recommended Configuration |
|---|---|---|
| Operating System | CentOS 7+ / Ubuntu 18.04+ | CentOS 7.9 / Ubuntu 20.04 |
| CPU | 4 Cores | 8 Cores+ |
| Memory | 8GB | 16GB+ |
| Disk | 100GB | SSD 500GB+ |
| Network | Gigabit NIC | 10 Gigabit NIC |
3. System Initialization and Optimization
Before deploying Doris Manager and the Doris cluster, system initialization configuration is required for all nodes.
3.1 Data Disk Initialization
Important: Production environments must use an independent data disk; do not share it with the system disk.
Manual initialization (single node)
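A minimal sketch of initializing a data disk, assuming the device is `/dev/vdb` (a hypothetical device name; confirm with `lsblk` first):

```shell
# WARNING: this destroys any data on /dev/vdb; confirm the device with lsblk first.
mkfs.ext4 /dev/vdb                      # format the data disk as ext4
mkdir -p /data/doris                    # create the mount point
mount -o noatime /dev/vdb /data/doris   # mount with noatime, per the storage planning table
# Persist the mount across reboots
echo '/dev/vdb /data/doris ext4 defaults,noatime 0 0' >> /etc/fstab
```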
3.2 Disable Swap Partition
Reason: Under high memory pressure, swap pushes Doris memory pages to disk, causing GC pauses, query stalls, or even nodes that appear dead while still running, severely degrading performance.
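The standard way to disable swap looks like this:

```shell
swapoff -a                                    # disable swap immediately
sed -i '/\sswap\s/s/^/#/' /etc/fstab          # comment out swap entries so it stays off after reboot
echo 'vm.swappiness = 0' >> /etc/sysctl.conf  # discourage swapping even if swap is ever re-enabled
```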
3.3 Optimize Kernel Parameters
3.3.1 Increase Virtual Memory Areas (VMA)
Reason: Doris maps a large number of files to memory, requiring sufficient virtual memory areas.
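A typical setting (2000000 is a commonly recommended value for Doris BE; adjust if your version's documentation says otherwise):

```shell
sysctl -w vm.max_map_count=2000000                     # apply immediately
echo 'vm.max_map_count = 2000000' >> /etc/sysctl.conf  # persist across reboots
```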
3.3.2 Optimize Network Parameters (Optional)
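An illustrative set of network tunings; the values below are common starting points, not requirements from Doris, so benchmark before adopting them:

```shell
cat >> /etc/sysctl.conf <<'EOF'
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_tw_reuse = 1
EOF
sysctl -p   # reload kernel parameters
```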
3.4 Increase File Handle Limits
Reason: Doris BE opens a large number of files (data shards, indexes, logs, etc.), so the file descriptor limit must be increased.
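A sketch of raising the limit via `limits.conf` (655350 is a value often used for Doris BE; any sufficiently large value works):

```shell
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655350
* hard nofile 655350
EOF
# Takes effect on the next login session / reboot; verify with: ulimit -n
```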
3.5 Apply All Configurations and Reboot
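Applying everything and verifying after the reboot might look like:

```shell
sysctl -p   # load the new kernel parameters
reboot      # reboot so limits.conf and fstab changes take full effect

# After the node comes back, verify:
ulimit -n                       # expect the raised file handle limit
cat /proc/sys/vm/max_map_count  # expect the raised VMA limit
free -h | grep -i swap          # swap should show 0
```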
In actual deployment, you can customize virtual machine hosts or cloud host images in advance to include basic initialization operations.
4. Deploy Doris Manager
4.1 Extraction and Installation
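A sketch of the extraction step, assuming the `/opt/doris` install prefix used in the appendix; the extracted directory name is an assumption, so check it after unpacking:

```shell
mkdir -p /opt/doris
tar -xzf doris-manager-24.1.5-x64-bin.tar.gz -C /opt/doris
# The extracted directory name below is hypothetical; rename whatever tar produced.
mv /opt/doris/doris-manager-24.1.5-x64-bin /opt/doris/manager
```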
4.2 Start Doris Manager
Startup Verification: Service listens on port 8004
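The start script name below is an assumption (check `webserver/bin/` in your package for the actual name); the port check matches the 8004 listed above:

```shell
cd /opt/doris/manager
./webserver/bin/start.sh    # hypothetical script name; see webserver/bin/ for the real one
ss -lntp | grep 8004        # confirm the service is listening on port 8004
```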
4.3 Configure Auto-start on Boot (Recommended)
Use systemd to manage the Doris Manager service, ensuring automatic startup after node restarts.
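A sketch of a systemd unit; the start/stop script paths are assumptions based on the directory layout in the appendix:

```shell
cat > /etc/systemd/system/doris-manager.service <<'EOF'
[Unit]
Description=Doris Manager
After=network.target

[Service]
Type=forking
# Script paths below are assumptions; check webserver/bin/ for the actual names.
ExecStart=/opt/doris/manager/webserver/bin/start.sh
ExecStop=/opt/doris/manager/webserver/bin/stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now doris-manager
```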
4.4 Access Manager UI
Access the Doris Manager management interface via a browser:
http://{manager-ip}:8004
First Access Steps:
- Enter the initialization user page to create the first Manager admin user.
- The Manager admin account is independent of cluster accounts and is only used for Manager permission control.
- Log in with that account after successful creation.
5. Configure Nginx Reverse Proxy
In production environments, it is recommended to configure domain names and HTTPS access via Nginx.
5.1 Nginx Configuration File
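A sketch of the reverse proxy configuration, using the domain from this document; the certificate paths are hypothetical and must be replaced with your own:

```shell
cat > /etc/nginx/conf.d/doris-manager.conf <<'EOF'
server {
    listen 443 ssl;
    server_name ug-test-doris-manager.test.com;

    # Hypothetical certificate paths; replace with your own.
    ssl_certificate     /etc/nginx/ssl/test.com.pem;
    ssl_certificate_key /etc/nginx/ssl/test.com.key;

    location / {
        proxy_pass http://127.0.0.1:8004;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name ug-test-doris-manager.test.com;
    return 301 https://$host$request_uri;   # redirect HTTP to HTTPS
}
EOF
```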
5.2 Restart Nginx
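Validate the configuration before reloading:

```shell
nginx -t && systemctl reload nginx   # test the config, then reload without dropping connections
```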
Now you can access Doris Manager via the domain name:
https://ug-test-doris-manager.test.com
6. Initialize Manager Configuration
6.1 Create Admin Account
1. Access the UI: open Doris Manager via the configured domain or IP.
2. Create the admin account:
   - Set a username and password.
   - Service configuration: since outbound network policies may not be open, it is recommended to turn off email alerts at first.
   - Click to start the `doris-manager` service.
3. Enter the management console: after successful installation, the main interface appears.

6.2 Upload Doris Installation Packages
Upload the Doris installation package on the Manager interface:
apache-doris-2.1.8.1-bin-x64.tar.gz
Or use the command line to place in the specified directory:
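A sketch of placing the package by hand; the destination directory is an assumption (check the Manager UI or documentation for the actual package directory):

```shell
# Destination directory is hypothetical; verify against your Manager installation.
cp apache-doris-2.1.8.1-bin-x64.tar.gz /opt/doris/manager/webserver/packages/
```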
7. Deploy Doris Agent
The Agent is Doris Manager's proxy process on each node, responsible for executing deployment and management tasks.
7.1 Get Installation Command
On the Manager interface:
1. Go to “Cluster Management” -> “Agent Management”.
2. Click the “Add Agent” button.
3. Copy the installation command (usually two lines).
Example Command:
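The real command is generated by the Manager UI and includes node-specific parameters; the shape is roughly as follows (paths and flags here are hypothetical, for illustration only):

```shell
# Illustrative only: always copy the real two-line command from the Manager UI.
curl -O http://<manager-ip>:8004/agent/manager-agent.tar.gz   # hypothetical download path
tar -xzf manager-agent.tar.gz -C /root                        # unpack to the Agent working directory
```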
Note: Replace the IP address with the actual Manager service IP.
7.2 Execute Installation on Target Nodes
Log in to each target node (FE, BE nodes) and execute the above Agent installation command:


7.3 Configure Agent Auto-start on Boot
Create a systemd service file to manage Agent:
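A sketch of the Agent unit, using the `/root/manager-agent` working directory from the appendix; the script names are assumptions:

```shell
cat > /etc/systemd/system/doris-agent.service <<'EOF'
[Unit]
Description=Doris Manager Agent
After=network.target

[Service]
Type=forking
# Script names below are assumptions; check /root/manager-agent/bin/ for the actual names.
ExecStart=/root/manager-agent/bin/agent_start.sh
ExecStop=/root/manager-agent/bin/agent_stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now doris-agent
```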
7.4 Manually Create Data Directory
Manually create the data storage directory on all nodes:
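Creating the directories to match the layout in the appendix:

```shell
mkdir -p /data/doris/fe/meta /data/doris/be/storage   # matches the appendix layout
chown -R root:root /data/doris                        # adjust the owner to the user running FE/BE
```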

7.5 Verify Agent in Manager Interface
On the “Agent Management” page of the Manager interface, you should see the installed Agent nodes with a status of “Online”.
8. Create Doris Cluster
8.1 Start Creating Cluster
1. Enter Cluster Management: click “Cluster Management” on the Manager main interface.
2. Click “New Cluster” to start creating a new Doris cluster.
3. Fill in the cluster configuration:
   - Cluster Name: e.g., `prod-cluster`
   - Cluster Version: select `apache-doris-2.1.8.1`
   - Installation Package: choose from the uploaded packages.
8.2 Add FE Nodes
1. Add FE nodes:
   - Select FE nodes from the list of nodes with the Agent installed.
   - At least 3 FE nodes are recommended to form a high-availability cluster.
   - Select the FE role (Follower or Observer).
2. Configure FE parameters:
   - Ports: http_port=8030, query_port=9030, edit_log_port=9010
   - Memory allocation: configure according to the node memory size.
8.3 Add BE Nodes
1. Add BE nodes:
   - Select BE nodes from the list of nodes with the Agent installed.
   - Choose an appropriate number of BE nodes for the business workload.
2. Configure BE parameters:
   - Ports: be_port=9060, webserver_port=8040
   - Data directory: `/data/doris`
   - Memory allocation: reserve memory for the system and allocate 70-80% of the remainder to BE.
8.4 Environment Detection
Before starting deployment, Manager will automatically perform environment detection:
- System parameter checks (file handles, VMA, swap, etc.)
- Disk space checks
- Network connectivity checks
- Port occupation checks
Wait for all detection items to pass, then click “Next” to continue deployment.

If there are failed detection items, please fix them according to the prompts and re-detect.
9. Cluster Deployment and Configuration
9.1 System Initialization Configuration
Before deploying the cluster, system parameter initialization configuration is required:


Execute the system parameter adjustments from Chapter 3 on all nodes: modify the system limits (limits.conf), apply the kernel parameters, and reboot.
Start Agent After Reboot: After the node restart is complete, you must manually start the Agent service.
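Starting the Agent after the reboot might look like this (the direct script name is an assumption):

```shell
systemctl start doris-agent            # if managed by systemd (see 7.3)
# Or run the start script directly (name is an assumption):
/root/manager-agent/bin/agent_start.sh
```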
9.2 Deploy Cluster
1. Cluster Settings:
   - Table name case sensitivity: “Insensitive” is recommended, for easier application compatibility.
   - Confirm all configurations are correct.
2. Start Deployment: click the “Deploy Cluster” button.
3. Monitor Deployment Progress:
   - Manager automatically deploys the FE and BE components on each node.
   - Real-time deployment logs are shown in the interface.
   - Deployment usually takes 5-10 minutes.
4. Deployment Complete:
   - All component statuses show “Running”.
   - Open the cluster details page to view the cluster overview.

9.3 Configure Timezone (Important)
Business Scenario: To meet the business needs of different countries, the cluster timezone needs to be adjusted according to the deployment region.





Timezone Mapping Table:
| Country/Region | Timezone Configuration | Example |
|---|---|---|
| China (UTC+8) | Asia/Shanghai | Mainland China |
| Côte d’Ivoire (West Africa) | Africa/Abidjan | CB/CDI Project |
| Uganda (East Africa) | Africa/Kampala | Uganda Project |
| UTC Standard Time | UTC | International default |
9.3.1 Modify FE Timezone
On the Manager interface:
1. Go to “Cluster Management” -> select the cluster -> “Configuration Management”.
2. Find the FE configuration file `fe.conf`.
3. Add or modify the timezone parameter.
4. Save the configuration and restart the FE nodes.
9.3.2 Modify BE Timezone
On the Manager interface:
1. In “Configuration Management”, find the BE configuration file `be.conf`.
2. Add or modify the timezone parameter.
3. Save the configuration and restart the BE nodes.
9.3.3 Verify Timezone Configuration
Method 1: Verify via MySQL Client
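Connecting through any FE's MySQL protocol port (9030, per the port table) and checking the variable might look like:

```shell
# <fe-ip> is a placeholder for one of your FE node addresses.
mysql -h <fe-ip> -P 9030 -uroot -p -e "SHOW VARIABLES LIKE 'time_zone';"
```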
Expected Output:
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| time_zone     | AAI   |
+---------------+-------+
(AAI: abbreviation for Africa/Abidjan)
Method 2: Verify via Logs
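Grepping the BE log for the timezone it picked up at startup, assuming the log path from the appendix:

```shell
grep -i 'time.zone' /opt/doris/be/log/be.INFO | tail -n 5   # recent timezone-related log lines
```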
10. Cluster Verification and Testing
10.1 Check Cluster Status
View in the Manager interface:
- FE Node Status: Should be “ALIVE”
- BE Node Status: Should be “ALIVE”
- Cluster Health Status: Should be “Healthy”
10.2 Verify via Command Line
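The node states can be checked with Doris's own SQL commands over the MySQL protocol:

```shell
# Alive should be true for every FE and BE.
mysql -h <fe-ip> -P 9030 -uroot -p -e "SHOW FRONTENDS\G"
mysql -h <fe-ip> -P 9030 -uroot -p -e "SHOW BACKENDS\G"
```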
10.3 Verify Timezone
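A quick sanity check that the cluster clock matches the configured business timezone:

```shell
mysql -h <fe-ip> -P 9030 -uroot -p -e "SELECT NOW(); SHOW VARIABLES LIKE 'time_zone';"
```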
11. Troubleshooting
11.1 Agent Cannot Connect to Manager
Symptom: Manager interface shows Agent as offline.
Troubleshooting Steps:
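A sketch of the checks, assuming the Agent working directory from the appendix:

```shell
ps -ef | grep manager-agent                                        # 1. is the Agent process running?
curl -s -o /dev/null -w '%{http_code}\n' http://<manager-ip>:8004  # 2. can this node reach Manager?
tail -n 50 /root/manager-agent/log/*.log                           # 3. check Agent logs for errors
```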
11.2 FE Node Fails to Start
Common Causes:
- Port occupied
- Metadata directory permission issues
- JAVA_HOME not configured
Troubleshooting Steps:
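A sketch of checks matching the common causes above, with paths from the appendix:

```shell
ss -lntp | grep -E '8030|9030|9010'    # are the FE ports already occupied?
ls -ld /data/doris/fe/meta             # does the metadata directory exist, with correct permissions?
java -version                          # is a JDK available (JAVA_HOME configured)?
tail -n 100 /opt/doris/fe/log/fe.log   # look for the actual startup error
```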
11.3 BE Node Fails to Start
Common Causes:
- Data disk not mounted
- Memory configuration too large
- System parameters not effective
Troubleshooting Steps:
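A sketch of checks matching the common causes above, with paths from the appendix:

```shell
df -h /data/doris                      # is the data disk mounted, with enough space?
cat /proc/sys/vm/max_map_count         # did the kernel parameter from section 3.3 take effect?
free -h                                # compare available memory with mem_limit in be.conf
tail -n 100 /opt/doris/be/log/be.out   # BE startup errors usually land here
```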
11.4 Timezone Configuration Not Taking Effect
Troubleshooting Steps:
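A sketch of the checks, assuming the config paths from the appendix:

```shell
grep -i zone /opt/doris/fe/conf/fe.conf /opt/doris/be/conf/be.conf        # is the parameter present?
mysql -h <fe-ip> -P 9030 -uroot -p -e "SHOW VARIABLES LIKE 'time_zone';"  # what does the cluster report?
# If the value is stale, restart the components after changing the configuration.
```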
12. Operations Management
12.1 Cluster Scaling Out
Add BE Nodes:
- Deploy Agent on the new node (refer to Chapter 7).
- In Manager interface, select Cluster -> “Node Management” -> “Add Node”.
- Select the BE nodes to add.
- Confirm to automatically deploy and start BE.
Add FE Nodes:
- Deploy Agent on the new node.
- Add FE Follower node in Manager interface.
- Wait for FE to complete data synchronization and join the cluster.
12.2 Cluster Scaling In
Delete BE Nodes:
- Select the BE node to delete in Manager interface.
- Click the “Delete” button.
- The system automatically migrates data off the node (duration depends on data volume).
- The node will automatically go offline after data migration is complete.
Delete FE Nodes:
- Deleting FE Follower nodes is not recommended in production environments.
- If necessary, please ensure the cluster has at least 3 FE nodes first.
12.3 Configuration Modification
You can easily modify cluster configurations in the Manager interface:
- Go to “Cluster Management” -> Select Cluster -> “Configuration Management”.
- Select the configuration file to modify (fe.conf or be.conf).
- Save after modifying parameters.
- Click “Apply Configuration” to restart the corresponding components.
Common Configuration Adjustments:
| Configuration Item | Location | Description |
|---|---|---|
| mem_limit | be.conf | BE memory limit |
| max_compaction_threads | be.conf | Maximum compaction threads |
| max_permissive_result_mem | fe.conf | Maximum memory for single query |
| enable_fuzzy_mode | fe.conf | Whether to enable fuzzy mode |
12.4 Monitoring and Alerting
Cluster Monitoring Metrics:
- CPU usage
- Memory usage
- Disk space usage
- Query QPS
- Query response time
Alert Configuration: Configure alert rules in the Manager interface:
- Go to “Cluster Management” -> “Alert Management”.
- Create alert rules (e.g., disk usage > 80%).
- Configure alert receiving methods (Email, DingTalk, WeCom).
13. Appendix
13.1 Port Mapping Table
| Component | Port | Description |
|---|---|---|
| Manager WebServer | 8004 | Manager Management Interface |
| FE http_port | 8030 | FE HTTP Service |
| FE query_port | 9030 | FE MySQL Protocol Port |
| FE edit_log_port | 9010 | FE Metadata Synchronization Port |
| BE be_port | 9060 | BE Communication Port |
| BE webserver_port | 8040 | BE Web Service |
13.2 Directory Structure
```
/opt/doris/
├── manager/              # Doris Manager
│   ├── webserver/
│   │   ├── bin/
│   │   └── log/
│   └── ...
├── fe/                   # Frontend
│   ├── bin/
│   ├── conf/             # Configuration files
│   │   ├── fe.conf
│   │   └── fe_custom.conf
│   ├── log/              # Log directory
│   │   ├── fe.log
│   │   └── fe.warn.log
│   └── ...
├── be/                   # Backend
│   ├── bin/
│   ├── conf/             # Configuration files
│   │   ├── be.conf
│   │   └── be_custom.conf
│   ├── log/              # Log directory
│   │   ├── be.INFO
│   │   └── be.out
│   └── ...
└── ...

/root/manager-agent/      # Agent working directory
├── bin/
├── conf/
└── log/

/data/doris/              # Data storage directory
├── fe/                   # FE data
│   ├── meta/
│   └── ...
└── be/                   # BE data
    ├── storage/
    └── ...
```
At this point, the deployment of Doris Manager and the setup of the Doris cluster are complete!
