A Recommended Markdown Tool for Writing Books — Possibly a GitBook Alternative

Introduction to mdBook

mdBook is a command-line tool written in Rust for creating books using Markdown. It’s ideal for crafting product or API documentation, tutorials, course materials, or any content requiring a clean, navigable, and customizable presentation. Functionally similar to GitBook, its greatest advantage lies in speed.

  • Lightweight, Markdown-based syntax
  • Built-in search functionality
  • Syntax highlighting
  • Multiple themes for customizing output appearance
  • Preprocessor support — transform Markdown content before it is rendered
  • Backend support for multiple output formats
  • Speed — built with Rust, performance is excellent
  • Even supports automated testing of Rust code

Installing mdBook

Since mdBook is developed in Rust, you need to install Rust first.
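With the Rust toolchain in place, mdBook itself installs through cargo. A typical sequence looks like the following (the book name `my-book` is just a placeholder):

```shell
# Install mdBook from crates.io (requires a Rust toolchain with cargo)
cargo install mdbook

# Scaffold a new book and build it; HTML output lands in ./book by default
mdbook init my-book
cd my-book
mdbook build

# Serve locally with live reload at http://localhost:3000
mdbook serve
```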

Some summaries and reflections on CDN construction

The ongoing COVID-19 pandemic has repeatedly disrupted daily life and work. Over the past half year, I’ve experienced a lot—family members fell seriously ill, my grandmother passed away, and there were numerous personal matters to handle, which led me to pause blogging for six months.

Amid economic downturns, many industries have begun layoffs and business scaling back. Taking advantage of recent free time, I’m summarizing my past experience with CDN services.

Resolve Nginx file upload limits and 504 gateway timeout in Kubernetes

Recently, two issues have kept recurring in the workloads running on our Kubernetes cluster. Here’s a record of the solutions:

  • Frontend page file upload limited to 1M
  • POST requests from frontend to backend timing out with a 504 error

Solution for the first issue:
By default, Nginx limits upload size to 1M. To resolve this, add the following configuration in the http, server, or location blocks of the Nginx config:
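A minimal sketch of the relevant directives — the 100m limit and 300s timeouts are illustrative values, not from the original post, and the `proxy_*` timeouts address the second (504) issue:

```nginx
http {
    # Raise the request-body limit from the 1M default
    client_max_body_size 100m;

    # Give slow upstream responses more time before Nginx returns 504
    proxy_connect_timeout 300s;
    proxy_send_timeout    300s;
    proxy_read_timeout    300s;
}
```

When the Nginx in question is the Kubernetes NGINX Ingress Controller, the same settings can be applied per-Ingress via annotations such as `nginx.ingress.kubernetes.io/proxy-body-size` and `nginx.ingress.kubernetes.io/proxy-read-timeout`.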

Alibaba Cloud ACK supports both public and internal SLB.

I. Background

  • You have an ACK cluster.
  • Nginx Ingress Controller has been successfully deployed and bound to a public-facing SLB.

Note: Kubernetes clusters created via the Alibaba Cloud Container Service console automatically deploy an Nginx Ingress Controller during initialization, which is default-mounted to a public SLB instance.

II. Configuration

1. Create an Internal SLB

In the Alibaba Cloud console, create an internal SLB and bind it to your VPC.

2. Configure Nginx Ingress Controller

# my-nginx-ingress-slb-intranet.yaml
# intranet nginx ingress slb service
apiVersion: v1
kind: Service
metadata:
  # Name the service as nginx-ingress-lb-intranet.
  name: nginx-ingress-lb-intranet
  namespace: kube-system
  labels:
    app: nginx-ingress-lb-intranet
  annotations:
    # Specify the SLB instance type as internal.
    service.beta.kubernetes.io/alicloud-loadbalancer-address-type: intranet
    # Replace with your internal SLB instance ID.
    service.beta.kubernetes.io/alicloud-loadbalancer-id: <YOUR_INTRANET_SLB_ID>
    # Whether to automatically create SLB port listeners (overrides existing ones); can also be configured manually.
    #service.beta.kubernetes.io/alicloud-loadbalancer-force-override-listeners: 'false'
spec:
  type: LoadBalancer
  # Cluster: traffic may be forwarded to pods on any node
  externalTrafficPolicy: "Cluster"
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    # Select pods with app=ingress-nginx
    app: ingress-nginx

Apply the service resource:
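Assuming the manifest above was saved as `my-nginx-ingress-slb-intranet.yaml`, applying and verifying it takes two kubectl calls:

```shell
# Create the intranet LoadBalancer service in kube-system
kubectl apply -f my-nginx-ingress-slb-intranet.yaml

# Verify that the service received the internal SLB address as EXTERNAL-IP
kubectl -n kube-system get svc nginx-ingress-lb-intranet
```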

Aggregating Prometheus Alert Messages Using Prometheus Alertmanager

Deploy PrometheusAlert

git clone https://github.com/feiyu563/PrometheusAlert.git
cd PrometheusAlert/example/helm/prometheusalert
# Update config/app.conf to set login user info and database configuration
helm install -n monitoring .

Create a WeChat Work Group Robot

After creating a WeChat Work group, right-click the group → “Add Group Robot”. This will generate a webhook URL for the robot. Record this URL for later use.
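The webhook can be smoke-tested with curl before wiring it into PrometheusAlert; `<YOUR_ROBOT_KEY>` below is a placeholder for the key in the URL generated for your robot:

```shell
# Send a test text message to the WeChat Work group robot
curl -s 'https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=<YOUR_ROBOT_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{"msgtype": "text", "text": {"content": "alertmanager webhook test"}}'
```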

Develop a Kubernetes cluster backup strategy

Backups are a task every internet company’s technical team has to take on, and we are no exception. Today, I’ll share my own strategies for backing up production Kubernetes clusters.

My primary goals for Kubernetes backups are to prevent:

  • Accidental deletion of a namespace within the cluster
  • Accidental misconfiguration causing resource anomalies (e.g., deployments, configmaps)
  • Accidental deletion of partial resources in the cluster
  • Loss of etcd data

Backing Up etcd

Backing up etcd guards against cluster-level catastrophes and loss of etcd data, either of which could render the entire cluster unusable. In such cases, only a full cluster restore can bring services back.
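A typical snapshot taken with etcdctl v3 on a control-plane node might look like this; the certificate paths assume a kubeadm-style layout and may differ in your cluster:

```shell
# Take a point-in-time snapshot of etcd (paths assume a kubeadm-style install)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-$(date +%Y%m%d).db

# Sanity-check the snapshot file
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-$(date +%Y%m%d).db
```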

How to quickly set up a Greenplum cluster

Recently, our internal project has been supporting a big data initiative, requiring the simulation of customer scenarios using Greenplum (older version 4.2.2.4). Below is a record of the Greenplum cluster setup process—note that the procedure for higher versions of GP remains largely identical.

Building Base Image

CentOS 6 Dockerfile:

FROM centos:6

RUN mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
RUN curl -o /etc/yum.repos.d/CentOS-Base.repo https://www.xmpan.com/Centos-6-Vault-Aliyun.repo
RUN yum -y update; yum clean all
RUN yum install -y \
    net-tools \
    ntp \
    openssh-server \
    openssh-clients \
    less \
    iproute \
    lsof \
    wget \
    ed \
    which; yum clean all
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
RUN groupadd gpadmin
RUN useradd gpadmin -g gpadmin
RUN echo gpadmin | passwd gpadmin --stdin
ENTRYPOINT ["/usr/sbin/sshd", "-D"]

Build image:
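Assuming the Dockerfile above sits in the current directory, the build and a quick run look like this (the image tag is arbitrary):

```shell
# Build the CentOS 6 base image for the Greenplum nodes
docker build -t gpdb-base:centos6 .

# Start a container from it; sshd is the entrypoint, so it runs in the foreground
docker run -d --name gpdb-test gpdb-base:centos6
```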

Alibaba Cloud Shared GPU Solution Testing

I. Deploy GPU Sharing Plugin in Kubernetes

Before deployment, ensure that nvidia-driver and nvidia-docker are installed on your Kubernetes nodes, and Docker’s default runtime has been set to nvidia.

# cat /etc/docker/daemon.json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}

1. Install gpushare-device-plugin via Helm

$ git clone https://github.com/AliyunContainerService/gpushare-scheduler-extender.git
$ cd gpushare-scheduler-extender/deployer/chart
$ helm install --name gpushare --namespace kube-system --set masterCount=3 gpushare-installer

2. Label GPU Nodes

$ kubectl label node sd-cluster-04 gpushare=true
$ kubectl label node sd-cluster-05 gpushare=true

3. Install kubectl-inspect-gpushare

Ensure kubectl is already installed (omitted here).
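The inspect tool is a single static binary published on the gpushare-device-plugin releases page; the version tag below (v0.3.0) is the one current at the time of writing and may need updating:

```shell
# Download the kubectl plugin binary into PATH and make it executable
cd /usr/bin/
wget https://github.com/AliyunContainerService/gpushare-device-plugin/releases/download/v0.3.0/kubectl-inspect-gpushare
chmod u+x kubectl-inspect-gpushare

# Show per-node GPU memory allocation for shared GPUs
kubectl inspect gpushare
```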