GitHub Actions Automatically Deploys Hugo to GitHub Pages

Preface

I recently planned to upgrade my blog's theme, and since I had been playing around with GitHub Actions, I decided to use it for CI/CD: after I write and push an article, the static files generated by Hugo are automatically deployed to GitHub Pages, Netlify, Vercel, Cloudflare Pages, or other third-party platforms. Today I will record the automatic deployment process for GitHub Pages, which saves a lot of trouble.
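As a sketch of the idea, a minimal workflow might look like the following. This is an illustrative example, not necessarily the exact workflow I used; it assumes the community `peaceiris/actions-hugo` and `peaceiris/actions-gh-pages` actions and a `main` branch:

```yaml
name: Deploy Hugo to GitHub Pages

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true  # themes are often git submodules

      - uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: 'latest'
          extended: true

      - run: hugo --minify

      - uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```

On every push to `main`, the workflow builds the site and publishes the generated `public/` directory to the `gh-pages` branch, which GitHub Pages then serves.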

One Week to CKA: Experience Sharing

I had long wanted to obtain a cloud-native/Kubernetes certification such as the CKA, and the sooner the better. Work kept me postponing it until this year, during which I watched the exam price increase twice (a bit painful to mention).

Recently, I finally had a free week to prepare for the exam full-time. Since I already use Kubernetes in production at work, my plan was to absorb the exam topics and work through plenty of practice questions beforehand.

Open Source WAF Security Protection Solution

Recently, after putting the website behind a CDN, I noticed a lot of junk requests in the early morning hours: some are vulnerability scans, some carry large-model User-Agents, and some are malicious spiders.

To save on CDN costs and block various injection attacks, I started researching open-source WAF solutions (more than sufficient for my small site). For enterprise use, a commercial offering is still recommended, such as Alibaba Cloud's DCDN, Tencent's EdgeOne, or, for overseas business, Cloudflare.

Using Ollama and Open-WebUI to Play with Open-Source Large Models

I have been using large models for over a year now, from the early ChatGPT 3.5/4 to today's increasingly powerful domestic and open-source models such as Llama and Stable Diffusion. Today I will introduce two tools I have used for a long time: Ollama and Open-WebUI.

Ollama

Ollama is an open-source tool designed for conveniently deploying and running large language models (LLMs) on a local machine. It handles model downloading, management, and serving through a simple command line and a local REST API.
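To illustrate, a running Ollama server can be queried over its REST API with nothing but the standard library. This is a minimal sketch assuming the default port (11434) and a model you have already pulled (the model name `llama3` here is just an example):

```python
import json
import urllib.request

# Ollama's local REST API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a completion request to a locally running Ollama server."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires `ollama serve` running and e.g. `ollama pull llama3`):
#   print(generate("llama3", "Why is the sky blue?"))
```

Open-WebUI then provides a browser chat interface on top of the same API, so you rarely need to call it by hand.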

Generate a Sitemap with a Python Script

Recently, I was helping a friend with a CMS cluster and found that the CMS lacked a sitemap feature. Since Python 3 was already installed on the server, I decided to generate the sitemap with a quick script; here is a record of it.

Script Content:

#!/usr/bin/env python

import datetime
import xml.etree.ElementTree as ET

import mysql.connector

# Database connection parameters
config = {
    'host': '10.80.0.3',
    'user': 'wnote_r',
    'password': 'Wnote#Pss2024',
    'database': 'wnote',
    'raise_on_warnings': True
}

# Root directory for the generated sitemaps
sdir = '/opt/wwwroot/'

def mselect(site, spfile, n1, n2):
    # Connect to the database
    cnx = mysql.connector.connect(**config)
    cursor = cnx.cursor()

    # Query the most recent article IDs (parameterized to avoid SQL injection)
    query1 = "SELECT id FROM article ORDER BY newstime DESC LIMIT %s"
    cursor.execute(query1, (n1,))
    article_ids = [row[0] for row in cursor]

    # Query the most recent tag IDs
    query2 = "SELECT id FROM phome_ecms_book ORDER BY newstime DESC LIMIT %s"
    cursor.execute(query2, (n2,))
    tag_ids = [row[0] for row in cursor]

    # Build and merge the URL lists
    article_urls = [f"https://{site}/article/{id}.html" for id in article_ids]
    tag_urls = [f"https://{site}/tags/{id}.html" for id in tag_ids]
    urls = article_urls + tag_urls

    # Create the XML tree structure for the sitemap
    root = ET.Element('urlset',
                      {'xmlns': 'http://www.sitemaps.org/schemas/sitemap/0.9',
                       'xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance',
                       'xmlns:mobile': 'http://www.baidu.com/schemas/sitemap-mobile/1/',
                       'xsi:schemaLocation': 'http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd'})

    # Build a <url> element for each URL
    lastmod = datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S+08:00')
    for url in urls:
        s_url = ET.SubElement(root, 'url')
        ET.SubElement(s_url, 'loc').text = url
        ET.SubElement(s_url, 'lastmod').text = lastmod
        ET.SubElement(s_url, 'changefreq').text = 'always'
        ET.SubElement(s_url, 'priority').text = '0.95'

    # Write the sitemap file with an XML declaration
    ET.ElementTree(root).write(spfile, encoding='utf-8', xml_declaration=True)

    # Close the database connection
    cursor.close()
    cnx.close()

# Each line of sites.list: <domain> <sitemap filename> <article limit> <tag limit>
with open('sites.list', 'r') as files:
    for line in files:
        fields = line.split()
        if len(fields) < 4:
            continue
        site, spname = fields[0], fields[1]
        n1, n2 = int(fields[2]), int(fields[3])
        spfile = sdir + site + '/' + spname
        print(site, spfile)
        mselect(site, spfile, n1, n2)

The content of sites.list is:
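The original file contents were not preserved here, but from the parsing code, each line must carry four whitespace-separated fields: the site domain, the sitemap filename, and the two `LIMIT` values for articles and tags. A purely hypothetical example line (the domain and numbers are made up):

```python
# Hypothetical sites.list line: <domain> <sitemap filename> <article limit> <tag limit>
line = "example.com sitemap.xml 5000 1000"

site, spname, n1, n2 = line.split()
print(site, spname, int(n1), int(n2))  # → example.com sitemap.xml 5000 1000
```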

A Powerful Tool for Operating Baidu Cloud on Linux

Recently, I was doing a private deployment of our company's SaaS product at a customer site. Because the customer's network cannot reach the internet, data can only be transferred through a Linux jump server. Besides the Kubernetes offline deployment packages and images, there are several hundred gigabytes of pre-cut video data, which could only be delivered via Baidu Netdisk. I wondered whether Baidu Netdisk data could be synchronized from the command line; a quick Google search showed that it can. Below is a brief introduction to bypy.

Recommended Windows Package Management Tool

Having grown used to the convenience of installing packages with Homebrew on macOS, I recently set up a Windows system on my work computer and wanted a similar experience. The solution is Scoop, which I am recommending today.

Scoop is an open-source project that installs Windows software packages from the command line. It avoids permission pop-ups, skips GUI installation wizards, and automatically finds and installs dependencies, fully automating the installation process.

Achieving Multi-User Access on a Single GPU Card Using Alibaba Cloud's Open Source Solution

A new AI project was recently launched, primarily providing online AI experiments for universities. The project purchased a GPU server, but it has only one NVIDIA Tesla T4 card, which needs to support multiple students running experiments online simultaneously.

The current online experiment system runs on Kubernetes, so we need to consider GPU sharing in the k8s environment. We have previously tested the Alibaba Cloud GPU card sharing solution; here, I will just record the steps for using it: