朱治龙 (Zhu Zhilong) — 139 posts written, 7 comments received
Found 46 posts tagged 运维 (Ops)
2024-11-22
Configuring email delivery for a Dockerized GitLab
### Background

As an old-school programmer I occasionally take on private projects, and many of them involve code that should not live on hosted services such as GitLab.com or Gitee, so I run my own git service on my development server. There are plenty of options — Gitea, Gogs, GitLab — but since my company uses GitLab and GitLab is powerful enough for anything I need, I chose it for the server side. GitLab operates in China through its local company JiHu (极狐, https://gitlab.cn/), so I deployed the JiHu edition. The full docker-compose.yml:

```yaml
services:
  gitlab:
    image: 'registry.gitlab.cn/omnibus/gitlab-jh:latest'
    # image: 'registry.gitlab.cn/omnibus/gitlab-jh:16.11.3'
    # image: 'registry.gitlab.cn/omnibus/gitlab-jh:16.7.7'
    restart: always
    container_name: gitlab
    hostname: 'git.work.zhuzhilong.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://git.work.zhuzhilong.com'
        # Add any other gitlab.rb configuration here, each on its own line
        alertmanager['enable'] = false
    networks:
      - net-zzl
    ports:
      - '8007:80'
      - '2223:22'
    volumes:
      - './config:/etc/gitlab'
      - './logs:/var/log/gitlab'
      - './data:/var/opt/gitlab'
      - ../hosts:/etc/hosts
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    shm_size: '256m'

networks:
  net-zzl:
    name: bridge_zzl
    external: true
```

Development here involves several collaborators, so GitLab's built-in email notifications are well worth having. Let's hook the instance up to a mail server. The steps:

1. Enter the docker container:

```shell
docker exec -it gitlab /bin/bash
```

2. Edit the /etc/gitlab/gitlab.rb file.

> Note: to be safe, back up GitLab before changing the configuration: `gitlab-rake gitlab:backup:create`

The settings related to mail delivery are:

```ruby
### GitLab email server settings
###! Docs: https://docs.gitlab.com/omnibus/settings/smtp.html
###! **Use smtp instead of sendmail/postfix.**
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.feishu.cn"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = "sender@zhuzhilong.com"
gitlab_rails['smtp_password'] = "xxxxxxxx"
gitlab_rails['smtp_domain'] = "mail.feishu.cn"
gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true
gitlab_rails['smtp_pool'] = false

###! **Can be: 'none', 'peer', 'client_once', 'fail_if_no_peer_cert'**
###! Docs: http://api.rubyonrails.org/classes/ActionMailer/Base.html
# gitlab_rails['smtp_openssl_verify_mode'] = 'none'
# gitlab_rails['smtp_ca_path'] = "/etc/ssl/certs"
# gitlab_rails['smtp_ca_file'] = "/etc/ssl/certs/ca-certificates.crt"

### Email Settings
gitlab_rails['gitlab_email_enabled'] = true
##! If your SMTP server does not like the default 'From: gitlab@gitlab.example.com'
##! can change the 'From' with this setting.
gitlab_rails['gitlab_email_from'] = 'xxxx@zhuzhilong.com'
gitlab_rails['gitlab_email_display_name'] = '朱治龙git'
gitlab_rails['gitlab_email_reply_to'] = 'reply@zhuzhilong.com'
# gitlab_rails['gitlab_email_subject_suffix'] = ''
# gitlab_rails['gitlab_email_smime_enabled'] = false
# gitlab_rails['gitlab_email_smime_key_file'] = '/etc/gitlab/ssl/gitlab_smime.key'
# gitlab_rails['gitlab_email_smime_cert_file'] = '/etc/gitlab/ssl/gitlab_smime.crt'
# gitlab_rails['gitlab_email_smime_ca_certs_file'] = '/etc/gitlab/ssl/gitlab_smime_cas.crt'
```

3. Apply the configuration and restart the services:

```shell
gitlab-ctl reconfigure && gitlab-ctl restart
```

4. Verify delivery: under Profile → Emails, add a new email address. If that mailbox receives the verification email, the configuration works.
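A detail that is easy to get wrong in gitlab.rb is the pairing of port and TLS mode: port 465 is implicit TLS (`smtp_tls`), while port 587 uses STARTTLS (`smtp_enable_starttls_auto`), and the two flags are mutually exclusive. As a minimal illustrative sketch (not GitLab tooling; the helper name and dict shape are my own), this is the kind of consistency check worth running mentally before a `gitlab-ctl reconfigure`:

```python
# Illustrative sanity check: the TLS flag must match the SMTP port,
# otherwise mail delivery fails with obscure connection errors.
def check_smtp_settings(s: dict) -> list[str]:
    """Return a list of problems found in an SMTP settings dict."""
    problems = []
    port = s.get("smtp_port")
    if port == 465 and not s.get("smtp_tls"):
        problems.append("port 465 is implicit TLS: set smtp_tls = true")
    if port == 587 and not s.get("smtp_enable_starttls_auto"):
        problems.append("port 587 uses STARTTLS: set smtp_enable_starttls_auto = true")
    if s.get("smtp_tls") and s.get("smtp_enable_starttls_auto"):
        problems.append("smtp_tls and smtp_enable_starttls_auto are mutually exclusive")
    return problems

settings = {"smtp_port": 465, "smtp_tls": True}  # mirrors the gitlab.rb above
print(check_smtp_settings(settings))             # → []
```

This is why the example config above sets `smtp_tls = true` and leaves `smtp_enable_starttls_auto` commented out for Feishu's port 465.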
November 22, 2024 · 8 views · 0 comments · 0 likes
2024-09-29
mdserver-web: an open-source, free, simple Linux server management panel — installation and first impressions
### Background

I have been using the BT (宝塔) panel for a while. It really does reduce day-to-day Linux maintenance to point-and-click work, but it is controversial for the user data it collects and the resources it consumes. I also tried 1Panel: its interface is modern and attractive, but its resource usage was actually higher than BT's, and it is slightly less convenient to use. Recently an article on WeChat introduced mdserver-web, an open-source panel with a BT-like interface that had collected 4k+ GitHub stars as of this writing (2024-09-29), so I decided to give it a try.

### Overview

mdserver-web is a free, open-source, simple Linux server management panel. It resembles the well-known BT panel but aims to be more secure and supports a wide range of features. Installation is simple, and the author promises no data selling, no user monitoring, and no injected malware. Features:

- ✅ SSH terminal tool
- ✅ Panel bookmarking
- ✅ Binding a site to a subdirectory
- ✅ Site backups
- ✅ Plugin-based management
- ✅ Automatic update optimization
- ✅ Supports OpenResty, PHP 5.2–8.1, MySQL, MongoDB, Memcached, Redis, and more
- ✅ More...

### Installation

The install command, copied from the official site:

```shell
curl --insecure -fsSL https://cdn.jsdelivr.net/gh/midoks/mdserver-web@latest/scripts/install.sh | bash
```

When the installation finishes it prints:

```
starting mw-tasks... done
.stopping mw-tasks... done
stopping mw-panel... cli.sh: line 20: 31687 Killed   python3 task.py >> ${DIR}/logs/task.log 2>&1
done
starting mw-tasks... done
starting mw-panel... .........done
==================================================================
MW-Panel default info!
==================================================================
MW-Panel-Url: http://43.134.39.206:54724/4tt3ow6j
username: tmsvus16
password: koaumgpj
Warning: If you cannot access the panel.
release the following port (54724|80|443|22) in the security group.
==================================================================
Time consumed: 6 Minute!
```

### Changing the panel port

A BT-style command-line tool is provided for viewing and changing panel settings:

```
root@VM-12-13-ubuntu:/data/dockerRoot# mw
===============mdserver-web cli tools=================
(1) 重启面板服务        (2) 停止面板服务
(3) 启动面板服务        (4) 重载面板服务
(5) 修改面板端口        (10) 查看面板默认信息
(11) 修改面板密码       (12) 修改面板用户名
(13) 显示面板错误日志   (20) 关闭BasicAuth认证
(21) 解除域名绑定       (22) 解除面板SSL绑定
(23) 开启IPV6支持       (24) 关闭IPV6支持
(25) 开启防火墙SSH端口  (26) 关闭二次验证
(27) 查看防火墙信息     (100) 开启PHP52显示
(101) 关闭PHP52显示     (200) 切换Linux系统软件源
(201) 简单速度测试      (0) 取消
======================================================
请输入命令编号:5
请输入新的面板端口:21181
stopping mw-panel... done
starting mw-panel... ...done
==================================================================
MW-Panel default info!
==================================================================
MW-Panel-Url: http://43.134.39.206:21181/4tt3ow6j
username: tmsvus16
password: koaumgpj
Warning: If you cannot access the panel.
release the following port (21181|80|443|22) in the security group.
==================================================================
```

### First impressions

Opening the printed URL in a browser with the given credentials brings up the main dashboard, and applications can be installed from the panel (screenshots omitted).

### Links

- GitHub: https://github.com/midoks/mdserver-web
- Official site: http://www.midoks.icu/ — quite a few ads and little content (as of 2024-09-29)
- Forum: https://bbs.midoks.icu/
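Before typing a new port at the `mw` prompt (option 5), it is worth checking it won't collide with something else on the machine. A tiny illustrative helper (not part of mdserver-web; the reserved set here is my own assumption):

```python
# Illustrative helper: sanity-check a new panel port before entering it
# at the `mw` CLI prompt (option 5).
RESERVED_PORTS = {22, 80, 443, 3306, 6379}  # typically held by other services

def valid_panel_port(port: int) -> bool:
    """Panel port should be unprivileged, within range, and not reserved."""
    return 1024 < port <= 65535 and port not in RESERVED_PORTS

print(valid_panel_port(21181))  # the port chosen above → True
print(valid_panel_port(443))    # → False
```

Remember that whatever port you pick still has to be released in the cloud security group, as the install output warns.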
September 29, 2024 · 193 views · 0 comments · 0 likes
2024-09-11
Notes from a GitLab upgrade (16.3.3 → 17.3.1)
The GitLab on my development server runs under docker; its docker-compose.yml:

```yaml
services:
  gitlab:
    image: 'registry.gitlab.cn/omnibus/gitlab-jh:latest'
    restart: always
    container_name: gitlab
    hostname: 'git.work.zhuzhilong.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://git.work.zhuzhilong.com'
        # Add any other gitlab.rb configuration here, each on its own line
    networks:
      - net-zzl
    ports:
      - '8007:80'
      - '2223:22'
    volumes:
      - './config:/etc/gitlab'
      - './logs:/var/log/gitlab'
      - './data:/var/opt/gitlab'
      - ../hosts:/etc/hosts
    shm_size: '256m'

networks:
  net-zzl:
    name: bridge_zzl
    external: true
```

This instance had never been updated since it was set up, so today I attempted an upgrade. Since the image is registry.gitlab.cn/omnibus/gitlab-jh:latest, I first tried simply pulling the newest image to see whether a direct upgrade would work, using `docker compose pull`:

```
zhuzl@zhuzl-M9-PRO:/data/dockerRoot/apps/gitlab$ docker compose pull
[+] Pulling 10/10
 ✔ gitlab 9 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled    62.1s
   ✔ 857cc8cb19c0 Already exists                  0.0s
   ✔ d388127601d7 Pull complete                   1.0s
   ✔ c973ce60899e Pull complete                   1.6s
   ✔ d47067d54097 Pull complete                   1.2s
   ✔ b37f526cb6d4 Pull complete                   1.4s
   ✔ e3e25c0883d4 Pull complete                   6.6s
   ✔ 38326bc1340c Pull complete                   7.6s
   ✔ dc916e282a43 Pull complete                   6.6s
   ✔ 84388f622dc9 Pull complete                  44.1s
```

Then start the service with `docker compose up -d` and watch the logs with `docker logs -f gitlab`:

```
$ docker logs -f gitlab
Thank you for using GitLab Docker Image!
Current version: gitlab-jh=17.3.1-jh.0

Configure GitLab for your system by editing /etc/gitlab/gitlab.rb file
And restart this container to reload settings.
To do it use docker exec:

  docker exec -it gitlab editor /etc/gitlab/gitlab.rb
  docker restart gitlab

For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md

If this container fails to start due to permission problems try to fix it by executing:

  docker exec -it gitlab update-permissions
  docker restart gitlab

Cleaning stale PIDs & sockets

It seems you are upgrading from 16.3.3-jh to 17.3.1.
It is required to upgrade to the latest 16.11.x version first before proceeding.
Please follow the upgrade documentation at https://docs.gitlab.com/ee/update/index.html#upgrading-to-a-new-major-version
```

(the banner and warning repeat as the container restarts in a loop)

So GitLab detected the jump from 16.3.3-jh to 17.3.1 and refused it: 17.x requires passing through the latest 16.11.x first. I changed the image in docker-compose.yaml to registry.gitlab.cn/omnibus/gitlab-jh:16.11.3, pulled again, started, and watched the logs:

```
$ docker logs -f gitlab
Thank you for using GitLab Docker Image!
Current version: gitlab-jh=16.11.3-jh.0
...
Cleaning stale PIDs & sockets

It seems you are upgrading from 16.3.3-jh to 16.11.3.
It is required to upgrade to the latest 16.7.x version first before proceeding.
Please follow the upgrade documentation at https://docs.gitlab.com/ee/update/#upgrade-paths
```

Per the logs, 16.11.3 in turn requires the latest 16.7.x first. So I set the image to registry.gitlab.cn/omnibus/gitlab-jh:16.7.7, pulled the image, started the service, and after a long wait logged in to GitLab and confirmed the first hop to 16.7.x was done. Next, back to 16.11.x: change the image to 16.11.3, restart, and after another long wait the upgrade to 16.11.3 succeeded. Finally, set the image back to registry.gitlab.cn/omnibus/gitlab-jh:latest and restart; after logging in, the instance was on the latest 17.3.1.

After the upgrade, the container logs were flooded with errors like these:

```
2024-09-11_04:05:55.21146 ts=2024-09-11T04:05:55.211Z caller=main.go:181 level=info msg="Starting Alertmanager" version="(version=0.27.0, branch=master, revision=0aa3c2aad14cff039931923ab16b26b7481783b5)"
2024-09-11_04:05:55.21148 ts=2024-09-11T04:05:55.211Z caller=main.go:182 level=info build_context="(go=go1.22.5, platform=linux/amd64, user=GitLab-Omnibus, date=, tags=unknown)"
2024-09-11_04:05:55.21183 ts=2024-09-11T04:05:55.211Z caller=cluster.go:179 level=warn component=cluster err="couldn't deduce an advertise address: no private IP found, explicit advertise addr not provided"
2024-09-11_04:05:55.21286 ts=2024-09-11T04:05:55.212Z caller=main.go:221 level=error msg="unable to initialize gossip mesh" err="create memberlist: Failed to get final advertise address: No private IP address found, and explicit IP not provided"
```

A quick search turned up https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4556: adding `alertmanager['enable'] = false` to the configuration fixes it. With that, the final docker-compose.yml is:

```yaml
services:
  gitlab:
    image: 'registry.gitlab.cn/omnibus/gitlab-jh:latest'
    # image: 'registry.gitlab.cn/omnibus/gitlab-jh:16.11.3'
    # image: 'registry.gitlab.cn/omnibus/gitlab-jh:16.7.7'
    restart: always
    container_name: gitlab
    hostname: 'git.work.zhuzhilong.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://git.work.zhuzhilong.com'
        # Add any other gitlab.rb configuration here, each on its own line
        alertmanager['enable']=false
    networks:
      - net-zzl
    ports:
      - '8007:80'
      - '2223:22'
    volumes:
      - './config:/etc/gitlab'
      - './logs:/var/log/gitlab'
      - './data:/var/opt/gitlab'
      - ../hosts:/etc/hosts
    shm_size: '256m'

networks:
  net-zzl:
    name: bridge_zzl
    external: true
```
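The pattern above generalizes: GitLab refuses to skip required intermediate "stops", so a multi-hop upgrade is a walk through an ordered list of versions. A small illustrative sketch (not GitLab tooling; the stop list below is just the hops this particular upgrade hit — the authoritative list is in GitLab's upgrade-path docs):

```python
# Illustrative: compute the sequence of image tags to step through,
# given the required stops this upgrade encountered.
STOPS = ["16.7.7", "16.11.3", "17.3.1"]

def upgrade_path(current: str, target: str) -> list[str]:
    """Versions (from STOPS) to install, in order, to reach `target`."""
    def key(v: str) -> tuple:
        return tuple(int(x) for x in v.split("."))
    return [s for s in STOPS if key(current) < key(s) <= key(target)]

print(upgrade_path("16.3.3", "17.3.1"))  # → ['16.7.7', '16.11.3', '17.3.1']
```

Had I checked the upgrade path first, I could have pulled the three pinned tags in order instead of discovering each stop from the container logs.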
September 11, 2024 · 94 views · 2 comments · 1 like
2024-09-10
rinetd: a wonderfully simple port-forwarding tool
Back in 2014 I worked on a project for the Jiangxi Department of Transportation. Only one jump server in the department could reach the provincial government's online-services intranet, and only one server in the intranet machine room could reach that jump server — yet our project needed those services. We solved it by deploying a port-forwarding tool on the intranet server, and I always remembered it as being extremely simple to use. Recently a development server of mine had the same kind of networking need, but nearly ten years later I had forgotten even the tool's name. Luckily I keep work logs, and the entry from ten years ago had it: rinetd.

### About rinetd

rinetd is a simple, easy-to-use port mapping/forwarding/redirection tool. It forwards network traffic from one port to another, or from one IP address to another, and is well suited to situations where service requests must be redirected to a different address or port.

Its main characteristics:

- Simplicity: configured through a single configuration file.
- Lightweight: rinetd itself consumes very few system resources.
- IPv4 support: it redirects IPv4 network connections.
- Security: allow/deny rules restrict which IP addresses may be forwarded, providing a degree of network access control.

### Using rinetd

Install rinetd. On most Linux distributions it is available from the package manager; on Debian-based systems (such as Ubuntu):

```shell
sudo apt-get update
sudo apt-get install rinetd
```

Source packages are also available: rinetd-win.zip for Windows, rinetd.tar.gz for Linux.

Configure rinetd. The configuration file lives at /etc/rinetd.conf. A basic example:

```
# this is the configuration file for rinetd, the internet redirection server
#
# you may specify global allow and deny rules here
# only ip addresses are matched, hostnames cannot be specified here
# the wildcards you may use are * and ?
#
# allow 192.168.2.*
# deny 192.168.2.1?
#
# forwarding rules come here
#
# you may specify allow and deny rules after a specific forwarding rule
# to apply to only that forwarding rule
#
# bindadress bindport connectaddress connectport

# for rocketMQ
# redirect every connection to local port 18090 to 192.168.150.250:18080
0.0.0.0 18090 192.168.150.250 18080
0.0.0.0 18091 192.168.150.250 18081
0.0.0.0 18092 192.168.150.250 18082
#0.0.0.0 18093 192.168.150.250 18083
0.0.0.0 18076 192.168.150.250 19876

# logging information
logfile /var/log/rinetd.log

# uncomment the following line if you want web-server style logfile format
# logcommon
```

Day-to-day service commands:

```shell
# start the service
sudo systemctl start rinetd
# restart the service
sudo systemctl restart rinetd
# check running status
sudo systemctl status rinetd
# enable at boot
sudo systemctl enable rinetd
```

Note: make sure the system's firewall rules allow rinetd the network traffic it needs.
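The forwarding rules above are just four whitespace-separated fields per line: bind address, bind port, connect address, connect port. A small illustrative parser (my own sketch, not part of rinetd) shows how little structure there is to the format — comments and directives like `logfile` are simply lines that don't match the four-field shape:

```python
# Illustrative parser for rinetd-style forwarding rules.
def parse_rules(conf_text: str) -> list[tuple[str, int, str, int]]:
    """Return (bind_addr, bind_port, connect_addr, connect_port) tuples."""
    rules = []
    for line in conf_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        parts = line.split()
        # a forwarding rule has exactly 4 fields with numeric ports
        if len(parts) == 4 and parts[1].isdigit() and parts[3].isdigit():
            rules.append((parts[0], int(parts[1]), parts[2], int(parts[3])))
    return rules

conf = """
# forward local 18090 to the MQ host
0.0.0.0 18090 192.168.150.250 18080
logfile /var/log/rinetd.log
"""
print(parse_rules(conf))  # → [('0.0.0.0', 18090, '192.168.150.250', 18080)]
```

That one-rule-per-line simplicity is exactly why rinetd stuck in my memory for ten years.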
September 10, 2024 · 41 views · 0 comments · 0 likes
2024-08-12
Getting started with Kubernetes: viewing application logs and basic management with kubectl
### Background

Our company's dev Istio environment was recently migrated wholesale to Volcengine, where SRE manages operations mainly through Kubernetes. Since the migration, application logs can currently only be viewed with kubectl, so I worked through the material SRE provided. I have not studied Kubernetes in depth and have no global picture of it; this is a developer's-eye first taste.

### Setup

Install kubectl. My workstation runs Windows 11; any of the following methods works.

Via PowerShell:

```powershell
& $([scriptblock]::Create((New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/coreweave/kubernetes-cloud/master/getting-started/k8ctl_setup.ps1')))
```

Via Chocolatey:

```shell
choco install kubernetes-cli
```

Via Scoop:

```shell
scoop install kubectl
```

Configure access to the cluster. After installing kubectl, copy the kubeconfig file provided by SRE into the .kube directory under your user profile (the default path is %USERPROFILE%\.kube\config). If you keep it elsewhere, set the KUBECONFIG environment variable to point at it.

### Basic usage

List all pods:

```shell
kubectl get pods -A
```

Example:

```
$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS             RESTARTS        AGE
console       blsc-ui-594555ff77-z49ln                  1/1     Running            0               32d
console       consent-forward-8499796656-qr9mc          1/1     Running            0               49d
console       consent-login-8678cdb6f4-2wjh8            1/1     Running            0               49d
console       console-api-entry-76d7d8bbc-v7kcm         1/1     Running            0               17d
console       console-batch-f88bf9dd6-62fwm             1/1     Running            0               49d
console       console-biz-84f458cfcd-v6pjj              1/1     Running            0               2d21h
console       console-blsc-kbs-7b66548df6-8lwh7         1/1     Running            0               49d
console       console-cluster-5ddb65ff54-2k4lf          1/1     Running            0               49d
console       console-config-server-5d6cf99b9b-wt7s9    1/1     Running            0               46h
console       console-data-sync-556dc4dc8-z2d9t         1/1     Running            0               18d
console       console-dev-guide-6bf4bcb79c-48crp        1/1     Running            0               6d5h
console       console-gateway-fbbf58dd-rcw8f            1/1     Running            0               32d
console       console-kbs-67cc85848f-7tg9h              1/1     Running            0               49d
console       console-mobile-blsc-ui-846f5f9795-jbhwc   1/1     Running            0               48d
console       console-mobile-server-8555b7b5cf-x9l26    1/1     Running            0               49d
console       console-mobile-ui-79df845ffc-445s9        1/1     Running            0               32d
console       console-notice-5b8dc5fd68-vmtf7           1/1     Running            0               39d
console       console-order-66b54dcf8-h4rv2             0/1     CrashLoopBackOff   536 (2m59s ago) 46h
console       console-prototypes-68c984cd88-mpzjb       1/1     Running            0               38d
console       console-singleton-64b7d87877-qp5x4        1/1     Running            0               28d
console       console-ui-56c8746566-8bbf5               1/1     Running            0               3d2h
console       dmc-api-doc-7cf59d7f5d-gjxbs              1/1     Running            0               49d
console       dmc-core-5dcb7dc65b-fb2gv                 1/1     Running            0               49d
console       dmc-gateway-59fc6b767f-hzvgf              1/1     Running            0               49d
console       dmc-magic-boot-5d89987468-r5m7r           1/1     Running            0               49d
console       dmc-magic-boot-naive-8669fdc878-td7n8     1/1     Running            0               49d
console       dmc-screen-5f5cb57d89-cfskh               1/1     Running            0               49d
console       dmc-show-kbs-57b67545f5-4cc9k             1/1     Running            0               126m
console       dmc-ui-d8f5bd4b-fhmb7                     1/1     Running            0               49d
console       dmc-worktime-backend-5dc69ccc6b-qf97p     1/1     Running            0               49d
console       dmc-worktime-ui-7db6df7469-d4fxt          1/1     Running            0               49d
kube-system   cello-7qv8s                               2/2     Running            2 (49d ago)     111d
kube-system   cello-b8d6p                               2/2     Running            3 (49d ago)     111d
kube-system   cello-mrch7                               2/2     Running            2 (49d ago)     111d
kube-system   cello-pp5wz                               2/2     Running            2 (49d ago)     111d
kube-system   coredns-58cd886448-clswr                  1/1     Running            0               49d
kube-system   coredns-58cd886448-vlrwd                  1/1     Running            0               49d
kube-system   metrics-server-7769c76b67-rfnvm           1/1     Running            0               49d
sup           sup-db-query-fd7675849-sjp8g              1/1     Running            0               49d
sup           sup-mq-5bc8ccf978-5x5mc                   1/1     Running            0               49d
sup           sup-nginx-8575c78879-27gxs                1/1     Running            0               31d
sup           sup-oauth2-85ccd96db7-76rtw               1/1     Running            0               49d
```

Follow a pod's logs:

```shell
kubectl logs [podname] -f
```

Example:

```
$ kubectl logs dmc-ui-d8f5bd4b-fhmb7 -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/06/24 03:06:27 [notice] 1#1: using the "epoll" event method
2024/06/24 03:06:27 [notice] 1#1: nginx/1.22.1
2024/06/24 03:06:27 [notice] 1#1: built by gcc 11.2.1 20220219 (Alpine 11.2.1_git20220219)
2024/06/24 03:06:27 [notice] 1#1: OS: Linux 5.10.135-6-velinux1u1-amd64
2024/06/24 03:06:27 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/06/24 03:06:27 [notice] 1#1: start worker processes
2024/06/24 03:06:27 [notice] 1#1: start worker process 22
2024/06/24 03:06:27 [notice] 1#1: start worker process 23
2024/06/24 03:06:27 [notice] 1#1: start worker process 24
2024/06/24 03:06:27 [notice] 1#1: start worker process 25
2024/06/24 03:06:27 [notice] 1#1: start worker process 26
2024/06/24 03:06:27 [notice] 1#1: start worker process 27
2024/06/24 03:06:27 [notice] 1#1: start worker process 28
2024/06/24 03:06:27 [notice] 1#1: start worker process 29
```

Show only the most recent lines (here the last 10):

```shell
kubectl logs --tail 10 -f [podname]
```

Example:

```
$ kubectl logs --tail 10 dmc-ui-d8f5bd4b-fhmb7
172.17.16.38 - - [26/Jul/2024:00:50:52 +0000] "GET /favicon.ico HTTP/1.0" 200 16958 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
172.17.16.38 - - [31/Jul/2024:17:09:02 +0000] "GET / HTTP/1.0" 200 625 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
172.17.16.38 - - [31/Jul/2024:17:09:02 +0000] "GET / HTTP/1.0" 200 625 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_0_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36" "-"
172.17.16.38 - - [31/Jul/2024:17:09:02 +0000] "GET /assets/vendor.9901ae39.js HTTP/1.0" 200 1395988 "https://dmc-ui.console.dev.paratera.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_0_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36" "-"
172.17.16.38 - - [31/Jul/2024:17:09:02 +0000] "GET /assets/index.2058bab1.js HTTP/1.0" 200 100982 "https://dmc-ui.console.dev.paratera.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_0_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36" "-"
172.17.16.38 - - [31/Jul/2024:17:09:07 +0000] "GET /favicon.ico HTTP/1.0" 200 16958 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
172.17.16.38 - - [01/Aug/2024:11:08:22 +0000] "GET / HTTP/1.0" 200 625 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36" "-"
172.17.16.38 - - [01/Aug/2024:11:08:28 +0000] "GET /favicon.ico HTTP/1.0" 200 16958 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36" "-"
172.17.16.38 - - [01/Aug/2024:11:08:28 +0000] "GET /assets/index.2058bab1.js HTTP/1.0" 200 100982 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36" "-"
172.17.16.38 - - [02/Aug/2024:01:35:56 +0000] "GET / HTTP/1.0" 200 625 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" "-"
```

Use -n to target a specific namespace:

```
$ kubectl get pods -n aps
NAME                                 READY   STATUS    RESTARTS   AGE
aps-api-entry-9c74586db-7djmp        1/1     Running   0          11d
aps-config-server-544bb7677c-nwr6x   1/1     Running   0          5d22h
aps-data-etl-649b79cb79-mbwr7        1/1     Running   0          46m
```

Restart an application:

```
$ kubectl rollout restart deployment aps-data-etl -n aps
deployment.apps/aps-data-etl restarted
```
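With a listing this long, the pod worth attention (the `CrashLoopBackOff` one) is easy to miss by eye. A small illustrative sketch (my own helper, not part of kubectl) that scans `kubectl get pods` output for anything that isn't fully Ready and Running:

```python
# Illustrative: flag pods that are not Running with all containers ready,
# given the text output of `kubectl get pods -A`.
def unhealthy_pods(output: str) -> list[str]:
    """Return 'namespace/name' for pods needing attention."""
    bad = []
    for line in output.splitlines()[1:]:       # skip the header row
        cols = line.split()
        if len(cols) < 4:
            continue
        ns, name, ready, status = cols[:4]     # READY looks like "0/1"
        up, total = ready.split("/")
        if status != "Running" or up != total:
            bad.append(f"{ns}/{name}")
    return bad

sample = """NAMESPACE  NAME                            READY  STATUS            RESTARTS  AGE
console    console-ui-56c8746566-8bbf5     1/1    Running           0         3d2h
console    console-order-66b54dcf8-h4rv2   0/1    CrashLoopBackOff  536       46h"""
print(unhealthy_pods(sample))  # → ['console/console-order-66b54dcf8-h4rv2']
```

In practice the same filtering can be done server-side with kubectl's `--field-selector` and output options, but parsing the plain listing is enough for a quick check.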
August 12, 2024 · 20 views · 0 comments · 0 likes
2024-05-29
Syncing files with Docker + Syncthing
### Background

I set up a Syncthing file-sync service long ago (see 使用 Syncthing 同步文件), but back then I downloaded the executable and ran it directly on the host, which is somewhat invasive, and my notes only covered getting the service running — not how to configure it to actually sync files. Recently I used ChestnutCMS to build a company product site for a friend, which required file synchronization between two servers:

Web server
- Internal IP: 10.0.12.12
- Spec: 4 CPU cores | 8 GB RAM | 180 GB SSD cloud system disk | 2000 GB/month traffic (12 Mbps)
- Mainly runs Docker + NginxProxyManager + KKFilePreview

Application server
- Internal IP: 10.0.16.13
- Spec: 8 CPU cores | 16 GB RAM | 270 GB SSD cloud system disk | 3500 GB/month traffic (18 Mbps)
- Mainly runs the company's cloud business system + ChestnutCMS; with a limited budget, the database, Redis, and other services also live on this machine

The two servers share the same internal network. ChestnutCMS handles content management and static site publishing; Syncthing syncs the statically generated files to the web server, which NginxProxyManager then exposes to the internet.

### Deployment and configuration

Deploying the publishing side. We deploy the publishing end on the application server, which already has Docker, so we use Docker; an example docker-compose.yaml:

```yaml
services:
  app:
    image: syncthing/syncthing
    container_name: syncthing
    privileged: true
    restart: always
    volumes:
      - /data:/data
      - ../po-cms/wwwroot_release:/var/syncthing
      - ../hosts:/etc/hosts
    networks:
      - net-zzl
    ports:
      - 8102:8384
      - 22000:22000

networks:
  net-zzl:
    name: bridge_zzl
    external: true
```

Start it with `docker compose up -d`.

Opening ports. The configuration above exposes ports 8102 and 22000. Since the application server does not serve the internet directly — the web server proxies for it — these two ports only need to be opened to the web server. If the application server runs its own firewall, remember to allow them there too.

Exposing the Syncthing UI. Add a proxy host in NginxProxyManager; handle the required DNS records yourself beforehand.

Syncthing configuration. On first access, Syncthing guides you through security-related settings with warning prompts: the anonymous usage-report prompt and the GUI setup prompt. I mainly configured the general settings and the GUI username/password. With that, the application-server side is done.

Web server configuration. Deploy Syncthing the same way with Docker; the setup is nearly identical, with mount paths and ports adjusted for this machine:

```yaml
services:
  app:
    image: syncthing/syncthing
    container_name: syncthing
    privileged: true
    restart: always
    volumes:
      - /data:/data
      - ../nginx-proxy-manager/wwwroot_release:/var/syncthing
      - ../hosts:/etc/hosts
    networks:
      - net-zzl
    ports:
      - 8013:8384
      - 22000:22000

networks:
  net-zzl:
    name: bridge_zzl
    external: true
```

Open its ports to the application server, and keep the basic Syncthing settings the same as on the application server.

Configuring the sync:

1. Add the remote device: on the application server, add the web server's Syncthing as a remote device, and accept the request on the web server.
2. Share the folder: on the application server, click "Add Folder" on the main screen and share it with the web server.
3. Accept the share on the web server: when the share prompt appears, accept it and confirm the settings. Synchronization then starts automatically, and once it finishes the files appear in the target directory.

### Links

- Syncthing official site: https://syncthing.net/
- Syncthing documentation: https://docs.syncthing.net/
- Syncthing on GitHub: https://github.com/syncthing/syncthing
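After a sync run, it is reassuring to confirm the two trees really are identical rather than trusting the UI's "Up to Date" badge. A minimal illustrative check (my own sketch, not part of Syncthing): hash the relative paths and contents of each tree and compare the digests from both machines.

```python
# Illustrative: one digest per directory tree; equal digests on both
# machines mean the synced folders have identical paths and contents.
import hashlib
from pathlib import Path

def tree_digest(root: str) -> str:
    """SHA-256 over sorted relative paths and file contents under `root`."""
    h = hashlib.sha256()
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            h.update(str(p.relative_to(root)).encode())
            h.update(p.read_bytes())
    return h.hexdigest()

# Usage: run tree_digest("/var/syncthing") inside the container (or on the
# mounted wwwroot_release directory) on both servers and compare the output.
```

Note this reads every file, so it is only practical for modest trees like a statically published site.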
May 29, 2024 · 399 views · 0 comments · 0 likes
2024-05-19
Fixing the SSL failure when requesting certificates in nginxProxyManager
### Background

While building a web server with NginxProxyManager, adding an SSL certificate failed. The error reported was:

```
CommandError: The 'certbot_dns_dnspod.dns_dnspod' plugin errored while loading: No module named 'zope'. You may need to remove or update this plugin.
The Certbot log will contain the full error details and this should be reported to the plugin developer.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-pid4c1ie/log or re-run Certbot with -v for more details.
    at /app/lib/utils.js:16:13
    at ChildProcess.exithandler (node:child_process:430:5)
    at ChildProcess.emit (node:events:519:28)
    at maybeClose (node:internal/child_process:1105:16)
    at ChildProcess._handle.onexit (node:internal/child_process:305:5)
```

### Research

This problem is covered in a NginxProxyManager GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2440. One reply gets straight to the point: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2440#issuecomment-1380036390 — install the missing dependency inside the container.

### Fix

1. Enter the container:

```shell
sudo docker exec -it nginxProxyManager /bin/bash
```

The shell greets you with the NginxProxyManager ASCII banner, followed by the same plugin error and version information:

```
(NginxProxyManager ASCII-art banner)

The 'certbot_dns_dnspod.dns_dnspod' plugin errored while loading: No module named 'zope'. You may need to remove or update this plugin.
The Certbot log will contain the full error details and this should be reported to the plugin developer.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-fimx1ahi/log or re-run Certbot with -v for more details.

Version 2.11.2 (12d77e3) 2024-05-10 14:36:51 UTC, OpenResty 1.21.4.3, debian 12 (bookworm), Certbot
Base: debian:bookworm-slim, linux/amd64
Certbot: nginxproxymanager/nginx-full:latest, linux/amd64
Node: nginxproxymanager/nginx-full:certbot, linux/amd64
[root@docker-1f03724f8d16:/app]
```

2. Install zope:

```shell
pip install zope
```

In my case the downloads timed out and pip retried several times; an excerpt of the install log:

```
[root@docker-1f03724f8d16:/app]# pip install zope
Collecting zope
  Downloading Zope-5.10-py3-none-any.whl.metadata (32 kB)
Collecting AccessControl>=5.2 (from zope)
  Downloading AccessControl-6.3-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.8 kB)
Collecting Acquisition (from zope)
  Downloading Acquisition-5.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (24 kB)
Collecting BTrees (from zope)
  Downloading BTrees-5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting Chameleon>=3.7.0 (from zope)
  Downloading Chameleon-4.5.4-py3-none-any.whl.metadata (51 kB)
Collecting DateTime (from zope)
  Downloading DateTime-5.5-py3-none-any.whl.metadata (33 kB)
Collecting zope.interface>=3.8 (from zope)
  Downloading zope.interface-6.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (42 kB)
(…dozens more "Collecting … / Downloading …" lines for the Zope dependency tree…)
Collecting zope.annotation (from zope.site->zope)
  Downloading zope.annotation-5.0-py3-none-any.whl.metadata (6.6 kB)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out.
```
(read timeout=15)")': /simple/webtest/ Collecting WebTest>=2.0.30 (from zope.testbrowser->zope) Downloading WebTest-3.0.0-py3-none-any.whl.metadata (1.8 kB) Collecting BeautifulSoup4 (from zope.testbrowser->zope) Downloading beautifulsoup4-4.12.3-py3-none-any.whl.metadata (3.8 kB) Collecting SoupSieve>=1.9.0 (from zope.testbrowser->zope) Downloading soupsieve-2.5-py3-none-any.whl.metadata (4.7 kB) Collecting WSGIProxy2 (from zope.testbrowser->zope) Downloading WSGIProxy2-0.5.1-py3-none-any.whl.metadata (2.7 kB) Requirement already satisfied: cffi in /opt/certbot/lib/python3.11/site-packages (from persistent>=4.1.1->Persistence->zope) (1.16.0) Collecting WebOb>=1.2 (from WebTest>=2.0.30->zope.testbrowser->zope) Downloading WebOb-1.8.7-py2.py3-none-any.whl.metadata (10 kB) Requirement already satisfied: pycparser in /opt/certbot/lib/python3.11/site-packages (from cffi->persistent>=4.1.1->Persistence->zope) (2.22) Downloading Zope-5.10-py3-none-any.whl (3.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 99.6 kB/s eta 0:00:00 Downloading AccessControl-6.3-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (193 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 193.4/193.4 kB 1.7 MB/s eta 0:00:00 Downloading Chameleon-4.5.4-py3-none-any.whl (88 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.2/88.2 kB 1.8 MB/s eta 0:00:00 Downloading DocumentTemplate-4.6-py3-none-any.whl (87 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.0/87.0 kB 1.8 MB/s eta 0:00:00 Downloading ExtensionClass-5.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (92 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 92.6/92.6 kB 1.8 MB/s eta 0:00:00 Downloading Persistence-4.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24 kB) Downloading RestrictedPython-7.2a1.dev0-py3-none-any.whl (26 kB) Downloading transaction-4.0-py3-none-any.whl (46 kB) 
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.6/46.6 kB 1.7 MB/s eta 0:00:00 Downloading ZConfig-4.1-py3-none-any.whl (131 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.5/131.5 kB 1.8 MB/s eta 0:00:00 Downloading zExceptions-5.0-py3-none-any.whl (17 kB) Downloading zope.browserpage-5.0-py3-none-any.whl (32 kB) Downloading zope.browserresource-5.1-py3-none-any.whl (40 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.6/40.6 kB 1.7 MB/s eta 0:00:00 Downloading zope.component-6.0-py3-none-any.whl (68 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 68.8/68.8 kB 1.8 MB/s eta 0:00:00 Downloading zope.contenttype-5.1-py3-none-any.whl (14 kB) Downloading zope.interface-6.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (249 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 249.5/249.5 kB 1.5 MB/s eta 0:00:00 Downloading zope.pagetemplate-5.1-py3-none-any.whl (44 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.0/45.0 kB 2.0 MB/s eta 0:00:00 Downloading zope.publisher-7.0-py3-none-any.whl (119 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 119.3/119.3 kB 1.9 MB/s eta 0:00:00 Downloading zope.security-6.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (182 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 182.7/182.7 kB 2.1 MB/s eta 0:00:00 Downloading zope.proxy-5.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (71 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.9/71.9 kB 2.1 MB/s eta 0:00:00 Downloading zope.schema-7.0.1-py3-none-any.whl (85 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 85.9/85.9 kB 2.1 MB/s eta 0:00:00 Downloading zope.tal-5.0.1-py3-none-any.whl (135 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 135.9/135.9 kB 2.0 MB/s eta 0:00:00 Downloading zope.tales-6.0-py3-none-any.whl (30 kB) Downloading zope.traversing-5.0-py3-none-any.whl (47 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 47.6/47.6 kB 1.9 MB/s eta 0:00:00 Downloading 
zope.location-5.0-py3-none-any.whl (19 kB) Downloading Acquisition-5.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (122 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 122.8/122.8 kB 2.1 MB/s eta 0:00:00 Downloading BTrees-5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 2.7 MB/s eta 0:00:00 Downloading DateTime-5.5-py3-none-any.whl (52 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.6/52.6 kB 3.3 MB/s eta 0:00:00 Downloading MultiMapping-5.0-py3-none-any.whl (4.3 kB) Downloading multipart-0.2.4-py3-none-any.whl (7.4 kB) Downloading PasteDeploy-3.1.0-py3-none-any.whl (16 kB) Downloading waitress-3.0.0-py3-none-any.whl (56 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.7/56.7 kB 3.1 MB/s eta 0:00:00 Downloading z3c.pt-4.3-py3-none-any.whl (40 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.2/40.2 kB 3.5 MB/s eta 0:00:00 Downloading ZODB-6.0-py3-none-any.whl (417 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 417.8/417.8 kB 4.0 MB/s eta 0:00:00 Downloading zope.browser-3.0-py3-none-any.whl (7.6 kB) Downloading zope.browsermenu-5.0-py3-none-any.whl (30 kB) Downloading zope.configuration-5.0.1-py3-none-any.whl (79 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.0/79.0 kB 3.7 MB/s eta 0:00:00 Downloading zope.container-5.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (114 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 114.9/114.9 kB 3.8 MB/s eta 0:00:00 Downloading zope.lifecycleevent-5.0-py3-none-any.whl (18 kB) Downloading zope.contentprovider-5.0-py3-none-any.whl (11 kB) Downloading zope.datetime-5.0.0-py3-none-any.whl (43 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 43.4/43.4 kB 3.0 MB/s eta 0:00:00 Downloading zope.deferredimport-5.0-py3-none-any.whl (10.0 kB) Downloading zope.event-5.0-py3-none-any.whl (6.8 kB) Downloading zope.exceptions-5.0.1-py3-none-any.whl (19 kB) Downloading 
zope.globalrequest-2.0-py3-none-any.whl (5.7 kB) Downloading zope.i18nmessageid-6.1.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB) Downloading zope.processlifetime-3.0-py3-none-any.whl (5.9 kB) Downloading zope.ptresource-5.0-py3-none-any.whl (7.6 kB) Downloading zope.sequencesort-5.0-py3-none-any.whl (11 kB) Downloading zope.site-5.0-py3-none-any.whl (30 kB) Downloading zope.size-5.0-py3-none-any.whl (7.9 kB) Downloading zope.testbrowser-6.0-py3-none-any.whl (63 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.7/63.7 kB 3.5 MB/s eta 0:00:00 Downloading zope.testing-5.0.1-py3-none-any.whl (37 kB) Downloading zope.viewlet-5.0-py3-none-any.whl (33 kB) Downloading persistent-5.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (234 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 234.4/234.4 kB 4.0 MB/s eta 0:00:00 Downloading soupsieve-2.5-py3-none-any.whl (36 kB) Downloading WebTest-3.0.0-py3-none-any.whl (31 kB) Downloading zodbpickle-3.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (299 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 299.3/299.3 kB 4.0 MB/s eta 0:00:00 Downloading zope.hookable-6.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB) Downloading zope.i18n-5.1-py3-none-any.whl (798 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 798.8/798.8 kB 3.7 MB/s eta 0:00:00 Downloading AuthEncoding-5.0-py3-none-any.whl (8.7 kB) Downloading beautifulsoup4-4.12.3-py3-none-any.whl (147 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.9/147.9 kB 4.4 MB/s eta 0:00:00 Downloading python_gettext-5.0-py3-none-any.whl (13 kB) Downloading roman-4.2-py3-none-any.whl (5.5 kB) Downloading WSGIProxy2-0.5.1-py3-none-any.whl (9.2 kB) Downloading zc.lockfile-3.0.post1-py3-none-any.whl (9.8 kB) Downloading zope.annotation-5.0-py3-none-any.whl (14 kB) Downloading zope.cachedescriptors-5.0-py3-none-any.whl (13 kB) 
Downloading zope.deprecation-5.0-py3-none-any.whl (10 kB) Downloading zope.dottedname-6.0-py3-none-any.whl (6.4 kB) Downloading zope.filerepresentation-6.0-py3-none-any.whl (8.3 kB) Downloading zope.structuredtext-5.0-py3-none-any.whl (92 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 92.5/92.5 kB 4.0 MB/s eta 0:00:00 Downloading WebOb-1.8.7-py2.py3-none-any.whl (114 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 115.0/115.0 kB 4.2 MB/s eta 0:00:00 Installing collected packages: multipart, zope.testing, zope.structuredtext, zope.sequencesort, zope.interface, zope.i18nmessageid, zope.hookable, zope.event, zope.dottedname, zope.deprecation, zope.datetime, zope.contenttype, zope.cachedescriptors, zodbpickle, ZConfig, zc.lockfile, WebOb, waitress, SoupSieve, roman, RestrictedPython, python-gettext, PasteDeploy, ExtensionClass, Chameleon, AuthEncoding, zope.tales, zope.tal, zope.size, zope.schema, zope.proxy, zope.processlifetime, zope.lifecycleevent, zope.exceptions, zope.component, zope.browser, WSGIProxy2, transaction, persistent, MultiMapping, DateTime, BeautifulSoup4, Acquisition, zope.location, zope.i18n, zope.filerepresentation, zope.deferredimport, zope.configuration, WebTest, Persistence, BTrees, zope.testbrowser, zope.security, zope.annotation, ZODB, zope.publisher, zope.traversing, zope.contentprovider, zExceptions, zope.pagetemplate, zope.globalrequest, zope.container, zope.browserresource, z3c.pt, AccessControl, zope.site, zope.ptresource, zope.browserpage, zope.browsermenu, DocumentTemplate, zope.viewlet, zope Successfully installed AccessControl-6.3 Acquisition-5.2 AuthEncoding-5.0 BTrees-5.2 BeautifulSoup4-4.12.3 Chameleon-4.5.4 DateTime-5.5 DocumentTemplate-4.6 ExtensionClass-5.1 MultiMapping-5.0 PasteDeploy-3.1.0 Persistence-4.1 RestrictedPython-7.2a1.dev0 SoupSieve-2.5 WSGIProxy2-0.5.1 WebOb-1.8.7 WebTest-3.0.0 ZConfig-4.1 ZODB-6.0 multipart-0.2.4 persistent-5.2 python-gettext-5.0 roman-4.2 transaction-4.0 waitress-3.0.0 z3c.pt-4.3 zExceptions-5.0 
zc.lockfile-3.0.post1 zodbpickle-3.3 zope-5.10 zope.annotation-5.0 zope.browser-3.0 zope.browsermenu-5.0 zope.browserpage-5.0 zope.browserresource-5.1 zope.cachedescriptors-5.0 zope.component-6.0 zope.configuration-5.0.1 zope.container-5.2 zope.contentprovider-5.0 zope.contenttype-5.1 zope.datetime-5.0.0 zope.deferredimport-5.0 zope.deprecation-5.0 zope.dottedname-6.0 zope.event-5.0 zope.exceptions-5.0.1 zope.filerepresentation-6.0 zope.globalrequest-2.0 zope.hookable-6.0 zope.i18n-5.1 zope.i18nmessageid-6.1.0 zope.interface-6.4 zope.lifecycleevent-5.0 zope.location-5.0 zope.pagetemplate-5.1 zope.processlifetime-3.0 zope.proxy-5.2 zope.ptresource-5.0 zope.publisher-7.0 zope.schema-7.0.1 zope.security-6.2 zope.sequencesort-5.0 zope.site-5.0 zope.size-5.0 zope.structuredtext-5.0 zope.tal-5.0.1 zope.tales-6.0 zope.testbrowser-6.0 zope.testing-5.0.1 zope.traversing-5.0 zope.viewlet-5.0 [root@docker-1f03724f8d16:/app]#由于要用 dnspod 申请 SSL 证书,我们再单独安装一下 dnspod 的依赖,避免这个包没安装成功导致其他问题:pip install certbot-dns-dnspod执行后,提示如下信息,表明该依赖之前已安装成功:[root@docker-1f03724f8d16:/app]# pip install certbot-dns-dnspod Requirement already satisfied: certbot-dns-dnspod in /opt/certbot/lib/python3.11/site-packages (0.1.0) Requirement already satisfied: acme>=0.15.0 in /opt/certbot/lib/python3.11/site-packages (from certbot-dns-dnspod) (2.10.0) Requirement already satisfied: certbot>=0.15.0 in /opt/certbot/lib/python3.11/site-packages (from certbot-dns-dnspod) (2.10.0) Requirement already satisfied: cryptography>=3.2.1 in /opt/certbot/lib/python3.11/site-packages (from acme>=0.15.0->certbot-dns-dnspod) (42.0.7) Requirement already satisfied: josepy>=1.13.0 in /opt/certbot/lib/python3.11/site-packages (from acme>=0.15.0->certbot-dns-dnspod) (1.14.0) Requirement already satisfied: PyOpenSSL!=23.1.0,>=17.5.0 in /opt/certbot/lib/python3.11/site-packages (from acme>=0.15.0->certbot-dns-dnspod) (24.1.0) Requirement already satisfied: pyrfc3339 in /opt/certbot/lib/python3.11/site-packages (from
acme>=0.15.0->certbot-dns-dnspod) (1.1) Requirement already satisfied: pytz>=2019.3 in /opt/certbot/lib/python3.11/site-packages (from acme>=0.15.0->certbot-dns-dnspod) (2024.1) Requirement already satisfied: requests>=2.20.0 in /opt/certbot/lib/python3.11/site-packages (from acme>=0.15.0->certbot-dns-dnspod) (2.31.0) Requirement already satisfied: setuptools>=41.6.0 in /opt/certbot/lib/python3.11/site-packages (from acme>=0.15.0->certbot-dns-dnspod) (66.1.1) Requirement already satisfied: ConfigArgParse>=1.5.3 in /opt/certbot/lib/python3.11/site-packages (from certbot>=0.15.0->certbot-dns-dnspod) (1.7) Requirement already satisfied: configobj>=5.0.6 in /opt/certbot/lib/python3.11/site-packages (from certbot>=0.15.0->certbot-dns-dnspod) (5.0.8) Requirement already satisfied: distro>=1.0.1 in /opt/certbot/lib/python3.11/site-packages (from certbot>=0.15.0->certbot-dns-dnspod) (1.9.0) Requirement already satisfied: parsedatetime>=2.4 in /opt/certbot/lib/python3.11/site-packages (from certbot>=0.15.0->certbot-dns-dnspod) (2.6) Requirement already satisfied: six in /opt/certbot/lib/python3.11/site-packages (from configobj>=5.0.6->certbot>=0.15.0->certbot-dns-dnspod) (1.16.0) Requirement already satisfied: cffi>=1.12 in /opt/certbot/lib/python3.11/site-packages (from cryptography>=3.2.1->acme>=0.15.0->certbot-dns-dnspod) (1.16.0) Requirement already satisfied: charset-normalizer<4,>=2 in /opt/certbot/lib/python3.11/site-packages (from requests>=2.20.0->acme>=0.15.0->certbot-dns-dnspod) (3.3.2) Requirement already satisfied: idna<4,>=2.5 in /opt/certbot/lib/python3.11/site-packages (from requests>=2.20.0->acme>=0.15.0->certbot-dns-dnspod) (3.7) Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/certbot/lib/python3.11/site-packages (from requests>=2.20.0->acme>=0.15.0->certbot-dns-dnspod) (2.2.1) Requirement already satisfied: certifi>=2017.4.17 in /opt/certbot/lib/python3.11/site-packages (from requests>=2.20.0->acme>=0.15.0->certbot-dns-dnspod) (2024.2.2) 
Requirement already satisfied: pycparser in /opt/certbot/lib/python3.11/site-packages (from cffi>=1.12->cryptography>=3.2.1->acme>=0.15.0->certbot-dns-dnspod) (2.22) [root@docker-1f03724f8d16:/app]#三、功能验证安装好依赖后,我们再次申请证书就没有异常错误提示信息了:loading 完成后即申请成功:四、后记为保证后续能稳定地使用修复后的功能,避免容器删除后再次运行时出现同样的问题,将修正后的容器另存为一个镜像,并修改 docker-compose 中引用的镜像:sudo docker commit nginxProxyManager zhuzl/nginx-proxy-manager:2.11.1-ssl
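固化镜像并切换 compose 引用的操作,可以用下面的命令示意(容器名与镜像标签沿用文中示例,compose 项目目录请按实际部署情况替换):

```shell
# 1. 将修复好依赖的容器另存为新镜像(nginxProxyManager 为容器名)
sudo docker commit nginxProxyManager zhuzl/nginx-proxy-manager:2.11.1-ssl
# 2. 把 docker-compose.yml 中的 image 改为上面的新镜像后,重建容器使其生效
sudo docker compose up -d --force-recreate
```

这样即使容器被删除,重新 `up` 时也会基于已修复的镜像启动,不会再出现依赖缺失的问题。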
2024年05月19日
610 阅读
0 评论
1 点赞
2024-04-07
基于 Magic-api + Clickhouse 实现业务数据更新的项目记录
背景介绍项目有用到 Clickhouse 作为数仓,存储一些用户日常业务产生的大数据,下面先简单介绍一下我们这个任务的需求背景:我们的每个用户都会归属于某个用户组,并基于用户所在的计费组织结算产品使用过程中产生的消费等情况。而按照系统的设定,用户初始注册时是没有归属用户组的,计费组织的主账号可以在控制台将用户绑定到该计费组下,也可以解绑,解绑后也可以绑定到其他用户组。为了更好地记录这个变更情况,我们在 Clickhouse 添加了一张名为 user_type 的表,每次该数据变更都会新增一条记录,该表的结构如下:CREATE TABLE user_type ( `user_id` Nullable(String), `present_type` Nullable(String), `pay_type` Nullable(String), `group_type` Nullable(String), `start_date` Nullable(Date), `end_date` Nullable(Date), `uni_key` Nullable(String) ) ENGINE = Log;实现方案本项目初期使用 dbt + Clickhouse 的方式来实现,但是经过一段时间的实践运行后发现,dbt 做数据同步很方便,但要添加一些业务逻辑就显得很棘手。为了解决 dbt 的问题,我们使用已搭建的 magic-api 来实现这个数据的更新,由于相关数据仅需一天一更新,所以我们可以直接利用 magic-api 自带的定时任务机制来实现更新。技术细节为便于相关业务逻辑在接口和定时任务中复用,我们将核心代码写在函数模块中,相关步骤的核心代码如下:1、从计费系统获取最新的userType信息var statSQL = `select e.*, CONCAT(e.user_id,'-',e.present_type,'-',e.pay_type,'-',e.group_type,'-',date_format(e.start_date,'%Y-%m-%d')) as uni_key FROM( SELECT a.user_id, CASE WHEN EXISTS ( SELECT 1 FROM ( SELECT t1.user_id user_id from b_contract t1 LEFT JOIN b_contract_item t2 ON t1.id = t2.contract_id WHERE t2.is_present = 0 and t2.received_payments > 0 GROUP BY t1.user_id UNION SELECT u2.user_id user_id from b_user as u1, b_user as u2 where u1.group_id=u2.group_id AND u1.user_id != u2.user_id AND EXISTS( SELECT 1 FROM (SELECT t1.user_id from b_contract t1 LEFT JOIN b_contract_item t2 ON t1.id = t2.contract_id WHERE t2.is_present = 0 and t2.received_payments > 0 GROUP BY t1.user_id) c WHERE c.user_id = u1.user_id ) ) d WHERE a.user_id = d.user_id ) then 'pay' else 'no pay' END as present_type, CASE WHEN EXISTS( SELECT 1 FROM( SELECT t1.user_id FROM b_user t1 , b_group t2 WHERE t1.user_id=t2.pay_user_id AND t2.pay_user_id IS NOT NULL )b WHERE a.user_id = b.user_id ) THEN 'master' ELSE 'slave' END as pay_type, CASE WHEN EXISTS(SELECT 1 FROM(SELECT t1.user_id FROM b_user t1 WHERE t1.group_id IS NOT NULL)b WHERE a.user_id = b.user_id) THEN 'group' ELSE 'no group' END as group_type, CURRENT_DATE as start_date, DATE(null) as
end_date FROM b_user a )e` return db['NB'].select(statSQL)2、将上一步获取到的信息存储到Clickhouse 的一张临时表import log; import cn.hutool.core.date.DateUtil; import '@/statForProduction/userTypeStat/getLatestUserTypeData' as getLatestUserType; // ------------------- 一、创建临时表 ------------------- const TEMP_TABLE_NAME = 'user_type_temp' var checkExistRes = db['CH'].select(`SELECT 1 FROM system.tables WHERE database = 'dw' AND name = '${TEMP_TABLE_NAME}'`) log.info(checkExistRes.size() + '') // 不存在表的话就基于 user_type 表创建一张临时表 if (checkExistRes.size() === 0) { var initTemporaryTableSQL = `CREATE TABLE ${TEMP_TABLE_NAME} as user_type` db['CH'].update(initTemporaryTableSQL) } else { // 临时表存在则先清空临时表的数据,便于下一步将输入存入临时表 var truncateTemporaryTableSQL = `truncate table ${TEMP_TABLE_NAME}` db['CH'].update(truncateTemporaryTableSQL) } // ------------------- 二、获取最新的用户类型数据 ------------------- log.info(`============ 开始从计费系统获取最新的用户类型数据,该操作耗时较长,请耐心等待 ============`) var timer = DateUtil.timer() const userTypeList = getLatestUserType() log.info(`getLatestUserType cost time: ${timer.intervalPretty()}.`) // ------------------- 三、将数据存入临时表 ------------------- const BATCH_INSERT_COUNT = 1000 // 分批次入临时表,一次插入记录条数 var timer = DateUtil.timer() const allDataCount = userTypeList.size() if (allDataCount > 0) { log.info(`开始导入数据到临时表,待导入的总记录数为:${allDataCount},预计分${Math.ceil(allDataCount/BATCH_INSERT_COUNT)::int}批导入。`) const willInsertArr = [] var insertSQL = `insert into ${TEMP_TABLE_NAME}(user_id,present_type,pay_type,group_type,start_date,end_date,uni_key)` // 分批次插入临时表 for (index,userTypeItem in userTypeList) { willInsertArr.push(`('${userTypeItem.userId}','${userTypeItem.presentType}','${userTypeItem.payType}','${userTypeItem.groupType}','${userTypeItem.startDate}', null,'${userTypeItem.uniKey}')`) if (willInsertArr.size() === BATCH_INSERT_COUNT) { db['CH'].update(`${insertSQL} values${willInsertArr.join(',')}`) // 清空数据 willInsertArr.clear() log.info('Batch insert:' + index) } } // 不满整批次数据单独处理 if (willInsertArr.size() > 
0) { db['CH'].update(`${insertSQL} values${willInsertArr.join(',')}`) // 清空数据 willInsertArr.clear() } } log.info(`insert latest user type to Temporary Table cost time: ${timer.intervalPretty()}.`) return true3、将临时表数据跟前一次最新的用户数据对比后,将有变更和新增的数据写入user_type表import log; import cn.hutool.core.date.DateUtil; const LATEST_TABLE_NAME = 'user_type_latest' // 用户最新类型数据表 const TEMP_TABLE_NAME = 'user_type_temp' // 该表存储从计费表获取到用户当前的用户类型数据,已在上一步获取数据完毕 // 一、从user_type表获取所有用户最新的用户类型数据并插入到用于计算的临时表 // 1.1 新建临时表,用于存储每个用户user_type 表中最新的用户类型数据 var checkExistRes = db['CH'].select(`SELECT 1 FROM system.tables WHERE database = 'dw' AND name = '${LATEST_TABLE_NAME}'`) log.info(checkExistRes.size() + '') // 不存在表的话就基于 user_type 表创建一张临时表 if (checkExistRes.size() === 0) { var initTemporaryTableSQL = `CREATE TABLE ${LATEST_TABLE_NAME} as user_type` db['CH'].update(initTemporaryTableSQL) } else { // 临时表存在则先清空临时表的数据,便于下一步将输入存入临时表 var truncateTemporaryTableSQL = `truncate table ${LATEST_TABLE_NAME}` db['CH'].update(truncateTemporaryTableSQL) } // 1.2 将最新数据写入临时表 // 该方式在数据量较大的情况下极有可能导致内存溢出,拟采取其他方案:在user_type 数据初始化的时候,将最新的用户类型数据存储到user_type_latest表,对比更新完成后将临时表的数据更新到user_type_latest便于下次对比 // const insertLatestDataSQL = `insert into ${LATEST_TABLE_NAME} SELECT user_type.user_id uid,user_type.present_type ,user_type.pay_type ,user_type.group_type,user_type.start_date,user_type.end_date,user_type.uni_key // FROM user_type, (SELECT user_type.user_id uid2,max(user_type.start_date) AS latestDate FROM user_type GROUP BY user_type.user_id) AS temp // WHERE user_type.start_date = temp.latestDate and uid = temp.uid2` // db['CH'].update(insertLatestDataSQL) // 二、两个临时表的数据做对比,并将最新数据更新到 user_type var timer = DateUtil.timer() // 2.1 更新有变更的数据 const changedInsertSQL = `insert into user_type select tuts.* from ${LATEST_TABLE_NAME} tutl left join ${TEMP_TABLE_NAME} tuts on tutl.user_id =tuts.user_id where tutl.present_type != tuts.present_type or tutl.pay_type != tuts.pay_type or tutl.group_type != tuts.group_type` 
timer.start("insertChangeData") db['CH'].update(changedInsertSQL) // 2.2 新增用户数据直接插入 timer.start("insertNewData") const insertNewUserSQL = `insert into user_type select * from ${TEMP_TABLE_NAME} tuts where tuts.user_id not in (select tutl.user_id from ${LATEST_TABLE_NAME} tutl) ` db['CH'].update(insertNewUserSQL) // 三、如果有数据更新,则将临时表的数据替换latest表 // 3.1 清理已有的数据 const truncateLatestTableSQL = `truncate table ${LATEST_TABLE_NAME}` db['CH'].update(truncateLatestTableSQL) // 3.2 从临时表导入最新的数据 const initialLatestTableDataSQL = `insert into ${LATEST_TABLE_NAME} select * from ${TEMP_TABLE_NAME}` db['CH'].update(initialLatestTableDataSQL) log.info(`insertChangeData cost time: ${timer.intervalPretty('insertChangeData')}`) log.info(`insertNewUser cost time: ${timer.intervalPretty('insertNewData')}`) // 四、清理临时表 const dropTempTableSQL = `drop table ${TEMP_TABLE_NAME}` db['CH'].update(dropTempTableSQL) return true 定义好相关函数后,我们可以直接在接口中用起来了,为此我定义了两个接口,一个接口用于数据初始化,一个接口用于手动更新数据:接口定义01数据初始化import log; import '@/statForProduction/userTypeStat/maintenance/clearUserTypeData' as clearUserTypeData import '@/statForProduction/userTypeStat/saveToTemporaryTable' as saveToTemporaryTable const LATEST_TABLE_NAME = 'user_type_latest' // 用户最新类型数据表 const TEMP_TABLE_NAME = 'user_type_temp' // 该表存储从计费表获取到用户当前的用户类型数据 // 一、清空所有user_type表的数据 clearUserTypeData() // 二、一次性写入所有 saveToTemporaryTable() // 三、将临时表的所有数据一次性写入user_type 表作为初始数据 const initialUserTypeDataSQL = `insert into user_type select * from ${TEMP_TABLE_NAME}` db['CH'].update(initialUserTypeDataSQL) // 四、将数据写入最新用户类型表,便于下一次做数据比对 // 4.1 基于 user_type 表 创建 user_type_latest 表 var checkExistRes = db['CH'].select(`SELECT 1 FROM system.tables WHERE database = 'dw' AND name = '${LATEST_TABLE_NAME}'`) log.info(checkExistRes.size() + '') // 不存在表的话就基于 user_type 表创建一张 if (checkExistRes.size() === 0) { var createLatestTableSQL = `CREATE TABLE ${LATEST_TABLE_NAME} as user_type` db['CH'].update(createLatestTableSQL) } else { // 表存在则先清空表的数据,便于下一步将最新的用户类型数据存入该表 var 
truncateLatestTableSQL = `truncate table ${LATEST_TABLE_NAME}` db['CH'].update(truncateLatestTableSQL) } // 4.2 插入该表的初始数据 const initialLatestTableDataSQL = `insert into ${LATEST_TABLE_NAME} select * from ${TEMP_TABLE_NAME}` db['CH'].update(initialLatestTableDataSQL) // 五、清理临时表 const dropTempTableSQL = `drop table ${TEMP_TABLE_NAME}` db['CH'].update(dropTempTableSQL) 02手工同步用户类型数据/** * 本接口用于手工临时同步数据用,日常使用定时任务自动同步操作即可 */ import '@/statForProduction/userTypeStat/saveToTemporaryTable' as saveToTemporaryTable import '@/statForProduction/userTypeStat/updateUserTypeData' as updateUserTypeData saveToTemporaryTable() updateUserTypeData()添加定时任务本任务用到的部分 Clickhouse SQL-- 判断数据表是否存在 SELECT 1 FROM system.tables WHERE database = 'dw' AND name = 'temp_user_type_session' -- 根据user_type 表创建一张名为 temp_user_type_session 的临时表 CREATE TABLE temp_user_type_session as user_type; -- 清空某数据表中的所有内容 truncate table temp_user_type_session; -- 查询所有用户最新的用户类型数据 SELECT user_type.user_id uid,user_type.present_type ,user_type.pay_type ,user_type.group_type,user_type.start_date,user_type.end_date,user_type.uni_key FROM user_type, (SELECT user_type.user_id uid2,max(user_type.start_date) AS latestDate FROM user_type GROUP BY user_type.user_id) AS temp WHERE user_type.start_date = temp.latestDate and uid = temp.uid2; -- 获取有差异的数据 select tutl.*,tuts.user_id user_id2, tuts.present_type present_type2, tuts.pay_type pay_type2, tuts.group_type group_type2, tuts.start_date start_date2,tuts.uni_key uni_key2 from temp_user_type_latest tutl left join temp_user_type_session tuts on tutl.user_id =tuts.user_id where tutl.present_type != tuts.present_type or tutl.pay_type != tuts.pay_type or tutl.group_type != tuts.group_type;
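上面第 2 步"分批次入临时表"的逻辑,可以用下面这个最小化的 Node.js 片段示意(仅演示批处理与 VALUES 拼接的思路;表名、字段名沿用文中约定,实际项目中使用的是 magic-api 的 magic-script,且生产环境应考虑参数化插入以避免 SQL 注入):

```javascript
// 分批构造 INSERT 语句的最小示意:把记录按 BATCH_INSERT_COUNT 切片,
// 每片拼出一条 "insert ... values(...),(...)" 语句
const BATCH_INSERT_COUNT = 1000; // 每批写入的记录条数

function buildBatches(rows, batchSize = BATCH_INSERT_COUNT) {
  const prefix =
    "insert into user_type_temp(user_id,present_type,pay_type,group_type,start_date,end_date,uni_key) values";
  const sqls = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    const values = rows
      .slice(i, i + batchSize)
      .map(
        (r) =>
          `('${r.userId}','${r.presentType}','${r.payType}','${r.groupType}','${r.startDate}', null,'${r.uniKey}')`
      )
      .join(",");
    sqls.push(prefix + values);
  }
  return sqls;
}

// 模拟 2500 条从计费系统取回的用户类型记录
const rows = Array.from({ length: 2500 }, (_, i) => ({
  userId: `u${i}`, presentType: "pay", payType: "master",
  groupType: "group", startDate: "2024-04-01", uniKey: `k${i}`,
}));
const sqls = buildBatches(rows);
```

按每批 1000 条计算,2500 条记录会生成 3 条 INSERT 语句,最后一批 500 条,对应文中"不满整批次数据单独处理"的分支。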
2024年04月07日
75 阅读
0 评论
0 点赞
2023-12-13
Jeepay开源版使用过程中踩过的坑
1、商户系统登录问题添加商户的时候有设置登录名,但是没有设置账号密码的位置,好不容易找到对商户重置密码的地方,但是那个勾选重置密码的复选框又超级容易被理解为用户下次登录需重置密码的配置项。勾选后有提示重置为默认密码,但是又没有说明默认密码是什么,最后查看源码才知道通过常量设置的默认密码为:jeepay6662、证书文件不存在问题好不容易登录商户系统了,进行支付测试的功能验证,提示证书文件不存在:整个应用部署过程完全是基于官方提供的 docker-compose.yml 文件,最后发现默认配置的 /home/jeepay/upload 目录根本就没有挂载到宿主机,修改 docker-compose.yml ,payment、manager、merchant 应用的 volumes 均挂载 /home/jeepay 目录,如: volumes: - ./jeepayData:/home/jeepay3、支付测试不显示二维码的问题支付测试时不显示支付二维码,发现HTTP请求中有个 404 请求:检查代码,确定应用存在对应的接口路径:查看docker 日志复现如下 error 信息:基于该信息可得知,nginx在接收到二维码图片请求时根本就没有请求到 jeepay-payment 这个后端服务,而是直接请求了root 目录中的文件,由此我们调整一下代理的 api 接口的优先级,修改 jeepay-ui 根目录下的 default.conf.template 文件,在/api/ 前添加 ^~ ,nginx的路径匹配规则如下:/:通用匹配,任何请求都可以匹配。=:用于不含正则表达式的uri前,要求请求字符串与uri严格匹配,如果匹配成功,就停止继续向下搜索并立即处理该请求。~:用于表示uri包含正则表达式,并且区分大小写。~*:用于表示uri包含正则表达式,并且不区分大小写。^~:用于不包含正则表达式的uri前,要求nginx服务器找到标识uri和请求字符串匹配度最高的location后,立即使用此location处理请求,而不再使用location块中的正则uri与请求字符串做匹配。!~和!~*:分别表示区分大小写不匹配和不区分大小写不匹配的正则。优先级:= --> ^~ --> ~/~* --> /。多个location配置的情况下匹配顺序为:首先匹配 =,其次匹配 ^~,其次是按文件中顺序的正则匹配,最后交给 / 通用匹配。当匹配成功时,停止匹配,按当前匹配规则处理请求。注意:如果uri包含正则表达式,则必须要有 ~ 或者 ~* 标识。修改后的 default.conf.template 文件如下所示:server { listen 80; listen [::]:80; server_name localhost; root /workspace/; try_files $uri $uri/ /index.html; location ^~ /api/ { proxy_next_upstream http_502 http_504 error timeout invalid_header; proxy_pass http://$BACKEND_HOST; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } # favicon.ico location = /favicon.ico { log_not_found off; access_log off; } # robots.txt location = /robots.txt { log_not_found off; access_log off; } # assets, media location ~* \.(?:css(\.map)?|js(\.map)?|jpe?g|png|gif|ico|cur|heic|webp|tiff?|mp3|m4a|aac|ogg|midi?|wav|mp4|mov|webm|mpe?g|avi|ogv|flv|wmv)$ { expires 7d; access_log off; } # svg, fonts location ~* \.(?:svgz?|ttf|ttc|otf|eot|woff2?)$
{ add_header Access-Control-Allow-Origin "*"; expires 7d; access_log off; } # gzip gzip on; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml; } 4、公众号/小程序支付的URL多了一级/cashier应用部署完毕,进行支付测试时,采用「微信支付二维码」的方式已支付成功,但是采用「公众号/小程序」的支付方式时,扫码后的页面显示空白。排查发现是页面的 css 和 js 资源文件 404 导致的,而 404 的原因是请求的资源多了一级 path,以下是问题排查过程:系统配置中的支付网关地址填写的是 https://jeepay-cashier.work.zhuzhilong.com:但是在使用支付测试功能,支付方式采用「公众号/小程序」进行支付测试时,生成的二维码如下:二维码识别后的地址为:https://jeepay-cashier.work.zhuzhilong.com/cashier/index.html#/hub/78d439f3140fe4047c7f8f6cda1048313636890021b83c0c167270dbce4fc2ff根据应用部署情况,比预期的访问路径多了 /cashier,查看源代码后,发现这个路径是写死在 com.jeequan.jeepay.core.model.DBApplicationConfig.java 中的:去掉相关方法中的 /cashier 后,根据 docker-compose.yml 重新构建镜像并重启服务,即可正常支付。
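为什么给 /api/ 加上 ^~ 就能修复二维码 404,可以用下面这个最小化的配置片段示意(upstream 名 backend 为假设,仅保留两条相关的 location):

```nginx
server {
    listen 80;
    root /workspace/;

    # ^~ 前缀匹配命中后立即处理该请求,
    # 不再参与后面正则 location 的匹配
    location ^~ /api/ {
        proxy_pass http://backend;
    }

    # 若没有上面的 ^~,形如 /api/xxx.png 的二维码图片请求
    # 会被这条静态资源正则命中,直接到 root 下找文件而返回 404
    location ~* \.(?:png|jpe?g|gif)$ {
        expires 7d;
    }
}
```

也就是说,普通前缀 location 的优先级低于正则 location,而 ^~ 能在前缀命中时"短路"掉后续的正则匹配,保证 /api/ 下的图片请求走后端服务。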
2023年12月13日
119 阅读
0 评论
0 点赞
2023-10-30
docker compose 部署Umami
很长一段时间是用的cnzz做的网站访问统计,功能强大,分析结果对于小白用户也超级友好,自从被阿里收购后,整合成Umeng的一部分勉强还能用,但是自从开启收费(收割用户)模式后,高昂的价格,无疑把我们这种小白个人用户完全隔离在外了。然后用了一段时间的百度统计,感觉也是不尽如人意,只好另辟蹊径,调研了市面主流的流量统计工具(也就调研了Matomo 和Umami)后,选择了Umami 作为个人流量统计工具,主要是Matomo不少明细数据是存储的Binary数据,不便于通过SQL直观地查看;相对于Matomo而言,Umami 算是轻量级别的,UI 界面也更现代化。Umami 支持 PostgreSQL 和 MySQL 两种数据库,分别对应不同的Docker 镜像。PostgreSQL:docker pull ghcr.io/umami-software/umami:postgresql-latestMySQL:docker pull ghcr.io/umami-software/umami:mysql-latest由于我的服务器上面已安装 MySQL 客户端,就直接采用 MySQL 的镜像。ghcr.io 是 GitHub 的 Docker 镜像仓库,国内环境可能在pull 时会碰到些网络方面的问题,我是通过一台境外的服务器pull 后,再 push 到本人的 Docker 私服进行下载的,也可以采用导出备份后再导入的方式。如果你在pull过程中也存在这方面网络的问题的话,也推荐使用这个方式。docker-compose.ymlversion: "3.8" services: umami: image: ghcr.io/umami-software/umami:mysql-latest # image: hub.work.zhuzhilong.com/apps/umami:mysql container_name: umami restart: unless-stopped volumes: - ../hosts:/etc/hosts environment: - DATABASE_URL=mysql://DB_USERNAME:DB_PASSWORD@DB_HOST:DB_PORT/umami - DATABASE_TYPE=mysql - APP_SECRET=umami2023 - TZ=Asia/Shanghai networks: - net-zzl ports: - 8202:3000 networks: net-zzl: name: bridge_zzl external: true 使用 docker compose up -d 启动后,可使用默认的管理员账号登录:用户名:admin密码:umami登录后即可修改密码及添加站点了。以下是整合后的部分界面截图:顺便说下,umami的表结构比较简单,访问用户的IP信息都没有存表,如果有复杂运营场景的话,还是推荐使用 Matomo 之类的功能更强大的工具。当前版本(2.8.0)只有11张表:{mtitle title="2023-12-18更新"/}应用升级近日登录umami时提示最新发布了2.9.0 版本,而根据更新日志中的内容有提到可以查看访客的城市信息了,便及时更新了下,使用docker compose 的方式更新超级简单,主要执行如下命令:docker compose pull docker compose up --force-recreate提示数据库更新成功:然后重启应用即可
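站点添加完成后,在 Umami 后台可以拿到一段跟踪脚本,粘贴到要统计的站点的 &lt;head&gt; 中即可开始上报数据(下面的域名与 website-id 均为占位示例,请替换为后台生成的实际值):

```html
<!-- Umami 跟踪脚本:src 指向自己部署的 Umami 实例,
     data-website-id 为后台为该站点生成的唯一 ID -->
<script async src="https://umami.example.com/script.js"
        data-website-id="00000000-0000-0000-0000-000000000000"></script>
```

脚本是异步加载的,不会阻塞页面渲染;部署在 8202 端口的实例建议先通过反向代理配好域名和 HTTPS,再引用该地址。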
2023年10月30日
103 阅读
0 评论
0 点赞
2023-10-30
docker compose 部署 中微子代理(NeutrinoProxy)
I recently came across NeutrinoProxy on OSChina and learned that it is an intranet tunneling tool built on Netty. The official introduction:

NeutrinoProxy (neutrino-proxy, 中微子代理) is a Netty-based intranet tunneling tool. The project is released under the permissive MIT license, so you may copy, modify, and distribute it for any personal or commercial purpose.

Gitee: https://gitee.com/dromara/neutrino-proxy
Website: http://neutrino-proxy.dromara.org

Server admin console screenshot:

Main features:

1. Traffic monitoring: dashboard charts and reports give multi-dimensional views of real-time and historical proxy traffic.
2. Users / licenses: supports multiple users and clients; disabling from the console takes effect immediately.
3. Port pool: centralized management of public ports, with per-user and per-license dedicated ports.
4. Port mappings: create, edit, delete, and disable all take effect immediately.
5. Docker: the server supports one-command Docker deployment.
6. SSL certificates: SSL support to protect your data.
7. Domain mapping: bind subdomains, convenient for testing third-party callbacks locally.
8. The permissive MIT license removes any licensing worries.

I have used FRP for intranet tunneling for years and it has been genuinely excellent, but FRP is weak on the management side: although it ships a dashboard, it only shows proxy ports and traffic statistics, with no multi-user controls. Neutrino fills exactly that gap. Below are the files I used to deploy it with docker compose, recorded here for reference.

Server:

docker-compose.yml

```yaml
version: '3.8'
services:
  app:
    image: registry.cn-hangzhou.aliyuncs.com/asgc/neutrino-proxy:latest
    container_name: neutrino-proxy
    restart: always
    networks:
      - net-zzl
    ports:
      - 9000-9200:9000-9200/tcp
      - 9201:8888
    volumes:
      - ./config:/root/neutrino-proxy/config
networks:
  net-zzl:
    name: bridge_zzl
    external: true
```

./config/app.yml

```yaml
neutrino:
  data:
    db:
      type: mysql
      # your own MySQL instance; create an empty database named 'neutrino-proxy'
      # and the server initializes it automatically on first start
      url: jdbc:mysql://DB_HOST:3306/neutrino-proxy?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useAffectedRows=true&useSSL=false
      driver-class: com.mysql.jdbc.Driver
      # database user
      username: DB_USERNAME
      # database password
      password: DB_PASSWORD
```

Client:

The official docs do not recommend deploying the client with Docker, but installing a Java runtime and friends on the host is a hassle, so I tried the Docker route anyway — and it works fine.

docker-compose.yml

```yaml
version: '3.8'
services:
  app:
    image: aoshiguchen/neutrino-proxy-client:latest
    container_name: neutrino-proxy-client
    restart: always
    network_mode: host
    volumes:
      - ./config:/root/neutrino-proxy/config
```

./config/app.yml

```yaml
neutrino:
  proxy:
    logger:
      # log level
      level: ${LOG_LEVEL:info}
    tunnel:
      # thread pool tuning; usually fine to ignore
      thread-count: 50
      # tunnel SSL certificate settings
      key-store-password: ${STORE_PASS:123456}
      jks-path: ${JKS_PATH:classpath:/test.jks}
      # server IP: replace with your host IP or domain
      server-ip: ${SERVER_IP:proxy.xxx.com}
      # server port (matches tunnel.port / tunnel.ssl-port in the server's app.yml)
      server-port: ${SERVER_PORT:9002}
      # whether to enable SSL (note: must match the chosen server-port)
      ssl-enable: ${SSL_ENABLE:true}
      # unique client credential; replace with your key
      license-key: ${LICENSE_KEY:ec7e9906cXXXXXX6430895c37fec75cd4e11}
      # unique client id (optional; auto-generated on first start if unset)
      client-id: ${CLIENT_ID:workServer}
      # log tunnel payloads (effective only at debug log level)
      transfer-log-enable: ${CLIENT_LOG:false}
      # reconnection settings
      reconnection:
        # reconnect interval (seconds)
        interval-seconds: 10
        # unlimited reconnects (when off, an invalid license stops the client;
        # when on it does not — enable with care)
        unlimited: false
  client:
    udp:
      # thread pool tuning; usually fine to ignore
      boss-thread-count: 5
      work-thread-count: 20
      # udp puppet port range
      puppet-port-range: 10000-10500
      # log tunnel payloads (effective only at debug log level)
      transfer-log-enable: ${CLIENT_LOG:false}
```

A screenshot of the final result:
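The `${NAME:default}` placeholders in app.yml resolve against environment variables, falling back to the default after the colon when the variable is unset. As a toy shell illustration of that lookup rule (this is not NeutrinoProxy code, just the same convention replayed):

```shell
# Toy illustration of how a ${NAME:default} placeholder resolves:
# use the environment variable NAME when set, else the default after ':'.
resolve() {
  ph=$1
  body=${ph#??}          # strip the leading '${'
  body=${body%\}}        # strip the trailing '}'
  name=${body%%:*}       # variable name before the first ':'
  def=${body#*:}         # default value after the ':'
  eval "printf '%s\n' \"\${$name:-$def}\""
}
resolve '${SERVER_PORT:9002}'                  # SERVER_PORT unset -> default
SSL_ENABLE=false resolve '${SSL_ENABLE:true}'  # environment wins over default
```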
October 30, 2023
113 reads
0 comments
0 likes
2023-08-27
Resetting a GitLab account password
My GitLab instance is for personal use, and after a long stretch of not using it the password I had set no longer worked, so I tried resetting it at the database level. This post records the process of resetting the GitLab administrator password.
August 27, 2023
17 reads
0 comments
0 likes
2023-08-17
Custom Docker networks
If a Docker container is started without specifying a network, a bridge network is generated for it by default, as shown below. Once enough containers are running this way, starting yet another one eventually fails with: Error response from daemon: Pool overlaps with other one on this address space. The fix is to create a bridge network by hand and then reference it from docker-compose.yml.

{card-describe title="Local Docker environment"}

```shell
jiuzilong@jiuzilong:/data/dockerRoot/apps/chat2db$ docker -v
Docker version 24.0.4, build 3713ee1
jiuzilong@jiuzilong:/data/dockerRoot/apps/chat2db$ docker compose version
Docker Compose version v2.19.1
jiuzilong@jiuzilong:/data/dockerRoot/apps/chat2db$
```

{/card-describe}

1. Create the bridge network:

```shell
docker network create --subnet=192.168.100.0/16 --gateway=192.168.100.1 --opt "com.docker.network.bridge.name"="bridge_zzl" bridge_zzl
```

You can also supply additional network options:

```shell
docker network create --subnet=172.66.0.0/16 --gateway=172.66.0.1 --opt "com.docker.network.bridge.default_bridge"="false" --opt "com.docker.network.bridge.name"="bridge_zzl" --opt "com.docker.network.bridge.enable_icc"="true" --opt "com.docker.network.bridge.enable_ip_masquerade"="true" --opt "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" --opt "com.docker.network.driver.mtu"="1500" bridge_zzl
```

{message type="info" content="Update 2024-09-11: with the 192.168.100.0/16 subnet, mapped ports were reachable only from the host itself; after switching to 172.*.0.0/16, both the host and other machines on the LAN could connect. I have not found the cause."/}

If creation fails, you have most likely hit the network count limit; stop some containers and retry. Failure screenshot below. On success, the NETWORK ID of the new network is printed, as shown below.

2. Configure the network in docker-compose.yml

Example configuration:

```yaml
version: '3'
services:
  app:
    # image: 'jc21/nginx-proxy-manager:2.9.22'
    # image: 'chishin/nginx-proxy-manager-zh:latest'
    image: 'zhuzl/nginx-proxy-manager:ssl'
    restart: unless-stopped
    container_name: nginxProxyManager
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    networks:
      - net-zzl
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
networks:
  net-zzl:
    name: bridge_zzl
    external: true
```

The key part is:

```yaml
networks:
  net-zzl:
    name: bridge_zzl
    external: true
```

and then referencing it from each service via networks:

```yaml
    networks:
      - net-zzl
```

{alert type="warning"}Note: when copying the snippets above into docker-compose.yml, pay close attention to indentation.{/alert}
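The "Pool overlaps" error appears precisely when a new subnet clashes with an existing one, that is, when either CIDR contains the other's network address. A small pure-shell check of that condition (a sketch, not anything Docker itself runs):

```shell
# Two bridge subnets clash when either CIDR contains the other's network
# address -- the condition behind "Pool overlaps with other one".
ip2int() {
  echo "$1" | { IFS=. read -r a b c d; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }
}
cidr_overlap() {
  n1=${1%/*}; p1=${1#*/}; n2=${2%/*}; p2=${2#*/}
  i1=$(ip2int "$n1"); i2=$(ip2int "$n2")
  m1=$(( (0xFFFFFFFF << (32 - p1)) & 0xFFFFFFFF ))
  m2=$(( (0xFFFFFFFF << (32 - p2)) & 0xFFFFFFFF ))
  # overlap iff the networks agree under either mask
  [ $(( i1 & m2 )) -eq $(( i2 & m2 )) ] || [ $(( i1 & m1 )) -eq $(( i2 & m1 )) ]
}
cidr_overlap 172.66.0.0/16 172.66.5.0/24 && echo "overlap"   # inside the same /16 pool
cidr_overlap 172.66.0.0/16 172.67.0.0/16 || echo "disjoint"  # safe to create
```

Running existing pools through this before `docker network create` shows immediately which subnet is free to use.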
August 17, 2023
99 reads
0 comments
0 likes
2023-08-11
Notes on installing Harbor on Ubuntu
Harbor is an excellent open-source enterprise container registry. It ships the features an enterprise needs — web-based permission management (RBAC), LDAP, auditing, vulnerability scanning, image signing, a management UI, self-registration, HA — plus image replication and Chinese localization designed for users in China. Website: https://goharbor.io/

To make application deployment easier I wanted my own Docker registry; the deployment went as follows.

{card-describe title="Ubuntu version"}

```shell
zhuzl@zhuzl-M9-PRO:/data/software/harbor$ uname -a
Linux zhuzl-M9-PRO 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
```

{/card-describe}

## 1. Download the installer

Following Harbor's official guide, the offline installer can be downloaded from the GitHub releases page: https://github.com/goharbor/harbor/releases

Download speeds from inside China are dire, so I prefixed the URL with https://ghproxy.com/ to download through a proxy. Download log:

```shell
zhuzl@zhuzl-M9-PRO:/data/software$ wget https://ghproxy.com/https://github.com/goharbor/harbor/releases/download/v2.8.4/harbor-offline-installer-v2.8.4.tgz
--2023-08-18 08:48:42--  https://ghproxy.com/https://github.com/goharbor/harbor/releases/download/v2.8.4/harbor-offline-installer-v2.8.4.tgz
Resolving ghproxy.com (ghproxy.com)... 192.9.132.155
Connecting to ghproxy.com (ghproxy.com)|192.9.132.155|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 608175520 (580M) [application/octet-stream]
Saving to: 'harbor-offline-installer-v2.8.4.tgz'

harbor-offline-installer-v2.8.4.tgz 100%[=====================================>] 580.00M 4.40MB/s in 2m 9s

2023-08-18 08:50:52 (4.49 MB/s) - 'harbor-offline-installer-v2.8.4.tgz' saved [608175520/608175520]
```

## 2. Unpack the installer

The download is a .tgz archive, which tar unpacks directly (substitute X.Y.Z with your version):

```shell
tar -zxvf harbor-offline-installer-vX.Y.Z.tgz
```

Extraction log:

```shell
zhuzl@zhuzl-M9-PRO:/data/software$ tar -zxvf harbor-offline-installer-v2.8.4.tgz
harbor/harbor.v2.8.4.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl
zhuzl@zhuzl-M9-PRO:/data/software$
```

## 3. Configure and install

Most tutorials I found start with certificate configuration; I normally run applications over plain HTTP and terminate SSL with certificates at an nginx reverse proxy, so I skipped that part.

### 3.1 Configuration

Copy the extracted harbor.yml.tmpl to harbor.yml and edit it. The main fields to change:

- hostname: set to your domain
- https: configure as needed; here I simply commented it out
- harbor_admin_password: the administrator password
- data_volume: Harbor's data directory on the host

Then run the initializer:

```shell
./prepare
```

Initialization log:

```shell
zhuzl@zhuzl-M9-PRO:/data/dockerRoot/apps/harbor$ ./prepare
prepare base dir is set to /data/dockerRoot/apps/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
zhuzl@zhuzl-M9-PRO:/data/dockerRoot/apps/harbor$
```

Initialization generates a `docker-compose.yml` in the current directory, after which the services can be started with `docker compose`:

```shell
zhuzl@zhuzl-M9-PRO:/data/dockerRoot/apps/harbor$ sudo docker compose up -d
[+] Running 9/9
 ✔ Container harbor-log         Started  0.3s
 ✔ Container redis              Started  0.7s
 ✔ Container registry           Started  1.1s
 ✔ Container registryctl        Started  0.9s
 ✔ Container harbor-portal      Started  0.8s
 ✔ Container harbor-db          Started  1.1s
 ✔ Container harbor-core        Started  1.5s
 ✔ Container nginx              Started  2.0s
 ✔ Container harbor-jobservice  Started  2.1s
zhuzl@zhuzl-M9-PRO:/data/dockerRoot/apps/harbor$
```

One note: run `docker compose` with `sudo`, otherwise you will hit file-permission errors.

Once everything is up, expose the service externally through an `nginx` proxy; you should then see the login page:

![Harbor login page](https://blog.zhuzhilong.cn/usr/uploads/2023/08/2381568293.png)

Log in as admin, with the default password set via `harbor_admin_password` in `harbor.yml`. After login the main view looks like this:

![Harbor main view](https://blog.zhuzhilong.cn/usr/uploads/2023/08/3334557343.png)
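The harbor.yml edits above can also be scripted, which helps when re-provisioning. The sketch below fabricates a tiny stand-in template so it is self-contained; the hostname and password values are placeholder assumptions, not the real harbor.yml.tmpl content:

```shell
# Sketch: derive harbor.yml from the template with hostname and admin
# password substituted. The template written here is a stand-in, not the
# real harbor.yml.tmpl shipped by Harbor.
printf 'hostname: reg.mydomain.com\nharbor_admin_password: Harbor12345\n' > harbor.yml.tmpl
sed -e 's/^hostname: .*/hostname: harbor.example.com/' \
    -e 's/^harbor_admin_password: .*/harbor_admin_password: S3cret/' \
    harbor.yml.tmpl > harbor.yml
cat harbor.yml
```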
August 11, 2023
78 reads
0 comments
0 likes
2023-07-13
Installing Jira with Docker Compose
The database is already installed on the server, so MySQL installation is omitted here.

## Environment

All Jira-related files live under /data/dockerRoot/jira. docker-compose.yml:

```yaml
version: '3.9'
services:
  jira:
    container_name: jira
    image: atlassian/jira-software:latest
    restart: "no"
    ports:
      - 18080:8080
    environment:
      CATALINA_OPTS: -javaagent:/opt/atlassian/jira/atlassian-agent.jar
    volumes:
      - ./jira_data:/var/atlassian/application-data/jira
      - ./libs/atlassian-agent.jar:/opt/atlassian/jira/atlassian-agent.jar
      - ./libs/mysql-connector-java-8.0.30.jar:/opt/atlassian/jira/lib/mysql-connector-java.jar
      - ../hosts:/etc/hosts
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
```

The files in the libs directory are bundled here: libs.zip

## Start the service

```shell
sudo docker compose up -d
```

## Set up Jira

Once the container is up, open http://localhost:18080; it redirects to the setup wizard shown below.

1. Switch languages: click "Language" in the top-right corner.
2. "I'll set it up myself": choose this option in step two.
3. Database connection: choose "Other database" and pick the database type matching your environment; the corresponding JDBC driver jar must have been mounted in advance.
4. Set the application properties.
5. Set the license: if you have a local Java environment you can generate the license locally; otherwise generating it inside the Jira container also works. Run the following, substituting your own server ID:

```shell
java -jar atlassian-agent.jar -d -m test@test.com -n BAT -p jira -o lewis2951 -s B87T-QH0H-UBTM-IU5Q
```

The parameters mean:

```shell
java -jar atlassian-agent.jar \
  -m zh_season@163.com    # licence email \
  -n atlassian            # licence name \
  -o atlassian            # licence organization \
  -p crowd                # licence product; supports: crowd, conf, jira, bitbucket \
  -s <copy from website>  # licence server id
```

I generated the license locally and copied its content into the input box.

6. Set the administrator: configure as you like; the email can be nonexistent, but a real one is recommended.
7. Set email notifications.
8. Done. Jira is now deployed; the welcome page offers to create a sample project.
9. View the license under Administration → Applications.
July 13, 2023
38 reads
0 comments
0 likes
2023-07-01
Frequently used Docker commands
This post is a running log of commands I use day to day and will grow over time.

Restart the docker service:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Stop all running containers:

```shell
docker ps -q | xargs docker stop
```

Add the current user to the docker group, so that sudo is no longer needed for every docker command:

```shell
sudo usermod -aG docker $USER
```

Copy a directory from a container to the host:

```shell
sudo docker cp <CONTAINER_ID>:/usr/local/tomcat/webapps/ROOT ./temp
```

Enter a container:

```shell
sudo docker exec -it mongodb /bin/bash
```

Create a network:

```shell
docker network create --driver=bridge --subnet=192.168.0.0/16 bridge_zzl
```

Build an image:

```shell
docker build -f ./Dockerfile.devIstio -t console-mobile-ui:0.0.1 .
```

Commit a container as a new image:

```shell
sudo docker commit nginxProxyManager zhuzl/nginx-proxy-manager:2.11.1-ssl
```

Push an image from another registry to a private one — useful when pulling foreign images locally is painfully slow: pull on a machine with unrestricted access, then push to the private Docker registry:

```shell
docker pull ghcr.io/huolalatech/page-spy-web:release
docker tag ghcr.io/huolalatech/page-spy-web:release xxx.yyy.zhuzhilong.com/apps/page-spy-web:release
docker push xxx.yyy.zhuzhilong.com/apps/page-spy-web:release
```

Remove all stopped containers (running ones cannot be removed):

```shell
docker rm $(docker ps -a -q)
```

Remove containers by status, here those whose status is Exited:

```shell
docker rm $(docker ps -qf status=exited)
```

Inspect docker log disk usage and clean logs:

```shell
# "data-root": "/data/dockerRoot/dataRoot" in /etc/docker/daemon.json
sudo ls -lh $(sudo find /data/dockerRoot/dataRoot/containers/ -name *-json.log)
cat /dev/null > /data/dockerRoot/dataRoot/containers/e876d8da919db8905dece519a81ecc182bc918c20397e5212f2b49e06ec03a01/e876d8da919db8905dece519a81ecc182bc918c20397e5212f2b49e06ec03a01-json.log
```

Remove all images whose tag contains "none":

```shell
#!/bin/bash
# docker rmi $(docker images | grep "none" | awk '{print $3}')
TAG=`docker images | grep none | awk '{print $3}'`
for tag in $TAG
do
  docker rmi -f $tag
done
exit
```

Use prune to remove docker objects that are no longer in use — delete all images that are untagged and not used by any container:

```shell
docker image prune
```
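The one-log-at-a-time `cat /dev/null >` cleanup above can be generalized: walk a containers directory and empty every `*-json.log` in place. A sketch, demonstrated against a throwaway directory standing in for the real data-root (so it is safe to run anywhere):

```shell
# Sketch: empty every container *-json.log under a directory instead of
# truncating them one by one. Truncating in place (": >") keeps the inode
# that the docker daemon is still writing to.
truncate_logs() {
  find "$1" -type f -name '*-json.log' | while read -r f; do
    : > "$f"
  done
}

# Demo against a throwaway directory mimicking the data-root layout:
demo=$(mktemp -d)
mkdir -p "$demo/containers/e876d8da"
printf 'old log data\n' > "$demo/containers/e876d8da/e876d8da-json.log"
truncate_logs "$demo/containers"
```

Against the real installation you would call `truncate_logs /data/dockerRoot/dataRoot/containers` (with sudo), but setting `max-size` log rotation in daemon.json is the longer-term fix.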
July 1, 2023
61 reads
0 comments
0 likes
2023-06-29
Fixing the 360 router's admin console not working through an external proxy
## How it started

Back in 2020, home networking felt unbearably slow, and 360's then-new whole-home WiFi 6 router (天穹 V6) looked decent, so I bought one. It has been stable overall and covers most use cases, and its built-in custom-hosts feature makes it easy to reach self-hosted services by domain name on the LAN. It does not support wildcard DNS, so entries have to be added one by one, but at least it spares me from deploying my own DNS. The official app, however, is heavily cut down; most of the features under "Extensions" are only reachable from the PC web console.

This 618 sale I picked up a reasonably specced mini PC to run 24/7 at home as a LAN development server, and for a programmer's always-on machine, intranet tunneling is of course mandatory.

## The search

With everything prepared, I wanted the router console itself reachable remotely through the tunnel, for configuring on the go. After wiring up FRP and the nginx proxy, the login page loaded fine, but after logging in, the main view flashed once and bounced straight back to the login page.

First, had anyone else hit this? The official 360 router community turned up several threads reporting the same class of problem:

- "The 360 P2 router still can't be managed from the WAN side?"
- "How to access the router console from the external network"
- "How to reach the router login page from outside"
- "The P1 apparently can't log in to the management page externally!!"

None of them had a working answer, and even the replies from accounts labeled "product support" did not solve what was reported. Time to apply my own modest skills: it is all HTTP in the end, and as long as the browser's request headers match what a LAN request looks like, it should in principle work.

## Digging in

Through the nginx-proxied address I found the following: after login, the page issues a `GET /router/get_router_device_capability.cgi` ajax request, but the endpoint responds 302, redirecting back to the /login.htm login page.

My first guess was that the backend validates Host or Referer, so I set those headers in nginx:

```nginx
proxy_set_header Host http://192.168.0.1;
proxy_set_header Referer http://192.168.0.1/login_pc.htm;
```

Restarting nginx changed nothing. Looking closer, the request carries a token_id header whose value is undefined. Searching the page source for the string token_id showed that the URL-parameter parsing is broken: the token_id required for authentication is never picked up, so the request fails. But the Cookie header does contain a token_id.

So: could nginx extract token_id from the Cookie and send it as a token_id request header to the backend? After some fiddling, this went into the nginx config:

```nginx
set $TOKEN_ID "";
if ($http_cookie ~* "token_id=(.+?)(?=;|$)") {
    set $TOKEN_ID "$1";
}
proxy_set_header token_id "$TOKEN_ID";
```

The full nginx virtual-host configuration:

```nginx
server {
    listen 80;
    server_name router.home.zhuzhilong.cn;
    location / {
        proxy_set_header Referer http://192.168.0.1/login_pc.htm;
        proxy_pass http://192.168.0.1;
        set $TOKEN_ID "";
        if ($http_cookie ~* "token_id=(.+?)(?=;|$)") {
            set $TOKEN_ID "$1";
        }
        proxy_set_header token_id "$TOKEN_ID";
    }
}
```

Restarted, retested — success!

## Wrapping up

A screen recording of the final result:

{dplayer src="/usr/uploads/2023/06/4228503511.mp4"/}
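The extraction the nginx `if` performs can be replayed in shell to convince yourself the pattern grabs the right value: pull token_id out of a raw Cookie header, stopping at the next semicolon. The cookie string below is a made-up sample, not a real router token:

```shell
# Same idea as the nginx regex: take everything after "token_id=" up to
# the next ';' in a raw Cookie header (sample value, not a real token).
cookie='uid=1; token_id=abc123def; theme=dark'
token=$(printf '%s' "$cookie" | sed -n 's/.*token_id=\([^;]*\).*/\1/p')
echo "$token"
```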
June 29, 2023
264 reads
0 comments
0 likes
2023-06-06
Auto-starting the FRP client on Ubuntu
## Background

During this year's 618 sale I bought a mini PC to serve as a home server, mainly running applications on Docker; Ubuntu, with the best native Docker support and a pleasant desktop, was the natural OS choice. Many of the applications deployed on it were LAN-only, which for convenience should not stay that way. Combined with my existing FRP server, they could easily be exposed as web services — hence this guide.

## Download FRP

FRP ships server and client in one archive, downloadable straight from GitHub.

FRP releases: https://github.com/fatedier/frp/releases

I grabbed the then-latest 0.49.0, choosing frp_0.49.0_linux_amd64.tar.gz for this OS:

```shell
wget https://github.com/fatedier/frp/releases/download/v0.49.0/frp_0.49.0_linux_amd64.tar.gz
```

Then unpack:

```shell
tar -zxvf frp_0.49.0_linux_amd64.tar.gz
```

Move the extracted files to a directory the current user owns; in this case /data/apps/frp.

## Edit frpc.ini

Adapted to my setup, with sensitive values redacted:

```ini
[common]
server_addr = SERVER_IP
server_port = 7000
# for authentication
token = TOKEN
log_file = /data/apps/frp/frpc.log
log_level = info
log_max_days = 30

[home_ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 4000

[home_web_pan]
type = http
local_ip = 127.0.0.1
local_port = 80
http_user = zhuzl
http_pwd = PASSWORD
subdomain = pan

[home_web_kod]
type = http
local_ip = 127.0.0.1
local_port = 80
http_user = zhuzl
http_pwd = PASSWORD
subdomain = kod
```

With the config in place, run frpc directly to verify it works:

```shell
./frpc
```

If there are problems, re-check the settings in frpc.ini.

## Configure frpc autostart

To avoid permission issues, switch to the root account for this part.

Create frpc.service:

```shell
vi /etc/systemd/system/frpc.service
```

with the following service definition:

```ini
[Unit]
Description=Frp Client Service
After=network.target

[Service]
Type=simple
User=jiuzilong
Restart=on-failure
RestartSec=5s
ExecStart=/data/apps/frp/frpc -c /data/apps/frp/frpc.ini

[Install]
WantedBy=multi-user.target
```

Enable the service:

```shell
# enable the service
systemctl enable frpc.service
# disable the service
systemctl disable frpc.service
```

Start the service:

```shell
systemctl daemon-reload
systemctl start frpc
```

Check its status:

```shell
systemctl status frpc
```

## References

- FRP official docs: https://gofrp.org/docs/overview/
- Installing the FRP server with autostart: https://blog.zhuzhilong.cn/software/install-frps-as-service.html

Alternatively, I recommend this docker compose approach:

```yaml
version: '3.3'
services:
  frpc:
    restart: always
    network_mode: host
    volumes:
      - './frpc.ini:/etc/frp/frpc.ini'
    container_name: frpc
    image: snowdreamtech/frpc
```
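Each `[section]` in frpc.ini other than `[common]` declares one proxy, so listing the section names is a quick way to eyeball what a config exposes before (re)starting the service. A small sketch, demonstrated on a temporary file rather than the real config:

```shell
# Sketch: list the proxy sections declared in an frpc.ini
# (every [section] except [common]).
list_proxies() {
  sed -n 's/^\[\(.*\)\]$/\1/p' "$1" | grep -v '^common$'
}

# Demo against a throwaway ini (stand-in for /data/apps/frp/frpc.ini):
ini=$(mktemp)
printf '[common]\nserver_addr = 1.2.3.4\n\n[home_ssh]\ntype = tcp\n\n[home_web_pan]\ntype = http\n' > "$ini"
list_proxies "$ini"
```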
June 6, 2023
30 reads
0 comments
0 likes
2023-06-05
Deploying Cloudreve
Reference: https://docs.cloudreve.org/getting-started/install#docker-compose

docker-compose.yml:

```yaml
version: "3.8"
services:
  cloudreve:
    container_name: cloudreve
    image: cloudreve/cloudreve:latest
    restart: unless-stopped
    ports:
      - "8004:5212"
    volumes:
      - temp_data:/data
      - ./cloudreve/uploads:/cloudreve/uploads
      - ./cloudreve/conf.ini:/cloudreve/conf.ini
      - ./cloudreve/cloudreve.db:/cloudreve/cloudreve.db
      - ./avatar:/cloudreve/avatar
    depends_on:
      - aria2
  aria2:
    container_name: aria2
    image: ddsderek/aria2-pro
    restart: unless-stopped
    environment:
      - RPC_SECRET=your_aria_rpc_token
      - RPC_PORT=6800
      - DOWNLOAD_DIR=/data
      - PUID=1000
      - PGID=1000
      - UMASK_SET=022
      - TZ=Asia/Shanghai
    volumes:
      - ./aria2/config:/config
      - temp_data:/data
volumes:
  temp_data:
    driver: local
    driver_opts:
      type: none
      device: ./data
      o: bind
```

Create the required directories and files in the current directory:

```shell
mkdir -vp cloudreve/{uploads,avatar} \
&& touch cloudreve/conf.ini \
&& touch cloudreve/cloudreve.db \
&& mkdir -p aria2/config \
&& mkdir -p data/aria2 \
&& chmod -R 777 data/aria2
```

Start the containers:

```shell
docker compose up -d
```

After startup, find the initial login credentials in the container logs:
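The `{uploads,avatar}` brace expansion in the setup chain is a bashism; on a plain POSIX sh it would create a literal `{uploads,avatar}` directory. A portable equivalent (the `CLOUDREVE_DIR` variable is my own addition, defaulting to the current directory):

```shell
# POSIX-portable equivalent of the bash brace-expansion setup chain.
# CLOUDREVE_DIR is an assumption of this sketch; it defaults to ".".
base=${CLOUDREVE_DIR:-.}
mkdir -p "$base/cloudreve/uploads" "$base/cloudreve/avatar" \
         "$base/aria2/config" "$base/data/aria2"
touch "$base/cloudreve/conf.ini" "$base/cloudreve/cloudreve.db"
chmod -R 777 "$base/data/aria2"
```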
June 5, 2023
11 reads
0 comments
0 likes
2023-06-05
Deploying Spug
Reference: https://www.spug.cc/docs/install-docker

Create docker-compose.yml:

```yaml
version: '3.9'
services:
  spug:
    image: openspug/spug-service
    container_name: spug
    privileged: true
    restart: always
    volumes:
      - ./service:/data/spug
      - ./repos:/data/repos
    ports:
      - 8002:80
    environment:
      - MYSQL_DATABASE=spug
      - MYSQL_USER=spug
      - MYSQL_PASSWORD=Passw0rd
      - MYSQL_HOST=192.168.1.200
      - MYSQL_PORT=3306
```

Start the container:

```shell
docker compose up -d
```

Initialize: the following creates an administrator account with username zhuzl and password Passw0rd; substitute your own credentials.

```shell
docker exec spug init_spug zhuzl Passw0rd
```
June 5, 2023
17 reads
0 comments
0 likes