
Firewall configuration

iptables

#Check the firewall status
systemctl status iptables.service
#List the current rules and confirm they are in effect
iptables -L -n
#Open port 9000
iptables -I INPUT -p tcp --dport 9000 -m state --state NEW -j ACCEPT
#Save the rules once they take effect
iptables-save > /etc/sysconfig/iptables

Before which two lines must the firewall port rules be placed? See the sketch below.

Fixes the Linux error: No route to host
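
A hedged sketch of the placement, assuming the stock CentOS /etc/sysconfig/iptables: the ACCEPT rule has to sit above the two closing REJECT lines, otherwise it is never reached.

# The two lines in question (assumed defaults):
#   -A INPUT -j REJECT --reject-with icmp-host-prohibited
#   -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -I INPUT (insert at the top) keeps the new rule above them; verify, then save:
iptables -L INPUT -n --line-numbers
iptables-save > /etc/sysconfig/iptables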

firewall

# Add ports 7000-7005 / 17000-17005
firewall-cmd --zone=public --add-port=7000/tcp --permanent
# Reload the configuration
firewall-cmd --reload
# Check the firewall rules
firewall-cmd --list-all
# ports: 7000/tcp 7001/tcp 7002/tcp 7003/tcp 7004/tcp 7005/tcp 17005/tcp 17004/tcp 17003/tcp 17002/tcp 17001/tcp 17000/tcp
# Check the firewalld state
firewall-cmd --state
# Restart the firewall
systemctl restart firewalld
# Stop the firewall (temporary: it starts again on the next boot unless disabled)
systemctl stop firewalld.service
# Disable start on boot
systemctl disable firewalld.service
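
A hedged shortcut for the 7000-7005 / 17000-17005 blocks mentioned above: firewall-cmd accepts port ranges, so two calls are enough (public zone assumed, as in the commands above).

firewall-cmd --zone=public --add-port=7000-7005/tcp --permanent
firewall-cmd --zone=public --add-port=17000-17005/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-ports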

Linux mount commands

Note: do not run the mount/umount commands from inside the mount point directory itself.

#List disks and partitions
lsblk
#Show disk details
fdisk -l
#Mount (create the mount point beforehand); this mounts the fifth partition of the sdc device
mount /dev/sdc5 /mnt/udisk
# Mounting an NTFS filesystem requires ntfs-3g to be installed first
yum install ntfs-3g
# Check whether the mount succeeded
df -h
# Unmount
umount
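
A hedged example of the NTFS case (the device /dev/sdc1 and mount point /mnt/udisk are assumptions):

yum install -y ntfs-3g
mount -t ntfs-3g /dev/sdc1 /mnt/udisk
df -h /mnt/udisk
umount /mnt/udisk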

Hyper-V: mounting an additional VHD disk in CentOS

mount: unknown filesystem type 'LVM2_member'

fdisk -l
mount /dev/mapper/centos-root /mnt/disk
umount /mnt/disk
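
A hedged sketch of what usually has to happen first: 'LVM2_member' means the partition is an LVM physical volume, so the volume group on the attached VHD has to be activated before its logical volumes can be mounted (the group name centos is an assumption and may even clash with the host's own group of the same name).

vgscan                   # discover volume groups on the attached VHD
vgchange -ay centos      # activate the group so the /dev/mapper/* nodes appear
lvs                      # list logical volumes, e.g. centos/root
mount /dev/mapper/centos-root /mnt/disk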

Tencent Cloud: initializing and mounting a disk

  1. fdisk -l to list disks; if the new disk does not show up, check in the console whether the cloud disk is attached

    Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
  2. fdisk /dev/vdb to create a new partition: enter "n" (new partition), "p" (primary partition), "1" (first primary partition), press Enter twice (accept the defaults), then "wq" (write the partition table and quit)

    Welcome to fdisk (util-linux 2.23.2).

    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    Device does not contain a recognized partition table
    Building a new DOS disklabel with disk identifier 0x45f0094c.

    Command (m for help): n
    Partition type:
    p primary (0 primary, 0 extended, 4 free)
    e extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-209715199, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-209715199, default 209715199):
    Using default value 209715199
    Partition 1 of type Linux and of size 100 GiB is set

    Command (m for help): wq
    The partition table has been altered!

    Calling ioctl() to re-read partition table.
    Syncing disks.
  3. fdisk -l to verify

    Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x000c5e30

    Device Boot Start End Blocks Id System
    /dev/vda1 * 2048 104857599 52427776 83 Linux

    Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x45f0094c

    Device Boot Start End Blocks Id System
    /dev/vdb1 2048 209715199 104856576 83 Linux
  4. mkdir /data to create the mount point if it does not already exist

  5. mkfs.ext3 /dev/vdb1 to format the partition

  6. mount /dev/vdb1 /data to mount it

  7. vim /etc/fstab to mount automatically at boot; append the following line to fstab (a verification sketch follows)

    /dev/vdb1            /data                ext3       defaults              0 0
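
A hedged verification step before rebooting: mount everything listed in fstab now, so a typo in the new line shows up immediately rather than at boot.

umount /data    # only if it was mounted by hand in step 6
mount -a        # mounts every fstab entry; errors surface here
df -h /data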

QingCloud CentOS 7.8: mounting a disk to expand the Docker directory

# Partition: n -> p -> 1 -> Enter -> Enter -> w
fdisk /dev/sdc
# Format
mkfs.ext4 /dev/sdc
# Create the mount point
mkdir -p /var/lib/docker
# Mount
mount /dev/sdc /var/lib/docker
# Check
df -h
#---------------------
# Set up auto-mount at boot
# Get the UUID of /dev/sdc
blkid /dev/sdc
# Append a line to /etc/fstab, replacing the UUID with the one printed by the previous command
echo 'UUID=36ef3867-0b8a-4e99-8c0e-ffd8ebc1a226 /var/lib/docker ext4 defaults 0 0' >>/etc/fstab
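
A hedged note on ordering, assuming Docker was already installed with data under /var/lib/docker: stop the daemon and copy the existing data onto the new disk before mounting over the directory (the temporary mount point /mnt/newdisk is an assumption).

systemctl stop docker
mkdir -p /mnt/newdisk
mount /dev/sdc /mnt/newdisk
cp -a /var/lib/docker/. /mnt/newdisk/   # preserve ownership and attributes
umount /mnt/newdisk
mount /dev/sdc /var/lib/docker
systemctl start docker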

References

How to Mount a NTFS Drive on CentOS / RHEL / Scientific Linux

Change the disk identifier in fstab to UUID

MyBatis basics

<mapper namespace="com.willson.service.mapper.infrared.InfraredPictureMapper">
<resultMap id="BaseResultMap" type="com.willson.facade.pojo.infrared.InfraredPicture">
<id column="r_id" />
<result column="id" jdbcType="BIGINT" property="id" />
<association property="resource" javaType="com.willson.facade.pojo.sys.Resource" columnPrefix="r_" >
<id column="id" property="id" jdbcType="BIGINT"/>
<result column="name" property="name" jdbcType="VARCHAR"/>
</association>
<collection property="soil" ofType="com.willson.facade.pojo.plot.Soil" columnPrefix="s_">
<id column="id" property="id" jdbcType="BIGINT"/>
<result column="plot_num" jdbcType="VARCHAR" property="plotNum" />
</collection>
</resultMap>

<sql id="Base_Column_List">
t.id,
r.id r_id
</sql>

Explanation:

<id column="r_id" /> is normally the primary key. If several result rows share the same id (for example in a one-to-many join), only one object is kept per id, so in one-to-many mappings the joined columns also need aliases (hence the column prefixes above).

<association> maps a single entity object (one-to-one).

<collection> maps a List<Object> (one-to-many).

GeoServer: drawing shapes

Drawing a rectangle (POLYGON)

Note: the first and last points must be identical, so a rectangle needs at least 5 points.

geoserver

--Insert a polygon
SET @g = 'POLYGON((114.34845 25.48141, 114.34845 25.28141, 114.51599 25.28141, 114.51599 25.48141, 114.34845 25.48141))';
INSERT INTO test(shape) VALUES (ST_PolygonFromText(@g));
--Insert a point
SET @g = ST_GeomFromText('POINT(114.44845 25.38141)'); INSERT INTO test(shape) VALUES (@g);

Table structure:

Column | Type
id | int
shape | geometry
name | varchar

Common statements

-- Insert
SET @g = ST_GeomFromText('POINT(109.49097 19.06798)',1);
INSERT INTO infcamer(shape) VALUES (@g);
-- Update
UPDATE `功能分区面` set SHAPE=ST_PolygonFromText(@g,1) WHERE OGR_FID=1;
-- Check whether the coordinate was set correctly
SELECT * FROM infcamer WHERE ST_Contains(SHAPE, ST_GeomFromText( 'POINT(109.49097 19.06798)',0))
-- Inspect the spatial reference settings
SELECT * FROM spatial_ref_sys LIMIT 0, 50;
-- geoserver database
GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]]
-- test database
GEOGCS["GCS_WGS_1984",DATUM["WGS_1984",SPHEROID["WGS_1984",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]]

GEOGCS["GCS_WGS_1984",DATUM["WGS_1984",SPHEROID["WGS_1984",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433],METADATA["World",-180.0,-90.0,180.0,90.0,0.0,0.0174532925199433,0.0,1262]]

Polygons with holes

The data format is

POLYGON((a a, b b,a a),(c c,d d, c c))

Common problems

  1. A point query reports [Err] 3033 - Binary geometry function st_contains given two geometries of different srids: 0 and 1, which should have been identical.

    Analysis: the SRIDs used at insert time are inconsistent, and the query SELECT * FROM infcamer WHERE ST_Contains(SHAPE, ST_GeomFromText( 'POINT(109.49097 19.06798)')) does not specify an SRID, so the error complains about mismatched SRIDs.

    Fix 1: specify the SRID in the query, e.g. SELECT * FROM infcamer WHERE ST_Contains(SHAPE, ST_GeomFromText( 'POINT(109.49097 19.06798)',0))

    Fix 2: specify the SRID at insert time, and make it match the SRID of the existing rows so that "different srids: 0 and 1" cannot occur, e.g. SET @g = ST_GeomFromText('POINT(109.49097 19.06798)',0); INSERT INTO infcamer(shape) VALUES (@g);

  2. The Navicat client does not show the full geometry data; it is best to export it and inspect the export.

References

MySQL official documentation

MySQL spatial extensions (fairly complete, worth a read)

mysql ogr2ogr error

Docker unified storage with Ceph

Background

Ceph core services

  1. Monitors (mon)

    Maintain maps of the cluster state, including the monitor map, manager map, OSD map and CRUSH map. These maps are the critical cluster state Ceph daemons need to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.

  2. Managers (mgr)

    The ceph-mgr daemon keeps track of runtime metrics and the current state of the cluster, including storage utilization, current performance metrics and system load. It also hosts python-based plugins that manage and expose cluster information, including the web-based Ceph Manager Dashboard and a REST API. At least two managers are normally required for high availability.

  3. OSDs (osd_ceph_disk), object storage daemons

    Store data, handle replication, recovery and rebalancing, and provide some monitoring information to the monitors and managers by checking other Ceph OSD daemons for a heartbeat. At least three OSDs are normally required for redundancy and high availability.

  4. MDSs (mds), Ceph metadata servers

    Store metadata on behalf of the Ceph filesystem (Ceph block devices and Ceph object storage do not use MDS). The metadata servers let POSIX filesystem users run basic commands such as ls and find without putting an enormous burden on the storage cluster.

Problems

  1. Power loss / reboot: with Ceph installed in containers, automatic mounting and unmounting become a problem

    If the filesystem is still mounted at shutdown, the containers stop first, the unmount then fails and the machine hangs and never powers off.

    On boot the remount may come up without the data being visible.

  2. Cluster (swarm) deployment does not support privileged: true for the OSD service, so mount-related operations fail.

  3. Mounting through the docker plugin install rexray/rbd plugin: the service's mount directory cannot be changed, and the basic Ceph components have to be installed on the host anyway (consider running some services directly on the host, which would solve problems 1-3).

Installation

To redeploy from scratch, run

docker run -d --privileged=true -v /dev/:/dev/ -e OSD_DEVICE=/dev/sda ceph/daemon zap_device

and clean up the directories (a hedged cleanup sketch follows).
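
A hedged sketch of the cleanup, assuming the volume mappings used in this note; adjust the paths to your own layout.

rm -rf /etc/ceph/* /var/lib/ceph/*
rm -rf /dockerdata/ceph/data/* /dockerdata/ceph/config/*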

  1. Run on all three machines, replacing MON_IP with the local machine's IP

    docker run -d \
    --name=mon \
    --net=host \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -e MON_IP=192.168.1.230 \
    -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
    ceph/daemon mon

    This step cannot be deployed through a swarm stack, because --net=host means using the host network.

  2. Then copy the directory /dockerdata/ceph/data to the other machine, and copy /dockerdata/ceph/config/bootstrap* to the other machine as well

  3. Start the second machine; if there is a third, do the same for it

  4. Run docker exec mon ceph -s and both monitors show up

    [root@environment-test1 ceph]#  docker exec mon ceph -s
    cluster:
    id: cf6e2bed-0eb6-4ba1-9854-e292c936ea0f
    health: HEALTH_OK

    services:
    mon: 2 daemons, quorum lfadmin,environment-test1
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

    data:
    pools: 0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage: 0 B used, 0 B / 0 B avail
    pgs:
  5. Add an OSD. First add a new disk to the host and run lsblk to find the device name. If the disk is not empty the container fails to start; see "Wiping a disk (deleting all partitions)" below for how to clear it. Using a single partition (sda5) did not work, so in the end the whole disk was wiped.

    docker run -d \
    --net=host \
    --name=ceph_osd \
    --restart=always \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -v /dev/:/dev/ \
    --privileged=true \
    -e OSD_FORCE_ZAP=1 \
    -e OSD_DEVICE=/dev/sda \
    ceph/daemon osd_ceph_disk
  6. Run docker exec mon ceph -s again: both monitors and one OSD now show up, but the capacity details are still missing; the mds and rgw services need to be running for that

    [root@environment-test1 ~]# docker exec mon ceph -s
    cluster:
    id: cf6e2bed-0eb6-4ba1-9854-e292c936ea0f
    health: HEALTH_WARN
    no active mgr

    services:
    mon: 2 daemons, quorum lfadmin,environment-test1
    mgr: no daemons active
    osd: 1 osds: 1 up, 1 in

    data:
    pools: 0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage: 0 B used, 0 B / 0 B avail
    pgs:
  7. Add the mgr

    docker run -d \
    --net=host \
    --name=mgr \
    -v /dockerdata/ceph/data:/etc/ceph \
    -v /dockerdata/ceph/config/:/var/lib/ceph/ \
    ceph/daemon mgr
  8. Add the mds


    docker run -d \
    --net=host \
    --name=mds \
    -v /dockerdata/ceph/data:/etc/ceph \
    -v /dockerdata/ceph/config/:/var/lib/ceph/ \
    -e CEPHFS_CREATE=1 \
    ceph/daemon mds
  9. Add the rgw

    docker run -d \
    --name=rgw \
    -p 80:80 \
    -v /dockerdata/ceph/data:/etc/ceph \
    -v /dockerdata/ceph/config/:/var/lib/ceph/ \
    ceph/daemon rgw
  10. Run docker exec mon ceph -s once more and the capacity information is now visible (for the "too few PGs per OSD" warning in this output, see the sketch after the list)

    [root@environment-test1 ~]# docker exec mon ceph -s
    cluster:
    id: cf6e2bed-0eb6-4ba1-9854-e292c936ea0f
    health: HEALTH_WARN
    1 MDSs report slow metadata IOs
    Reduced data availability: 24 pgs inactive
    Degraded data redundancy: 24 pgs undersized
    too few PGs per OSD (24 < min 30)

    services:
    mon: 2 daemons, quorum lfadmin,environment-test1
    mgr: environment-test1(active)
    mds: cephfs-1/1/1 up {0=environment-test1=up:creating}
    osd: 1 osds: 1 up, 1 in

    data:
    pools: 3 pools, 24 pgs
    objects: 0 objects, 0 B
    usage: 2.0 GiB used, 463 GiB / 465 GiB avail
    pgs: 100.000% pgs not active
    24 undersized+peered
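
A hedged sketch for the "too few PGs per OSD" warning above: raise pg_num/pgp_num on the data pool. The pool name cephfs_data and the value 64 are assumptions; list the pools first.

docker exec mon ceph osd pool ls
docker exec mon ceph osd pool set cephfs_data pg_num 64
docker exec mon ceph osd pool set cephfs_data pgp_num 64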

Usage

Testing showed that mounting fails with only a single OSD, so add an OSD on both machines and mount on both.

  1. First look up the login user name and key

    [root@environment-test1 ~]# cat /dockerdata/ceph/data/ceph.client.admin.keyring 
    [client.admin]
    key = AQDTqMFbDC4UAxAApyOvC8I+8nA5PMK1bHWDWQ==
    auid = 0
    caps mds = "allow"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
  2. Create the mount point

    [root@lfadmin mnt]# mkdir /mnt/mycephfs
  3. Mount

    [root@lfadmin mnt]# mount -t ceph 192.168.1.213,192.168.1.230,192.168.1.212:/ /dockerdata/cephdata -o name=admin,secret=AQCu98JblQgRChAAskEmJ1ekN2Vasa9Chw+gvg==
  4. Auto-mount at boot? (see the sketch after this list)

  5. Unmount with umount /mnt/mycephfs/; if the mount point is busy, close whatever programs or shells are using it

docker exec ea8577875af3 ceph osd tree
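
A hedged sketch for step 4 (auto-mount at boot): put the key in a secret file and add a CephFS line to /etc/fstab. The monitor IPs, mount point and key below just mirror the mount command above; treat them as assumptions for your own cluster.

echo 'AQCu98JblQgRChAAskEmJ1ekN2Vasa9Chw+gvg==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
echo '192.168.1.213,192.168.1.230,192.168.1.212:/ /dockerdata/cephdata ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0' >> /etc/fstab
mount -a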

Testing

  1. With two nodes, if one goes down the mount point becomes inaccessible.

Stack deployment

version: "3.6"

networks:
  hostnet:
    external: true
    name: host

services:
  mon212:
    restart: always
    image: ceph/daemon
    command: mon
    networks:
      hostnet: {}
    volumes:
      - /etc/ceph:/etc/ceph
      - /var/lib/ceph/:/var/lib/ceph/
    environment:
      MON_IP: 192.168.1.212
      CEPH_PUBLIC_NETWORK: 192.168.1.0/24
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == worker]
  mon213:
    restart: always
    image: ceph/daemon
    command: mon
    networks:
      hostnet: {}
    volumes:
      - /etc/ceph:/etc/ceph
      - /var/lib/ceph/:/var/lib/ceph/
    environment:
      MON_IP: 192.168.1.213
      CEPH_PUBLIC_NETWORK: 192.168.1.0/24
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == lfadmin]
  mon230:
    restart: always
    image: ceph/daemon
    command: mon
    networks:
      hostnet: {}
    volumes:
      - /etc/ceph:/etc/ceph
      - /var/lib/ceph/:/var/lib/ceph/
    environment:
      MON_IP: 192.168.1.230
      CEPH_PUBLIC_NETWORK: 192.168.1.0/24
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == environment-test1]
  mgr230:
    restart: always
    image: ceph/daemon
    command: mgr
    networks:
      hostnet: {}
    volumes:
      - /etc/ceph:/etc/ceph
      - /var/lib/ceph/:/var/lib/ceph/
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == environment-test1]
  mds230:
    restart: always
    image: ceph/daemon
    command: mds
    networks:
      hostnet: {}
    volumes:
      - /etc/ceph:/etc/ceph
      - /var/lib/ceph/:/var/lib/ceph/
    environment:
      CEPHFS_CREATE: 1
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == environment-test1]
  rgw230:
    restart: always
    image: ceph/daemon
    command: rgw
    networks:
      hostnet: {}
    volumes:
      - /etc/ceph:/etc/ceph
      - /var/lib/ceph/:/var/lib/ceph/
    ports:
      - target: 80
        published: 14002 # only reachable on the worker that runs it
        protocol: tcp
        mode: host # requires compose file format 3.2+
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == environment-test1]
  # OSD mounting needs privileged mode (privileged=true), which is not supported yet
  # osd213:
  #   restart: always
  #   image: ceph/daemon
  #   command: osd_ceph_disk
  #   privileged: true
  #   networks:
  #     hostnet: {}
  #   volumes:
  #     - /dockerdata/ceph/data:/etc/ceph
  #     - /dockerdata/ceph/config/:/var/lib/ceph/
  #     - /dev/:/dev/
  #   environment:
  #     OSD_FORCE_ZAP: 1
  #     OSD_DEVICE: /dev/sda
  #   deploy:
  #     replicas: 1
  #     restart_policy:
  #       condition: on-failure
  #     placement:
  #       constraints: [node.hostname == lfadmin]
  # osd230:
  #   restart: always
  #   image: ceph/daemon
  #   command: osd_ceph_disk
  #   privileged: true
  #   networks:
  #     hostnet: {}
  #   volumes:
  #     - /dockerdata/ceph/data:/etc/ceph
  #     - /dockerdata/ceph/config/:/var/lib/ceph/
  #     - /dev/:/dev/
  #   environment:
  #     OSD_FORCE_ZAP: 1
  #     OSD_DEVICE: /dev/sda
  #   deploy:
  #     replicas: 1
  #     restart_policy:
  #       condition: on-failure
  #     placement:
  #       constraints: [node.hostname == environment-test1]

Note

Swarm does not support the privileged: true mode, so a stack deployment fails with a permissions error.

Wiping a disk (deleting all partitions)

Check the partition layout with lsblk

[root@environment-test1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 70.9G 0 part
├─sda2 8:2 0 1K 0 part
├─sda5 8:5 0 105.1G 0 part
├─sda6 8:6 0 145G 0 part
└─sda7 8:7 0 144.7G 0 part
sdb 8:16 0 465.8G 0 disk
├─sdb1 8:17 0 200M 0 part /boot/efi
├─sdb2 8:18 0 1G 0 part /boot
└─sdb3 8:19 0 464.6G 0 part
├─centos-root 253:0 0 408G 0 lvm /
├─centos-swap 253:1 0 5.8G 0 lvm [SWAP]
└─centos-home 253:2 0 50G 0 lvm /home

Format the disk with mkfs.ext4 /dev/sda; to format a single partition instead, append its number, e.g. mkfs.ext4 /dev/sda5

[root@environment-test1 ~]# mkfs.ext4 /dev/sda
mke2fs 1.42.9 (28-Dec-2013)
/dev/sda is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
30531584 inodes, 122096646 blocks
6104832 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2271215616
3727 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Check again

[root@environment-test1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
sdb 8:16 0 465.8G 0 disk
├─sdb1 8:17 0 200M 0 part /boot/efi
├─sdb2 8:18 0 1G 0 part /boot
└─sdb3 8:19 0 464.6G 0 part
├─centos-root 253:0 0 408G 0 lvm /
├─centos-swap 253:1 0 5.8G 0 lvm [SWAP]
└─centos-home 253:2 0 50G 0 lvm /home
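
A hedged alternative to formatting the whole device just to empty it: wipefs removes the partition-table and filesystem signatures without writing a new filesystem.

wipefs -a /dev/sda
lsblk /dev/sda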

Shrinking (reducing) a partition

"How to resize partitions losslessly on CentOS Linux (XFS filesystem)": it did not turn out to be lossless.

No truly lossless resizing method was found.

https://www.linuxidc.com/Linux/2016-06/132270.htm

http://blog.51cto.com/happyliu/1902022

References

[MiaoMi Linux (7)] Ceph distributed file-sharing solution

https://tobegit3hub1.gitbooks.io/ceph_from_scratch/content/usage/index.html

Swarm script deployment

macOS

Common commands

# Show the current routing table
netstat -rn
----------------------------------------------------------------
Routing tables
Internet:
Destination Gateway Flags Netif Expire
default 192.168.43.88 UGSc en0
default 11.13.2.254 UGScI en7
-----------------------------------------------------------------
#Get the default route
route get 0.0.0.0
--------------------------------------------------------------------------------
route to: default
destination: default
mask: default
gateway: 192.168.43.88
interface: en0
flags: <UP,GATEWAY,DONE,STATIC,PRCLONING>
recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
0 0 0 0 0 0 1500 0
---------------------------------------------------------------------------------
#Delete the default route
sudo route -n delete default 192.168.43.88
#Add the gateway for external traffic
sudo route add -net 0.0.0.0 192.168.43.88
#Add the gateway for the internal network
sudo route add -net 11.8.129.0 11.13.2.254

Linux

Common commands

#Network-related configuration file
/etc/resolv.conf
#Check the gateway settings
grep GATEWAY /etc/sysconfig/network-scripts/ifcfg*
#Add a gateway:
route add default gw 192.168.40.1
#Restart the network
service network restart
#Check how DNS resolution is configured
grep hosts /etc/nsswitch.conf
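
A hedged note: route add default gw only lasts until the next reboot; to persist it, set GATEWAY in the interface file (the interface name ifcfg-ens33 is an assumption).

echo 'GATEWAY=192.168.40.1' >> /etc/sysconfig/network-scripts/ifcfg-ens33
service network restart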

Analysis

traceroute <ip>

Network testing, measurement, management and analysis; see the official site.

ICMP error codes:

!H host unreachable

!N network unreachable

!P protocol unreachable

!S source route failed

!F fragmentation needed

Normal case:

[root@environment-test1 ~]# traceroute 4.2.2.2
traceroute to 4.2.2.2 (4.2.2.2), 30 hops max, 60 byte packets
1 gateway (192.168.1.1) 0.440 ms 0.594 ms 0.743 ms
2 * * *
3 121.33.196.105 (121.33.196.105) 4.352 ms 4.443 ms 4.521 ms
4 183.56.31.37 (183.56.31.37) 7.290 ms 183.56.31.21 (183.56.31.21) 9.217 ms 183.56.31.13 (183.56.31.13) 6.755 ms
5 153.176.37.59.broad.dg.gd.dynamic.163data.com.cn (59.37.176.153) 6.884 ms 6.993 ms 7.084 ms
6 121.8.223.13 (121.8.223.13) 9.307 ms 5.848 ms 183.56.31.173 (183.56.31.173) 4.443 ms
7 202.97.94.130 (202.97.94.130) 4.029 ms 4.165 ms 202.97.94.142 (202.97.94.142) 5.546 ms
8 202.97.94.98 (202.97.94.98) 11.225 ms 202.97.94.118 (202.97.94.118) 6.177 ms 6.600 ms
9 202.97.52.18 (202.97.52.18) 209.571 ms 202.97.52.142 (202.97.52.142) 206.772 ms 202.97.58.2 (202.97.58.2) 197.316 ms
10 195.50.126.217 (195.50.126.217) 213.784 ms 213.917 ms 211.676 ms
11 4.69.163.22 (4.69.163.22) 312.436 ms 4.69.141.230 (4.69.141.230) 214.040 ms 213.168 ms
12 b.resolvers.Level3.net (4.2.2.2) 209.348 ms 210.701 ms 210.588 ms

Problem case:

[root@lfadmin ~]# traceroute 4.2.2.2
traceroute to 4.2.2.2 (4.2.2.2), 30 hops max, 60 byte packets
1 gateway (192.168.1.1) 0.751 ms !N 0.817 ms !N 1.326 ms !N

ifconfig <interface name>

netstat -r, similar to route

Shows routing and connection information.

[root@environment-test1 ~]# netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default gateway 0.0.0.0 UG 0 0 0 enp3s0
link-local 0.0.0.0 255.255.0.0 U 0 0 0 enp3s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 doc...ridge
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 enp3s0

host <domain>, similar to nslookup <domain>

DNS lookup

[root@environment-test1 ~]#  host www.baidu.com
www.baidu.com is an alias for www.a.shifen.com.
www.a.shifen.com has address 14.215.177.38
www.a.shifen.com has address 14.215.177.39

nmcli shows device status

ip route show | column -t shows the routing table

Problem 1: no Internet access, but the router can be pinged

Symptom:

[root@lfadmin ~]# traceroute 4.2.2.2
traceroute to 4.2.2.2 (4.2.2.2), 30 hops max, 60 byte packets
1 gateway (192.168.1.1) 0.751 ms !N 0.817 ms !N 1.326 ms !N

Cause: a UUID conflict in the network configuration file was preventing Internet access; changing the UUID fixes it.

Run uuidgen ens33 to generate a new UUID (e.g. 830a6ae2-85fb-41e7-9e5d-60d084f56f5f) and replace the one in the configuration file.

Run nmcli con | sed -n '1,2p' to verify.
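
A hedged sketch of the whole fix (the interface name ens33 follows the note above; the UUID is whatever uuidgen prints):

NEW_UUID=$(uuidgen)
sed -i "s/^UUID=.*/UUID=${NEW_UUID}/" /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network
nmcli con | sed -n '1,2p'    # verify the connection now shows the new UUID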

Problem 2: on a China Mobile leased line with port 8088 mapped out, some phones can reach the service and others cannot

Resolution attempt: China Mobile said some domains under that IP had been blocked by the IDC and needed to be unblocked, or the domain needed an ICP filing; once filed it would not be blocked again.

# hotspot that can reach the service
➜ ~ telnet 27.176.159.182 8088
Trying 27.176.159.182...
Connected to 27.176.159.182.
Escape character is '^]'.
^CConnection closed by foreign host.
➜ ~ sudo tcptraceroute 27.176.159.182 8088
Password:
Sorry, try again.
Password:
Selected device en0, address 172.20.10.3, port 60864 for outgoing packets
Tracing the path to 27.176.159.182 on TCP port 8088 (radan-http), 30 hops max
1 172.20.10.1 4.176 ms 13.755 ms 3.996 ms
2 * 192.168.25.254 2240.375 ms 2457.379 ms
3 * * *
4 * * *
5 172.31.5.1 55.430 ms * *
6 139.203.67.153 114.910 ms
139.203.67.157 84.705 ms *
7 * * 202.97.66.86 48.711 ms
8 221.183.187.29 66.619 ms * *
9 * * *
10 * * *
11 * * *
12 223.87.26.174 41.827 ms 47.060 ms 63.440 ms
13 * * *
14 221.182.42.126 58.215 ms
221.182.42.130 130.437 ms 60.580 ms
15 * * *
16 27.176.159.182 91.646 ms 41.202 ms 90.640 ms
17 27.176.159.182 [open] 58.467 ms 84.589 ms 40.736 ms
➜ ~
# hotspot that cannot reach the service
➜ ~ telnet 27.176.159.182 8088
Trying 27.176.159.182...
^C
➜ ~ sudo tcptraceroute 27.176.159.182 8088
Password:
Selected device en0, address 172.20.10.6, port 51541 for outgoing packets
Tracing the path to 27.176.159.182 on TCP port 8088 (radan-http), 30 hops max
1 172.20.10.1 4.015 ms 2.467 ms 2.508 ms
2 192.168.25.254 29.404 ms 19.509 ms 40.122 ms
3 10.1.0.9 41.479 ms * 42.718 ms
4 * * *
5 172.31.4.1 41.928 ms * *
6 139.203.67.145 56.212 ms * *
7 * * *
8 * * 221.183.95.209 27.317 ms
9 * 221.183.90.82 85.681 ms *
10 * * *
11 * 223.87.26.29 56.235 ms 52.355 ms
12 223.87.27.250 29.739 ms 33.021 ms 38.172 ms
13 223.85.135.158 20.524 ms 35.416 ms 40.353 ms
14 221.182.42.130 42.606 ms 31.836 ms 39.801 ms
15 * * *
......
30 * * *
Destination not reached

ping succeeds from everywhere, but TCP tests give mixed results: some locations can connect and some cannot:

Query: tcp 27.176.159.182:8088

Location ISP TCP port check result
Canada, BC, Vancouver Telus Connection to 27.176.159.182:8088 successful
Canada, BC, Vancouver Shaw Connection to 27.176.159.182:8088 successful
USA, CA, Fremont Hurricane Connection to 27.176.159.182:8088 successful
USA, CA, Fremont IT7 FMT2 Connection to 27.176.159.182:8088 successful
USA, CA, Fremont Linode Connection to 27.176.159.182:8088 failed
USA, CA, San Francisco Digital Ocean Connection to 27.176.159.182:8088 failed
USA, CA, Santa Clara Hurricane Connection to 27.176.159.182:8088 failed
USA, CA, Los Angeles Cogent Connection to 27.176.159.182:8088 successful
Australia, Sydney Vultr Connection to 27.176.159.182:8088 failed
Taiwan, Taichung Google Connection to 27.176.159.182:8088 failed
China, Guiyang Huawei Connection to 27.176.159.182:8088 successful
China, Beijing Tencent Connection to 27.176.159.182:8088 failed
China, Beijing Huawei Connection to 27.176.159.182:8088 successful
China, Shandong China Unicom Connection to 27.176.159.182:8088 successful
China, Jiangsu China Telecom Connection to 27.176.159.182:8088 failed
China, Jiangsu China Mobile Connection to 27.176.159.182:8088 failed
China, Qingdao Aliyun Connection to 27.176.159.182:8088 successful
China, Shanghai Aliyun Connection to 27.176.159.182:8088 failed
China, Shanghai Huawei Connection to 27.176.159.182:8088 successful
China, Shanghai Tencent Connection to 27.176.159.182:8088 failed

Later, China Mobile provided a public test IP. To avoid disturbing the existing network, it was configured as an additional (secondary) IP:

  1. Log in to the FortiGate-100F

  2. Network -> Interfaces -> Physical Interfaces, edit China Mobile uplink 2 (port1)

  3. Enable secondary IP addresses, create a new one with the test IP 211.137.109.15/255.255.255.0 provided by China Mobile, and tick PING to make testing easier.

  4. After saving, ping 211.137.109.15 should succeed; if it does not, wait a few minutes.

  5. Configure the virtual IP mapping: Policy & Objects -> Virtual IPs -> Create New, create a 'test' entry

    Name: test
    Interface: China Mobile uplink 2 (port1)
    External IP address/range: 211.137.109.15
    Mapped IPv4 address/range: 172.16.10.30
    Port forwarding: on
    Protocol: TCP
    Port mapping type: one to one
    External service port: 8099
    Mapped IPv4 port: 8099
  6. Configure the forwarding rule, i.e. the firewall policy: Policy & Objects -> Firewall Policy -> Create New, create a 'test' policy. Policies on this page are evaluated top to bottom; traffic that matches nothing is dropped by the final implicit deny policy.

    Name: test
    Type: standard
    Incoming interface: China Mobile uplink 2 (port1)
    Outgoing interface: lan
    Source address: all  # setting this to 'china' restricts access to domestic sources
    Destination address: test  # the name of the virtual IP mapping created in the previous step
    Schedule: always
    Service: ALL
    Action: accept
    Inspection mode: flow-based
    NAT: enabled
    ... leave the rest at the defaults
  7. Once configured, step 5 shows one referenced object. Testing TCP 211.137.109.15:8088 again still gives partly reachable, partly unreachable results, which suggests the problem is probably not a blocked China Mobile IP.

To narrow it down further, a PC was connected directly to LAN port 2 of the leased line's optical modem to serve the port. Nationwide TCP tests against 211.137.109.15:8088 then all succeeded, which largely rules out China Mobile and points the problem at the FortiGate-100F.

Go to Network -> Diagnostics -> Debug Flow, filter on port 8088, start the debug flow, and capture traffic separately from a phone that can connect and one that cannot.

Trace for a phone that can connect:

Trace ID, time, message
"vd-root:0 received a packet(proto=6, 171.218.234.126:50428->211.137.109.15:8099) tun_id=0.0.0.0 from port1. flag [S], seq 152754952, ack 0, win 65535"
"allocate a new session-0002e97a, tun_id=0.0.0.0"
"in-[port1], out-[]"
len=1
checking gnum-100000 policy-25
"find DNAT: IP-172.16.10.30, port-8099"
"matched policy-25, act=accept, vip=25, flag=100, sflag=2000000"
"result: skb_flags-02000000, vid-25, ret-matched, act-accept, flag-00000100"
"VIP-172.16.10.30:8099, outdev-port1"
DNAT 211.137.109.15:8099->172.16.10.30:8099
find a route: flag=00000000 gw-192.168.100.253 via lan
"in-[port1], out-[lan], skb_flags-020000c0, vid-25, app_id: 0, url_cat_id: 0"
"gnum-100004, use int hash, slot=2, len=4"
"checked gnum-100004 policy-6, ret-matched, act-accept"
"checked gnum-100004 policy-10, ret-no-match, act-accept"
"checked gnum-100004 policy-13, ret-matched, act-accept"
ret-matched
"gnum-4e22, check-ffffffbffc02b9e4"
"checked gnum-4e22 policy-6, ret-no-match, act-accept"
"checked gnum-4e22 policy-6, ret-no-match, act-accept"
"checked gnum-4e22 policy-6, ret-no-match, act-accept"
"gnum-4e22 check result: ret-no-match, act-accept, flag-00000000, flag2-00000000"
"find SNAT: IP-192.168.100.254(from IPPOOL), port-50428"
"policy-13 is matched, act-accept"
"after iprope_captive_check(): is_captive-0, ret-matched, act-accept, idx-13"
"after iprope_captive_check(): is_captive-0, ret-matched, act-accept, idx-13"
"in-[port1], out-[lan], skb_flags-020000c0, vid-25"
"gnum-100015, check-ffffffbffc02a8d0"
"checked gnum-100015 policy-1, ret-no-match, act-accept"
"checked gnum-100015 policy-4, ret-no-match, act-accept"
"gnum-100015 check result: ret-no-match, act-accept, flag-00000000, flag2-00000000"
"after check: ret-no-match, act-accept, flag-00000000, flag2-00000000"
"in-[port1], out-[lan], skb_flags-020000c0, vid-25"
len=0
Allowed by Policy-13: SNAT
SNAT 171.218.234.126->192.168.100.254:50428

Trace for a phone that cannot connect:

Trace ID, time, message
"vd-root:0 received a packet(proto=6, 39.144.139.31:10233->211.137.109.15:8099) tun_id=0.0.0.0 from port1. flag [S], seq 2898517655, ack 0, win 65535"
"allocate a new session-03ce447f, tun_id=0.0.0.0"
"in-[port1], out-[]"
len=1
checking gnum-100000 policy-25
"find DNAT: IP-172.16.10.30, port-8099"
"matched policy-25, act=accept, vip=25, flag=100, sflag=2000000"
"result: skb_flags-02000000, vid-25, ret-matched, act-accept, flag-00000100"
"VIP-172.16.10.30:8099, outdev-port1"
DNAT 211.137.109.15:8099->172.16.10.30:8099
find a route: flag=00000000 gw-172.16.10.30 via l2t.root
"in-[port1], out-[l2t.root], skb_flags-020000c0, vid-25, app_id: 0, url_cat_id: 0"
"gnum-100004, use int hash, slot=126, len=1"
"checked gnum-100004 policy-0, ret-matched, act-accept"
ret-matched
"policy-0 is matched, act-drop"
"after iprope_captive_check(): is_captive-0, ret-matched, act-drop, idx-0"
"after iprope_captive_check(): is_captive-0, ret-matched, act-drop, idx-0"
"in-[port1], out-[l2t.root], skb_flags-020000c0, vid-25"
"gnum-100015, check-ffffffbffc02a8d0"
"checked gnum-100015 policy-1, ret-no-match, act-accept"
"checked gnum-100015 policy-4, ret-no-match, act-accept"
"gnum-100015 check result: ret-no-match, act-accept, flag-00000000, flag2-00000000"
"after check: ret-no-match, act-accept, flag-00000000, flag2-00000000"
Denied by forward policy check (policy 0)

Comparing the two traces:

#the failing traffic is routed via l2t.root
find a route: flag=00000000 gw-172.16.10.30 via l2t.root
#the working traffic is routed via lan
find a route: flag=00000000 gw-192.168.100.253 via lan

In the FortiGate-100F CLI, run get router info routing-table all

FortiGate-100F # get router info routing-table all
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
V - BGP VPNv4
* - candidate default

Routing table for VRF=0
S* 0.0.0.0/0 [10/0] via 117.176.159.129, port1, [1/0]
[10/0] via 192.168.2.1, wan1, [1/0]
[10/0] via 223.85.227.129, wan2, [1/0]
S 8.8.8.8/32 [10/0] via 10.0.11.1, to-forti-hk, [1/0]
C 10.0.10.0/30 is directly connected, to-forti-hk2
C 10.0.10.2/32 is directly connected, to-forti-hk2
C 10.0.11.0/30 is directly connected, to-forti-hk
C 10.0.11.2/32 is directly connected, to-forti-hk
S 100.64.8.8/32 [10/0] via 10.0.11.1, to-forti-hk, [1/0]
C 117.176.159.128/25 is directly connected, port1
S 172.16.0.0/16 [10/0] via 192.168.100.253, lan, [1/0]
[10/0] is directly connected, l2t.root, [1/0]
S 172.16.200.0/29 [10/0] is directly connected, l2t.root, [1/0]
C 192.168.2.0/24 is directly connected, wan1
C 192.168.100.0/24 is directly connected, lan
C 211.137.109.0/24 is directly connected, port1
C 223.85.227.128/25 is directly connected, wan2

The routing table shows two routes for 172.16.0.0, one via lan and one via l2t.root.

The GUI (Network -> Static Routes) also shows the two routes for 172.16.0.0 with the same priority, so traffic randomly hits one or the other. Changing the priority, or disabling the wrong route, solved the problem.

References

Configuring a CentOS 7 network interface with a static IP

Hands-on: grouping list data by month

Source data:

{
"2018年08月":[
{
"createTime":"2018-08-15 15:51:16"
},
{
"createTime":"2018-08-15 15:51:15"
}
],
"2018年09月":[
{
"createTime":"2018-09-15 15:51:16"
},
{
"createTime":"2018-09-15 15:51:15"
}
]
}

Code:

//----------------- entity class -----------------
public class ThematicMap extends BaseBean {
    ....
    public String getMonth() {
        Date createTime = this.getCreateTime(); // take the time from BaseBean
        SimpleDateFormat format1 = new SimpleDateFormat("yyyy年MM月");
        return format1.format(createTime.getTime());
    }
}
//--------------------------------------

List<ThematicMap> thematicMapList = thematicMapMapper.listForPage(params);
// group by the value returned by getMonth()
Map<String,List<ThematicMap>> stringListMap = thematicMapList.stream().collect(Collectors.groupingBy(ThematicMap::getMonth, LinkedHashMap::new, Collectors.toList()));

Collectors.groupingBy(Function<? super T, ? extends K> classifier, Supplier<M> mapFactory, Collector<? super T, A, D> downstream) takes three parameters.

If ordering does not matter, the one-argument form is enough: thematicMapList.stream().collect(Collectors.groupingBy(ThematicMap::getMonth));

The second parameter specifies the map container. The default is HashMap::new, which loses the insertion order, so LinkedHashMap::new is used instead.

Resulting data structure

{
"2018年08月":[
{
"createTime":"2018-08-15 15:51:16"
},
{
"createTime":"2018-08-15 15:51:15"
}
],
"2018年09月":[
{
"createTime":"2018-09-15 15:51:16"
},
{
"createTime":"2018-09-15 15:51:15"
}
]
}
References

Ordering issues after grouping with Collectors.groupingBy

Setting up and using a private Maven repository with Nexus 3

Common commands

# Maven command to test the repository by downloading a jar; the test usually fails with a "no permission" error
mvn dependency:get -DremoteRepositories=http://47.98.114.63:14006/repository/maven-third/ -DgroupId=com.taobao -DartifactId=taobao-sdk-java-auto -Dversion=20190804

Installing sonatype/nexus3

  1. Create the data directory with mkdir -p v-nexus/data and change its owner with chown -R 200 v-nexus/data

  2. Create the deployment script

    # default credentials: admin/admin123
    version: '3.2'

    services:
      nexus:
        restart: always
        image: sonatype/nexus3
        ports: # custom port
          - target: 8081
            published: 18081 # only reachable on the worker that runs it
            protocol: tcp
            mode: host # requires compose file format 3.2+
        volumes:
          - "/dockerdata/v-nexus/data:/nexus-data"
        deploy:
          replicas: 1
          restart_policy:
            condition: on-failure
          placement:
            constraints: [node.hostname == lfadmin]
  3. Test by browsing to http://192.168.1.213:18081/ and logging in with admin / admin123

Configuring a yum proxy

Remote (upstream) URL: http://maven.aliyun.com/nexus/content/groups/public

Create a new repository of type yum (proxy).

Then create a yum (group) repository and add the proxy created above; epel, docker and other repository proxies can be added the same way.

Copy the generated URL http://192.168.1.230:18081/repository/yum-public/ into `nexus.repo`.

Run vim /etc/yum.repos.d/nexus.repo

[nexusrepo]
name=Nexus Repository
baseurl=http://192.168.1.230:18081/repository/yum-public/$releasever/os/$basearch/
enabled=1
gpgcheck=0
priority=1

yum clean all

rm -rf /etc/yum.repos.d/C*
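
A hedged verification step: rebuild the metadata cache against the Nexus proxy and confirm the repo answers.

yum makecache
yum repolist enabled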

Note

The EPEL repo needs its own configuration; it is not recognized through the public group directly.

Run vim /etc/yum.repos.d/nexus-epel.repo

[nexus-epel-debuginfo]
name = Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl = http://192.168.1.230:18081/repository/yum-epel/7/$basearch/debug
failovermethod = priority
enabled = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck = 0

[nexus-epel-source]
name = Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl = http://192.168.1.230:18081/repository/yum-epel/7/SRPMS
failovermethod = priority
enabled = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck = 0

[nexus-epel]
baseurl = http://192.168.1.230:18081/repository/yum-epel/7/$basearch
failovermethod = priority
gpgcheck = 0
name = EPEL YUM repo

Installing Maven on Windows 10

  1. Download apache-maven-3.5.4-bin.zip and unzip it

  2. Add environment variables: create a system variable Maven_HOME pointing at the unzipped directory, then append %Maven_HOME%\bin to the Path variable

  3. Test with mvn -v in a command window; only cmd works out of the box

  4. Edit apache-maven-3.5.4\conf\settings.xml

    <!-- local jar cache directory -->
    <localRepository>D:\MavenRepository</localRepository>
  5. Complete settings.xml

    <?xml version="1.0" encoding="UTF-8"?>

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

    <!-- local jar cache directory -->
    <localRepository>D:\MavenRepository</localRepository>

    <pluginGroups>

    </pluginGroups>

    <proxies>

    </proxies>


    <servers>
    <!-- credentials, using the default admin user -->
    <server>
    <!-- this id must match the id used in the project's pom.xml -->
    <id>nexus-releases</id>
    <username>admin</username>
    <password>admin123</password>
    </server>
    <server>
    <id>nexus-snapshots</id>
    <username>admin</username>
    <password>admin123</password>
    </server>
    </servers>

    <mirrors>

    </mirrors>

    <profiles>
    <profile>
    <id>MyNexus</id>

    <activation>
    <jdk>1.4</jdk>
    </activation>

    <repositories>
    <!-- private repository URL -->
    <repository>
    <id>nexus</id>
    <name>Nexus3 Repository</name>
    <!-- change the IP to your own; copy the public group URL from Nexus -->
    <url>http://192.168.1.213:18081/repository/maven-public/</url>

    <releases>
    <enabled>true</enabled>
    </releases>
    <!-- snapshots are disabled by default and must be enabled manually -->
    <snapshots>
    <enabled>true</enabled>
    </snapshots>
    </repository>

    </repositories>
    <pluginRepositories>
    <!-- plugin repository URL -->
    <pluginRepository>
    <id>nexus</id>
    <url>http://192.168.1.213:18081/repository/maven-public/</url>
    <releases>
    <enabled>true</enabled>
    </releases>
    <snapshots>
    <enabled>true</enabled>
    </snapshots>
    </pluginRepository>
    </pluginRepositories>
    </profile>

    </profiles>

    <!-- activate the profile -->
    <activeProfiles>
    <activeProfile>MyNexus</activeProfile>
    </activeProfiles>

    </settings>
  6. Add or modify the following configuration in the project's pom.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <project ...>
    ....
    <!-- deployment repository configuration -->
    <distributionManagement>
    <repository>
    <!-- this id must match the id in Maven's settings.xml -->
    <id>nexus-releases</id>
    <name>Nexus Release Repository</name>
    <url>http://192.168.1.213:18081/repository/maven-releases/</url>
    </repository>
    <snapshotRepository>
    <id>nexus-snapshots</id>
    <name>Nexus Snapshot Repository</name>
    <url>http://192.168.1.213:18081/repository/maven-snapshots/</url>
    </snapshotRepository>
    </distributionManagement>
    ...
    </project>
  7. Build with mvn install in cmd; publish (upload) the jar with mvn deploy, then check it on the Nexus web UI

  8. Downloading from the private repository works the same way as uploading

Nexus 3: configuring an Aliyun proxy repository

  1. Click Create Repository -> maven2 (proxy)
  2. Name it aliyun-proxy and set the Aliyun URL http://maven.aliyun.com/nexus/content/groups/public
  3. Set its priority: in the maven-public group, add the new proxy and move it above maven-central
  4. Allow publishing releases: in the maven-releases hosted repository, select Allow redeploy

Creating a third-party repository

  1. Create repository -> maven2 (hosted)

    name: 3rd_part

    hosted: Allow redeploy

  2. Add 3rd_part to the maven-public group

  3. If a jar has no groupId, it is best to use a uniform com.3rdPart to mark it as a third-party package

Publishing (uploading) jars to Nexus

Syntax:

mvn deploy:deploy-file \
-DgroupId=<group-id> \
-DartifactId=<artifact-id> \
-Dversion=<version> \
-Dpackaging=<type-of-packaging> \
-Dfile=<path-to-file> \
-DrepositoryId=<must match the server id in Maven's settings.xml> \
-Durl=<url-of-the-repository-to-deploy>

Example

mvn deploy:deploy-file \
-Dfile=spring-boot-starter-druid-0.0.1-SNAPSHOT.jar \
-DgroupId=cn.binux \
-DartifactId=spring-boot-starter-druid \
-Dversion=0.0.1-SNAPSHOT \
-Dpackaging=jar \
-DpomFile=spring-boot-starter-druid-0.0.1-SNAPSHOT.pom \
-DrepositoryId=nexus-snapshots \
-Durl=http://192.168.1.213:18081/repository/maven-snapshots/

Uploading jars to the private Maven repository

mvn deploy:deploy-file -Dfile=spring-boot-starter-druid-0.0.1-SNAPSHOT.jar -DgroupId=cn.binux -DartifactId=spring-boot-starter-druid -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar -DpomFile=spring-boot-starter-druid-0.0.1-SNAPSHOT.pom -DrepositoryId=nexus-snapshots -Durl=http://192.168.1.213:18081/repository/maven-snapshots/

mvn deploy:deploy-file -Dfile=spring-boot-starter-dubbox-0.0.1-SNAPSHOT.jar -DgroupId=cn.binux -DartifactId=spring-boot-starter-dubbox -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar -DpomFile=spring-boot-starter-dubbox-0.0.1-SNAPSHOT.pom -DrepositoryId=nexus-snapshots -Durl=http://192.168.1.213:18081/repository/maven-snapshots/

mvn deploy:deploy-file -Dfile=spring-boot-starter-redis-0.0.1-SNAPSHOT.jar -DgroupId=cn.binux -DartifactId=spring-boot-starter-redis -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar -DpomFile=spring-boot-starter-redis-0.0.1-SNAPSHOT.pom -DrepositoryId=nexus-snapshots -Durl=http://192.168.1.213:18081/repository/maven-snapshots/

#This one is not a snapshot and must go to releases (make sure Nexus allows redeploy); check the jar's suffix: no `SNAPSHOT` means it is a release
mvn deploy:deploy-file -Dfile=dubbo-2.8.4.jar -DgroupId=com.alibaba -DartifactId=dubbo -Dversion=2.8.4 -Dpackaging=jar -DrepositoryId=nexus-releases -Durl=http://192.168.1.213:18081/repository/maven-releases/

mvn deploy:deploy-file -Dfile=fastdfs-1.24.jar -DgroupId=org.csource -DartifactId=fastdfs -Dversion=1.24 -Dpackaging=jar -DrepositoryId=nexus-releases -Durl=http://192.168.1.213:18081/repository/maven-releases/

mvn deploy:deploy-file -Dfile=examples-1.0.jar -DgroupId=com.haikang -DartifactId=examples -Dversion=1.0 -Dpackaging=jar -DrepositoryId=nexus-releases -Durl=http://192.168.1.230:18081/repository/maven-releases/

Installing jars into the local Maven repository

mvn install:install-file -Dfile=spring-boot-starter-druid-0.0.1-SNAPSHOT.jar -DgroupId=cn.binux -DartifactId=spring-boot-starter-druid -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar
mvn install:install-file -Dfile=spring-boot-starter-dubbox-0.0.1-SNAPSHOT.jar -DgroupId=cn.binux -DartifactId=spring-boot-starter-dubbox -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar
mvn install:install-file -Dfile=spring-boot-starter-redis-0.0.1-SNAPSHOT.jar -DgroupId=cn.binux -DartifactId=spring-boot-starter-redis -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar
mvn install:install-file -Dfile=dubbo-2.8.4.jar -DgroupId=com.alibaba -DartifactId=dubbo -Dversion=2.8.4 -Dpackaging=jar
mvn install:install-file -Dfile=fastdfs-1.24.jar -DgroupId=org.csource -DartifactId=fastdfs -Dversion=1.24 -Dpackaging=jar

Configuring users and roles

  1. Create a role:

    id: nx-deploy

    privileges: nx-repository-view-*-*-*

  2. Create a user:

    ID: develop

    roles: nx-deploy

Configuring the private repository locally inside a Maven project

## set inside pom.xml
<repositories>
<repository>
<id>maven-third</id>
<name>maven-third</name>
<url>http://47.98.114.63:14006/repository/maven-third/</url>
</repository>
</repositories>

Problems

  1. A package was downloaded but cannot be found: delete and re-import the project and refresh the Maven dependencies
  2. A jar just uploaded or added to the private repository cannot be downloaded: delete that package's directory from the local repository
  3. When running the commands in PowerShell, double quotes are needed around the values after the equals signs (see the example below); otherwise switch to cmd
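
A hedged PowerShell example of note 3: each -D argument is quoted so PowerShell passes it through unparsed (quoting just the part after the equals sign, as the note says, usually works too); the jar and URL are the ones used above.

mvn deploy:deploy-file "-Dfile=dubbo-2.8.4.jar" "-DgroupId=com.alibaba" "-DartifactId=dubbo" "-Dversion=2.8.4" "-Dpackaging=jar" "-DrepositoryId=nexus-releases" "-Durl=http://192.168.1.213:18081/repository/maven-releases/"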

Analyzing heavy Docker disk usage

Linux disk analysis

df -h shows usage per mounted filesystem

du -h --max-depth=1 /var/lib/docker/overlay2 shows the size of each folder under a directory

Docker disk space usage

Show how Docker's space is distributed

[root@environment-test1 ~]# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 26 6 4.554GB 2.513GB (55%)
Containers 8 6 157.7GB 157.3GB (99%)
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B

docker system df -v shows the details:

[root@environment-test1 ~]# docker system df -v
Images space usage:

REPOSITORY TAG IMAGE ID CREATED ago SIZE SHARED SIZE UNIQUE SiZE CONTAINERS
redis latest 4e8db158f18d 3 weeks ago ago 83.4MB 58.6MB 24.8MB 2

Containers space usage:

CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED ago STATUS NAMES
2f9172a5f6d2 manage/test/ygl/hikvision:latest "/usr/bin/supervisor…" 0 157GB 42 hours ago ago Dead manager-test-ygl_hikvision.1.6deoeddsuari0ob63cddg8f28
3bd29db83e99 redis:latest "docker-entrypoint.s…" 0 0B 42 hours ago ago Exited (137) 16 minutes ago manager-test-ygl_redis.1.ha792r91z4erzmn967sf9u4zx

Local Volumes space usage:

VOLUME NAME LINKS SIZE

Build cache usage: 0B

Finally, find the container using the most space, work out why, and fix it (a hedged sketch follows).
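
A hedged way to see what is filling a container's writable layer, assuming the overlay2 storage driver (the container ID 2f9172a5f6d2 is the offender from the output above):

UPPER=$(docker inspect --format '{{.GraphDriver.Data.UpperDir}}' 2f9172a5f6d2)
du -h --max-depth=1 "$UPPER" | sort -h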

Java secondary development for video surveillance

Tools:

  1. SADP device discovery tool: change passwords, look up Hikvision device parameters, models, access addresses, etc.
  2. VLC media player: for testing network video streams

Recovering the camera's subnet (or IP) when it is unknown

Wireshark (http://www.wireshark.org) can capture and decode the traffic: start the software and begin capturing. Before capturing, connect the camera directly to the computer's network port with a cable to rule out interference from other devices and make the capture easier to read; the host IP can be set to anything. Once the camera is connected and its indicator light is on and blinking occasionally, data is being transferred and the capture can start; if nothing shows up, reboot the camera, since it communicates when it powers up.

Reference: https://www.mydigit.cn/thread-399479-1-1.html

Summary

There are two setups: one streams live through a DVR/NVR that manages all the individual cameras; the other configures the live stream on each camera individually.

RTSP port

On the device's web UI: Advanced -> Network -> Ports -> RTSP

Usage:

# 7544 is the RTSP port; standalone live-stream configuration for a camera
ffmpeg -rtsp_transport tcp -i rtsp://admin:12345@192.0.0.63:554/h264/ch1/main/av_stream -vcodec copy -acodec aac -ar 44100 -strict -2 -ac 1 -f flv -s 704x576 -q 10 -f flv rtmp://127.0.0.1:1935/hls/video1
SDK port / service port

Advanced -> Network -> Ports -> SDK port or service port

DVR/NVR: service port; in the application, hikvision_port=2004, where 2004 is the service port (also called the SDK port)

Camera: SDK port; in the application, hikvision_video_username_password2 = 34,192.168.1.193,8000,admin,12345, where 8000 is the service port (also called the SDK port)

Control principle


Using the DVR for both live preview and playback

#tracks = playback
#the DVR's IP and RTSP port; 101 means channel 1, main stream 01
rtsp://admin:12345@192.168.1.195:5555/Streaming/tracks/101?starttime=20180911t063812z&endtime=20180911t064816z

#Channels = live
rtsp://admin:12345@192.168.1.195:5555/Streaming/Channels/

#Pushing the playback stream reports "Too many packets buffered for output stream 0:0."; -max_muxing_queue_size was added. Playback does not start while the transcode is running, only after it is forced to stop, and the source does not seem to allow seeking.
ffmpeg -rtsp_transport tcp -i "rtsp://admin:12345@192.168.1.195:5555/Streaming/tracks/101?starttime=20180911t063812z&endtime=20180911t064016z" -max_muxing_queue_size 10240 -vcodec copy -acodec aac -ar 44100 -strict -2 -ac 1 -f flv -s 1280x720 -q 10 -f flv "rtmp://127.0.0.1:1935/hls/video7"



Playback download

Download the recording and play the growing file in real time: VLC can play it, but ckplayer cannot; it needs transcoding (e.g. with an ffmpeg command), so real-time transcoding is relatively awkward.

NativeLong hPlayback;
String filename = sFileName + ".mp4";
String flvfilename = sFileName + ".flv";
String savePath = recordStore + filename;

File file = new File(savePath);
logger.info("Creating directory: " + file.getParentFile());
if (!file.getParentFile().exists()) {
    boolean result = file.getParentFile().mkdirs();
    if (!result) {
        logger.info("Failed to create the directory");
    }
}

if (file.exists()) {
    logger.info("Already downloading");
    return;
}

if ((hPlayback = hCNetSDK.NET_DVR_GetFileByName(nUserId, sFileName, savePath)).intValue() < 0) {
    logger.error("GetFileByName failed. error[%d]\n" + hCNetSDK.NET_DVR_GetLastError());
    return;
}

if (!hCNetSDK.NET_DVR_PlayBackControl_V40(hPlayback, hCNetSDK.NET_DVR_PLAYSTART, null, 0, null, null)) {
    logger.error("play back control failed [%d]\n" + hCNetSDK.NET_DVR_GetLastError());
    return;
}

if (!ExecuteCodecs.exchangeToFlv(ffmpegBin, savePath, recordStore + flvfilename)) {
    logger.error("mp4 to flv \n");
}

Hikvision recorder RTSP stream URLs

Devices from before 2012 support the old URL format; later devices support both the old and new formats.

[Old URL: on NVRs or hybrid DVRs with fewer than 64 channels, IP channels start at 33; on NVRs with 64 or more channels, IP channels start at 1]

rtsp://username:password@<ipaddress>/<videotype>/ch<number>/<streamtype>

Details:


Examples:

DS-9016HF-ST, IP channel 01, main stream:

rtsp://admin:12345@172.6.22.106:554/h264/ch33/main/av_stream

DS-9016HF-ST, analog channel 01, sub stream:

rtsp://admin:12345@172.6.22.106:554/h264/ch1/sub/av_stream

[New URL: all channel numbers start sequentially from 1]

Details:

rtsp://username:password@<address>:<port>/Streaming/Channels/<id>(?parm1=value1&parm2-=value2…)


Examples:

DS-9632N-ST, IP channel 01, main stream:

rtsp://admin:12345@172.6.22.234:554/Streaming/Channels/101

DS-9632N-ST, IP channel 01, sub stream:

rtsp://admin:12345@172.6.22.234:554/Streaming/Channels/102