
Basic Concepts

Sidecar: heterogeneous microservices, i.e. letting third-party services (written in other languages) be registered into Spring Cloud (Nacos) for management and so on.

Framework source: alibaba/spring-cloud-alibaba: Sidecar

Requirements

  1. Third-party services must be integrated; they expose their functionality as HTTP interfaces

  2. A third-party service must be replaceable by another service of the same type

  3. A third-party service may not support cluster deployment, i.e. multiple identical instances can be deployed but their data is not shared

  4. Cluster deployment must be supported

  5. Third-party services must be monitored

  6. Everything must integrate with the Spring Cloud Alibaba framework

  7. Services are consumed via Feign

Design

The project uses the sidecar pattern, but implements it by hand instead of integrating alibaba-sidecar, because multiple third-party services of the same type must be supported and their data must be wrapped.

Alternative: integrate alibaba-sidecar. Since a heterogeneous service can only be proxied directly, the data wrapping would then have to be done with filters and decoders.
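For reference, had the alibaba-sidecar alternative been chosen, registration would boil down to a few properties in application.yml. This is only a sketch; the address, port, and health-check path below are placeholders for the real third-party service:

```yaml
sidecar:
  ip: 127.0.0.1            # address of the third-party service
  port: 8060               # its listening port
  health-check-url: http://127.0.0.1:8060/actuator/health  # polled to derive the sidecar's health
```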

Supporting replacement of same-type third-party services

The project is built around the factory design pattern.
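As a minimal sketch of that factory: the FaceService/NtFaceService/KsFaceService names are illustrative (only the NT/KS type names come from the health-check code in this article). The factory keys off the configured type, so swapping vendors becomes a configuration change rather than a code change.

```java
import java.util.Map;
import java.util.function.Supplier;

interface FaceService {
    String version();
}

class NtFaceService implements FaceService {
    public String version() { return "NT"; }
}

class KsFaceService implements FaceService {
    public String version() { return "KS"; }
}

// Factory resolving the concrete third-party implementation from the
// configured type string (e.g. aiConfig.aiFaceType).
class FaceServiceFactory {
    private static final Map<String, Supplier<FaceService>> REGISTRY = Map.of(
            "NT", NtFaceService::new,
            "KS", KsFaceService::new);

    static FaceService create(String type) {
        Supplier<FaceService> s = REGISTRY.get(type);
        if (s == null) throw new IllegalArgumentException("unknown face type: " + type);
        return s.get();
    }
}
```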

Supporting cluster deployment

The sidecar deployment model is used: one sidecar instance per third-party service instance.

Supporting third-party service monitoring

The heartbeat is overridden: inside the heartbeat the third-party service is probed and its state is bound to the sidecar's own service state.

Testing showed that when the heartbeat reports DOWN the service is not circuit-broken, only degraded.

@Component
public class SidecarHealthIndicator extends AbstractHealthIndicator {

    @Autowired
    AiConfig aiConfig;

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        try {
            String result;
            if (aiConfig.aiFaceType.equals(FaceType.NT.name())) {
                result = HttpUtil.get(aiConfig.aiFaceUrl + "/version", aiConfig.aiFaceUrlTimeout);
                builder.withDetail("version", result);
            } else if (aiConfig.aiFaceType.equals(FaceType.KS.name())) {
                result = HttpUtil.get(aiConfig.aiFaceUrl + "/version", aiConfig.aiFaceUrlTimeout);
                JSONObject r = JSONUtil.parseObj(result);
                builder.withDetail("version", r.getStr("platform_version"));
            } else {
                result = HttpUtil.get(aiConfig.aiFaceUrl + "/version", aiConfig.aiFaceUrlTimeout);
                builder.withDetail("version", result);
            }
            builder.up();
        } catch (Exception e) {
            builder.down(e);
        }
    }
}

Third-party services without cluster support: data is not shared (ignoring failure cases)

Option 1: synchronize data to the other instances inside the business wrapper interface

In every data-writing interface, look up the other instances of the service and send the same data to each of them.

Caveat: since this service itself runs as multiple instances, a middleware such as Redis is probably needed to record which instances have already been notified; otherwise the services would forward to each other in an endless loop.
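The dedup idea can be sketched as follows. A local set stands in for Redis here (in production this would be a SETNX with a TTL), and the requestId is an assumed correlation id carried in a header:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: dedup of sync fan-out using a shared "already forwarded" marker,
// so multiple wrapper instances do not forward the same request twice.
class SyncDedup {
    private static final Set<String> forwarded = ConcurrentHashMap.newKeySet();

    /** Returns true only for the first caller that sees this requestId. */
    static boolean shouldForward(String requestId) {
        return forwarded.add(requestId); // Redis equivalent: SETNX requestId 1
    }
}
```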

Option 2: use Feign's retry mechanism

Return a designated error code from the interface, retry on that error code, and count the retries (Redis can hold the counter). Once the retry count reaches the number of instances, every instance has been hit once and the data exists on all of them.

Drawback: with 10 instances and 2 s per request, the whole sequence takes 20 s, because the requests are made one after another.
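The stop condition of this scheme can be sketched like this. RetryFanout is an illustrative name, and an AtomicInteger stands in for the Redis INCR counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of option 2's stop condition: keep retrying until every instance
// has been hit once. In production the counter would be a Redis INCR keyed
// by request id; locally an AtomicInteger illustrates the logic.
class RetryFanout {
    private final AtomicInteger attempts = new AtomicInteger();
    private final int instanceCount;

    RetryFanout(int instanceCount) { this.instanceCount = instanceCount; }

    /** Called from the Feign retryer: retry while un-hit instances remain. */
    boolean shouldRetry() {
        return attempts.incrementAndGet() < instanceCount;
    }
}
```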

Option 3 (currently used): fan out to the other instances asynchronously from an interceptor

A header flag can be set in the interceptor to tell the other instances not to forward again; otherwise the services would loop forever.

Two ways to implement the interceptor

  • Specify a configuration class on the Feign client: @FeignClient(..., configuration = MyConfiguration.class)
  • Implement one of 1⃣️feign.RequestInterceptor / 2⃣️HandlerInterceptor / 3⃣️ClientHttpRequestInterceptor for global interception

The interface-based approach is used here; the configuration-class approach will appear in other projects.

2⃣️HandlerInterceptor was chosen, because 1⃣️feign.RequestInterceptor did not take effect for reasons unknown.

For the full implementation see Appendix 1: a Spring HandlerInterceptor that reads the request body.

Steps:
  1. Extend HttpServletRequestWrapper with a class that reads and stores the request body: BodyReaderHttpServletRequestWrapper.java

  2. Create a filter BodyReadFilter.java that wraps requests in BodyReaderHttpServletRequestWrapper so the body is preserved

  3. Create an interceptor StatefulFeignInterceptor.java implementing HandlerInterceptor's preHandle

  4. Create a configuration class StatefulConfig.java that registers StatefulFeignInterceptor

Note: to use @Autowired inside the interceptor, it must be registered as a @Bean; registering it with annotations such as @Component does not work here.

The forwarding logic lives in StatefulFeignInterceptor's preHandle, as follows.

The "sub" header prevents the endless loop: a sub-request is not forwarded again.

if ("true".equals(request.getHeader("sub"))) {
    log.info("sub request " + request.getRequestURI());
} else {
    ThreadUtil.execAsync(() -> {
        String uri = request.getRequestURI();
        log.info("main request " + uri);
        List<String> urls = aiConfig.aiFaceStatefulUrls;
        if (urls.contains(uri)) {
            BodyReaderHttpServletRequestWrapper requestWrapper;
            try {
                requestWrapper = new BodyReaderHttpServletRequestWrapper(request);
            } catch (IOException e) {
                log.error("read body error: {}", e.getMessage());
                return; // without the wrapper there is no body to forward
            }
            String body = IoUtil.read(requestWrapper.getInputStream(), requestWrapper.getCharacterEncoding());
            log.debug("request body: {}", body);
            String ip = discoveryProperties.getIp();
            List<ServiceInstance> instanceList = discoveryClient.getInstances("xkiot-ai");
            for (ServiceInstance serviceInstance : instanceList) {
                if (!ip.equals(serviceInstance.getHost())) {
                    String url = serviceInstance.getUri().toString() + uri;
                    // mark as a sub-request so the peer does not forward again
                    HttpRequest.post(url).header("sub", "true").body(body).execute(true).body();
                }
            }
        }
    });
}
return true;

Caveat: if the service needs to create a user id that must be identical on every instance, the id can only be passed in through the interface, or shared among instances via Redis (more cumbersome).

Option 4: fan out to the other instances asynchronously from a Feign decoder

A decoder processes the response, so with this approach a middleware such as Redis would probably be needed to break the loop between services.

Option 5 (wishful): configure or override something so Feign can send a request to all instances
Option 6 (wishful): merge the results with a transaction or combined async requests; this would also handle failure cases
Option 7: deploy the stateful third-party service so that it shares its data itself

Appendix 1: a Spring HandlerInterceptor that reads the request body

BodyReaderHttpServletRequestWrapper.java

import org.springframework.util.StreamUtils;

import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class BodyReaderHttpServletRequestWrapper extends HttpServletRequestWrapper {

    private byte[] requestBody; // the stream is saved here so it can be re-read

    public BodyReaderHttpServletRequestWrapper(HttpServletRequest request) throws IOException {
        super(request);
        requestBody = StreamUtils.copyToByteArray(request.getInputStream());
    }

    @Override
    public ServletInputStream getInputStream() {
        final ByteArrayInputStream bodyStream = new ByteArrayInputStream(requestBody);
        return new ServletInputStream() {
            @Override
            public int read() {
                return bodyStream.read(); // reads from the saved requestBody
            }

            @Override
            public boolean isFinished() {
                return bodyStream.available() == 0;
            }

            @Override
            public boolean isReady() {
                return true; // the whole body is already in memory
            }

            @Override
            public void setReadListener(ReadListener readListener) {
            }
        };
    }

    @Override
    public BufferedReader getReader() {
        return new BufferedReader(new InputStreamReader(getInputStream()));
    }

}

BodyReadFilter.java

import org.springframework.stereotype.Component;

import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;

@Component
@WebFilter(urlPatterns = "/**", filterName = "BodyReadFilter")
public class BodyReadFilter implements Filter {
    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
        ServletRequest requestWrapper = null;
        if (servletRequest instanceof HttpServletRequest) {
            requestWrapper = new BodyReaderHttpServletRequestWrapper((HttpServletRequest) servletRequest);
        }
        if (requestWrapper == null) {
            filterChain.doFilter(servletRequest, servletResponse);
        } else {
            filterChain.doFilter(requestWrapper, servletResponse);
        }
    }
}

StatefulFeignInterceptor.java

import cn.hutool.core.io.IoUtil;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.servlet.HandlerInterceptor;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;


@Slf4j
public class StatefulFeignInterceptor implements HandlerInterceptor {

    @Autowired
    AiConfig aiConfig;

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        if (aiConfig.aiFaceStatefulUrls.contains(request.getRequestURI())) {
            BodyReaderHttpServletRequestWrapper requestWrapper = new BodyReaderHttpServletRequestWrapper(request);
            String body = IoUtil.read(requestWrapper.getInputStream(), requestWrapper.getCharacterEncoding());
            log.debug("request body: {}", body);
        }
        return true;
    }

}

StatefulConfig.java

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class StatefulConfig implements WebMvcConfigurer {

    /**
     * Registering the interceptor as a bean fixes the problem of
     * @Autowired fields being null inside StatefulFeignInterceptor.
     */
    @Bean
    public StatefulFeignInterceptor statefulFeignInterceptor() {
        return new StatefulFeignInterceptor();
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(statefulFeignInterceptor()).addPathPatterns("/**");
    }
}

Extras

Nacos CP/AP modes

AP mode (the Nacos default) does not guarantee data consistency, so it only supports registering ephemeral instances.

CP mode supports registering persistent instances and guarantees data consistency.

"Data consistency" here once misled me into thinking it meant that all instances of a service hold the same data, and that enabling it would make every instance receive each request.
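Concretely, whether a Nacos client registers as an ephemeral (AP) or a persistent (CP) instance is controlled per client by the ephemeral flag:

```yaml
spring:
  cloud:
    nacos:
      discovery:
        # true (default): ephemeral instance, registered via AP mode
        # false: persistent instance, registered via CP (Raft) mode
        ephemeral: false
```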

References

Common Spring Boot interceptors (HandlerInterceptor, ClientHttpRequestInterceptor, RequestInterceptor)

Integrating Swagger with knife4j

ruoyi-cloud/cloud/swagger

knife4j

Integrating Knife4j with Spring Cloud Gateway

Add the following dependency to xkiot-common-swagger's pom.xml

<dependency>
    <groupId>com.github.xiaoymin</groupId>
    <artifactId>knife4j-micro-spring-boot-starter</artifactId>
    <version>2.0.8</version>
</dependency>

Then add the following dependency to xkiot-gateway's pom.xml

<dependency>
    <groupId>com.github.xiaoymin</groupId>
    <artifactId>knife4j-spring-boot-starter</artifactId>
    <version>2.0.8</version>
</dependency>

With plain Swagger, visit http://{gateway ip}:{port}/swagger-ui.html through the gateway; inside, you can switch between services.

After integrating knife4j, visit http://{gateway ip}:{port}/doc.html

Circuit breaking and degradation

Sentinel circuit breaking and degradation

Main features: real-time monitoring, machine discovery, rule configuration

Installing the Sentinel console

alibaba/Sentinel

ruoyi-cloud/sentinel

Docker image build: iexxk/dockerbuild-Sentinel

# Base image: alpine -- small, secure, popular, convenient
FROM exxk/java:8-alpine-cst
# Install the full wget via apk so SSL downloads work, then fetch the official release jar
RUN apk add --no-cache wget && wget --no-check-certificate --content-disposition -q -O /app.jar https://github.com/alibaba/Sentinel/releases/download/1.8.1/sentinel-dashboard-1.8.1.jar
# Health check: -s silent mode, do not download the file
#HEALTHCHECK CMD wget -s http://127.0.0.1:14030/actuator/health || exit 1
# 8718 is the console port; 8719 is the data-collection port -- metrics are pulled from port 8719 of the monitored services
CMD ["java","-Dserver.port=8718","-Dcsp.sentinel.dashboard.server=localhost:8718","-Dproject.name=sentinel-dashboard","-Dcsp.sentinel.api.port=8719","-jar","app.jar"]

Deployment

# Note: deploy into the same stack as the other services, otherwise port 8719 is unreachable
sentinel:
  restart: always
  image: exxk/sentinel:1.8.1
  ports:
    - "8718:8718"

Open the console at 127.0.0.1:8718; the default username/password is sentinel/sentinel.

Gateway routing basics

Official site

Spring Cloud version compatibility matrix

gateway: an asynchronous gateway; the body can be read by caching it with ReadBodyRoutePredicateFactory

zuul: a synchronous, blocking gateway, so reading or modifying the body is much simpler

- id: xkiot-cmdb
  uri: lb://xkiot-platform
  predicates:
    - Path=/cmdb/**
    # CustomReadBody corresponds to CustomReadBodyRoutePredicateFactory
    # likewise ReadBody corresponds to ReadBodyRoutePredicateFactory
    - name: CustomReadBody
      args:
        inClass: '#{T(String)}'
        # a bean must be added in a @Configuration class:
        # @Bean
        # public Predicate bodyPredicate(){return o -> true;}
        predicate: '#{@bodyPredicate}' # inject a class implementing the Predicate interface
  filters:
    # device token validation
    # DynamicToken corresponds to DynamicTokenGatewayFilterFactory
    # true maps to the parameter of DynamicTokenGatewayFilterFactory's Config class
    - DynamicToken=true
    - StripPrefix=1

Reading the body in the gateway for signature verification

Requirement: the body only needs to be read for signature verification, not modified, so the caching approach is used. The key class is ReadBodyRoutePredicateFactory.

  1. Add the following to a @Configuration-annotated class (or create a new configuration class). The bodyPredicate bean is referenced by the predicate entry in the yml of step 2.

    /**
     * the bodyPredicate bean must be registered for the read-body predicate
     * @return
     */
    @Bean
    public Predicate bodyPredicate(){
        return o -> true;
    }
  2. Load the ReadBodyRoutePredicateFactory class (you can also subclass and override it; the same goes for the body-modifying classes). Loading is configured in the yml:

    - id: xkiot-cmdb
      uri: lb://xkiot-platform
      predicates:
        - Path=/cmdb/**
        # CustomReadBody corresponds to CustomReadBodyRoutePredicateFactory
        # likewise ReadBody corresponds to ReadBodyRoutePredicateFactory
        - name: CustomReadBody
          args:
            inClass: '#{T(String)}'
            # a bean must be added in a @Configuration class:
            # @Bean
            # public Predicate bodyPredicate(){return o -> true;}
            predicate: '#{@bodyPredicate}' # inject a class implementing the Predicate interface
  3. Then implement a filter that receives the body and verifies it:

    package com.xkiot.gateway.filter;

    import com.alibaba.fastjson.JSON;
    import com.xkiot.common.core.constant.CacheConstants;
    import com.xkiot.common.core.constant.Constants;
    import com.xkiot.common.core.domain.R;
    import com.xkiot.common.core.utils.ServletUtils;
    import com.xkiot.common.core.utils.StringUtils;
    import com.xkiot.common.core.web.domain.AjaxResult;
    import com.xkiot.common.redis.constant.RedisConstants;
    import com.xkiot.common.redis.service.RedisService;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.cloud.gateway.filter.GatewayFilter;
    import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory;
    import org.springframework.core.io.buffer.DataBufferFactory;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.MediaType;
    import org.springframework.http.server.reactive.ServerHttpRequest;
    import org.springframework.http.server.reactive.ServerHttpResponse;
    import org.springframework.stereotype.Component;
    import org.springframework.web.server.ServerWebExchange;
    import reactor.core.publisher.Mono;

    import java.util.Collections;
    import java.util.List;

    @Component
    public class DynamicTokenGatewayFilterFactory extends AbstractGatewayFilterFactory<DynamicTokenGatewayFilterFactory.Config> {
        private static final Logger log = LoggerFactory.getLogger(DynamicTokenGatewayFilterFactory.class);

        private final static long EXPIRE_TIME = Constants.TOKEN_EXPIRE * 60;

        @Autowired
        private RedisService redisService;

        public DynamicTokenGatewayFilterFactory() {
            super(Config.class);
        }

        @Override
        public List<String> shortcutFieldOrder() {
            return Collections.singletonList("enabled");
        }

        @Override
        public GatewayFilter apply(DynamicTokenGatewayFilterFactory.Config config) {
            return (exchange, chain) -> {
                ServerHttpRequest request = exchange.getRequest();
                String requestBody = exchange.getAttribute("cachedRequestBodyObject");
                log.info("requestBody : {}", requestBody);
                //todo add signature-verification code here
                try {
                    // "sn" (the device serial) is assumed to be extracted during
                    // the signature verification above; the extraction is elided
                    ServerHttpRequest mutableReq = exchange.getRequest().mutate()
                            .header(CacheConstants.DETAILS_TERM_ID, ServletUtils.urlEncode(sn)).build();
                    ServerWebExchange mutableExchange = exchange.mutate().request(mutableReq).build();
                    return chain.filter(mutableExchange);
                } catch (Exception e) {
                    ServerHttpResponse response = exchange.getResponse();
                    response.getHeaders().add("Content-Type", "application/json;charset=UTF-8");
                    return exchange.getResponse().writeWith(
                            Mono.just(response.bufferFactory().wrap(JSON.toJSONBytes(AjaxResult.error(e.getMessage())))));
                }
            };
        }

        public static class Config {

            private boolean enabled;

            public Config() {
            }

            public boolean isEnabled() {
                return enabled;
            }

            public void setEnabled(boolean enabled) {
                this.enabled = enabled;
            }
        }
    }
  4. Reference the filter from step 3:

    filters:
      # device token validation
      # DynamicToken corresponds to DynamicTokenGatewayFilterFactory
      # true maps to the parameter of DynamicTokenGatewayFilterFactory's Config class
      - DynamicToken=true
      - StripPrefix=1

References

Is the API gateway the way forward? A beginner-friendly Spring Cloud Gateway tutorial

Spring Cloud Gateway design changes

The goal is a dynamic gateway compatible with multiple network protocols and multiple message formats.

Architecture

gSZKje.png

Classes involved in dynamic routing

RouteDefinitionRepository: the route store

An interface for storing route rules; by implementing it, the rules can be persisted to different middleware (Redis, a database, etc.)

Implement its three methods:

@Component
public class RedisRouteDefinitionRepository implements RouteDefinitionRepository {
    private static final Logger log = LoggerFactory.getLogger(RedisRouteDefinitionRepository.class);
    public static final String GATEWAY_ROUTES = CacheConstants.GATEWAY_ROUTES;
    @Autowired
    private RedisService redisService;

    @Override
    public Flux<RouteDefinition> getRouteDefinitions() {
        log.debug("get route info by redis/db");
        List<RouteDefinition> routeDefinitions = new ArrayList<>();
        // route definitions can come from Redis, a database, etc.
        redisService.getAllCacheMapValues(GATEWAY_ROUTES).stream().forEach(routeDefinition -> {
            routeDefinitions.add(JSON.parseObject(routeDefinition.toString(), RouteDefinition.class));
        });
        return Flux.fromIterable(routeDefinitions);
    }

    @Override
    public Mono<Void> save(Mono<RouteDefinition> route) {
        log.debug("save route info to redis/db");
        return route.flatMap(routeDefinition -> {
            redisService.setCacheMapValue(GATEWAY_ROUTES, routeDefinition.getId(), JSON.toJSONString(routeDefinition));
            return Mono.empty();
        });
    }

    @Override
    public Mono<Void> delete(Mono<String> routeId) {
        log.debug("delete route info by redis/db");
        return routeId.flatMap(id -> {
            if (redisService.getCacheMapValue(GATEWAY_ROUTES, id) != null) {
                redisService.delCacheMapValue(GATEWAY_ROUTES, id);
                return Mono.empty();
            }
            return Mono.defer(() -> Mono.error(new BaseException("route definition not found: " + routeId)));
        });
    }
}

ApplicationEventPublisherAware: the event-publishing interface

@Service
public class GatewayServiceHandler implements ApplicationEventPublisherAware {
    private static final Logger log = LoggerFactory.getLogger(GatewayServiceHandler.class);

    @Autowired
    private RedisRouteDefinitionRepository routeDefinitionWriter;

    private ApplicationEventPublisher publisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.publisher = applicationEventPublisher;
    }

    /**
     * Save or update multiple route definitions
     * @param gatewayRouteList
     * @return
     */
    public String saveOrUpdateMultiRouteConfig(List<JSONObject> gatewayRouteList) {
        log.debug("begin add multi route config");
        gatewayRouteList.forEach(gatewayRoute -> {
            RouteDefinition definition = handleData(gatewayRoute);
            routeDefinitionWriter.save(Mono.just(definition)).subscribe();
        });
        this.publisher.publishEvent(new RefreshRoutesEvent(this));
        return "success";
    }

    /**
     * Convert JSON data into a route entity
     * @param gatewayRoute
     * @return
     */
    private RouteDefinition handleData(JSONObject gatewayRoute) {
        RouteDefinition definition;
        definition = JSONObject.toJavaObject(gatewayRoute, RouteDefinition.class);
        return definition;
    }
}

Then add a configuration endpoint:

@RestController
@RequestMapping("/route")
public class RouteConfigController extends BaseController {
    @Autowired
    private GatewayServiceHandler gatewayServiceHandler;

    /**
     * Create/update route definitions
     *
     * @param gatewayRouteList
     * @return
     */
    @PostMapping
    public AjaxResult add(@Validated @RequestBody List<JSONObject> gatewayRouteList) {
        String result = gatewayServiceHandler.saveOrUpdateMultiRouteConfig(gatewayRouteList);
        return AjaxResult.success(result);
    }
}

To test, send a route-creation request to {{gateway}}/route with the following JSON payload:

[
    {
        "id": "xkiot-auth",
        "order": 2,
        "predicates": [
            {
                "args": {
                    "pattern": "/auth/**"
                },
                "name": "Path"
            }
        ],
        "uri": "lb://xkiot-auth"
    },
    {
        "id": "xkiot-system",
        "order": 1,
        "predicates": [
            {
                "args": {
                    "pattern": "/system/**"
                },
                "name": "Path"
            }
        ],
        "uri": "lb://xkiot-system"
    }
]

An order of 0 means the route is disabled; id is the service id, uri the microservice address, and predicates the routing rules. The equivalent yml configuration is:

spring:
  cloud:
    gateway:
      routes:
        # auth center
        - id: xkiot-auth
          uri: lb://xkiot-auth
          predicates:
            - Path=/auth/**
          filters:
            # captcha handling
            - CacheRequestFilter
            - ValidateCodeFilter
            - StripPrefix=1
        # system module
        - id: xkiot-system
          uri: lb://xkiot-system
          predicates:
            - Path=/system/**
          filters:
            - StripPrefix=1

The data stored in Redis looks like this:

g9i63D.png

References

Spring Cloud Gateway dynamic routes (MySQL persistence + Redis distributed cache)

Dynamic route configuration with Nacos + Spring Cloud Gateway

Introduction

Nacos helps you discover, configure, and manage microservices. It provides an easy-to-use feature set for dynamic service discovery, service configuration, service metadata, and traffic management.

Its main role is to replace Spring Cloud's registry and configuration center.

Official documentation

Dependencies: Nacos relies on a MySQL database (other databases work too) for storage.

Access: ip + port; the default username/password is nacos/nacos

Docker deployment script

version: '3'
services:
  mysql:
    image: nacos/nacos-mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: adminroot
      MYSQL_DATABASE: xk-config
      MYSQL_USER: nacos
      MYSQL_PASSWORD: nacos
    ports:
      - 14050:3306
    # volumes:
    #   - "/home/dockerdata/v-dev/mysql:/var/lib/mysql"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
  nacos:
    image: nacos/nacos-server:2.0.0-bugfix
    restart: on-failure
    environment:
      PREFER_HOST_MODE: hostname
      MODE: standalone
      SPRING_DATASOURCE_PLATFORM: mysql
      MYSQL_SERVICE_HOST: mysql
      MYSQL_SERVICE_DB_NAME: xk-config
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: nacos
    # volumes:
    #   - /home/dockerdata/v-dev/nacos/standalone-logs/:/home/nacos/logs
    #   - /home/dockerdata/v-dev/nacos/init.d/custom.properties:/home/nacos/init.d/custom.properties
    ports:
      - 14051:8848

Pinning the container IP for Nacos on a docker swarm custom network

When Nacos serves as registry and configuration center under docker swarm, containers have multiple network interfaces and eth0 is picked by default, so the registered IP can be an internal one that other containers cannot reach.

Solution

First inspect the stack's network; the screenshot below shows it uses addresses starting with 10.0.3.

2MY6df.png

So the preferred network can be configured:

environment:
  - spring.cloud.inetutils.preferred-networks=10.0.3

Alternatively, enter the container and configure ignored interfaces; as shown below, eth0 and eth2 must be ignored so only the wanted interface remains.

2Mtyc9.png

The corresponding parameter:

- spring.cloud.inetutils.ignored-interfaces=eth0.*,eth2.*

For more options see Appendix A: Common application properties.

Network reachability can be tested via the Spring Cloud health endpoint:

wget http://10.0.3.194:9200/actuator/health -q -O -

We found that the project's Redis cache and database had drifted apart: many custom database update methods updated the database without updating Redis. Hence the idea of automatic caching in the DAO layer.

Simple use of Spring Cache

Tutorial

  1. Add the dependencies

    compile group: 'org.springframework.boot', name: 'spring-boot-starter-cache', version: '2.1.1.RELEASE'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-data-redis', version: '2.1.1.RELEASE'
  2. Configure Redis as the cache middleware. Other cache providers work the same way; the options are generic, ehcache, hazelcast, infinispan, jcache, redis, guava, simple, none.

    spring.redis.host=gt163.cn
    spring.redis.port=14043
  3. Enable caching: add @EnableCaching to the @SpringBootApplication startup class or a @Configuration class

  4. Use it: annotate the method (or class) to be cached with @Cacheable("<unique key in Redis, i.e. a table name>")

    // example
    @Cacheable("user_info")
    public User findById(String id) {
        return userDao.findById(id);
    }

    cYVwvT.png

The common annotations

// enable caching
@EnableCaching
// on a cache miss, run the method body and cache the result; on a hit, read
// the cache and skip the method body; the parameters form the key
@Cacheable("user_info")
public User findById(String id)
// the unless parameter tests the result, condition tests the parameters
// always runs the method body and refreshes the cache
@CachePut(value="user_info")
// evict the cache entry
@CacheEvict(value="user_info")
// group multiple cache operations
@Caching
// on a class: its methods then only need @Cacheable without repeating the cacheName
@CacheConfig(cacheNames={"user_info"})

Designing Redis caching for MongoDB collections

Design option 1

Full code on GitHub: iexxk/springLeaning:mongo

Cache annotations are placed on the BaseDao interface, and each subclass inherits them, giving a generic caching layer.

BaseDao.java

@CacheConfig(cacheNames = {"mongo"})
public interface BaseDao<T, ID> {
    // #root.target.table is SpEL: the table field of the target instance being invoked
    @Cacheable(key = "#root.target.table+#p0", condition = "#root.target.isCache")
    T findById(ID id);

    @CachePut(key = "#root.target.table+#p0.id", condition = "#root.target.isCache")
    <S extends T> S save(S entity);

    @CacheEvict(key = "#root.target.table+#p0", condition = "#root.target.isCache")
    void deleteById(ID id);

    // deleteAll evicts every mongo table; the granularity cannot reach single keys
    @CacheEvict(key = "#root.target.table", allEntries = true, condition = "#root.target.isCache")
    void deleteAll();

    // toggles whether caching is enabled
    void enableCache(boolean isCache);
}

BaseDaoImpl.java

public class BaseDaoImpl<T, ID> implements BaseDao<T, ID> {
    private SimpleMongoRepository<T, ID> mongoRepository;
    private Class<T> entityType;
    private Class<ID> identifierType;
    protected MongoTemplate mongoTemplate;
    // stores the table name
    public String table;
    // whether the Redis cache is enabled
    public Boolean isCache = false;

    // the constructor resolves the generic types
    public BaseDaoImpl() {
        ResolvableType resolvableType = ResolvableType.forClass(getClass());
        entityType = (Class<T>) resolvableType.as(BaseDao.class).getGeneric(0).resolve();
        identifierType = (Class<ID>) resolvableType.as(BaseDao.class).getGeneric(1).resolve();
        // initialize the table name; ":" lets Redis group the keys
        table = entityType.getSimpleName() + ":";
    }

    @Autowired
    public void setMongoTemplate(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
        MappingMongoEntityInformation<T, ID> entityInformation = new MappingMongoEntityInformation<T, ID>(
                new BasicMongoPersistentEntity<>(ClassTypeInformation.from(entityType)), identifierType);
        mongoRepository = new SimpleMongoRepository<T, ID>(entityInformation, mongoTemplate);
    }

    @Override
    public T findById(ID id) {
        return mongoTemplate.findOne(Query.query(Criteria.where("Id").is(id.toString())), entityType);
    }

    @Override
    public void enableCache(boolean isCache) {
        this.isCache = isCache;
    }
}

Usage: create a UserDao.java

public interface UserDao extends BaseDao<User, String> {
    void updateAddNumById(String id); // a custom method
}

UserDaoImpl.java

@Repository
public class UserDaoImpl extends BaseDaoImpl<User, String> implements UserDao {

    public UserDaoImpl() {
        super.enableCache(true); // enable caching here; it is off by default
    }

    @Override
    public void updateAddNumById(String id) {
    }
}

Finally, calling findById caches the result.

cYVQv8.png

Remaining problems

Because cacheNames (i.e. the table name) does not support SpEL, the table name cannot be injected there. The design therefore uses one generic "mongo" cache for all tables and puts table+id into the key, which means deleteAll evicts every table. Since deleteAll is almost never used this is acceptable; even if it happens, only the cache is lost and it can be rebuilt from the database.

Reference: extending Spring Cache so @CacheEvict supports fuzzy key matching

Solution

Create CustomizedRedisCacheManager.java

public class CustomizedRedisCacheManager extends RedisCacheManager {
    private final RedisCacheWriter cacheWriter;
    private final RedisCacheConfiguration defaultCacheConfig;

    public CustomizedRedisCacheManager(RedisCacheWriter cacheWriter, RedisCacheConfiguration defaultCacheConfiguration) {
        super(cacheWriter, defaultCacheConfiguration);
        this.cacheWriter = cacheWriter;
        this.defaultCacheConfig = defaultCacheConfiguration;
    }

    public CustomizedRedisCacheManager(RedisCacheWriter cacheWriter, RedisCacheConfiguration defaultCacheConfiguration, String... initialCacheNames) {
        super(cacheWriter, defaultCacheConfiguration, initialCacheNames);
        this.cacheWriter = cacheWriter;
        this.defaultCacheConfig = defaultCacheConfiguration;
    }

    public CustomizedRedisCacheManager(RedisCacheWriter cacheWriter, RedisCacheConfiguration defaultCacheConfiguration, boolean allowInFlightCacheCreation, String... initialCacheNames) {
        super(cacheWriter, defaultCacheConfiguration, allowInFlightCacheCreation, initialCacheNames);
        this.cacheWriter = cacheWriter;
        this.defaultCacheConfig = defaultCacheConfiguration;
    }

    public CustomizedRedisCacheManager(RedisCacheWriter cacheWriter, RedisCacheConfiguration defaultCacheConfiguration, Map<String, RedisCacheConfiguration> initialCacheConfigurations) {
        super(cacheWriter, defaultCacheConfiguration, initialCacheConfigurations);
        this.cacheWriter = cacheWriter;
        this.defaultCacheConfig = defaultCacheConfiguration;
    }

    public CustomizedRedisCacheManager(RedisCacheWriter cacheWriter, RedisCacheConfiguration defaultCacheConfiguration, Map<String, RedisCacheConfiguration> initialCacheConfigurations, boolean allowInFlightCacheCreation) {
        super(cacheWriter, defaultCacheConfiguration, initialCacheConfigurations, allowInFlightCacheCreation);
        this.cacheWriter = cacheWriter;
        this.defaultCacheConfig = defaultCacheConfiguration;
    }

    /**
     * this constructor is the important one
     **/
    public CustomizedRedisCacheManager(RedisConnectionFactory redisConnectionFactory, RedisCacheConfiguration cacheConfiguration) {
        this(RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory), cacheConfiguration);
    }

    @Override
    public Map<String, RedisCacheConfiguration> getCacheConfigurations() {
        Map<String, RedisCacheConfiguration> configurationMap = new HashMap<>(getCacheNames().size());
        getCacheNames().forEach(it -> {
            RedisCache cache = CustomizedRedisCache.class.cast(lookupCache(it));
            configurationMap.put(it, cache != null ? cache.getCacheConfiguration() : null);
        });
        return Collections.unmodifiableMap(configurationMap);
    }

    @Override
    protected RedisCache createRedisCache(String name, RedisCacheConfiguration cacheConfig) {
        return new CustomizedRedisCache(name, cacheWriter, cacheConfig != null ? cacheConfig : defaultCacheConfig);
    }
}

Create CustomizedRedisCache.java

public class CustomizedRedisCache extends RedisCache {
    private final String name;
    private final RedisCacheWriter cacheWriter;
    private final ConversionService conversionService;

    /**
     * Create new {@link RedisCache}.
     *
     * @param name must not be {@literal null}.
     * @param cacheWriter must not be {@literal null}.
     * @param cacheConfig must not be {@literal null}.
     */
    protected CustomizedRedisCache(String name, RedisCacheWriter cacheWriter, RedisCacheConfiguration cacheConfig) {
        super(name, cacheWriter, cacheConfig);
        this.name = name;
        this.cacheWriter = cacheWriter;
        this.conversionService = cacheConfig.getConversionService();
    }

    @Override
    public void evict(Object key) {
        if (key instanceof String) {
            String keyString = key.toString();
            // wildcard-suffix delete
            if (keyString.endsWith("*")) {
                byte[] pattern = this.conversionService.convert(this.createCacheKey(key), byte[].class);
                this.cacheWriter.clean(this.name, pattern);
                return;
            }
        }
        // delete the exact key
        super.evict(key);
    }
}

Add a configuration class CachingConfig.java that plugs in the custom cache manager

@Configuration
public class CachingConfig {
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory) {
        // pick one of the two serializers, or write your own
        // fastjson serializer
        FastJson2JsonRedisSerializer serializer = new FastJson2JsonRedisSerializer(Object.class);
        // jackson serializer
        // Jackson2JsonRedisSerializer<Object> serializer = new Jackson2JsonRedisSerializer<Object>(Object.class);
        // ObjectMapper objectMapper = new ObjectMapper();
        // objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        // objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        // serializer.setObjectMapper(objectMapper);
        RedisCacheConfiguration cacheConfiguration = RedisCacheConfiguration
                .defaultCacheConfig()
                // with this line values are stored in Redis as JSON instead of binary;
                // JSON storage also fixes list data being lost in the binary format
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(serializer));
        return new CustomizedRedisCacheManager(redisConnectionFactory, cacheConfiguration);
    }
}

After that, @CacheEvict supports * wildcard deletion:

// delete every key that starts with table
@CacheEvict(key = "#root.target.table+'*'", condition = "#root.target.isCache")

References

An extremely detailed tutorial on integrating Cache with Spring Boot

When troubleshooting a running service we occasionally need the logs, but normally only the info level is enabled, which is inconvenient for debugging; restarting just to change the log level is worse. Here is how to change a Spring Boot log level without restarting.

Terminology

  • spring-boot-starter-actuator is a dependency for monitoring the health of a Spring Boot application

    It covers three kinds of functionality:

    1. Application configuration: log levels, environment variables, etc.
    2. Metrics: health, memory, middleware status
    3. Operations: restart, configuration updates, etc.

Changing the log level dynamically

  1. Add the dependency

    implementation 'org.springframework.boot:spring-boot-starter-actuator'
  2. Expose the endpoints; this opens /actuator/loggers, /actuator/info, and /actuator/health

    management.endpoints.web.exposure.include=loggers,health,info
  3. GET /actuator/loggers returns the log level of every package

    ce6aE8.png

  4. Query a specific package with GET /actuator/loggers/<package path>

    # GET /actuator/loggers/com.exxk.adminClient
    ---------------------------------------------
    # RETURN
    {
    "configuredLevel": null,
    "effectiveLevel": "INFO"
    }
  5. Change a package's level with POST /actuator/loggers/<package path> and the JSON body {"configuredLevel": "DEBUG"}; once the request succeeds, the package logs at the new level.

    # POST /actuator/loggers/com.exxk.adminClient
    # BODY
    {
    "configuredLevel": "DEBUG"
    }
    -----------------------------------------------
    # RETURN 204 No Content

Spring Boot Admin for visual service management

Official documentation

Server-side setup

Option 1 (plain version):

  1. Add the dependency; the version must match the Spring Boot version, or startup fails

    // https://mvnrepository.com/artifact/de.codecentric/spring-boot-admin-starter-server
    implementation group: 'de.codecentric', name: 'spring-boot-admin-starter-server', version: '2.2.2'
  2. Add the @EnableAdminServer annotation to the application's main class

  3. Run it, then open http://127.0.0.1:8080

Adding login authentication (not fully configured; not needed for now)
  1. Add the dependencies

    // https://mvnrepository.com/artifact/de.codecentric/spring-boot-admin-server-ui-login
    implementation group: 'de.codecentric', name: 'spring-boot-admin-server-ui-login', version: '1.5.7'
    implementation 'org.springframework.boot:spring-boot-starter-security'
  2. Add a Spring Security configuration

    @Configuration(proxyBeanMethods = false)
    public class SecuritySecureConfig extends WebSecurityConfigurerAdapter {

        private final AdminServerProperties adminServer;

        public SecuritySecureConfig(AdminServerProperties adminServer) {
            this.adminServer = adminServer;
        }

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            SavedRequestAwareAuthenticationSuccessHandler successHandler = new SavedRequestAwareAuthenticationSuccessHandler();
            successHandler.setTargetUrlParameter("redirectTo");
            successHandler.setDefaultTargetUrl(this.adminServer.path("/"));

            http.authorizeRequests(
                    (authorizeRequests) -> authorizeRequests.antMatchers(this.adminServer.path("/assets/**")).permitAll()
                            .antMatchers(this.adminServer.path("/login")).permitAll().anyRequest().authenticated()
            ).formLogin(
                    (formLogin) -> formLogin.loginPage(this.adminServer.path("/login")).successHandler(successHandler).and()
            ).logout((logout) -> logout.logoutUrl(this.adminServer.path("/logout"))).httpBasic(Customizer.withDefaults())
                    .csrf((csrf) -> csrf.csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse())
                            .ignoringRequestMatchers(
                                    new AntPathRequestMatcher(this.adminServer.path("/instances"),
                                            HttpMethod.POST.toString()),
                                    new AntPathRequestMatcher(this.adminServer.path("/instances/*"),
                                            HttpMethod.DELETE.toString()),
                                    new AntPathRequestMatcher(this.adminServer.path("/actuator/**"))
                            ))
                    .rememberMe((rememberMe) -> rememberMe.key(UUID.randomUUID().toString()).tokenValiditySeconds(1209600));
        }

        // Required to provide UserDetailsService for "remember me" functionality
        @Override
        protected void configure(AuthenticationManagerBuilder auth) throws Exception {
            auth.inMemoryAuthentication().withUser("user").password("{noop}password").roles("USER");
        }

    }
  3. Set the credentials in the configuration file

    spring.boot.admin.client.username=admin
    spring.boot.admin.client.password=admin

Option 2 (Docker):

Run the official codecentric/spring-boot-admin image directly:

docker run -d \
-p 8080:8080 \
-e "server.port=8080" \
-e "spring.boot.admin.client.instance.service-base-url=http://172.16.10.44:31736" \
--name spring-boot-admin \
codecentric/spring-boot-admin:2.7.9

Then open http://<local IP or mapped external IP>:8080

Client configuration

Option 1 (native):

  1. Add the dependency

    implementation group: 'de.codecentric', name: 'spring-boot-admin-starter-client', version: '2.2.2'
  2. Add the configuration

    spring.boot.admin.client.url=http://localhost:8080
    #in production expose only the endpoints you need; * exposes all of them
    management.endpoints.web.exposure.include=*
    #show full health details
    management.endpoint.health.show-details=always
  3. Start the app: the Spring Boot instance is now registered with the admin server, and the log configuration page can change log levels dynamically

Option 2 (Docker):

  1. Add the dependency

    <!-- the major version (2.7.x) must match the server's 2.7.x -->
    <dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-starter-client</artifactId>
    <version>2.7.15</version>
    </dependency>
  2. Add the Spring Boot configuration

    spring:
      boot:
        admin:
          client:
            # disabled by default; set an environment variable to enable it when needed
            enabled: false
            url: http://172.16.10.44:31736
            # client info registered with the server: the client's name and its own address
            instance:
              name: ${DEVICE_NAME:AiBOX}
              #inside a container the default address is the container-internal one, which the
              #server cannot reach, so the container's external address must be set explicitly
              service-base-url: http://${HOST_ADDRESS:172.16.10.202}:8080
    management:
      endpoints:
        web:
          exposure:
            include: '*'
      metrics:
        enable:
          jvm.threads: true

Common problems

  1. /actuator/httptrace (request tracing) returns 404; the suggested alternative is Sleuth

After deploying several microservices with Docker, host memory kept creeping up. To find out which microservice was slowly consuming it, a monitoring tool was needed to track performance metrics across all the microservices.

Terminology

  • prometheus: time-series storage, querying, visualization and alerting. (Roughly a combination of Grafana + influxDB + more.)
  • Cadvisor: collects, aggregates, processes and exports information about running containers.
  • Grafana: a metrics charting and analytics platform for querying, visualizing, alerting on and understanding metrics.
  • influxDB: a time-series database. (Timestamped data, typically used for IoT, logs and metric monitoring.)
  • node-exporter: collects performance metrics from the host node

A simple metrics collection and display stack: prometheus + cadvisor

Resource usage

  • cadvisor: around 112 MB
  • Prometheus: 300 MB+ (memory keeps rising over time)

Deployment in docker swarm mode

Official deployment docs

The prometheus config file /docker_data/v-monitor/prometheus/prometheus.yml:

scrape_configs:
- job_name: cadvisor
  scrape_interval: 5s
  static_configs:
  - targets:
    - cadvisor:8080

Swarm deployment file

version: '3.2'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
    - 9090:9090
    command:
    - --config.file=/etc/prometheus/prometheus.yml
    volumes:
    - /docker_data/v-monitor/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
    - cadvisor
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
    - 8080:8080
    volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro

prometheus container metrics

Metric                        Type   Meaning
container_memory_usage_bytes  gauge  the container's current memory usage, in bytes
machine_memory_bytes          gauge  the host's total memory, in bytes
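Dividing the two gauges gives the memory usage percentage a panel typically shows (in PromQL: container_memory_usage_bytes / machine_memory_bytes * 100). A sanity check of the arithmetic with invented sample values:

```shell
usage=536870912    # sample container_memory_usage_bytes: 512 MiB (invented)
total=8589934592   # sample machine_memory_bytes: 8 GiB (invented)
echo "$(( usage * 100 / total ))%"   # integer percentage: prints 6% (6.25 truncated)
```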

Memory chart


Adding a Grafana dashboard on top of prometheus

Add Grafana to the deployment

grafana:
  image: grafana/grafana:latest
  container_name: grafana
  ports:
  - 3000:3000

The default account and password are admin/admin

Official guide: GRAFANA SUPPORT FOR PROMETHEUS

  1. Add the data source: Configuration -> Data sources -> Prometheus -> enter the prometheus service URL (http://prometheus:9090)
  2. Find a suitable dashboard: browse grafana dashboards for a template that fits (here the Docker and system monitoring template, id 893)
  3. Add the dashboard: Dashboard -> Import -> enter the template id (893)

Adding node-exporter for host-node metrics

Add it to the deployment

node-exporter:
  image: prom/node-exporter:latest
  command:
  - '--path.rootfs=/host'
  pid: host
  volumes:
  - '/:/host:ro,rslave'
  ports:
  - target: 9100
    published: 9100
    protocol: tcp
    mode: host

Update the prometheus.yml config as follows:

scrape_configs:
- job_name: 'cadvisor' #do not rename this job lightly: data recorded under a different job name splits queries into two result sets
  scrape_interval: 5s
  static_configs:
  - targets: ['localhost:9090','cadvisor:8080','node-exporter:9100']

QUERYING PROMETHEUS 语法

Documentation (Chinese translation)

#node_filesystem_free_bytes is the metric name being queried; {fstype="rootfs"} is the selector,
#matching all series whose fstype is rootfs; [1m] makes it a range vector covering the last minute
node_filesystem_free_bytes{fstype="rootfs"}[1m]


#name is the container name

#grouping by container name also counts dead containers: after a service restart there are several
#containers for what is really one service, so group by service name instead
sum(container_memory_rss{container_label_com_docker_swarm_service_name=~".+"}) by (container_label_com_docker_swarm_service_name)
#grouping by service name has its own problem, so grouping by image is an option, but the image label carries a suffix
sum(container_memory_rss{name=~".+"})by(image)
#the suffix can be stripped, though some official image names still end up ugly
sum(label_replace(container_memory_rss{name=~".+"},"image_sub","$1","image", "(.*)(:)(.*)"))by(image_sub)
#so the final approach uses label_replace to rewrite a label on the raw data:
#label_replace(data, "new label", "which capture group to keep", "source label", "regex"); wrap each part
#of the regex in (), $1 takes the first group, $2 the second, and escapes like \\ only work inside groups
label_replace(container_memory_rss{name=~".+"},"name","$1","name", "(.*)(\\.1\\.)(.*)")
#final version: the old and new label are both "name" on purpose; series the regex does not match keep
#their original name value and merge in with the rest, so no data is lost
sum(label_replace(container_memory_rss{name=~".+"},"name","$1","name", "(.*)(\\.1\\.)(.*)"))by(name)
#CPU: label_replace has to wrap the rate() expression from the outside
sum(label_replace(rate(container_cpu_usage_seconds_total{name=~".+"}[$interval]),"name","$1","name", "(.*)(\\.1\\.)(.*)"))by (name) * 100
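The regex in those queries can be checked outside Prometheus: label_replace keeps capture group $1, stripping the .1.<task-id> suffix that swarm appends to container names. The same substitution with sed on an invented sample name:

```shell
name='myservice.1.abc123xyz'   # invented swarm container name: <service>.<slot>.<task id>
# keep only the first capture group, exactly what label_replace's "$1" does
short=$(echo "$name" | sed -E 's/(.*)(\.1\.)(.*)/\1/')
echo "$short"   # prints: myservice
```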

cadvisor+influxDB+Grafana

To be continued…

Reference

容器监控:cAdvisor (container monitoring with cAdvisor)

Common problems

  1. A chart shows no data (N/A): check its query and whether the metric was renamed; many metrics gained a _bytes suffix in newer versions, so replace the old name with the upgraded one
  2. "Only queries that return single series/table is supported" after changing a metric's labels: check whether the panel on the right expects a merge; if no merge is needed, a single chart type should be selected

Appendix

Complete swarm deployment file

version: '3.2'
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
    - 14003:9090
    command:
    - --config.file=/etc/prometheus/prometheus.yml
    volumes:
    - /docker_data/v-monitor/prometheus/config:/etc/prometheus
    - /docker_data/v-monitor/prometheus/data:/prometheus
  cadvisor:
    image: google/cadvisor:latest
    ports:
    - 14004:8080
    volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
  grafana:
    image: grafana/grafana:latest
    ports:
    - 14002:3000
    volumes:
    - /docker_data/v-monitor/grafana:/var/lib/grafana
  node-exporter:
    image: prom/node-exporter:latest
    command:
    - '--path.rootfs=/host'
    pid: host
    volumes:
    - '/:/host:ro,rslave'
    ports:
    - target: 9100
      published: 9100
      protocol: tcp
      mode: host

prometheus.yml

scrape_configs:
- job_name: 'cadvisor' #do not rename this job lightly: data recorded under a different job name splits queries into two result sets
  scrape_interval: 5s
  static_configs:
  - targets: ['localhost:9090','cadvisor:8080','node-exporter:9100']

#base image: alpine, small, secure, widely used and convenient
FROM exxk/java:8-alpine-cst
# expose the port
EXPOSE 9303
# build argument
ARG JAVA_OPTS="-Xms256m -Xmx256m -XX:NewRatio=1"
#copy the jar built at a fixed path (target/*.jar) into the image root, renamed to /app.jar (ADD also works)
COPY target/*.jar app.jar
COPY src/main/resources/bootstrap.properties config/bootstrap.properties
#COPY target/lib lib
#health check; -q is quiet mode, -O - writes to stdout instead of saving a file
HEALTHCHECK --start-period=40s --interval=30s --timeout=5s --retries=5 CMD (wget http://localhost:9303/actuator/health -q -O -) | grep UP || exit 1
#command executed at container start: java -jar app.jar; extra arguments go in as ,"-arg",
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar"]
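One detail worth noting: in the exec-form ENTRYPOINT, Docker does not expand ${JAVA_OPTS} itself; the inner sh -c expands it at container start from the environment (an ARG is build-time only, so the default above reaches the running JVM only if also exported via ENV or docker run -e). A simulation of that expansion, with echo standing in for java:

```shell
# JAVA_OPTS would come from ENV / `docker run -e` in a real container
export JAVA_OPTS="-Xms256m -Xmx256m -XX:NewRatio=1"
sh -c 'echo java ${JAVA_OPTS} -jar /app.jar'
# prints: java -Xms256m -Xmx256m -XX:NewRatio=1 -jar /app.jar
```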
HEALTHCHECK --start-period=40s --interval=30s --timeout=5s --retries=5 CMD (wget http://localhost:9303/actuator/health -q -O -) | grep UP || exit 1
HEALTHCHECK [OPTIONS] CMD command   check the container's health by running a command inside it
--interval=DURATION   interval between checks, default 30s
--timeout=DURATION    per-check timeout, default 30s
#the start period gives the container time to initialize: failures inside it do not count toward the
#retry limit. Once a check succeeds within the start period the container counts as started, and any
#failure after that, even still inside the start period, is counted.
--start-period=DURATION   startup grace period, default 0s; if specified it must be greater than 0s
--retries=N   retry count, default 3
#fetch http://localhost:9303/actuator/health and pipe the body to grep, looking for UP: found means the
#check command succeeds. || runs the command on its right only when the one on its left fails, so
#exit 1 (marking the container unhealthy) runs only when no UP was found
(wget http://localhost:9303/actuator/health -q -O -) | grep UP || exit 1
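The grep-and-|| logic can be tried without a container by substituting a canned body for the wget output (the real body comes from /actuator/health):

```shell
# mirrors: (wget ... -q -O -) | grep UP || exit 1, with $1 standing in for the fetched body
check() { echo "$1" | grep -q UP || return 1; }
check '{"status":"UP"}'   && echo healthy     # grep finds UP, so || is skipped
check '{"status":"DOWN"}' || echo unhealthy   # grep fails, so the right side of || runs
```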