How to Build a Container-Based Local Monitoring System
[Author's note] Docker is extremely popular at the moment; how to put it to use and fold it into day-to-day work is a question worth thinking about.
This article walks through the concrete steps of one test setup, which can be reproduced on a desktop machine with plenty of memory (16 GB recommended). Its value lies mainly in trying things out; whether it applies to real production scenarios is left for experts to discuss.
The test topology is shown below:
The role of each Docker module is as follows:
- Flume: collects log data (three Flume containers are started in this article)
  - the first collects the local /var/log/messages log and sends it directly to Elasticsearch
  - the second collects the local /var/log/messages log and sends it to the Kafka middleware
  - the third reads the log stream from Kafka and sends it to Elasticsearch
  - one possibility not implemented here: read the log stream from Kafka and write it to HDFS for later Hadoop analysis
- The output of docker ps shows the set of containers actually running on CentOS 7, which together carry out the tasks above.
Configure the Docker Hub registry accelerator (DaoCloud mirror)
- http://dashboard.daocloud.io/mirror
For CentOS:
- sudo sed -i 's|other_args=|other_args=--registry-mirror=http://4c5cf935.m.daocloud.io |g' /etc/sysconfig/docker
- sudo sed -i "s|OPTIONS='|OPTIONS='--registry-mirror=http://4c5cf935.m.daocloud.io |g" /etc/sysconfig/docker
- sudo service docker restart
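If the sed matched, the OPTIONS line in /etc/sysconfig/docker should now look roughly like this (assuming the stock CentOS file, which ships with OPTIONS='--selinux-enabled'):

    OPTIONS='--registry-mirror=http://4c5cf935.m.daocloud.io --selinux-enabled'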
Install docker
- https://docs.docker.com/installation/centos
- yum -y update (make sure the kernel is >= 3.10.0-229.el7.x86_64)
- curl -sSL https://get.docker.com/ | sh
  - this script adds the 'docker.repo' repository and installs Docker
- yum -y install docker-selinux
- systemctl start docker.service
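Optionally verify that the daemon is up and can pull and run images (an added sanity check, not part of the original steps):

    docker version
    docker run hello-world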
Install docker-compose
- https://docs.docker.com/compose/install
- curl -L https://github.com/docker/compose/releases/download/<version>/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
- chmod +x /usr/local/bin/docker-compose
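Optionally confirm the binary is on the PATH and executable (an added check):

    docker-compose --version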
docker-compose kafka
- https://github.com/wurstmeister/kafka-docker
- under the kafka-docker-master directory:
- modify KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml to match your Docker host IP (note: do not use localhost or 127.0.0.1 as the host IP if you want to run multiple brokers); a sketch of this file appears after this list
- Start a cluster: docker-compose up -d
- Add more brokers: docker-compose scale kafka=2 (no less than the replication factor below)
- Destroy a cluster: docker-compose stop
- Monitor the logs: docker-compose logs
- To see the containers' IPs and ports:
  - systemctl status docker.service
- ./start-kafka-shell.sh <docker_host_ip> <zk_host:zk_port>
- <container1># $KAFKA_HOME/bin/kafka-topics.sh --create --topic topic --partitions 4 --zookeeper $ZK --replication-factor 2 (must equal the number of Kafka brokers)
- <container1># $KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper $ZK
- <container1># $KAFKA_HOME/bin/kafka-topics.sh --describe --topic topic --zookeeper $ZK
- <container1># $KAFKA_HOME/bin/kafka-console-producer.sh --topic=topic --broker-list=`broker-list.sh`
- ./start-kafka-shell.sh <docker_host_ip> <zk_host:zk_port>
- <container2># $KAFKA_HOME/bin/kafka-console-consumer.sh --topic=topic --zookeeper=$ZK --from-beginning
- Troubleshooting: http://wurstmeister.github.io/kafka-docker/
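For reference, this is roughly what docker-compose.yml in the wurstmeister/kafka-docker repository looked like at the time; treat it as a sketch and check the repository for the authoritative file (the zk link alias and the Docker socket mount are what the image's helper scripts expect):

    zookeeper:
      image: wurstmeister/zookeeper
      ports:
        - "2181:2181"
    kafka:
      image: wurstmeister/kafka
      ports:
        - "9092"
      links:
        - zookeeper:zk
      environment:
        KAFKA_ADVERTISED_HOST_NAME: <docker_host_ip>
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock

Replace <docker_host_ip> with the host IP, per the note above.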
Configure Elasticsearch
- docker pull elasticsearch:latest
- mkdir /mnt/isilon
- mount isilon.mini:/ifs/hdfs /mnt/isilon
- docker run -d -p 9200:9200 -p 9300:9300 -v /mnt/isilon/elasticsearch:/data -v /mnt/isilon/elasticsearch/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml elasticsearch
  - (mounts the above directory and file into the container; a sketch of elasticsearch.yml appears after this list)
- the configuration files that take effect are under /usr/share/elasticsearch/config
- one physical machine can run only one Elasticsearch container this way (the published 9200/9300 ports would conflict)
- systemctl status docker.service (to get the Elasticsearch container IP)
- browse http://<elasticsearch_ip>:9200 or http://<host_IP>:9200
- docker exec -it <elasticsearch_container_ID> /bin/bash
- cd /usr/share/elasticsearch/plugins
- /usr/share/elasticsearch/bin/plugin --install mobz/elasticsearch-head
- /usr/share/elasticsearch/bin/plugin --install lukas-vlcek/bigdesk
- http://<host_IP>:9200/_plugin/bigdesk
- http://<host_IP>:9200/_plugin/head
- docker exec -it <es_container_ID> /bin/bash
- cp -r /usr/share/elasticsearch/lib/* /data/lib
  - saved for later so the Flume containers can access the Elasticsearch client libraries
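The mounted elasticsearch.yml is not reproduced in the original; a minimal sketch for the single-node setup used here, with every value illustrative:

    # illustrative single-node settings; adjust to taste
    cluster.name: elasticsearch
    node.name: es-node-1
    # matches the -v /mnt/isilon/elasticsearch:/data mount above
    path.data: /data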
Configure Kibana
- docker pull kibana
- docker run --link <elasticsearch_container_name>:elasticsearch -d kibana
  - default mode: port 5601 is only reachable inside the container network
- docker run --link <elasticsearch_container_name>:elasticsearch -d kibana --plugins /somewhere/else
  - extra arguments can be passed to Kibana this way
- docker run --name kibana --link <elasticsearch_container_name>:elasticsearch -p 5601:5601 -d kibana
  - publishes port 5601 so it can be reached via the host IP, but Kibana may fail to resolve the hostname it uses for the Elasticsearch service (localhost), which causes problems; the next form is recommended
- docker run --name kibana -e ELASTICSEARCH_URL=http://<host_IP>:9200 -p 5601:5601 -d kibana
  - ELASTICSEARCH_URL is the variable the official kibana image reads
- netstat -tupln | grep 5601
- docker logs <kibana_container_ID>
- http://<host_IP>:5601
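An added quick check that the published port answers from the host (any HTTP response means Kibana is reachable):

    curl -I http://<host_IP>:5601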
Configure Flume: watch a file log and output to Elasticsearch
- docker pull probablyfine/flume (the latest flume-ng version is 1.6.0)
- cat /mnt/isilon/config/flume_log2es.conf
  - refer to step 12A of the "configure flume-ng" section in the 《配置ELK》 article; an illustrative sketch of this file appears after this list
- docker run -e FLUME_AGENT_NAME=log2es -v /mnt/isilon:/data -v /var/log/messages:/var/log/messages -e FLUME_CONF_FILE=/data/config/flume_log2es.conf -d probablyfine/flume
  - for debugging, the host's /var/log/messages is mounted into the Flume container
  - flume_log2es.conf watches /var/log/messages for changes; you can point it at any log inside the container that interests you
- docker exec -it <flume_container_ID> /bin/bash
- cp -r /data/elasticsearch/lib/* /opt/flume/lib
  - copies the Elasticsearch lib files saved earlier into Flume's lib directory
- docker logs <flume_container_ID>
- docker stop <flume_container_ID>
- docker start <flume_container_ID>, then check the status again with the logs command
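The referenced 《配置ELK》 article is not included here, so the following is only a sketch of what flume_log2es.conf might contain, assuming an exec source tailing the file and the ElasticSearchSink that ships with flume-ng 1.6; the index name logstash matches the screenshots described later:

    # agent name must match FLUME_AGENT_NAME=log2es
    log2es.sources = tail
    log2es.channels = mem
    log2es.sinks = es

    # tail the log file mounted into the container
    log2es.sources.tail.type = exec
    log2es.sources.tail.command = tail -F /var/log/messages
    log2es.sources.tail.channels = mem

    log2es.channels.mem.type = memory
    log2es.channels.mem.capacity = 1000

    # needs the Elasticsearch client jars copied into /opt/flume/lib (done above)
    log2es.sinks.es.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
    log2es.sinks.es.hostNames = <host_IP>:9300
    log2es.sinks.es.indexName = logstash
    log2es.sinks.es.clusterName = elasticsearch
    log2es.sinks.es.channel = mem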
Configure Flume: watch a file log and output to Kafka
- docker pull probablyfine/flume
- yum -y install maven
  - needed to build the flume-ng Kafka library
- download the zip file from https://github.com/jinoos/flume-ng-extends
- unzip flume-ng-extends-source-master.zip
- cd flume-ng-extends-source-master
- mvn clean package
- mkdir -p /mnt/isilon/kafka/lib
- cp target/flume-ng-extends-source-0.8.0.jar /mnt/isilon/kafka/lib
- cat /mnt/isilon/config/flume_kafka_producer.conf
  - refer to step 12B of the "configure flume-ng" section in the 《配置ELK》 article; an illustrative sketch appears after this list
- docker run -e FLUME_AGENT_NAME=kfk_pro -v /mnt/isilon:/data -v /var/log/messages:/var/log/messages -e FLUME_CONF_FILE=/data/config/flume_kafka_producer.conf -d probablyfine/flume
- docker exec -it <kfk_pro_container_ID> /bin/bash
- cp -r /data/elasticsearch/lib/* /opt/flume/lib
- cp -r /data/kafka/lib/* /opt/flume/lib
- docker stop <kfk_pro_container_ID>
- docker start <kfk_pro_container_ID>
- docker logs <kfk_pro_container_ID>
- refer to "docker-compose kafka" above for the topic-list and consumer commands
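Similarly, a sketch of flume_kafka_producer.conf; note this assumes the KafkaSink built into flume-ng 1.6 rather than the exact classes from the flume-ng-extends jar built above, so treat it as one possible shape, not the article's exact config:

    # agent name must match FLUME_AGENT_NAME=kfk_pro
    kfk_pro.sources = tail
    kfk_pro.channels = mem
    kfk_pro.sinks = kafka

    kfk_pro.sources.tail.type = exec
    kfk_pro.sources.tail.command = tail -F /var/log/messages
    kfk_pro.sources.tail.channels = mem

    kfk_pro.channels.mem.type = memory
    kfk_pro.channels.mem.capacity = 1000

    # publish each log line to the Kafka topic created earlier
    kfk_pro.sinks.kafka.type = org.apache.flume.sink.kafka.KafkaSink
    kfk_pro.sinks.kafka.brokerList = <docker_host_ip>:9092
    kfk_pro.sinks.kafka.topic = topic
    kfk_pro.sinks.kafka.channel = mem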
- cat /mnt/isilon/config/flume_kafka_consumer.conf
  - refer to step 12C of the "configure flume-ng" section in the 《配置ELK》 article; an illustrative sketch appears after this list
- docker run -e FLUME_AGENT_NAME=kfk_con -v /mnt/isilon:/data -e FLUME_CONF_FILE=/data/config/flume_kafka_consumer.conf -d probablyfine/flume
- docker exec -it <kfk_con_container_ID> /bin/bash
- cp -r /data/elasticsearch/lib/* /opt/flume/lib
- cp -r /data/kafka/lib/* /opt/flume/lib
- docker stop <kfk_con_container_ID>
- docker start <kfk_con_container_ID>
- docker logs <kfk_con_container_ID>
- refer to "docker-compose kafka" above for the topic-list and consumer commands
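And a matching sketch of flume_kafka_consumer.conf, again assuming the Kafka source and Elasticsearch sink bundled with flume-ng 1.6; the index name es-index matches the screenshots described below:

    # agent name must match FLUME_AGENT_NAME=kfk_con
    kfk_con.sources = kafka
    kfk_con.channels = mem
    kfk_con.sinks = es

    # consume the topic written by the kfk_pro agent
    kfk_con.sources.kafka.type = org.apache.flume.source.kafka.KafkaSource
    kfk_con.sources.kafka.zookeeperConnect = <zk_host>:2181
    kfk_con.sources.kafka.topic = topic
    kfk_con.sources.kafka.channels = mem

    kfk_con.channels.mem.type = memory
    kfk_con.channels.mem.capacity = 1000

    kfk_con.sinks.es.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
    kfk_con.sinks.es.hostNames = <host_IP>:9300
    kfk_con.sinks.es.indexName = es-index
    kfk_con.sinks.es.clusterName = elasticsearch
    kfk_con.sinks.es.channel = mem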
Kibana configuration (screenshot)
Output of the Bigdesk plugin (screenshot)
The logstash index is generated by the Flume agent from configuration 12A.
The es-index index is generated by the Flume agents from configurations 12B/12C (via Kafka).
Time-series log entries received in Kibana (screenshot)
Overall system status (screenshot)
Stop all containers:
- docker ps | grep ^[0-9] | awk '{print $1}' | xargs -t -I {} docker stop {}
Start all containers:
- docker ps -a | grep ^[0-9] | awk '{print $1}' | xargs -t -I {} docker start {}
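An added, simpler equivalent that lets docker itself list the container IDs instead of grepping them:

    docker ps -q | xargs -t docker stop
    docker ps -aq | xargs -t docker start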