Quickly Deploying an Elasticsearch Cluster with Docker

Published: 2024-02-22
Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine exposed through a RESTful web interface. Elasticsearch is written in Java, released as open source under the Apache License, and is one of the most popular enterprise search engines today.
This article uses Docker containers (orchestrated with docker-compose) to quickly deploy an Elasticsearch cluster, suitable for development environments (multiple instances on one machine) or production deployments.
Note that since 6.x you can no longer specify the configuration location with the -Epath.conf parameter. The documentation states:
For the archive distributions, the config directory location defaults to $ES_HOME/config. The location of the config directory can be changed via the ES_PATH_CONF environment variable as follows:
ES_PATH_CONF=/path/to/my/config ./bin/elasticsearch
Alternatively, you can export the ES_PATH_CONF environment variable via the command line or via your shell profile.
In other words, the location is now controlled by the ES_PATH_CONF environment variable; pay particular attention to this if you run multiple instances on one machine without containers.
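For reference, a minimal sketch of what that looks like when running two instances from an archive install on one machine; the config directory paths below are only examples, not part of this setup:

# each instance gets its own config directory (example paths)
ES_PATH_CONF=/opt/es-config/node0 ./bin/elasticsearch -d
ES_PATH_CONF=/opt/es-config/node1 ./bin/elasticsearch -d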
Preparation
Install Docker & docker-compose
Using the DaoCloud mirror is recommended here to speed up installation:
# docker
curl -sSL https://get.daocloud.io/docker | sh

# docker-compose
curl -L \
  https://get.daocloud.io/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` \
  > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# verify the installation
docker-compose -v

Data directories
# Create the data/log directories; we deploy 3 nodes here
mkdir /opt/elasticsearch/data/{node0,node1,node2} -p
mkdir /opt/elasticsearch/logs/{node0,node1,node2} -p
cd /opt/elasticsearch
# Permissions were still a problem even with privileged, so just go with 0777
chmod 0777 data/* -R && chmod 0777 logs/* -R
# Prevent the JVM bootstrap check error
echo vm.max_map_count=262144 >> /etc/sysctl.conf
sysctl -p
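To confirm the kernel setting actually took effect, a quick check (not part of the original steps):

# should print: vm.max_map_count = 262144
sysctl vm.max_map_count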
Orchestrating the services with docker-compose

Create the compose file:
vim docker-compose.yml
Parameter notes
- cluster.name=elasticsearch-cluster
The name of the cluster.
- node.name=node0
- node.master=true
- node.data=true
The node name, whether the node can act as a master, and whether it stores data.
- bootstrap.memory_lock=true
Locks the process's memory to prevent it from being swapped out, improving performance.
- http.cors.enabled=true
- http.cors.allow-origin=*
Enables CORS so that the head plugin can be used.
- ES_JAVA_OPTS=-Xms512m -Xmx512m
JVM heap size settings.
- discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2
- discovery.zen.minimum_master_nodes=2
Since versions after 5.2.1 no longer support multicast, the TCP transport addresses of the cluster nodes must be listed explicitly for node discovery and failover. The default transport port is 9300; if you change it, it must be specified here as well. In this setup the nodes talk to each other directly over the container network, but you could also map each node's 9300 to the host and communicate over host ports.
Sets the quorum used when electing a new master on failover: quorum = nodes / 2 + 1.
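With the three master-eligible nodes defined below, that is 3 / 2 + 1 = 2 (integer division), which is why discovery.zen.minimum_master_nodes is set to 2.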
Alternatively, you can mount your own configuration file. In the ES image the configuration file lives at /usr/share/elasticsearch/config/elasticsearch.yml, and it can be mounted like this:
volumes:
  - path/to/local/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
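For reference, a rough sketch of what such a mounted elasticsearch.yml could contain, mirroring the environment variables used in the compose file below (the values are illustrative; adjust node.name per node):

cluster.name: elasticsearch-cluster
node.name: node0
node.master: true
node.data: true
bootstrap.memory_lock: true
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["elasticsearch_n0", "elasticsearch_n1", "elasticsearch_n2"]
discovery.zen.minimum_master_nodes: 2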
docker-compose.yml

version: '3'
services:
  elasticsearch_n0:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n0
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node0
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node0:/usr/share/elasticsearch/data
      - ./logs/node0:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
  elasticsearch_n1:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n1
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node1
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node1:/usr/share/elasticsearch/data
      - ./logs/node1:/usr/share/elasticsearch/logs
    ports:
      - 9201:9200
  elasticsearch_n2:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n2
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node2
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node2:/usr/share/elasticsearch/data
      - ./logs/node2:/usr/share/elasticsearch/logs
    ports:
      - 9202:9200

Here host ports 9200/9201/9202 are exposed as the HTTP service ports for node0/node1/node2 respectively; TCP transport between instances uses the default port 9300 over the container network.
For a multi-host deployment, map ES's transport.tcp.port: 9300 to a host port xxxx, and set discovery.zen.ping.unicast.hosts to the address each host exposes:
# For example, one of the hosts is 192.168.1.100
...
- discovery.zen.ping.unicast.hosts=192.168.1.100:9300,192.168.1.101:9300,192.168.1.102:9300
...
ports:
  ...
  - 9300:9300
Create and start the services

[root@localhost elasticsearch]# docker-compose up -d
[root@localhost elasticsearch]# docker-compose ps
      Name                     Command               State                Ports
--------------------------------------------------------------------------------------------
elasticsearch_n0   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch_n1   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9201->9200/tcp, 9300/tcp
elasticsearch_n2   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9202->9200/tcp, 9300/tcp

# If startup fails, check the logs
[root@localhost elasticsearch]# docker-compose logs
# The culprit is usually file permissions or the vm.max_map_count setting
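To follow a single node's output while debugging, standard docker-compose log following also works, for example:

docker-compose logs -f elasticsearch_n0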
Check the cluster status

192.168.20.6 is my server's address.
Visit http://192.168.20.6:9200/_cat/nodes?v to view the cluster status:
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.3           36          98  79    3.43    0.88     0.54 mdi       *      node0
172.25.0.2           48          98  79    3.43    0.88     0.54 mdi       -      node2
172.25.0.4           42          98  51    3.43    0.88     0.54 mdi       -      node1
Verify failover

Check the status via the cluster API.
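The same information can be pulled from the shell with curl; _cat/nodes and _cluster/health are standard endpoints, and the host/port here follow the mapping above:

curl 'http://192.168.20.6:9200/_cat/nodes?v'
curl 'http://192.168.20.6:9200/_cluster/health?pretty'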
Simulate the master node going offline: the cluster elects a new master, migrates the data, and re-shards.
[root@localhost elasticsearch]# docker-compose stop elasticsearch_n0
Stopping elasticsearch_n0 ... done

Cluster status (note: switch to another HTTP port, since the original master is down). The downed node still shows in the cluster; it is removed if it has not recovered after a while.
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           57          84   5    0.46    0.65     0.50 mdi       -      node2
172.25.0.4           49          84   5    0.46    0.65     0.50 mdi       *      node1
172.25.0.3                                                       mdi       -      node0

After waiting a while:
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           44          84   1    0.10    0.33     0.40 mdi       -      node2
172.25.0.4           34          84   1    0.10    0.33     0.40 mdi       *      node1

Restore node0:
[root@localhost elasticsearch]# docker-compose start elasticsearch_n0
Starting elasticsearch_n0 ... done

After waiting a while:
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           52          98  25    0.67    0.43     0.43 mdi       -      node2
172.25.0.4           43          98  25    0.67    0.43     0.43 mdi       *      node1
172.25.0.3           40          98  46    0.67    0.43     0.43 mdi       -      node0

Observing with the head plugin
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
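By default elasticsearch-head serves its UI on port 9100 (so http://192.168.20.6:9100 here); in the UI, enter http://192.168.20.6:9200 as the cluster address, which is why CORS was enabled for the nodes in the compose file above.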
The cluster view in head makes the automatic data migration easier to follow.

1. Cluster healthy: the data is safely distributed across the 3 nodes.
2. Take the master node node1 offline: the cluster starts migrating data.
Migration in progress
Migration complete
3. Restore the node1 node.
Notes on issues
Elasticsearch watermark
After deployment, creating an index left some shards in the unassigned state. This is caused by the Elasticsearch watermark limits (low, high, flood_stage): by default, allocation is throttled once disk usage exceeds 85%. For development, just turn the check off and the shards will be allocated across the nodes; for production, decide for yourself.
curl -X PUT http://192.168.20.6:9201/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.disk.threshold_enabled": false}}'
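To confirm the setting was applied and that the shards get allocated, the standard settings and shard-listing endpoints can be queried, for example:

curl 'http://192.168.20.6:9201/_cluster/settings?pretty'
curl 'http://192.168.20.6:9201/_cat/shards?v'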