smniuhe

Ten miles of spring breeze, and still not as lovely as you...



Setting up the nginx web server

Posted on 2018-03-19 | Edited on 2019-02-02 | In 技术积累 (Tech Notes)
Install the build dependencies
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# yum install gcc-c++
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# yum install -y pcre pcre-devel
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# yum install -y zlib zlib-devel
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# yum install -y openssl openssl-devel
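With the dependencies in place, the usual next steps are to download, compile, and install nginx from source. The sketch below is an assumption rather than part of the original excerpt; the version number and install prefix are placeholders.

# a minimal sketch, assuming nginx 1.12.x and a /usr/local/nginx prefix
wget http://nginx.org/download/nginx-1.12.2.tar.gz
tar zxvf nginx-1.12.2.tar.gz
cd nginx-1.12.2
./configure --prefix=/usr/local/nginx --with-http_ssl_module
make && make install
/usr/local/nginx/sbin/nginx            # start
/usr/local/nginx/sbin/nginx -s stop    # stop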

Sublime Text 3 multi-line editing tips

Posted on 2018-03-17 | Edited on 2019-02-02 | In smart

Use case

Sometimes the POJO we are working with has a large number of fields. Multi-line editing is very useful in that situation, and mastering it noticeably improves development efficiency.


Setting up ActiveMQ

Posted on 2018-03-16 | Edited on 2019-02-02 | In middleware

Setup

# Upload the archive via sftp
sftp> put /Users/niuhesm/resouces/major/remoteServer/apache-activemq-5.15.3-bin.tar.gz /root


# Unpack and start the message broker
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# tar zxvf apache-activemq-5.15.3-bin.tar.gz
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# cd apache-activemq-5.15.3/bin
[root@iZuf6iq8e7ya9v3ix71k0pZ bin]# ./activemq start
INFO: Loading '/root/apache-activemq-5.15.3//bin/env'
INFO: Using java '/usr/bin/java'
INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
INFO: pidfile created : '/root/apache-activemq-5.15.3//data/activemq.pid' (pid '15876')
[root@iZuf6iq8e7ya9v3ix71k0pZ bin]#
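A quick sanity check, not shown in the excerpt: by default ActiveMQ 5.x listens on 61616 for clients and serves its web console on 8161.

./activemq status
netstat -tlnp | grep -E '61616|8161'
# web console: http://<server-ip>:8161/admin  (default credentials admin/admin)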


Setting up a Solr cluster

Posted on 2018-03-13 | Edited on 2019-02-02 | In 技术积累 (Tech Notes)

Set up the ZooKeeper ensemble

[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# cd /usr/local/
[root@iZuf6iq8e7ya9v3ix71k0pZ local]# mkdir solr-cluster
[root@iZuf6iq8e7ya9v3ix71k0pZ local]# cd solr-cluster/
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp /usr/local/zookeeper/zookeeper-3.4.10.tar.gz /usr/local/solr-cluster/
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# tar zxvf zookeeper-3.4.10.tar.gz
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# mv zookeeper-3.4.10 zookeeper01
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r zookeeper01 zookeeper02
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r zookeeper01 zookeeper03
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# rm -rf zookeeper-3.4.10.tar.gz
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# ll
总用量 12
drwxr-xr-x 10 root root 4096 3月 13 16:33 zookeeper01
drwxr-xr-x 10 root root 4096 3月 13 16:32 zookeeper02
drwxr-xr-x 10 root root 4096 3月 13 16:32 zookeeper03

# Create the data directory and a myid file whose content is the node id (1)
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# mkdir zookeeper01/data
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cd zookeeper01/data
[root@iZuf6iq8e7ya9v3ix71k0pZ data]# vim myid   (or: echo 1 > myid)
1
:wq

# Rename and edit the config file
[root@iZuf6iq8e7ya9v3ix71k0pZ data]# cd ../conf
[root@iZuf6iq8e7ya9v3ix71k0pZ conf]# mv zoo_sample.cfg zoo.cfg
[root@iZuf6iq8e7ya9v3ix71k0pZ conf]# vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
Parameter notes

clientPort: the port clients connect on
dataDir: the data directory (/usr/local/solr-cluster/zookeeper01/data)
server.1=127.0.0.1:2881:3881 (1 is the myid, 2881 the internal communication port, 3881 the leader-election port)

The modified configuration:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/solr-cluster/zookeeper01/data
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=127.0.0.1:2881:3881
server.2=127.0.0.1:2882:3882
server.3=127.0.0.1:2883:3883

# Configure zookeeper02 and zookeeper03 the same way, as sketched below
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]#
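A sketch of the lines that differ for zookeeper02 and zookeeper03 (assumed here; the client ports follow the zkHost list 2182/2183/2184 used later, and the three server.N lines are identical in every zoo.cfg):

mkdir zookeeper02/data zookeeper03/data
echo 2 > zookeeper02/data/myid
echo 3 > zookeeper03/data/myid
mv zookeeper02/conf/zoo_sample.cfg zookeeper02/conf/zoo.cfg
mv zookeeper03/conf/zoo_sample.cfg zookeeper03/conf/zoo.cfg

# zookeeper02/conf/zoo.cfg
dataDir=/usr/local/solr-cluster/zookeeper02/data
clientPort=2183

# zookeeper03/conf/zoo.cfg
dataDir=/usr/local/solr-cluster/zookeeper03/data
clientPort=2184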
Create a startup script, start the ensemble, and check its status
# Create the startup script
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim start-zookeeper.sh
cd zookeeper01/bin
./zkServer.sh start
cd ../../
cd zookeeper02/bin
./zkServer.sh start
cd ../../
cd zookeeper03/bin
./zkServer.sh start
cd ../../
"start-zookeeper.sh" 9L, 147C

# Make the script executable
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# chmod +x start-zookeeper.sh

# Start the ensemble and check status
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# ./start-zookeeper.sh
ZooKeeper JMX enabled by default
Using config: /usr/local/solr-cluster/zookeeper01/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: /usr/local/solr-cluster/zookeeper02/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: /usr/local/solr-cluster/zookeeper03/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# zookeeper01/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/solr-cluster/zookeeper01/bin/../conf/zoo.cfg
Mode: follower
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# zookeeper02/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/solr-cluster/zookeeper02/bin/../conf/zoo.cfg
Mode: leader
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# zookeeper03/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/solr-cluster/zookeeper03/bin/../conf/zoo.cfg
Mode: follower
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]#

Set up the Solr cluster

[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp /usr/local/tomcat/apache-tomcat-8.5.27.tar.gz /usr/local/solr-cluster/
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# tar zxvf apache-tomcat-8.5.27.tar.gz
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# rm -rf apache-tomcat-8.5.27.tar.gz
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# mv apache-tomcat-8.5.27/ tomcat01
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r tomcat01/ tomcat02
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r tomcat01/ tomcat03
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r tomcat01/ tomcat04
# Change the three ports in each tomcat0X/conf/server.xml (Tomcat defaults 8005/8080/8009):
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat01/conf/server.xml
#                   tomcat01  tomcat02  tomcat03  tomcat04
# shutdown (8005)     8015      8016      8017      8018
# HTTP     (8080)     8090      8091      8092      8093
# AJP      (8009)     8019      8020      8021      8022
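The same edits can be scripted; a sketch using sed against the stock server.xml defaults (an assumption, not part of the original transcript):

sed -i 's/port="8005"/port="8015"/;s/port="8080"/port="8090"/;s/port="8009"/port="8019"/' tomcat01/conf/server.xml
sed -i 's/port="8005"/port="8016"/;s/port="8080"/port="8091"/;s/port="8009"/port="8020"/' tomcat02/conf/server.xml
sed -i 's/port="8005"/port="8017"/;s/port="8080"/port="8092"/;s/port="8009"/port="8021"/' tomcat03/conf/server.xml
sed -i 's/port="8005"/port="8018"/;s/port="8080"/port="8093"/;s/port="8009"/port="8022"/' tomcat04/conf/server.xml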

[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# ll
总用量 32
-rwxr-xr-x 1 root root 147 3月 13 17:36 start-zookeeper.sh
drwxr-xr-x 9 root root 4096 3月 14 09:57 tomcat01
drwxr-xr-x 9 root root 4096 3月 14 10:00 tomcat02
drwxr-xr-x 9 root root 4096 3月 14 10:01 tomcat03
drwxr-xr-x 9 root root 4096 3月 14 10:01 tomcat04
drwxr-xr-x 11 root root 4096 3月 13 16:39 zookeeper01
drwxr-xr-x 11 root root 4096 3月 13 17:02 zookeeper02
drwxr-xr-x 11 root root 4096 3月 13 17:28 zookeeper03

# Copy the Solr webapp and solrhome from the standalone install
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/tomcat-8082/webapps/solr/ /usr/local/solr-cluster/tomcat01/webapps/
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/tomcat-8082/webapps/solr/ /usr/local/solr-cluster/tomcat02/webapps/
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/tomcat-8082/webapps/solr/ /usr/local/solr-cluster/tomcat03/webapps/
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/tomcat-8082/webapps/solr/ /usr/local/solr-cluster/tomcat04/webapps/
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/solrhome /usr/local/solr-cluster/solrhome01
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/solrhome /usr/local/solr-cluster/solrhome02
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/solrhome /usr/local/solr-cluster/solrhome03
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# cp -r /usr/local/solr/solrhome /usr/local/solr-cluster/solrhome04


# Edit the solrcloud settings in each solrhome's solr.xml (solrhome01/tomcat01 shown)
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim solrhome01/solr.xml
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim solrhome02/solr.xml
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim solrhome03/solr.xml
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim solrhome04/solr.xml
<!-- before (the defaults) -->
<solr>

<solrcloud>

<str name="host">${host:}</str>
<int name="hostPort">${jetty.port:8983}</int>
...
...

<!-- after: hostPort must match that Tomcat's HTTP port (8090 for tomcat01) -->
<solr>

<solrcloud>

<str name="host">127.0.0.1</str>
<int name="hostPort">8090</int>

# Bind each Solr webapp to its solrhome (solrhome01 shown)
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat01/webapps/solr/WEB-INF/web.xml
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat02/webapps/solr/WEB-INF/web.xml
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat03/webapps/solr/WEB-INF/web.xml
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat04/webapps/solr/WEB-INF/web.xml
<!-- before (the default from the standalone install) -->
<env-entry>
<env-entry-name>solr/home</env-entry-name>
<env-entry-value>/usr/local/solr/solrhome</env-entry-value>
<env-entry-type>java.lang.String</env-entry-type>
</env-entry>
...
...
<!-- after: tomcat01 points at solrhome01, tomcat02 at solrhome02, and so on -->
<env-entry>
<env-entry-name>solr/home</env-entry-name>
<env-entry-value>/usr/local/solr-cluster/solrhome01</env-entry-value>
<env-entry-type>java.lang.String</env-entry-type>
</env-entry>


Point Tomcat at the ZooKeeper ensemble
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat01/bin/catalina.sh
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat02/bin/catalina.sh
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat03/bin/catalina.sh
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim tomcat04/bin/catalina.sh
...
...
JAVA_OPTS="-DzkHost=127.0.0.1:2182,127.0.0.1:2183,127.0.0.1:2184"
Manage the configuration centrally in ZooKeeper
# Upload solrhome/mycore/conf to ZooKeeper so the configuration is managed centrally; uploading it from any one solrhome is enough.
# ./zkcli.sh -zkhost 127.0.0.1:2182,127.0.0.1:2183,127.0.0.1:2184 -cmd upconfig -confdir /usr/local/solr-cluster/solrhome01/mycore/conf/ -confname myconf

[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# cd solr-7.2.1/server/scripts/cloud-scripts/
[root@iZuf6iq8e7ya9v3ix71k0pZ cloud-scripts]# ./zkcli.sh -zkhost 127.0.0.1:2182,127.0.0.1:2183,127.0.0.1:2184 -cmd upconfig -confdir /usr/local/solr-cluster/solrhome01/mycore/conf/ -confname myconf

# Verify the uploaded files
[root@iZuf6iq8e7ya9v3ix71k0pZ cloud-scripts]# cd /usr/local/solr-cluster/zookeeper01/bin/
[root@iZuf6iq8e7ya9v3ix71k0pZ bin]# ./zkCli.sh -server 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 2] ls /configs/myconf
[data-config.xml, managed-schema, protwords.txt, solrconfig.xml, synonyms.txt, stopwords.txt, dataimport.properties, params.json, lang, managed-schema的副本]
[zk: 127.0.0.1:2182(CONNECTED) 3]

# Create start/stop helper scripts for the four Tomcats
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim start-tomcat.sh
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# chmod +x start-tomcat.sh
/usr/local/solr-cluster/tomcat01/bin/startup.sh
/usr/local/solr-cluster/tomcat02/bin/startup.sh
/usr/local/solr-cluster/tomcat03/bin/startup.sh
/usr/local/solr-cluster/tomcat04/bin/startup.sh
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# vim stop-tomcat.sh
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# chmod +x stop-tomcat.sh
/usr/local/solr-cluster/tomcat01/bin/shutdown.sh
/usr/local/solr-cluster/tomcat02/bin/shutdown.sh
/usr/local/solr-cluster/tomcat03/bin/shutdown.sh
/usr/local/solr-cluster/tomcat04/bin/shutdown.sh
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# ./start-tomcat.sh

Visiting xxxxxx:8090/solr/index.html fails with: SolrCore Initialization Failures
https://segmentfault.com/q/1010000012076404/a-1020000012123299
Cause: on startup Solr checks the collections under its solr home, but the core copied over from the standalone node has a different layout. On a single node a core and a collection are effectively the same thing; in a cluster, a collection is made up of cores distributed across different nodes.

[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# rm -rf solrhome01/mycore
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# rm -rf solrhome02/mycore
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# rm -rf solrhome03/mycore
[root@iZuf6iq8e7ya9v3ix71k0pZ solr-cluster]# rm -rf solrhome04/mycore
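With the copied single-node cores removed, a collection is normally created through the Collections API against any live node. A sketch, assuming the uploaded config set myconf and a 2-shard, 2-replica layout (the collection name and counts are placeholders):

curl "http://127.0.0.1:8090/solr/admin/collections?action=CREATE&name=collection1&numShards=2&replicationFactor=2&maxShardsPerNode=1&collection.configName=myconf"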

hexo localSearch

Posted on 2018-03-07 | Edited on 2019-02-02 | In hexo

Additional notes

The original post is clear, but it is easy to be unsure which file each setting belongs in; a note on generating the search index follows the two config blocks below.

EZLippi's blog

  • Search-related parameters (site config)

    Site config (the blog root directory): /users/xxx/bloghome/blog1/_config.yml
    search:
      path: search.xml
      field: post
      format: html
      limit: 10000
  • Enable the search feature in the theme config file

    Theme config: /users/xxx/bloghome/blog1/themes/next/_config.yml
    local_search:
      enable: true
      # if auto, trigger search by changing input
      # if manual, trigger search by pressing enter key or search button
      trigger: auto
      # show top n results per article, show all results by setting to -1
      top_n_per_article: 1
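The search: block above is the configuration of the hexo-generator-searchdb plugin (an assumption based on its shape), so the plugin has to be installed and the site regenerated before search.xml appears:

npm install hexo-generator-searchdb --save
hexo clean && hexo generate && hexo server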

Chrome's search engine feature

Posted on 2018-03-07 | Edited on 2019-02-02 | In browser

Discovery

While typing sf.gg I suddenly noticed the browser's address bar behave differently.


Redis cluster configuration

Posted on 2018-03-05 | Edited on 2019-02-02 | In NoSQL

Prepare the Redis cluster nodes

# Copy the Redis binaries into one directory per node (repeat for redis02 .. redis06); if a dump.rdb exists, remember to rm it
[root@iZuf6iq8e7ya9v3ix71k0pZ local]# mkdir redis-cluster
[root@iZuf6iq8e7ya9v3ix71k0pZ local]# cp -r redis/bin redis-cluster/redis01
[root@iZuf6iq8e7ya9v3ix71k0pZ local]#


# Edit redis.conf for each node: change the port and enable cluster mode
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# vim redis01/redis.conf
port 6379  ->  port 7001   (7002 .. 7006 for the other nodes)
cluster-enabled yes
...
...
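# (assumption, not shown in the original excerpt) other cluster-related settings
# commonly set in each redis0X/redis.conf:
#   daemonize yes
#   cluster-config-file nodes-7001.conf   (7002 .. 7006 for the other nodes)
#   cluster-node-timeout 15000
#   appendonly yes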
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# ll
总用量 24
drwxr-xr-x 2 root root 4096 3月 5 23:58 redis01
drwxr-xr-x 2 root root 4096 3月 5 23:57 redis02
drwxr-xr-x 2 root root 4096 3月 5 23:57 redis03
drwxr-xr-x 2 root root 4096 3月 5 23:57 redis04
drwxr-xr-x 2 root root 4096 3月 5 23:57 redis05
drwxr-xr-x 2 root root 4096 3月 5 23:57 redis06

# Create a .sh script to start all nodes
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# vim start-all.sh
cd redis01
./redis-server redis.conf
cd ..
cd redis02
./redis-server redis.conf
cd ..
cd redis03
./redis-server redis.conf
cd ..
cd redis04
./redis-server redis.conf
cd ..
cd redis05
./redis-server redis.conf
cd ..
cd redis06
./redis-server redis.conf
cd ..
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# chmod +x start-all.sh

# Create a .sh script to stop all nodes
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# vim stop-all.sh
cd redis01
./redis-cli -p 7001 shutdown
cd ..
cd redis02
./redis-cli -p 7002 shutdown
cd ..
cd redis03
./redis-cli -p 7003 shutdown
cd ..
cd redis04
./redis-cli -p 7004 shutdown
cd ..
cd redis05
./redis-cli -p 7005 shutdown
cd ..
cd redis06
./redis-cli -p 7006 shutdown
cd ..
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# chmod +x stop-all.sh

# Start the servers
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# ./start-all.sh
8907:C 06 Mar 00:07:10.397 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8907:C 06 Mar 00:07:10.397 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=8907, just started
8907:C 06 Mar 00:07:10.397 # Configuration loaded
8909:C 06 Mar 00:07:10.401 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8909:C 06 Mar 00:07:10.401 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=8909, just started
8909:C 06 Mar 00:07:10.401 # Configuration loaded
8911:C 06 Mar 00:07:10.406 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8911:C 06 Mar 00:07:10.406 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=8911, just started
8911:C 06 Mar 00:07:10.406 # Configuration loaded
8919:C 06 Mar 00:07:10.410 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8919:C 06 Mar 00:07:10.411 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=8919, just started
8919:C 06 Mar 00:07:10.411 # Configuration loaded
8924:C 06 Mar 00:07:10.415 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8924:C 06 Mar 00:07:10.415 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=8924, just started
8924:C 06 Mar 00:07:10.415 # Configuration loaded
8926:C 06 Mar 00:07:10.419 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8926:C 06 Mar 00:07:10.419 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=8926, just started
8926:C 06 Mar 00:07:10.419 # Configuration loaded
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# ps -aux|grep redis
root 8908 0.0 0.2 147300 9612 ? Ssl 00:07 0:00 ./redis-server *:7001 [cluster]
root 8910 0.0 0.2 147300 9608 ? Ssl 00:07 0:00 ./redis-server *:7002 [cluster]
root 8918 0.0 0.2 147300 9608 ? Ssl 00:07 0:00 ./redis-server *:7003 [cluster]
root 8920 0.0 0.2 147300 9612 ? Ssl 00:07 0:00 ./redis-server *:7004 [cluster]
root 8925 0.0 0.2 147300 9608 ? Ssl 00:07 0:00 ./redis-server *:7005 [cluster]
root 8930 0.0 0.2 147300 9608 ? Ssl 00:07 0:00 ./redis-server *:7006 [cluster]
root 8940 0.0 0.0 112664 972 pts/1 S+ 00:07 0:00 grep --color=auto redis
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]#

Install Ruby and the redis gem that the cluster-creation script depends on

https://rubygems.global.ssl.fastly.net/gems/redis-4.0.1.gem

sftp> put /Users/niuhesm/Downloads/redis-4.0.1.gem /root
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# yum install ruby
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# gem install redis-4.0.1.gem
ERROR: Error installing redis-4.0.1.gem:
    redis requires Ruby version >= 2.2.2.

[root@iZuf6iq8e7ya9v3ix71k0pZ ~]#

# To avoid touching the yum repos, upgrade Ruby from source instead
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# yum remove ruby
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# wget https://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.3.tar.gz
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# tar zxvf ruby-2.2.3.tar.gz
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# cd ruby-2.2.3
[root@iZuf6iq8e7ya9v3ix71k0pZ ruby-2.2.3]# ./configure
[root@iZuf6iq8e7ya9v3ix71k0pZ ruby-2.2.3]# make
[root@iZuf6iq8e7ya9v3ix71k0pZ ruby-2.2.3]# sudo make install
[root@iZuf6iq8e7ya9v3ix71k0pZ ruby-2.2.3]# ruby -v
ruby 2.2.3p173 (2015-08-18 revision 51636) [x86_64-linux]
# Install the gem again
[root@iZuf6iq8e7ya9v3ix71k0pZ ~]# gem install redis-4.0.1.gem

Create the cluster with the Ruby script

[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# cd /usr/local/redis/redis-4.0.8/src/
[root@iZuf6iq8e7ya9v3ix71k0pZ src]# cp redis-trib.rb /usr/local/redis-cluster/
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# ./redis-trib.rb create --replicas 1 106.15.191.27:7001 106.15.191.27:7002 106.15.191.27:7003 106.15.191.27:7004 106.15.191.27:7005 106.15.191.27:7006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
106.15.191.27:7001
106.15.191.27:7002
106.15.191.27:7003
Adding replica 106.15.191.27:7005 to 106.15.191.27:7001
Adding replica 106.15.191.27:7006 to 106.15.191.27:7002
Adding replica 106.15.191.27:7004 to 106.15.191.27:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: ec761eb0418892ec5ddee2c9360d09e28174236a 106.15.191.27:7001
slots:0-5460 (5461 slots) master
M: 5ae347530fee2c4569fb2fc9b0c34a9e17d48399 106.15.191.27:7002
slots:5461-10922 (5462 slots) master
M: d1fef0edb0a6efe7b2c25d1628fa01e7aa31200d 106.15.191.27:7003
slots:10923-16383 (5461 slots) master
S: be0615cbdb26b634adeed9290557750e3bda7c5c 106.15.191.27:7004
replicates ec761eb0418892ec5ddee2c9360d09e28174236a
S: caa925f9e8074f7ba23ee9e73d0c1d67eb8d829b 106.15.191.27:7005
replicates 5ae347530fee2c4569fb2fc9b0c34a9e17d48399
S: 274b0e14d2cb0413c026cbb9606ed9b60d44f7fa 106.15.191.27:7006
replicates d1fef0edb0a6efe7b2c25d1628fa01e7aa31200d
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....................................................................................................................................................^C./redis-trib.rb:653:in `sleep': Interrupt
from ./redis-trib.rb:653:in `wait_cluster_join'
from ./redis-trib.rb:1436:in `create_cluster_cmd'
from ./redis-trib.rb:1830:in `<main>'
[root@iZuf6iq8e7ya9v3ix71k0pZ]#

# If the warning "[WARNING] Some slaves are in the same host as their master" appears and the join hangs:
Fix 1: change the bind address in redis.conf to the server's real IP
Fix 2: (acceptable here, since everything runs on one server) create the cluster against 127.0.0.1 instead:
./redis-trib.rb create --replicas 1 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006

# Successful run
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# ./redis-trib.rb create --replicas 1 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7001
127.0.0.1:7002
127.0.0.1:7003
Adding replica 127.0.0.1:7005 to 127.0.0.1:7001
Adding replica 127.0.0.1:7006 to 127.0.0.1:7002
Adding replica 127.0.0.1:7004 to 127.0.0.1:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 8495f96bb7cecaa753af70b0bd76c060d1a69a77 127.0.0.1:7001
slots:0-5460 (5461 slots) master
M: 701c093b7045eedc542f68a1cbfb1c59f1b36c1a 127.0.0.1:7002
slots:5461-10922 (5462 slots) master
M: 4cf1720b03f3db54abb8f0f1eeb743f7af7eacc9 127.0.0.1:7003
slots:10923-16383 (5461 slots) master
S: a1d3258ac24b74d72a5fe6d3a255bf6dbd337684 127.0.0.1:7004
replicates 701c093b7045eedc542f68a1cbfb1c59f1b36c1a
S: 892d5474496613876cbc78ac5f903f39db94e8de 127.0.0.1:7005
replicates 4cf1720b03f3db54abb8f0f1eeb743f7af7eacc9
S: fc2dfc9a44179023f2743c87c0bd7f9387728728 127.0.0.1:7006
replicates 8495f96bb7cecaa753af70b0bd76c060d1a69a77
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 127.0.0.1:7001)
M: 8495f96bb7cecaa753af70b0bd76c060d1a69a77 127.0.0.1:7001
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: a1d3258ac24b74d72a5fe6d3a255bf6dbd337684 127.0.0.1:7004
slots: (0 slots) slave
replicates 701c093b7045eedc542f68a1cbfb1c59f1b36c1a
M: 4cf1720b03f3db54abb8f0f1eeb743f7af7eacc9 127.0.0.1:7003
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: fc2dfc9a44179023f2743c87c0bd7f9387728728 127.0.0.1:7006
slots: (0 slots) slave
replicates 8495f96bb7cecaa753af70b0bd76c060d1a69a77
M: 701c093b7045eedc542f68a1cbfb1c59f1b36c1a 127.0.0.1:7002
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 892d5474496613876cbc78ac5f903f39db94e8de 127.0.0.1:7005
slots: (0 slots) slave
replicates 4cf1720b03f3db54abb8f0f1eeb743f7af7eacc9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]#

# Test
[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# redis01/redis-cli -p 7001 -c
127.0.0.1:7001> set key1 aaa
-> Redirected to slot [9189] located at 127.0.0.1:7002
OK
127.0.0.1:7002> keys *
1) "key1"

# Show cluster info
127.0.0.1:7002> cluster info

# Show node info
127.0.0.1:7002> cluster nodes

Related errors

[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# ./redis-trib.rb create --replicas 1 106.15.191.27:7001 106.15.191.27:7002 106.15.191.27:7003 106.15.191.27:7004 106.15.191.27:7005 106.15.191.27:7006
>>> Creating cluster
[ERR] Node 106.15.191.27:7002 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

Fix: http://blog.csdn.net/vtopqx/article/details/50235737

[root@iZuf6iq8e7ya9v3ix71k0pZ redis-cluster]# ./redis-trib.rb create --replicas 1 106.15.191.27:7001 106.15.191.27:7002 106.15.191.27:7003 106.15.191.27:7004 106.15.191.27:7005 106.15.191.27:7006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
106.15.191.27:7001
106.15.191.27:7002
106.15.191.27:7003
Adding replica 106.15.191.27:7005 to 106.15.191.27:7001
Adding replica 106.15.191.27:7006 to 106.15.191.27:7002
Adding replica 106.15.191.27:7004 to 106.15.191.27:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: f2387ef8a1ff7770623ae9545bea3ad9a632da52 106.15.191.27:7001
slots:0-5460 (5461 slots) master
M: 58ab432ad1e25b9699a2552442e6e1b84d07473c 106.15.191.27:7002
slots:5461-10922 (5462 slots) master
M: 8a8276597cc83b02d39f845f7a0bf51e0a7a27b3 106.15.191.27:7003
slots:10923-16383 (5461 slots) master
S: c292b2833e1c76d0b69da96ebf35d9113262f458 106.15.191.27:7004
replicates 58ab432ad1e25b9699a2552442e6e1b84d07473c
S: 2b94f8ede1a83302a719d6839c7b9ce79669e33f 106.15.191.27:7005
replicates 8a8276597cc83b02d39f845f7a0bf51e0a7a27b3
S: 16e3c1535950a7bc02bb45993a70250d3402db05 106.15.191.27:7006
replicates f2387ef8a1ff7770623ae9545bea3ad9a632da52
Can I set the above configuration? (type 'yes' to accept): yes
/usr/local/lib/ruby/gems/2.2.0/gems/redis-4.0.1/lib/redis/client.rb:119:in `call': ERR Slot 9189 is already busy (Redis::CommandError)
from /usr/local/lib/ruby/gems/2.2.0/gems/redis-4.0.1/lib/redis.rb:2764:in `block in method_missing'
from /usr/local/lib/ruby/gems/2.2.0/gems/redis-4.0.1/lib/redis.rb:45:in `block in synchronize'
from /usr/local/lib/ruby/2.2.0/monitor.rb:211:in `mon_synchronize'
from /usr/local/lib/ruby/gems/2.2.0/gems/redis-4.0.1/lib/redis.rb:45:in `synchronize'
from /usr/local/lib/ruby/gems/2.2.0/gems/redis-4.0.1/lib/redis.rb:2763:in `method_missing'
from ./redis-trib.rb:212:in `flush_node_config'
from ./redis-trib.rb:906:in `block in flush_nodes_config'
from ./redis-trib.rb:905:in `each'
from ./redis-trib.rb:905:in `flush_nodes_config'
from ./redis-trib.rb:1426:in `create_cluster_cmd'
from ./redis-trib.rb:1830:in `<main>'

Fix: delete the stale dump.rdb and nodes-700x.conf files, then recreate the cluster.
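A sketch of the cleanup, assuming the files live inside each redis0X directory as created above:

./stop-all.sh
rm -f redis0*/dump.rdb redis0*/nodes-*.conf
./start-all.sh
# then rerun ./redis-trib.rb create ... as shown above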

Dubbo: the provider registers the wrong IP on startup

Posted on 2018-02-19 | Edited on 2019-02-02 | In serve register discovery
Service configuration
<dubbo:application name="taotao-manager"/>
<dubbo:registry address="zookeeper://106.15.xxx.xxx:2181" />
<dubbo:protocol name="dubbo" port="20881" />
<dubbo:service interface="com.smniuhe.service.ItemService" ref="itemServiceImpl" timeout="300000"/>

Startup is normal

Check the published service
[zk: 106.15.191.27:2181(CONNECTED) 1] ls /dubbo/com.smniuhe.service.ItemService/providers
[dubbo%3A%2F%2F192.168.0.28%3A20880%2Fcom.smniuhe.service.ItemService%3Fanyhost%3Dtrue%26application%3Dtaotao-manager%26dubbo%3D2.8.4
%26generic%3Dfalse%26interface%3Dcom.smniuhe.service.ItemService%26methods%3DgetTbItemById%26pid%3D32659%26revision%3D1.0-
SNAPSHOT%26side%3Dprovider%26timeout%3D300000%26timestamp%3D1518972080865]

It turns out that 192.168.0.28 is the local machine's IP address, not the server's.
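A common workaround (an assumption, not stated in the original post) is to pin the address Dubbo registers, either by setting the host attribute on dubbo:protocol or by mapping the machine's hostname to the public IP in /etc/hosts:

<dubbo:protocol name="dubbo" port="20881" host="106.15.xxx.xxx" />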

ZooKeeper commands: inspecting published service providers and consumers

Posted on 2018-02-18 | Edited on 2019-02-02 | In registration center
[root@iZuf6iq8e7ya9v3ix71k0pZ bin]# pwd
/usr/local/zookeeper/zookeeper-3.4.10/bin
[root@iZuf6iq8e7ya9v3ix71k0pZ bin]# ./zkCli.sh -server 127.0.0.1:2181
Connecting to 127.0.0.1:2181
2018-02-18 21:30:05,213 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2018-02-18 21:30:05,216 [myid:] - INFO [main:Environment@100] - Client environment:host.name=iZuf6iq8e7ya9v3ix71k0pZ
2018-02-18 21:30:05,216 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_111
2018-02-18 21:30:05,218 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2018-02-18 21:30:05,218 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_111/jre
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper/zookeeper-3.4.10/bin/../build/classes:/usr/local/zookeeper/zookeeper-3.4.10/bin/../build/lib/*.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/usr/local/zookeeper/zookeeper-3.4.10/bin/../conf:
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.10.0-514.26.2.el7.x86_64
2018-02-18 21:30:05,219 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2018-02-18 21:30:05,220 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2018-02-18 21:30:05,220 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/usr/local/zookeeper/zookeeper-3.4.10/bin
2018-02-18 21:30:05,221 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@277050dc
Welcome to ZooKeeper!
2018-02-18 21:30:05,243 [myid:] - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2018-02-18 21:30:05,308 [myid:] - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:2181, initiating session
2018-02-18 21:30:05,315 [myid:] - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x161a83a11fc0007, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[dubbo, zookeeper]
[zk: 127.0.0.1:2181(CONNECTED) 1] ls /dubbo
[com.smniuhe.service.ItemService]
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /dubbo/com.smniuhe.service.ItemService/providers
[]
[zk: 127.0.0.1:2181(CONNECTED) 3]

Now I publish the service via the Tomcat Maven plugin (tomcat7:run):

[zk: 127.0.0.1:2181(CONNECTED) 3] ls /dubbo/com.smniuhe.service.ItemService/providers
[dubbo%3A%2F%2F192.168.0.28%3A20880%2Fcom.smniuhe.service.ItemService%3Fanyhost%3Dtrue%26application%3Dtaotao-manager%26dubbo%3D2.8.4%26generic%3Dfalse%26interface%3Dcom.smniuhe.service.ItemService%26methods%3DgetTbItemById%26pid%3D31972%26revision%3D1.0-SNAPSHOT%26side%3Dprovider%26timestamp%3D1518960843164] # the timestamp of this deployment
[zk: 127.0.0.1:2181(CONNECTED) 4] ls /dubbo/com.smniuhe.service.ItemService/

configurators providers
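Consumers appear under a sibling node once a consumer application starts; with the standard Dubbo registry layout (providers, consumers, routers, configurators) they can be listed the same way:

ls /dubbo/com.smniuhe.service.ItemService/consumers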

A simple ZooKeeper registry setup

Posted on 2018-02-18 | Edited on 2019-02-02 | In registration center
ZooKeeper official site
Official site link
