In the following example, we use an AdminAPI sandbox deployment to complete the tasks below and deploy an InnoDB Cluster with MySQL Router.
Deploying and using local MySQL sandbox instances lets you test functionality locally before deploying to production servers. AdminAPI has built-in functionality for creating sandbox instances that are preconfigured to work with InnoDB Cluster, InnoDB ClusterSet, and InnoDB ReplicaSet in a local deployment scenario.
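The whole sandbox lifecycle is driven from MySQL Shell. As a quick orientation (a sketch, not an exhaustive list), these are the AdminAPI sandbox functions used later in this example, each taking the sandbox's port number:

```
mysql-js> dba.deploySandboxInstance(3310)  // create and start a new sandbox on port 3310
mysql-js> dba.killSandboxInstance(3310)    // stop the sandbox process abruptly (simulates a failure)
mysql-js> dba.startSandboxInstance(3310)   // restart a previously deployed sandbox
```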
This example includes the following sections.
Sandbox instances are only suitable for deploying and running on your local machine for testing purposes.
Install the following components:
MySQL Server: For more information, see Installing MySQL.
MySQL Shell: For more information, see Installing MySQL Shell.
MySQL Router: For more information, see Installing MySQL Router.
To provide tolerance for one failure, create an InnoDB Cluster with three instances. In this example, we use three sandbox instances running on the same machine. In a real-world setup, those three instances would run on different hosts on the network.
-
To start MySQL Shell, issue:
> mysqlsh
-
To create and start MySQL sandbox instances, use the dba.deploySandboxInstance() function, which is part of the X AdminAPI. Issue the following three statements in MySQL Shell, entering a root password for each instance:
mysql-js> dba.deploySandboxInstance(3310)
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
Note: Use the same root password for all instances.
To create the InnoDB Cluster, complete the following steps:
-
Connect to the MySQL instance that you want to be the primary instance in the InnoDB Cluster by issuing:
mysql-js> shell.connect('root@localhost:3310')
-
Issue the dba.createCluster() command to create the cluster, using the assigned variable cluster to hold the output value:
mysql-js> cluster = dba.createCluster('devCluster')
This command outputs:
A new InnoDB cluster will be created on instance 'localhost:3310'.

Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters
for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33101'.
Use the localAddress option to override.

Creating InnoDB cluster 'devCluster' on '127.0.0.1:3310'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

<Cluster:devCluster>
-
Verify that creation was successful by using the cluster.status() function with the assigned variable cluster:
mysql-js> cluster.status()
The status is output as follows:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}
-
The cluster is up and running but is not yet able to tolerate a failure. Add another MySQL Server instance to the cluster using the <Cluster>.addInstance() function:
mysql-js> cluster.addInstance('root@localhost:3320')

NOTE: The target instance '127.0.0.1:3320' has not been pre-provisioned (GTID set is empty).
The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic
clone provisioning, which will completely overwrite the state of '127.0.0.1:3320'
with a physical snapshot from an existing cluster member. To use this method by
default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever
executed in the cluster were done with GTIDs enabled, there are no purged transactions
and the new instance contains the same GTID set as the cluster or a subset of it. To
use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
-
Select a recovery method at the prompt. The options are:
Clone: Clones the instance that you are adding to the primary cluster, deleting any transactions the instance contains. The MySQL Clone plugin is installed automatically. Select the Clone option if you are adding an instance that is empty (has not processed any transactions) or that contains transactions you do not want to keep.
Incremental recovery: Recovers all transactions processed by the cluster to the joining instance using asynchronous replication. Incremental recovery is appropriate if you are sure all updates ever processed by the cluster were done with global transaction IDs (GTIDs) enabled, there are no purged transactions, and the new instance contains the same GTID set as the cluster, or a subset of it.
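The interactive prompt can also be avoided, which is useful when scripting the setup: the recovery method can be passed as an option to addInstance(). A minimal sketch, using the same sandbox ports as this example:

```
// pass recoveryMethod explicitly so the Shell does not prompt
mysql-js> cluster.addInstance('root@localhost:3320', {recoveryMethod: 'clone'})
```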
In this example, select C for Clone:
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters
for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33201'.
Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring
and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a while,
you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)

State recovery already finished for '127.0.0.1:3320'

The instance '127.0.0.1:3320' was successfully added to the cluster.
-
Add the third instance that you created, again selecting C for the Clone recovery method:
mysql-js> cluster.addInstance('root@localhost:3330')
-
Check the status of the cluster by issuing:
mysql-js> cluster.status()
This outputs the following:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}
The setup of the InnoDB Cluster was successful!
The cluster can now tolerate up to one failure. Quit MySQL Shell by issuing:
\q
After setting up the MySQL InnoDB Cluster, test the high availability of the cluster. To do this, use MySQL Router: if one instance fails, MySQL Router updates its routing configuration automatically and ensures that new connections are routed to the remaining instances.
Before MySQL Router can perform routing operations, make it aware of the new InnoDB Cluster. To do this, use the --bootstrap option and point MySQL Router at the cluster's current R/W MySQL Server instance (the primary instance). Use the -d option to store the router's configuration in a folder named mysql-router.
-
Open a terminal in your home directory.
-
On Linux systems, issue:
[demo-user@localhost]$> mysqlrouter --bootstrap root@localhost:3310 -d mysqlrouter
-
On Windows systems, issue:
C:\Users\demo-user> mysqlrouter --bootstrap root@localhost:3310 -d mysql-router
MySQL Router then prints the TCP/IP ports that it will use to route connections. For more information, see Deploying MySQL Router.
-
After MySQL Router is configured successfully, start it in the background:
-
On Windows systems, use the start /B command and point the router at the configuration file generated by the --bootstrap option:
C:\> start /B mysqlrouter -c %HOMEPATH%\mysql-router\mysqlrouter.conf
-
Alternatively, invoke the Windows PowerShell script in the previously created mysqlrouter folder:
\mysqlrouter\start.ps1
-
On Linux systems that use systemd, issue:
sudo systemctl start mysqlrouter.service
-
Alternatively, on Linux systems, invoke the shell script in the previously created mysqlrouter folder:
/mysqlrouter/start.sh
Now that both the InnoDB Cluster and MySQL Router are running, test the cluster setup.
Instead of connecting directly to one of the MySQL Server instances, connect through MySQL Router.
-
Issue the following connection command:
> mysqlsh root@localhost:6446
Provide the root password to connect to the InnoDB Cluster.
-
Check the status of the InnoDB Cluster by creating a variable cluster and assigning it the value of the dba.getCluster() operation:
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.status()
-
Switch to SQL mode:
mysql-js> \sql
-
Query the port on which the instance is running by issuing:
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.0007 sec)
-
Switch back to JavaScript mode:
mysql-js> \js
-
Stop the MySQL Server instance using the dba.killSandboxInstance() function:
dba.killSandboxInstance(3310)

Killing MySQL instance...

Instance localhost:3310 successfully killed.
-
Check whether MySQL Router routes traffic correctly by running the SELECT @@port command against the instance you just stopped and checking the result:
-
Switch to SQL mode:
mysql-js> \sql
-
Check the MySQL port:
mysql-sql> SELECT @@port;
-
An error is returned:
ERROR: 2013 (HY000): Lost connection to MySQL server during query
This error indicates that the instance that was running on port 3310 is no longer running.
-
Check the port again:
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3320 |
+--------+
This output shows that the instance running on port 3320 has been promoted to be the new R/W primary instance.
-
Return to JavaScript mode and check the status of the cluster:
mysql-js> cluster.status()
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3320",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "shellConnectError": "MySQL Error 2003: Could not open connection to '127.0.0.1:3310': Can't connect to MySQL server on '127.0.0.1:3310' (10061)",
                "status": "(MISSING)"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3320"
}
The MySQL Server instance that was formerly running on port 3310 is now MISSING.
-
Restart this instance using the dba.startSandboxInstance() operation with the port number:
mysql-js> dba.startSandboxInstance(3310)
-
Checking the status of the cluster shows that the instance has been restored as active in the cluster, but as a SECONDARY member:
mysql-js> cluster.status()
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3320",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3320"
}
All instances are reachable again, and the cluster can once more tolerate one failure.
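When you are finished experimenting, the sandboxes can be shut down cleanly and removed with the corresponding AdminAPI sandbox functions. A short sketch for one port (repeat for 3320 and 3330):

```
mysql-js> dba.stopSandboxInstance(3310)    // clean shutdown of the sandbox
mysql-js> dba.deleteSandboxInstance(3310)  // remove the sandbox's files from disk
```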