How to Build a MySQL Cluster on Debian-Based Systems


About the MySQL Cluster

Each MySQL Cluster host running an SQL node must have a MySQL server binary installed. Management nodes and data nodes do not need the MySQL server binary, but management nodes require the management server daemon (ndb_mgmd) and data nodes require the data node daemon (ndbd). It is also a good idea to install the management client (ndb_mgm) on the management server host. This section covers the steps necessary to install the correct binaries for each type of Cluster node. This may seem confusing at first, but it will make sense as we set up each node.
In this example, we will use 5 Debian Lenny servers. You should start with a clean operating system installation for this tutorial; if you've already tried installing MySQL packages, leftover packages and configuration files may cause conflicts. All steps below should be performed as root.
The 5 servers to be used in this tutorial are as follows:
mysqlmanagement.example.com: 192.168.1.10 [ mysqlmanagement ]
mysqlnode1.example.com:  192.168.1.20 [ mysqlnode1 ]
mysqlnode2.example.com:  192.168.1.30 [ mysqlnode2 ]
mysqldatanode1.example.com:  192.168.1.40 [ mysqldatanode1 ]
mysqldatanode2.example.com:  192.168.1.50 [ mysqldatanode2 ]
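If these hostnames are not in DNS, you can add them to /etc/hosts on each of the 5 servers. A minimal sketch based on the addresses above (adjust to your own network):
192.168.1.10  mysqlmanagement.example.com  mysqlmanagement
192.168.1.20  mysqlnode1.example.com       mysqlnode1
192.168.1.30  mysqlnode2.example.com       mysqlnode2
192.168.1.40  mysqldatanode1.example.com   mysqldatanode1
192.168.1.50  mysqldatanode2.example.com   mysqldatanode2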

MySQL Cluster 7.0 Management Server Installation [mysqlmanagement]

Visit http://dev.mysql.com/downloads/cluster, locate the Linux x86 package, and copy the URL of the "Download" link. Your exact version may vary.
cd /var/tmp
wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/mysql-cluster-...
tar -zxvf mysql-cluster-gpl-7.0.6-linux-i686-glibc23.tar.gz
cd mysql-cluster-gpl...
cp bin/ndb_mgm* /usr/local/bin
After copying the files, we can delete the tarball and the extracted directory since we won't need them anymore.
cd /var/tmp
rm -r mysql-cluster*
Change to the bin directory and adjust permissions on the ndb_mgm files
cd /usr/local/bin
chmod +x ndb_mgm*
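To verify the binaries are in place and executable, you can print their versions (the exact output depends on the release you downloaded):
ndb_mgm -V
ndb_mgmd -V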
Create a lib directory and a config file for the cluster manager
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
nano config.ini
Add the following default options to the configuration file:
 
# Options affecting ndbd processes on all data nodes:
[ndbd default]
NoOfReplicas=2    # Number of replicas (the number of data nodes must be a multiple of this value)
DataMemory=256M   # How much memory to allocate for data storage
IndexMemory=64M   # How much memory to allocate for index storage
                  # Since the example "world" database takes up
                  # only about 500KB, these values are more than
                  # enough for this example Cluster setup.
  
[tcp default]
#portnumber=2202  # This is the default; however, you can use any port that is free
                  # for all the hosts in the cluster
                  # Note: It is recommended that you do not specify the port
                  # number at all and allow the default value to be used instead

# Management process options  
[ndb_mgmd]
hostname=192.168.1.10  # Hostname or IP address of management node
datadir=/var/lib/mysql-cluster  # Directory for management node log files
 
# First Data Node (ID=2 because the management node automatically obtains ID=1)
[ndbd]
Id=2
hostname=192.168.1.40
datadir=/usr/local/mysql/data

# Second Data Node
[ndbd]
Id=3
hostname=192.168.1.50
datadir=/usr/local/mysql/data

# MySQLD Processing Nodes (workhorses) - Defining 4 slots will allow 4 SQL nodes to join
# the cluster from any location on the network. We use 4 because our data nodes also
# run mysqld, so we are going to use them for query processing as well.
[mysqld]
[mysqld]
[mysqld]
[mysqld]
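A quick sanity check on the layout above: the number of data nodes divided by NoOfReplicas gives the number of node groups. Here, 2 data nodes / 2 replicas = 1 node group, so every row is stored on both data nodes and the cluster can survive the loss of either one.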
Start the Cluster Management Server by loading the config.ini file for the first time
ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini
The cluster management server is now running. We will leave this server alone for now until we start joining data and SQL nodes to the cluster.
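To confirm the management daemon is up and listening on its default port (1186), you can check with standard tools:
ps aux | grep ndb_mgmd
netstat -lnp | grep 1186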

MySQL 5.1 Installation [mysqlnode1 | mysqlnode2 | mysqldatanode1 | mysqldatanode2]

Visit http://dev.mysql.com/downloads/cluster/7.0.html, locate the Linux x86 package, and copy the URL of the "Download" link. Your exact version may vary. Complete the following steps on all 4 nodes (data and SQL).
Add the user and group mysql
groupadd mysql
useradd -g mysql mysql
cd /var/tmp
Download the binary with wget, pasting the URL from above
wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.0/mysql-cluster...
Extract the package into /usr/local, then delete the tar.gz left behind
tar -C /usr/local -xzvf mysql-VERSION-OS.tar.gz
rm -f mysql-VERSION-OS.tar.gz
Rename the extracted folder to mysql. This is where the mysql binary is installed.
cd /usr/local
mv mysql-VERSION-OS mysql
Change directories and adjust permissions
cd mysql
chown -R mysql .
chgrp -R mysql .
Initialize the MySQL data directory (this creates the system tables) and adjust permissions after the install
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
chgrp -R mysql .
Copy the mysql startup file and adjust permissions
cp support-files/mysql.server /etc/init.d/mysql
chmod +x /etc/init.d/mysql
Set mysql to start on boot
cd /etc/init.d/
update-rc.d mysql defaults
Add the MySQL bin directory to your PATH so you can execute mysql commands without typing the full path:
export PATH="$PATH:/usr/local/mysql/bin"
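The export above only lasts for the current shell session. To make it permanent, you can append it to root's shell profile (assuming bash, the Debian default):
echo 'export PATH="$PATH:/usr/local/mysql/bin"' >> /root/.bashrc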
Create the my.cnf file (MySQL configuration file)
nano /etc/my.cnf
For all 4 MySQL nodes, my.cnf should look like this:
[mysqld]
ndbcluster
ndb-connectstring='host=192.168.1.10'    # IP address of the management server

[mysql_cluster]
ndb-connectstring='host=192.168.1.10'    # IP address of the management server
Create the mysql lib directory on all 4 nodes
mkdir /var/lib/mysql
Set the MySQL root password on the data nodes [ mysqldatanode1 | mysqldatanode2 ]. Note that mysqladmin needs a running server; if MySQL is not running yet on these nodes, start it first with /etc/init.d/mysql start.
mysqladmin -u root password YOURSTRONGROOTPASSWORD
On [ mysqldatanode1 ], run this command to start the data node daemon, which connects to the management server that is already running from before. Note: Only use "--initial" the very first time you start ndbd, or after config.ini has changed on the management server; it erases the data node's local cluster files.
cd /usr/local/mysql/bin
./ndbd --initial
On [ mysqldatanode2 ], run this command.
cd /usr/local/mysql/bin
./ndbd --initial
The SQL nodes [ mysqlnode1 | mysqlnode2 ] do not run ndbd. They are not defined in [ndbd] sections in config.ini; they will join the cluster as API nodes once MySQL is started on them later.
The output should look something like this on each data node:
[ndbd] INFO -- Configuration fetched from '192.168.1.10:1186'
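You can verify that ndbd stays running on each data node:
ps aux | grep ndbd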

Test the management server to see if the cluster is running

On the management server, test to see if the cluster is running and the nodes have connected to the management server. It may take a few minutes to connect.
ndb_mgm
ndb_mgm> show;
Once MySQL has been started on all 4 nodes (next step), the output should look like this; until then, the [mysqld(API)] slots show as "not connected, accepting connect from any host":
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.1.50  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0)
id=2 @192.168.1.40  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.10  (mysql-5.1.34 ndb-7.0.6)

[mysqld(API)] 4 node(s)
id=4  @192.168.1.40  (mysql-5.1.34 ndb-7.0.6)
id=5  @192.168.1.50  (mysql-5.1.34 ndb-7.0.6)
id=6  @192.168.1.20  (mysql-5.1.34 ndb-7.0.6)
id=7  @192.168.1.30  (mysql-5.1.34 ndb-7.0.6)
If you see that the management node and both data nodes are connected, then things are OK and you can start MySQL on all of your nodes.
Start MySQL on all 4 nodes
/etc/init.d/mysql start
If not, then repeat the steps and ensure you didn't miss anything. Type quit to leave the management client.
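Once MySQL is running on all 4 nodes, you can re-run the check non-interactively from the management server, and confirm on any SQL node that the NDB engine is enabled (look for ndbcluster with Support set to YES):
ndb_mgm -e show
mysql -u root -p -e "SHOW ENGINES;"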

Enter some test data into your cluster

On [ mysqldatanode1 ], create a table and enter some data
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (I INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable VALUES (1);
SELECT * FROM testtable;
On [ mysqldatanode2 ], see if the database and the data were replicated. The CREATE DATABASE is needed because only the NDB tables themselves live in the cluster; the database must be created on each SQL node, after which the table is discovered automatically.
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;
If you can use the database and retrieve the data, your replication was a success. If not, retrace all steps above and try again.
Further testing of data. On [ mysqldatanode2 ], insert another row
mysql -u root -p
USE mysqlclustertest;
INSERT INTO testtable VALUES (2);
SELECT * FROM testtable;
Now go back to [ mysqldatanode1 ] and see if the value was replicated
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
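From the management server, you can also see how much of the configured DataMemory and IndexMemory the test data consumes:
ndb_mgm -e "all report memoryusage"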
