Cloning a MySQL Slave
09 November 2011

Cloning a MySQL slave is typically very straightforward.

  1. Execute STOP SLAVE on the donor slave and capture the slave status information (see the sketch after this list)
  2. Stop MySQL on the donor
  3. Copy the database files from the donor to the new slave
  4. Start MySQL on the new slave
  5. Execute the CHANGE MASTER statement to start the new slave's replication process
  6. Start MySQL on the donor and allow replication to catch up
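
On the MySQL side, steps 1 and 5 boil down to a few statements. A minimal sketch (the replication password is a placeholder, and the log file and position come from the donor's slave status, which is the whole point of this post):

-- On the donor slave: stop replication and record where it stopped
STOP SLAVE;
SHOW SLAVE STATUS\G

-- On the new slave, once the copied data files are in place and MySQL is running:
CHANGE MASTER TO
  MASTER_HOST='10.10.10.10',
  MASTER_USER='repl_user',
  MASTER_PASSWORD='repl_pass',      -- placeholder
  MASTER_LOG_FILE='binlog.002643',  -- from the donor's slave status; see below
  MASTER_LOG_POS=887594041;         -- from the donor's slave status; see below
START SLAVE;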

Simple, right? It is, if you don't run into the scenario I managed to hit. SHOW SLAVE STATUS gives you a lot of information, like this:

                  Master_Host: 10.10.10.10
                  Master_User: repl_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.002644
          Read_Master_Log_Pos: 1015419943
               Relay_Log_File: relay-log.000257
                Relay_Log_Pos: 68175060
        Relay_Master_Log_File: binlog.002643
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 887594041
              Relay_Log_Space: 2448352803

This information is then fed into a CHANGE MASTER statement. The most important values here are MASTER_LOG_FILE and MASTER_LOG_POS, which tell the slave thread where to start replicating from. This will be the point where the donor slave was stopped before the copy. Now, you'd think the master log file would correspond to the Master_Log_File value from SHOW SLAVE STATUS. It doesn't. You want Relay_Master_Log_File. Most of the time the two are the same, as long as replication isn't lagging. But even when the slave isn't behind, there's a small chance you'll catch it just as the IO thread is fetching the next binlog file, and the two values will differ. Use the file you think is correct and you'll set replication to start too far ahead. If you're "lucky" like me, the position won't exist in the binlog you set, and you'll get a replication IO error (in my case, 1236, "Exceeded max_allowed_packet") instructing you to raise max_allowed_packet on the master, which is misleading in this situation.

The CHANGE MASTER statement for the above slave status should be:

CHANGE MASTER TO
MASTER_LOG_FILE='binlog.002643',
MASTER_LOG_POS=887594041;

The log file is the Relay_Master_Log_File value and the log position is the Exec_Master_Log_Pos value.
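
After running that, start the slave and confirm replication resumes cleanly (a quick sanity check):

START SLAVE;

-- Slave_IO_Running and Slave_SQL_Running should both report Yes,
-- and Seconds_Behind_Master should be counting down
SHOW SLAVE STATUS\G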

Hopefully this will keep you from bludgeoning your forehead on your desk as I nearly did.

Dumping MySQL
07 October 2011

My task was to dump our database at work and build a new database cluster. Now, our database isn't trivial. It's large. Nearing a terabyte large. A mysqldump took 14 hours, and I figured the subsequent import would take double that at most, so I decided to go ahead without any optimization of the process. It should fit into the allowed time frame, and I had a lot of other work to do.

I checked the import before going to bed two days later, and it was finally nearing the end. The following morning, however, when the import was somewhere around the 60-hour mark, it died on an InnoDB deadlock error.

Now for the redo. This time I optimized the process. Instead of one large dump file, I broke the export into one file per table with this handy two-step job:

1. Get the list of tables (the -N flag suppresses the column-name header so it doesn't end up in the list)
mysql -N -u user -ppasswd dbname -e "show tables" > list_of_tables.txt

2. Iterate through the table list and dump each table to its own file
cat list_of_tables.txt | while read -r line; do mysqldump -h dbhost -u user -ppasswd dbname "$line" > "$line".sql; done

I had done this in a single step previously, but in this case there were a few tables I wanted to exclude, so I dumped the table list to a file first so I could edit it. For even more speed, you could break the list into multiple files and run parallel processes, as sketched below. I let it run overnight, so the single process was fine for me.
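
If you do want the parallel dump, something like this would work (a sketch of that approach, not what I ran; it splits the list into four chunks with GNU split and runs one dump loop per chunk):

#!/bin/bash
# Split the table list into 4 roughly equal chunks without breaking lines (GNU split)
split -n l/4 list_of_tables.txt chunk_

# Run one mysqldump loop per chunk in the background
for chunk in chunk_*; do
    while read -r table; do
        mysqldump -h dbhost -u user -ppasswd dbname "$table" > "$table".sql
    done < "$chunk" &
done
wait  # block until all the loops finish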

Fortunately, I had run this dump the night before just in case of a failure, so I have all the SQL ready to go and can move straight to the import. This time I will split the tables into separate parallel jobs, since the import is CPU-bound and I want to get more processors involved. Break the table list into multiple text files, being sure to distribute the largest tables evenly; one way to do that is sketched below. You can then start four mysql import processes, feeding in the table names and appending ".sql" to them.
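
Here is one way to do that split (a sketch; it assumes the per-table dump files are already sitting in /path/to/sql): list the dumps largest first, then deal the table names round-robin into four lists so each job gets a mix of big and small tables.

# ls -S sorts by file size, largest first; sed strips the path and .sql extension;
# awk deals the names into table_list1.txt through table_list4.txt
ls -S /path/to/sql/*.sql \
  | sed 's#.*/##; s/\.sql$//' \
  | awk '{ print > ("table_list" (NR % 4 + 1) ".txt") }'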

I'm scripting this so I can track the progress easily:

#!/bin/bash
FILE=$1  # file listing tables, passed in as an argument
HOST=dbhost
USER=user
PASSWD=passwd
DB=dbname
SQL_PATH=/path/to/sql

# Loop through the table names listed in the file
if [ -f "$FILE" ]
then
    echo "restoring from $FILE"

    while read -r f
    do
        echo "restoring $f"
        mysql -h "$HOST" -u "$USER" -p"$PASSWD" "$DB" < "$SQL_PATH/$f.sql"
    done < "$FILE"
fi

Then start four of these with:
./db_import.sh table_list1.txt > results1.txt &
./db_import.sh table_list2.txt > results2.txt &
./db_import.sh table_list3.txt > results3.txt &
./db_import.sh table_list4.txt > results4.txt &

Tail the result files to track the import progress. I've learned my lesson. Smaller files. More processes. Now if I have a failure partway through, I can deduce which table(s) failed and continue on.
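
That tailing is a single command (using the result-file names above):

tail -f results*.txt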

Sprockets
08 September 2011

Finally got Dan's motorcycle back together with new sprockets and chain. The front one was definitely due for replacement.

Note the bent teeth.

DIY Lightbox
01 September 2011

I've been meaning for a while to try making a photography lightbox and finally got around to it. I found some white plastic sheets in the basement that I used to build the box, simply duct-taping them together.

Here are a couple shots. I need to get a couple more lights to brighten it up a bit. I'm also thinking of building a flash diffuser.
