24 February 2012

It's such a great advantage to have your vehicle maintenance documented.

  1. It's very helpful when it comes to selling.
  2. It prevents stretching the intervals between maintenance (forgetting the last time you did an oil change).

My problem is that either I don't have my notebook on hand when I do the maintenance, or I forget, and by the time I get around to it, I can't remember what the odometer was at.

This has inspired my latest pet project, lper100km.com, which lets you not only keep track of maintenance but also track your fuel economy across multiple vehicles from a mobile device. It's currently designed specifically for mobile, so it doesn't look great in a full desktop browser. The site is pretty basic right now, but functional. Features are added as I find time to implement them.

Current features:

  1. Record fill ups and calculate fuel economy.
  2. Record maintenance.
  3. Upload pictures of vehicles.
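The economy figure behind the first feature is just litres burned per 100 km driven. As a quick sketch of the arithmetic (the fill-up numbers here are made up):

```shell
# Fuel economy for one fill-up: litres pumped to refill the tank,
# divided by kilometres driven since the previous fill-up, times 100.
awk 'BEGIN { litres = 45.3; km = 612; printf "%.1f L/100km\n", litres * 100 / km }'
```

So 45.3 L over 612 km works out to about 7.4 L/100km.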

In no particular order, features under consideration are:

  1. Ability to use imperial values set as a user preference.
  2. Attach images to maintenance records.
  3. Multiple maintenance types per record, e.g. oil and air filter in one entry.
  4. Plotting fuel economy, with markers for maintenance on the same graph.
  5. Print view of vehicle records.
  6. Full browser site.
  7. Better domain name?

Feel free to try it out and make suggestions.

Monitoring MySQL Queries
17 February 2012

I'm faced with the challenge of tracking down some seemingly random MySQL connection spikes and subsequent aborted connections. MySQL Enterprise Monitor is already in place, and no other indicators correspond with the connection issues. The plan is to keep a record of all queries against the server for a period of time, so that when the issue happens next we have a history of activity before and throughout the trouble period.

The traffic to the database is captured via a replicated (mirrored) switch port on another host, so as not to impose any additional load on the server. This host will run tcpdump to capture the packets and dump them to a file. Later on, to investigate issues, the Maatkit toolkit (specifically mk-query-digest) will be used to recreate and report on the queries. Then I can look for any anomalies.
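Once a capture file exists, turning it back into a query report is a one-liner. Something along these lines, with an illustrative capture file name (mk-query-digest's --type tcpdump mode parses the text output of tcpdump):

```shell
mk-query-digest --type tcpdump $LOGPATH/packets-120217-103000.log > report.txt
```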

The amount of traffic going to the database is enormous: enough to generate over half a gigabyte of packet dumps in one minute. I wrote a simple script to create a new dump file every x minutes.


echo $$ > $PIDFILE

# PIDFILE, LOGPATH, and CAPTIME are set earlier in the script.
while true; do
    tcpdump -i eth1 port 3306 -s 65535 -x -n -q -tttt \
        > $LOGPATH/packets-$(date +%y%m%d-%H%M%S).log &
    PID=$!
    sleep $CAPTIME
    kill -HUP $PID
done

I then have an hourly cron job that comes through and deletes any log files over two hours old:

find /data/capture/logs -mmin +120 -exec rm {} \;
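For reference, wired into the crontab that hourly cleanup might look like the following (the schedule and path match the description above; the entry itself is a sketch):

```
0 * * * * find /data/capture/logs -mmin +120 -exec rm {} \;
```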

This will give me the history I need. However, I'm fully expecting the parsing of these logs to be extremely tedious. Here's hoping I find something.

And it's Winter
16 January 2012

Winter has finally arrived. That makes it seem like I have been waiting for it to come. I think I was. Sometimes I miss having a proper, cold winter. When snow stays on the ground until April. I want to be able to dig snow forts in drifts with my kids. Lately I've been playing a lot with the kids, building Thomas the Train track routes and bridges. I get disappointed when the track gets messed up; it can take a lot of time to maximize the use of the table top.

We are just back from a two-week Christmas holiday. During the holiday, Julene and I left the kids with our parents and took a holiday-in-a-holiday: a four-day trip to Arizona to have some time to ourselves. We took a Grand Canyon tour, which was amazing, and also did a desert tour where we were able to drive our own Tomcar (an Israeli-made off-road vehicle).

Now we're back and settling in. It took a while, but we seem to be getting back into the swing of things.

I recently had an issue where one of my MySQL slaves repeatedly errored on replication due to key collisions. The replication type was row-based, which is much stricter than statement-based. In fact, had it been statement-based, a lot of these errors wouldn't have presented themselves and the slave would have continued on happily, becoming more and more inconsistent.

Because of the huge dataset and the speed of recovery required, I did not want to rebuild the entire database. I wanted to restore only the couple of tables that were causing issues.

What I wanted to do was:

  1. stop replication on a good slave and the problem slave at the same point
  2. dump the table (from the good slave, obviously)
  3. drop and import the dumped tables on the problem slave
  4. restart replication

Fortunately this is achievable by stopping the slave to be fixed, then a minute or so later the source database, capturing the binlog file and position, and then issuing the following on the target database:

SELECT MASTER_POS_WAIT('binlog.000001', 123456);

At this point, both databases should have had their slave processes halted at the same execution point, and the dump and restore outlined above can be done. This reduced what would have been a 5-hour database copy to 15 minutes. Hopefully this will save someone else some time too.
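Spelled out as SQL, the synchronization step looks something like the sketch below. The binlog coordinates are illustrative and come from SHOW SLAVE STATUS (Relay_Master_Log_File / Exec_Master_Log_Pos) on the stopped problem slave; START SLAVE UNTIL is one way to guarantee the good slave halts at exactly that point:

```sql
-- On the problem slave: halt replication and note where it stopped.
STOP SLAVE;
SHOW SLAVE STATUS\G

-- On the good slave: replay up to exactly that point, then wait for it.
STOP SLAVE;
START SLAVE UNTIL MASTER_LOG_FILE='binlog.000001', MASTER_LOG_POS=123456;
SELECT MASTER_POS_WAIT('binlog.000001', 123456);

-- Both slaves are now stopped at the same execution point; dump the
-- affected tables from the good slave, import them on the problem
-- slave, and START SLAVE on both.
```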

