mysql monitoring




Getting "too many connections"?

Try this:
mysql
show processlist;

or 

SHOW FULL PROCESSLIST;
Either gives the connection stats (FULL shows the complete query text instead of a truncated version).

Use:
kill ID;
to do the cleanup (ID being the Id column from the processlist)!
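
For example (the Id and the rest of the row here are made up - use whatever SHOW PROCESSLIST actually reports):

mysql> SHOW PROCESSLIST;
+----+------+-----------+------+---------+------+-------+------+
| Id | User | Host      | db   | Command | Time | State | Info |
+----+------+-----------+------+---------+------+-------+------+
| 51 | app  | localhost | shop | Sleep   | 3600 |       | NULL |
+----+------+-----------+------+---------+------+-------+------+
mysql> KILL 51;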

If they are sleeping connections and you need to kill many of them, give this script a shot: killsleepers.py
Of course, change the user name and password for your instance. Also, make sure this is what you want, as it will kill any sleeping connection in the mysql processlist (even ones you want on a live server)....
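
If you would rather not run a script, roughly the same thing can be done by hand in SQL - generate the KILL statements and paste them back into the mysql client (the 300-second threshold is purely illustrative):

SELECT CONCAT('KILL ', id, ';')
FROM information_schema.PROCESSLIST
WHERE command = 'Sleep' AND time > 300;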
To see the binary logs (which record writes only):
show binlog events from 201 limit 2;

Or use:
mysqlbinlog mysql-bin.lognumber

to display the contents of a binary log file.
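
For example (the path and file name are illustrative - list the real ones with SHOW BINARY LOGS, or look in your datadir):

SHOW BINARY LOGS;

then, from the shell:

mysqlbinlog /var/lib/mysql/mysql-bin.000042 | less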

Alternatively, if you have connections held open (for up to 8 hours - the MySQL default wait_timeout of 28800 seconds) - check this out:
You might want to set
mysql.allow_persistent = Off

- in /etc/php5/apache2/php.ini

Restart apache2 and mysql. This is usually due to old PHP code using mysql_pconnect() instead of mysql_connect().
It beats finding, changing, and re-writing the old code :-)
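
It can also help to check how long MySQL lets idle connections sleep, and shorten it so they get reaped sooner. The 600 below is just an illustrative value, and SET GLOBAL only affects new connections (put it in my.cnf to make it stick):

SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 600;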

Are you getting (errno: 24 - too many open files) during mysqldumps?
You might want to add --lock-tables=false to the mysqldump command, or up the open_files_limit in the my.cnf
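
Something along these lines (the database name and the limit are illustrative - pick a limit comfortably bigger than the number of tables/files being dumped):

mysqldump --lock-tables=false somedb > somedb.sql

and in my.cnf:

[mysqld]
open_files_limit = 8192

then restart mysql for the my.cnf change to take effect.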

Have badly written PHP scripts and find they run out of memory when pulling big result sets out of MySQL?
If you can't get the developers to fix it, you can always do the bad thing of upping the limits in php.ini for the Apache config:

/etc/php5/apache2/php.ini
memory_limit =  XXXM

Where XXX is some sufficiently large number of megabytes.

Probably bad longer term, but fun for the short term.
Don't forget to restart Apache.
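
A quick way to see what it is currently set to and to bounce Apache afterwards (the 512M is purely an illustrative number):

grep memory_limit /etc/php5/apache2/php.ini
(edit it to something like memory_limit = 512M)
service apache2 restart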


Need to log slow queries?

1. create /var/log/mysql/mysql-slow.log (and chown it to the mysql user with sane perms - see the shell sketch after this list).
  1a. -> run mysql (as your admin user).

2. SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';

3. SET GLOBAL long_query_time = 1.000000;
 3a. Set it low enough to catch what you need, depending on the type of query you are hunting.
     e.g. if a web page loads via 10 lookups averaging 1 second in total, you will need sub-second granularity.

4. SET GLOBAL slow_query_log = ON;

Enjoy - or crank long_query_time > 1.000000 to shut it up.
(let it run for a long time - but not so long that the slow log file fills the disk up).

5. To catch "other" queries - ones that might not be slow, but are not using indexes (i.e. are doing table scans):
   SET GLOBAL log_queries_not_using_indexes = ON;
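
Shell sketch for step 1 (the mysql:adm ownership is a typical Debian-ish guess - match whatever user your mysqld actually runs as):

touch /var/log/mysql/mysql-slow.log
chown mysql:adm /var/log/mysql/mysql-slow.log
chmod 640 /var/log/mysql/mysql-slow.log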

check the top 10 offenders (-s c sorts by count, -t 10 keeps the top 10, -a keeps literal values instead of abstracting them) using:

mysqldumpslow -a -s c -t 10 /var/log/mysql/mysql-slow.log

This all goes away on restart (unless you put it in the MySQL config file - see the snippet below).
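
If you do want it to survive a restart, something like this in my.cnf should do it on MySQL 5.1 and later (paths and the 1-second threshold mirror the steps above):

[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/mysql-slow.log
long_query_time               = 1
log_queries_not_using_indexes = 1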


Want to log all the queries?
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+-------------------------+
| Variable_name    | Value                   |
+------------------+-------------------------+
| general_log      | OFF                     |
| general_log_file | /var/lib/mysql/some.log |
+------------------+-------------------------+
2 rows in set (0.01 sec)

mysql> SET GLOBAL general_log = 'ON';
(check yer logs)... Don't forget to turn it off again as it is probably not in your logrotate scripts!
- nice, it seems mysql creates the file if it does not exist, provided it has permission to write in the location specified!
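
To point it somewhere saner and then turn it off again when you are done (the path is just an example):

mysql> SET GLOBAL general_log_file = '/var/log/mysql/all-queries.log';
mysql> SET GLOBAL general_log = 'OFF';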


