Zimbra: How to solve InnoDB: Error: Column last_update in table “mysql”.”innodb_table_stats”

27th July 2017 - Written in: zimbra

Hello everyone, for the last couple of days I’ve been receiving notifications from my monitoring systems about disk issues on one of my Zimbra Servers. The system is an old one that started on Zimbra Collaboration 8.6 and has since been upgraded to 8.7, 8.7.1, 8.7.6 and finally 8.7.10.

Finding the disk issue

I started with the basics: which directory or file was eating my disk space without my knowledge? To find out, I used the commands df and du, both always handy:

Checking that my disk was 100% full

df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       2.0G  4.0K  2.0G   1% /dev
tmpfs                      396M  356K  395M   1% /run
/dev/disk/by-label/DOROOT   59G   56G  4.0K 100% /
none                       4.0K     0  4.0K   0% /sys/fs/cgroup
none                       5.0M     0  5.0M   0% /run/lock
none                       2.0G     0  2.0G   0% /run/shm
none                       100M     0  100M   0% /run/user

Checking which folder in Zimbra was using all the space, note the log folder:

du -sh /opt/zimbra/*
4.0K	/opt/zimbra/accountds.sh
4.0K	/opt/zimbra/accounts.ss
0	/opt/zimbra/altermime
0	/opt/zimbra/amavisd
0	/opt/zimbra/aspell
4.0K	/opt/zimbra/backup
0	/opt/zimbra/bdb
732K	/opt/zimbra/bin
0	/opt/zimbra/cbpolicyd
0	/opt/zimbra/clamav
642M	/opt/zimbra/common
8.0M	/opt/zimbra/conf
4.0K	/opt/zimbra/config.13612
4.0K	/opt/zimbra/config.13822
4.0K	/opt/zimbra/config.27647
4.0K	/opt/zimbra/config.28214
12K	/opt/zimbra/contrib
172K	/opt/zimbra/convertd
0	/opt/zimbra/curl
0	/opt/zimbra/cyrus-sasl
788M	/opt/zimbra/data
1.4G	/opt/zimbra/db
2.7M	/opt/zimbra/docs
0	/opt/zimbra/dspam
1.3M	/opt/zimbra/extensions-extra
56K	/opt/zimbra/extensions-network-extra
4.0K	/opt/zimbra/fbqueue
0	/opt/zimbra/heimdal
0	/opt/zimbra/httpd
34M	/opt/zimbra/index
0	/opt/zimbra/jetty
3.4M	/opt/zimbra/jetty-distribution-9.1.5.v20140505
633M	/opt/zimbra/jetty-distribution-9.3.5.v20151012
0	/opt/zimbra/keyview
65M	/opt/zimbra/keyview-10.13.0.0
124M	/opt/zimbra/lib
2.4M	/opt/zimbra/libexec
0	/opt/zimbra/libtool
48G	/opt/zimbra/log
274M	/opt/zimbra/logger
0	/opt/zimbra/mailboxd
0	/opt/zimbra/mariadb
0	/opt/zimbra/memcached
4.0K	/opt/zimbra/mta
0	/opt/zimbra/mysql
0	/opt/zimbra/net-snmp
20K	/opt/zimbra/net-snmp-5.7.2.1
0	/opt/zimbra/nginx
0	/opt/zimbra/opendkim
0	/opt/zimbra/openldap
0	/opt/zimbra/openssl
4.0K	/opt/zimbra/packages
0	/opt/zimbra/pflogsumm
0	/opt/zimbra/postfix
24K	/opt/zimbra/postfix-2.11.1.2z
3.9M	/opt/zimbra/redolog
0	/opt/zimbra/rsync
4.0K	/opt/zimbra/services
0	/opt/zimbra/snmp
740K	/opt/zimbra/ssl
233M	/opt/zimbra/store
0	/opt/zimbra/tcmalloc
0	/opt/zimbra/unbound
74M	/opt/zimbra/var
0	/opt/zimbra/zeromq
4.0K	/opt/zimbra/zimbramon
4.5M	/opt/zimbra/zimlets
38M	/opt/zimbra/zimlets-deployed
11M	/opt/zimbra/zimlets-install
1.7M	/opt/zimbra/zimlets-network
1.5G	/opt/zimbra/zmstat

Checking which logs inside that folder were eating all the disk space, note the mysql_error.log:

root@mail:~# du -sh /opt/zimbra/log/*
2.7M	/opt/zimbra/log/access_log.2017-07-27
0	/opt/zimbra/log/activity.log
4.0K	/opt/zimbra/log/amavisd.pid
4.0K	/opt/zimbra/log/amavis-mc.pid
44K	/opt/zimbra/log/audit.log
20K	/opt/zimbra/log/audit.log.2017-07-26.gz
4.5M	/opt/zimbra/log/clamd.log
4.0K	/opt/zimbra/log/clamd.pid
4.0K	/opt/zimbra/log/convertd.log.2017-07-27
4.0K	/opt/zimbra/log/convertd.pid
12K	/opt/zimbra/log/ews.log
644K	/opt/zimbra/log/freshclam.log
4.0K	/opt/zimbra/log/freshclam.pid
204K	/opt/zimbra/log/gc.log
0	/opt/zimbra/log/hs_err_pid5592.log
4.0K	/opt/zimbra/log/httpd_error.log.2017-07-27
4.0K	/opt/zimbra/log/httpd.pid
4.0K	/opt/zimbra/log/logswatch.pid
816K	/opt/zimbra/log/mailbox.log
4.0K	/opt/zimbra/log/memcached.pid
10M	/opt/zimbra/log/myslow.log
47G	/opt/zimbra/log/mysql_error.log
4.0K	/opt/zimbra/log/mysql.pid
6.7M	/opt/zimbra/log/nginx.access.log
7.7M	/opt/zimbra/log/nginx.log
4.0K	/opt/zimbra/log/nginx.pid
4.0K	/opt/zimbra/log/opendkim.pid
0	/opt/zimbra/log/searchstat.log
4.0K	/opt/zimbra/log/spamtrain.log
116K	/opt/zimbra/log/sqlMigration.log
40K	/opt/zimbra/log/stacktrace.3003.20170727042929
4.0K	/opt/zimbra/log/swatch.pid
132K	/opt/zimbra/log/sync.log
0	/opt/zimbra/log/syncstate.log
0	/opt/zimbra/log/synctrace.log
17M	/opt/zimbra/log/trace_log.2017_07_27
4.0K	/opt/zimbra/log/unbound.pid
0	/opt/zimbra/log/wbxml.log
0	/opt/zimbra/log/zmconfigd-audit.log
0	/opt/zimbra/log/zmconfigd-log4j.log
4.0K	/opt/zimbra/log/zmconfigd.pid
4.0K	/opt/zimbra/log/zmlogprocess.state
4.0K	/opt/zimbra/log/zmlogswatch.out
4.0K	/opt/zimbra/log/zmmailboxd_java.pid
4.0K	/opt/zimbra/log/zmmailboxd_manager.pid
188K	/opt/zimbra/log/zmmailboxd.out
4.0K	/opt/zimbra/log/zmrrdfetch-server.pid
144K	/opt/zimbra/log/zmsetup.20170410-080047.log
96K	/opt/zimbra/log/zmsetup.20170707-051217.log
44K	/opt/zimbra/log/zmswatch.out
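If you prefer to get to the culprit in one step, you can also sort the du output by size. A quick sketch, assuming GNU coreutils (the -h flags on du and sort):

# List the ten largest entries under /opt/zimbra/log, biggest first
du -sh /opt/zimbra/log/* | sort -rh | head -n 10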

So now we know which log is using all the available disk space. Let’s do a tail and see what’s going on; the error below was filling the log every second:

tail -f /opt/zimbra/log/mysql_error.log
2017-07-27 04:38:30 7f8767c42b00 InnoDB: Error: Column last_update in table "mysql"."innodb_table_stats" is INT UNSIGNED NOT NULL but should be BINARY(4) NOT NULL (type mismatch).
2017-07-27 04:38:30 7f8767c42b00 InnoDB: Error: Column last_update in table "mysql"."innodb_table_stats" is INT UNSIGNED NOT NULL but should be BINARY(4) NOT NULL (type mismatch).
2017-07-27 04:38:30 7f8767c42b00 InnoDB: Error: Column last_update in table "mysql"."innodb_table_stats" is INT UNSIGNED NOT NULL but should be BINARY(4) NOT NULL (type mismatch).
2017-07-27 04:38:30 7f8767c42b00 InnoDB: Error: Column last_update in table "mysql"."innodb_table_stats" is INT UNSIGNED NOT NULL but should be BINARY(4) NOT NULL (type mismatch).
2017-07-27 04:38:30 7f8767c42b00 InnoDB: Error: Column last_update in table "mysql"."innodb_table_stats" is INT UNSIGNED NOT NULL but should be BINARY(4) NOT NULL (type mismatch).
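To get a feel for how fast the file is growing, you can watch its size for a few seconds; a small sketch using the standard watch utility:

# Refresh the file size every two seconds; Ctrl+C to stop
watch -n 2 'ls -lh /opt/zimbra/log/mysql_error.log'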

How to solve InnoDB: Error: Column last_update in table “mysql”.”innodb_table_stats”

For some reason, Zimbra didn’t fix this automatically during the upgrade (see https://bugzilla.zimbra.com/show_bug.cgi?id=107976), so let’s run the following commands to get rid of the problem.

As you might not have any disk space left, we will first wipe the mysql_error.log file with the next command and perform a reboot:

cp /dev/null /opt/zimbra/log/mysql_error.log
reboot
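If your distribution ships GNU coreutils, truncate does the same job as the cp trick; just an alternative, the step above works fine as well:

# Empty the log in place without deleting the file
truncate -s 0 /opt/zimbra/log/mysql_error.log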

Once the system is back online, let’s fix the issue.

First, we will need the MySQL root password to run the upgrade command later:

zmlocalconfig -s | grep mysql | grep password
antispam_mysql_password = 
antispam_mysql_root_password = 
mysql_root_password = THISISMYPASS.9R3Fu_A
zimbra_mysql_password = THISISZIMBRAMYSQLPASS
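If you only need that one value, zmlocalconfig also accepts the key name directly (the -s flag is what reveals the password):

# Print only the MySQL root password
zmlocalconfig -s mysql_root_password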

Now that we know the password, we can manually create the folder and the symbolic link to the MySQL socket:

mkdir /opt/zimbra/data/tmp/mysqldata
ln -s /opt/zimbra/data/tmp/mysql/mysql.sock /opt/zimbra/data/tmp/mysqldata/mysql.sock
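Before running the upgrade it doesn’t hurt to verify that the new link resolves to the running MariaDB socket; a quick check:

# Should show a symlink pointing at /opt/zimbra/data/tmp/mysql/mysql.sock
ls -l /opt/zimbra/data/tmp/mysqldata/mysql.sock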

And last but not least, manually run the upgrade command, which will take a few seconds depending on the size of your database:

zimbra@mail:~$ /opt/zimbra/common/bin/mysql_upgrade -u root -p
Enter password: 
Phase 1/6: Checking and upgrading mysql database
Processing databases
mysql
mysql.column_stats                                 OK
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.event                                        OK
mysql.func                                         OK
mysql.gtid_slave_pos                               OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
[...]
mboxgroup99.purged_conversations                   OK
mboxgroup99.purged_messages                        OK
mboxgroup99.revision                               OK
mboxgroup99.revision_dumpster                      OK
mboxgroup99.tag                                    OK
mboxgroup99.tagged_item                            OK
mboxgroup99.tombstone                              OK
performance_schema
test
zimbra
zimbra.config                                      OK
zimbra.current_sessions                            OK
zimbra.current_volumes                             OK
zimbra.deleted_account                             OK
zimbra.mailbox                                     OK
zimbra.mailbox_metadata                            OK
zimbra.mobile_devices                              OK
zimbra.out_of_office                               OK
zimbra.pending_acl_push                            OK
zimbra.scheduled_task                              OK
zimbra.service_status                              OK
zimbra.table_maintenance                           OK
zimbra.volume                                      OK
zimbra.volume_blobs                                OK
zimbra.zcs_zimlet_user_config                      OK
zimbra.zmg_devices                                 OK
Phase 6/6: Running 'FLUSH PRIVILEGES'
OK

Let’s run one final zmcontrol restart as the zimbra user, and then take a look at the log file again to check whether the error is gone.
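A minimal sketch of that restart, assuming a standard single-server installation where the Zimbra tools are on the zimbra user’s PATH:

# Restart all Zimbra services and confirm they are running again
su - zimbra
zmcontrol restart
zmcontrol status

If everything went well, you should see something like this: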

tail -f /opt/zimbra/log/mysql_error.log

2017-07-27  5:07:55 140720047503232 [Note] InnoDB: Completed initialization of buffer pool
2017-07-27  5:07:55 140720047503232 [Note] InnoDB: Highest supported file format is Barracuda.
2017-07-27  5:07:55 140720047503232 [Note] InnoDB: 128 rollback segment(s) are active.
2017-07-27  5:07:55 140720047503232 [Note] InnoDB: Waiting for purge to start
2017-07-27  5:07:55 140720047503232 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.29-76.2 started; log sequence number 829206316
2017-07-27  5:07:55 140720047503232 [Note] Plugin 'FEEDBACK' is disabled.
2017-07-27  5:07:55 140718552491776 [Note] InnoDB: Dumping buffer pool(s) not yet started
2017-07-27  5:07:55 140720047503232 [Note] Server socket created on IP: '127.0.0.1'.
2017-07-27  5:07:55 140720047503232 [Note] /opt/zimbra/common/sbin/mysqld: ready for connections.
Version: '10.1.14-MariaDB'  socket: '/opt/zimbra/data/tmp/mysql/mysql.sock'  port: 7306  Zimbra MariaDB binary distribution
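To double-check that the type-mismatch message is really gone, you can also count its occurrences in the (now much smaller) log; plain grep, nothing Zimbra-specific:

# Should print 0 once the upgrade has fixed the column definitions
grep -c 'innodb_table_stats' /opt/zimbra/log/mysql_error.log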

That’s it for today, guys. I hope this blog post helps you as much as it helped me.


