Archive for the ‘MySQL’ Category

MariaDB at Percona Live Santa Clara

I for one can say that I’m truly excited that MariaDB will be part of Percona Live Santa Clara. The MariaDB session list includes:

  • A tutorial: Improving MySQL/MariaDB query performance through optimizer tuning by Timour Katchaounov and Sergey Petrunia. Naturally, you can benefit from this even as a stock MySQL user.
  • MySQL Plugins – why should I bother? by Sergei Golubchik is again something that isn’t only MariaDB specific since MySQL also has a plugin architecture. I don’t know if Sergei will talk about authentication plugins as well, but I can imagine this is on the mind of many people.
  • MariaDB: The 2012 edition by Colin Charles – well, this is me. You might wonder what we’ve been doing since mid-2009, having only made our first release in early 2010. We’ve achieved quite a lot in the project, so come find out why you might consider MariaDB as your alternative to MySQL. We may even have a surprise in the form of some new announcements here.
  • MySQL Optimizer Standoff MySQL 5.6 and MariaDB 5.3 by Peter Zaitsev compares the optimizer features of MySQL 5.6 and MariaDB 5.3. MariaDB 5.3, as you might know, features a lot of optimizer improvements.

The BoFs schedule has just been released, and on Wednesday evening from 9pm onwards, you can come visit Ballroom A to discuss MariaDB, help with the future roadmap as well as be filled with lots of salmiakkikossu.

We have already confirmed our acceptance of a DotOrg booth and can’t wait till that information makes it onto the Internet.

All in all, I’m happy to be attending and speaking in Santa Clara yet again in the April timeframe. This has become somewhat of a yearly ritual. Make sure you register, and don’t forget to do a little searching to find some promo codes!

Managing MySQL with Percona Toolkit by Frédéric Descamps

Frédéric Descamps of Percona.

Percona Toolkit is Maatkit & Aspersa combined. It is open source and the tools are very useful for a DBA.

You need Perl, DBI, DBD::mysql and Term::ReadKey. Most tools are written in Perl, and whatever is in Bash is being re-written in Perl. It is available as a tarball as well as RPM and DEB packages.

Know your environment. The hardware & OS are crucial for you to know. How much memory/CPU do you use? Do you use swap? Is this a physical/virtual machine? Do you have free space? What kind of RAID controller? Volumes? Disk? What about the network interfaces? What IO schedulers are used? Which filesystem is the data stored on? To answer all that, just use pt-summary.

Know your MySQL environment. Version? Build? How many databases? Where is the data directory? What about replication? What are key InnoDB settings? Storage engine in use? Index type? Foreign keys? Full text indexes? To answer all this and more use pt-mysql-summary.
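
A minimal sketch of how both summary tools are typically run — pt-summary needs no MySQL credentials, and pt-mysql-summary usually picks them up from the standard connection options or ~/.my.cnf:

    # Hardware & OS overview: memory, CPU, swap, RAID, volumes, disks,
    # network interfaces, I/O schedulers and filesystems
    pt-summary

    # MySQL overview: version, build, databases, datadir, replication,
    # key InnoDB settings, storage engines, index types, foreign keys
    pt-mysql-summary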

pt-slave-find shows you the topology and replication hierarchy of your MySQL replication instances. An inventory of replicas!

Where is my disk I/O going? Use pt-diskstats which is an improved iostat. There is pt-ioprofile but it can be dangerous in production.
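
A rough sketch of both tools; the DSN and interval below are placeholders, not from the talk:

    # Map out the replication hierarchy, starting from the master
    pt-slave-find h=master.example.com,u=monitor,p=secret

    # Improved iostat-like disk statistics, sampled every 5 seconds
    pt-diskstats --interval 5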

Now it’s time to get more intimate with your database. Let’s try to find the answers to these questions: how are the indexes used? Are there duplicate keys? Which queries are eating most of the resources? You can use pt-duplicate-key-checker to check for duplicate/redundant indexes or foreign keys. pt-index-usage can tell you which indexes are unused. If you think you have bad SQL, check out pt-query-advisor.
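
A hedged sketch of the three tools just mentioned; the log path and connection details are placeholders:

    # Duplicate/redundant indexes and foreign keys
    pt-duplicate-key-checker h=localhost,u=root,p=secret

    # Which indexes the queries in the slow log actually use (and which go unused)
    pt-index-usage /var/log/mysql/slow.log --host localhost --user root --password secret

    # Heuristic advice on potentially bad SQL taken from the slow log
    pt-query-advisor /var/log/mysql/slow.log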

You can use pt-query-digest to analyze the slow query log and show a profile of the workload. You mostly use this with slow query logs & tcpdump captures. Be careful when you have dropped packets — the results can be misleading then!
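
Typical invocations look roughly like this (paths are placeholders; the tcpdump filter is the usual one from the tool’s documentation):

    # Profile the workload from the slow query log
    pt-query-digest /var/log/mysql/slow.log

    # Or from a tcpdump capture of the MySQL port — beware dropped packets
    tcpdump -s 65535 -x -nn -q -tttt -i any -c 100000 port 3306 > mysql.tcp.txt
    pt-query-digest --type tcpdump mysql.tcp.txt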

After all this, it’s time to maintain your environment.

pt-deadlock-logger checks InnoDB status to log MySQL deadlock information. It needs to run continually to capture things.

pt-fk-error-logger extracts and logs MySQL foreign key errors.

pt-online-schema-change alters tables online. It makes a “shadow copy” of the table and swaps them. Extremely useful for large, long-running ALTERs. Facebook uses the same technique.
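
A sketch with a made-up database, table and column, using Percona Toolkit 2.x syntax (always dry-run first):

    # Build the shadow copy, apply the ALTER, copy rows, then swap
    pt-online-schema-change --alter "ADD COLUMN notes TEXT" D=mydb,t=orders --dry-run
    pt-online-schema-change --alter "ADD COLUMN notes TEXT" D=mydb,t=orders --execute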

Validate your upgrades, as upgrades are the leading cause of downtime. Are queries using different indexes? Are query execution plans different? Are there new errors? See pt-upgrade for this. It is best to run it from a third machine, comparing the old machine and a new machine to see how it goes.
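
The exact invocation has changed across toolkit versions, but the idea is roughly this (hostnames and log path are placeholders):

    # Replay queries from a log against the old and new servers and compare
    # results, errors, warnings and execution plans
    pt-upgrade h=old-server h=new-server /var/log/mysql/slow.log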

Verify replication integrity – pt-table-checksum. Perform an online replication consistency check or checksum MySQL tables efficiently on one or many servers. Use it routinely (mandatory for 95% of MySQL users). Put it in a weekly crontab. Repair differences with pt-table-sync.
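
Roughly, and with placeholder hosts/credentials (by default the checksum results land in the percona.checksums table on every slave):

    # Checksum all tables through replication
    pt-table-checksum h=master.example.com,u=checksum_user,p=secret

    # A weekly crontab entry, e.g. Sunday 3am
    # 0 3 * * 0  pt-table-checksum h=master.example.com,u=checksum_user,p=secret

    # Inspect, then repair, the differences found
    pt-table-sync --replicate percona.checksums h=master.example.com,u=checksum_user,p=secret --print
    pt-table-sync --replicate percona.checksums h=master.example.com,u=checksum_user,p=secret --execute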

Repair out-of-sync replicas – pt-table-sync

Measure delay accurately – pt-heartbeat

Deliberately delay replication – pt-slave-delay

Watch & restart MySQL replication after errors – pt-slave-restart
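
A hedged sketch of the three replica-side tools above; hosts, database and error number are placeholders:

    # pt-heartbeat: write a heartbeat row on the master, measure the lag on a slave
    pt-heartbeat -D percona --update  h=master.example.com,u=heartbeat,p=secret
    pt-heartbeat -D percona --monitor h=slave1.example.com,u=heartbeat,p=secret

    # pt-slave-delay: keep a slave deliberately one hour behind its master
    pt-slave-delay --delay 1h h=slave2.example.com,u=root,p=secret

    # pt-slave-restart: watch replication and restart it after duplicate-key errors only
    pt-slave-restart --error-numbers 1062 h=slave1.example.com,u=root,p=secret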

When there are problems, get the symptoms when it hurts. Look at pt-stalk (wait for a condition to occur, then begin collecting data – e.g. every time the threads go over 2,000 you have a problem, so it collects stuff – it calls pt-collect), pt-collect (collect information from a server for some period of time), and pt-sift.
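
Something like this, with placeholder values (option names per the 2.x releases of the toolkit; older versions were configured slightly differently):

    # Collect diagnostics whenever Threads_running stays above 2000
    # for 5 consecutive checks, saving the samples under /var/lib/pt-stalk
    pt-stalk --variable Threads_running --threshold 2000 --cycles 5 --dest /var/lib/pt-stalk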

pt-mext looks at many samples of MySQL SHOW GLOBAL STATUS side-by-side. By default, SHOW GLOBAL STATUS shows counters accumulated since the MySQL instance started, so it is very helpful to see a delta of recent activity.
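
The canonical example from the documentation: three samples of SHOW GLOBAL STATUS, ten seconds apart, printed side by side with relative values:

    pt-mext -r -- mysqladmin ext -i10 -c3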

The future: pt-query-digest will do query reviews; pt-stalk will get a “magical fault detection algorithm”. It’s all open source and it’s all on Launchpad at lp:percona-toolkit.

Replication features of 2011 by Sergey Petrunia

Sergey Petrunia of the MariaDB project & Monty Program.

MySQL 5.5 went GA at the end of 2010. MariaDB 5.3 reached RC towards the end of 2011 (beta in June 2011).

MySQL 5.5 is merged into Percona Server 5.5; its replication features include semi-sync replication, slave fsync options, automatic relay log recovery, RBR slave type conversions (questionable whether this is useful or not), individual log flushing (very useful, but not many use it), replication heartbeat and SHOW RELAYLOG EVENTS. About two-thirds of the audience use MySQL 5.5 in production, with only 2 people using semi-sync replication.
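
For reference, semi-sync in MySQL 5.5 is loaded from the bundled plugins roughly like this (run the first block on the master, the second on each slave; credentials are placeholders):

    # Master
    mysql -u root -p -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
                         SET GLOBAL rpl_semi_sync_master_enabled = 1;"

    # Each slave (restart the I/O thread afterwards so it registers as semi-sync)
    mysql -u root -p -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
                         SET GLOBAL rpl_semi_sync_slave_enabled = 1;
                         STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;"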

MariaDB 5.3 brings new replication features: group commit in the binary log, which is merged into Percona Server 5.5, and checksums for binlog events, which are merged from MySQL 5.6. Sergey goes in-depth about group commit for the binary log. To find out a little more about MariaDB replication changes, see Replication in the Knowledgebase.

There are several implementations of group commit. Facebook started it, followed by MariaDB & Oracle. Percona Server 5.5 is GA so the feature is there, it’s not in MySQL 5.6 (yet?), and MariaDB 5.3 is where it’s at. It seems like the MariaDB implementation is the best so far – refer to the Facebook benchmark performed by Mark Callaghan.

Annotated RBR poses a compatibility problem. MariaDB 5.3 has annotate_rows, while MySQL 5.6 has rows_query event. They are different events. So you cannot have a MariaDB 5.3 master and a MySQL 5.6 slave at this moment. So MySQL 5.6 will have a flag to mark “ignorable” binlog events which will be merged into MariaDB and this will make binary logs compatible again.
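
As a hedged aside on how the MariaDB side is switched on (variable name per the MariaDB documentation; if it is not dynamic in your build, set it in my.cnf instead):

    # MariaDB 5.3 master: log the original SQL as Annotate_rows events
    mysql -u root -p -e "SET GLOBAL binlog_annotate_row_events = ON;"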

There is now also optimized RBR for tables with no primary key.

MySQL 5.6 also has a crash-safe slave (replication information stored in tables) and a crash-safe master (binary log recovery if the server starts & sees that the binary log is corrupted). Parallel event execution is also new in MySQL 5.6, and it is the most important feature for Sergey.

Pre-heating: There is mk-slave-prefetch (famous quote: “Please don’t use mk-slave-prefetch on #MySQL unless you are Facebook.”). There is replication booster by Yoshinori Matsunobu. There is a Python version of mk-slave-prefetch that Facebook uses.

MySQL Creatively in a Sandbox by Giuseppe Maxia

Giuseppe Maxia of Continuent and long time creator of MySQL Sandbox.

It only works on Unix-like systems. It works with MySQL, Percona Server & MariaDB servers. A MySQL server has its data directory, its port and its socket – you can’t share these between instances.

To use it: make_sandbox foo.tar.gz. Then just do ./use.

$SANDBOX_HOME is ~/sandboxes. You can also create ~/opt/mysql/ and if you have the MySQL 5.0.91 binaries in that directory, you can just do “sb 5.0.91”.

Sandbox has features to start replication systems as well. You can have master/slave setups with varying versions (a good idea to test a MySQL master -> MariaDB slave for migration).
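
A sketch of the workflow, with placeholder tarball and version numbers:

    # Single sandbox from a server tarball, then connect with the generated script
    make_sandbox mysql-5.5.20-linux2.6-x86_64.tar.gz
    ~/sandboxes/msb_5_5_20/use

    # A master with slaves, from binaries unpacked under $HOME/opt/mysql/5.5.20
    make_replication_sandbox 5.5.20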

You can now also play with tungsten-sandbox, which is a great way to start playing with Tungsten Replicator (see documentation and tungsten-toolbox). There is apparently also a MySQL Cluster sandbox tool that someone is working on.


Optimizing your InnoDB buffer pool usage by Steve Hardy

Steve Hardy of Zarafa.

The talk covers work that has been done to make Zarafa perform better. Why do you optimise your buffer pool? To decrease your I/O load. How can you do it? Buy more RAM, use page compression, store less (smaller) data, rearrange data.

MariaDB and Percona Server allow you to inspect your buffer pool (unsure if this is now available in MySQL 5.6). Giuseppe in the audience says it is available in MySQL 5.6, but Steve used this on MariaDB 5.2.
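
For MySQL 5.6 the equivalent is INFORMATION_SCHEMA.INNODB_BUFFER_PAGE (Percona Server and MariaDB expose similar but differently named tables); a rough and potentially expensive query to see which tables and indexes occupy the most pages:

    mysql -u root -p -e "
      SELECT TABLE_NAME, INDEX_NAME, COUNT(*) AS pages
      FROM information_schema.INNODB_BUFFER_PAGE
      GROUP BY TABLE_NAME, INDEX_NAME
      ORDER BY pages DESC
      LIMIT 20;"
    # Note: scanning INNODB_BUFFER_PAGE can itself be heavy on a busy server.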

Strategies to fix it: Make records smaller. Remove indexes if you can use others almost as efficiently. Make records that are accessed around the same time have a higher chance of being on the same page. Use page compression. Buy more RAM. Try Batched Key Access (BKA) in MariaDB 5.3+.

It is best to view the presentation, since there are specific examples of how Zarafa solves problems such as a user trying to sort their email, etc.

Practical MySQL Indexing guidelines by Stéphane Combaudon

Stéphane Combaudon of Dailymotion.

An index is a separate data structure to speed up SELECTs. Think of the index in a book. In MySQL, key = index. Consider that indexes are trees.

InnoDB’s clustered index – data is stored with the Primary Key (PK) so PK lookups are fast. Secondary keys hold the PK values. Designing InnoDB PK’s with care is critical for performance.

An index can filter and/or sort values. An index can also contain all the fields needed for a query, so you don’t need to go to the table at all (a covering index).

MySQL only uses 1 index per table per query (not 100% true – OR clauses), so think of a composite index when you can. You can’t index TEXT fields directly (use a prefix). Same for BLOBs and long VARCHARs.
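
To make the composite/prefix advice concrete, a sketch with made-up table and column names:

    # One composite index that both filters on customer_id and sorts by created_at,
    # and a prefix index on a long VARCHAR column
    mysql -u root -p mydb -e "
      ALTER TABLE orders    ADD INDEX idx_customer_created (customer_id, created_at);
      ALTER TABLE customers ADD INDEX idx_email_prefix (email(20));"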

Indexes speed up queries, but they increase the size of your dataset and slow down writes. How big is the write slowdown? In a simple test by Stéphane, for in-memory workloads adding 2 keys makes write performance 2x worse; for on-disk workloads it’s 40x worse. Never neglect the slowdown of your writes when you add an index. There is a graph in the slide deck.

What is a bad index? Unused indexes. Redundant indexes. Duplicate indexes.

Indexing is not an exact science, but guessing is probably not the best way to design indexes. Always check your assumptions – EXPLAIN does not tell you everything, time your queries with different index combinations, and SHOW PROFILES is often valuable. The slow query log is a good place to start.
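
A small sketch of that checking loop (the query and schema are invented for illustration):

    mysql -u root -p mydb -e "
      EXPLAIN SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at\G
      SET profiling = 1;
      SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at;
      SHOW PROFILES;"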

There are many slides with examples, so I hope Stéphane posts the deck soon. If possible, try to both filter & sort with an index (though an index is not always the best for sorting).

InnoDB’s clustered index is always covering. SELECT by PK is the fastest access with InnoDB.

An index can give you 3 benefits: filtering, sorting, covering.

See Userstats v2 – you need Percona Server or MariaDB 5.2+. See also pt-duplicate-key-checker to find redundant indexes easily. See also pt-index-usage to help answer questions not covered by userstats.
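
A hedged sketch of turning userstats on and reading it (Percona Server and MariaDB have used slightly different variable names over time, so check your version’s docs):

    mysql -u root -p -e "
      SET GLOBAL userstat = 1;
      SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, ROWS_READ
      FROM INFORMATION_SCHEMA.INDEX_STATISTICS
      ORDER BY ROWS_READ DESC LIMIT 20;"
    # Indexes that never show up here after a representative period are
    # candidates for removal.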

