GPU-ify Your Rig
There are a lot of tips on how to make your rig run faster. Most of them are good techniques to use in general, but they only speed the rig up a tiny bit. There are two relatively new techniques (made available in Maya 2016) that you should always use, because the performance improvement is so great: "parallel evaluation" and "GPU deformers".
The "Bible" of these two techniques is this PDF from Autodesk, that outlines the technical requirements for these two techniques:
If you're not a very technical rigger and don't want to wade through the Autodesk PDF, this Disney paper offers great practical advice on optimization:
My contribution to this body of documentation is these scripts. Only the first one is new, but I'm linking to the other two as well because the new one imports them:
One of the requirements for putting a deformer on the GPU is that it can only deform a single shape. This is usually not a problem, since the two most common deformers (skinClusters and blendShapes) only deform a single shape anyway. But lattices and clusters commonly deform more than one mesh, and a deformer that affects multiple shapes can't go on the GPU. You can get around this by creating a duplicate deformer for each shape instead of having one deformer affect two shapes, but that's annoying to implement, work with, and maintain. My script takes all the deformers in the scene and breaks them up into duplicates that each affect only one shape. So you can rig the way you're used to (a single deformer affecting more than one shape) and then run my script before publishing the rig for animators to use.
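The splitting idea can be sketched in plain Python, independent of Maya. This is only a model of the bookkeeping, not the actual script: the function name and the dict-based stand-in for the scene are my own inventions, and a real implementation would query deformers with maya.cmds and copy weights and membership when duplicating.

```python
def split_deformers(deformers):
    """Given {deformerName: [shapes it affects]}, return a new mapping in
    which every deformer affects exactly one shape (the GPU requirement).
    Single-shape deformers are kept as-is; multi-shape deformers are split
    into suffixed duplicates, one per shape."""
    result = {}
    for name, shapes in deformers.items():
        if len(shapes) <= 1:
            # Already GPU-friendly: one deformer, one shape.
            result[name] = list(shapes)
        else:
            # Split into one duplicate per shape.
            for i, shape in enumerate(shapes, start=1):
                result[f"{name}_{i}"] = [shape]
    return result


scene = {
    "skinCluster1": ["bodyShape"],             # one shape: left untouched
    "cluster1": ["shirtShape", "pantsShape"],  # two shapes: gets split
}
print(split_deformers(scene))
# {'skinCluster1': ['bodyShape'], 'cluster1_1': ['shirtShape'], 'cluster1_2': ['pantsShape']}
```

In a real Maya scene the hard part is duplicating the deformer nodes and transferring their per-shape weights correctly; the sketch above only shows the one-deformer-per-shape restructuring that makes GPU evaluation possible.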
This even helps with deformers that are not GPU-compatible, because it allows Maya to evaluate each shape in parallel, which is a big win if you have lots of small meshes (like the nurbs rivets used on the Mathilda rig's face).
I would be remiss if I didn't also link to this blog post about GPU deformers, so here it is.