I tried rewriting the rendering backend a few different ways to use Core Animation instead of NSOpenGLView, but we were never able to get the performance we needed when pushing the composited pixels to the screen. This is probably due in part to Sublime's architecture, which composites everything on the CPU (in a cross-platform way) and then pushes all of the composited pixels to the screen using OS-specific APIs. However, Windows doesn't seem to run into this issue, so part of it appears to be Apple API overhead.
We use OpenGL on Macs for retina and other high-resolution screens because Core Graphics can't seem to send pixels fast enough. Instead, we composite into a grid of OpenGL textures, one per tile, and translate the tiles on the GPU until new tiles are required.
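The tiling scheme described above comes down to simple grid arithmetic: divide the view into fixed-size tiles, and when a region of the view is dirtied, re-upload only the tiles that region overlaps. The sketch below is illustrative only (the names `tile_count` and `tiles_for_dirty_rect` are hypothetical, not Sublime's actual code), with the OpenGL texture uploads left out so the arithmetic stands alone.

```cpp
#include <cassert>
#include <vector>

// A dirty region of the view, in pixels.
struct Rect { int x, y, w, h; };

// Number of tiles needed to cover `extent` pixels with tiles of size `tile`.
static int tile_count(int extent, int tile) {
    return (extent + tile - 1) / tile; // ceiling division
}

// Row-major indices of the tiles overlapped by a dirty rect. Each returned
// index identifies one tile texture that would need its pixels re-uploaded
// (e.g. via glTexSubImage2D); untouched tiles are simply re-drawn in place.
static std::vector<int> tiles_for_dirty_rect(Rect dirty, int view_w, int tile_size) {
    std::vector<int> out;
    int cols      = tile_count(view_w, tile_size);
    int first_col = dirty.x / tile_size;
    int last_col  = (dirty.x + dirty.w - 1) / tile_size;
    int first_row = dirty.y / tile_size;
    int last_row  = (dirty.y + dirty.h - 1) / tile_size;
    for (int r = first_row; r <= last_row; ++r)
        for (int c = first_col; c <= last_col; ++c)
            out.push_back(r * cols + c);
    return out;
}
```

For example, with 256-pixel tiles on a 2560-pixel-wide view, a 100×20 dirty rect at (300, 10) touches only the single tile at column 1, row 0, so only that tile's texture is refreshed.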
We eventually determined that when an NSOpenGLView sits inside an NSWindow created with the NSFullSizeContentViewWindowMask flag and a GPU change occurs, the NSOpenGLView's context (or something along those lines) gets stuck on the previous GPU. To display the window, all textures and OpenGL commands apparently have to be shuffled through main memory to the active GPU, resulting in the increased CPU usage.
As a workaround, we hook into an event that notifies us of a screen change (although Apple doesn't actually document the constant sent to the notification function when a GPU change occurs). When that notification fires, we discard all textures and references to them, then create a new NSOpenGLView in place of the existing one.