Yes, I’m exhausted. Five straight days of sitting attentively in rooms full of massive IQs, trying as a practitioner rather than an academic to follow the intent of their higher-level mathematics, left me alternating between sheer exhilaration and cluelessness. At no point did I come away thinking “well, that was pointless.” A couple of talks weren’t targeted at me, so I left and found ones I wanted to know more about. On the whole, I came away inspired, wishing I had the luxuries of an academic: a research budget, time set aside to do some of the work myself (and graduate students to farm the rest out to).
So, in a nutshell, the top honors go to Deep Neural Networks for upstaging just about everything else that was presented. I suspect every good talk and paper that would normally have drawn a heavy crowd in other years was about 50% emptier, while folks were off hearing about the new hotness of variational autoencoders, or the incredible fidelity of multimodal neural networks, or just geeking out over the cool and surprising things you can do with convolutional neural networks. Those papers, talks, and courses were packed. And rightly so: I flew to Vancouver primarily to learn how this largely abstract tech has suddenly made strong inroads into my corner of the world, and I had no idea it was coming so quickly. There were offhand comments in talks that will spawn papers and products in the coming years, such as Marco Salvi suggesting it might be fruitful to train a network to stand in for a complicated HLSL shader rather than run the shader itself: train it against a highly supersampled rendering, and you might get higher-quality results that handle distance LOD automatically, without branches. Sounds amazing! The new DLAA technique (which probably requires TensorCore acceleration) beats temporal anti-aliasing (TAA) hands-down, is available in source-code form, and is purportedly production-worthy (if accelerated?). There were plenty of researchers applying ANNs in predictable, but interesting, ways… and a few who tried unpredictable things, like denoising one image by training against another noisy image, and found that as long as the noise is zero-mean, a neural network will recover the ground-truth image remarkably well.
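The zero-mean-noise trick is easier to believe once you see why it works: a regressor trained with an L2 loss converges to the mean of its targets, and if the noise averages out to zero, that mean is the clean signal. Here is a minimal NumPy sketch of that intuition, with a per-pixel least-squares fit standing in for the network and a synthetic “image” standing in for a rendering; the sizes and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(16, 16))          # ground-truth "image"

# Many independent noisy observations of the same scene (zero-mean noise).
noisy = clean + rng.normal(0.0, 0.3, size=(4000, 16, 16))

# The minimizer of sum_i ||x - noisy_i||^2 is the per-pixel mean --
# exactly what an L2-trained network regresses toward.
estimate = noisy.mean(axis=0)

err = np.abs(estimate - clean).max()
print(f"max per-pixel error: {err:.4f}")   # small, despite never seeing `clean`
```

A real network sees each noisy target only once, of course, but the same averaging happens implicitly across the training set, which is why noisy targets can be as good as clean ones.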
Another researcher discovered a training regimen that can lead a network to a lower-energy, higher-quality state: give it a set of tests of increasing difficulty, train until two tests pass, then drop the easiest, add yet another harder test, and continue training, especially when you provide “training wheels” that show the network roughly what you’re asking of it and then progressively remove those aids. There were also a couple of great conversations about performance, the care and feeding of the training data and how you train, and how to construct loss functions that imply the solution you want (or fitness functions, for those of us with other AI backgrounds).
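The sliding-window part of that regimen can be sketched as a loop. Everything below is hypothetical scaffolding (the talk didn’t give code): the scalar “model,” the pass criterion, and the window size are toy stand-ins, and the “training wheels” aspect is omitted entirely.

```python
def curriculum_train(model_step, passes, tests, window=3, max_iters=10_000):
    """Keep a sliding window of `window` tests, easiest first. Train until
    the two easiest tests in the window pass, then drop the easiest so the
    next iteration pulls in a harder one."""
    lo = 0                           # index of the easiest active test
    for _ in range(max_iters):
        active = tests[lo:lo + window]
        model_step(active)           # one training step against the active tests
        if len(active) >= 2 and passes(active[0]) and passes(active[1]):
            lo += 1                  # drop easiest; next slice adds a harder test
            if lo + 1 >= len(tests):
                break                # curriculum exhausted
    return lo

# Toy "model": a single scalar nudged toward the mean difficulty of the
# active tests -- a stand-in for a gradient step on a real network.
state = {"x": 0.0}
difficulties = [1, 2, 3, 4, 5, 6]

def model_step(active):
    target = sum(active) / len(active)
    state["x"] += 0.1 * (target - state["x"])

def passes(d):
    # Hypothetical pass criterion: the model has gotten "close enough" to d.
    return state["x"] >= d - 0.6

final = curriculum_train(model_step, passes, difficulties)
print("curriculum advanced to test index:", final)
```

The loop never asks the model to clear the hardest test cold; each new test arrives only after the model is already most of the way there, which is the whole point of the regimen.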
Life changing? Maybe. I have a massive itch to scratch now, and will continue my own investigations. But I’m equally looking forward to what the very near future brings, because there is much interesting discovery just over the horizon.