

What I Learned at Siggraph 2018 (and what you missed)

Yes, I’m exhausted.  Five straight days of sitting attentively in rooms full of massive IQs, trying to follow the intent of their higher-level mathematics as a practitioner rather than an academic, left me alternating between sheer exhilaration and cluelessness.  At no point did I come away thinking “well, that was pointless.”  A couple of talks weren’t targeted at me, so I left and found ones I wanted to know more about.  On the whole, I came away inspired, and wishing I had the luxuries of an academic: a research budget, time set aside to do some of the work myself, and graduate students to farm the rest out to.


So, in a nutshell, top honors go to deep neural networks for upstaging just about everything else that was presented.  I suspect that every good talk and every good paper that would normally have drawn a much bigger crowd in other years was about half as full, because folks were off hearing about the new hotness of variational autoencoders, or the incredible fidelity of multimodal neural networks, or just geeking out over the cool and surprising things you can do with convolutional neural networks.  Those papers, talks, and courses were packed.  And rightly so: I flew to Vancouver primarily to learn how this largely abstract tech has suddenly made strong inroads into my corner of the world, and I had no idea it was coming so quickly.

There were offhand comments in talks that will spawn papers and products in the coming years, such as Marco Salvi suggesting it might be fruitful to train a network to stand in for a complicated HLSL shader rather than run the shader itself, training it against a highly supersampled rendering, and end up with higher-quality results that handle distance LOD automatically, without branches.  Sounds amazing!  The new DLAA technique (which probably requires TensorCore acceleration) beats temporal anti-aliasing (TAA) hands-down, is available in source-code form, and is purportedly production-worthy (if accelerated?).  There were plenty of researchers applying ANNs in predictable but interesting ways… and a few who tried unpredictable things, like denoising one image by training against another noisy image, and finding that as long as the noise is zero-mean, a neural network will recover the ground-truth image very well.
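The zero-mean-noise trick is counterintuitive enough that it’s worth a toy sketch.  This is my own numpy illustration of the underlying statistic, not the presenters’ actual network: an L2 loss pushes a predictor toward the mean of its targets, and when the noise is zero-mean, that mean converges to the clean signal even though no clean image is ever used as a target.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=1000)                 # stand-in "pixels"
noisy = clean + rng.normal(0.0, 0.3, size=(5000, 1000))  # zero-mean noise

# The L2-optimal prediction for each pixel is the mean of its noisy targets,
# which converges to the clean value as observations accumulate.
estimate = noisy.mean(axis=0)
print(np.abs(estimate - clean).mean())  # small residual, shrinking with samples
```

A real network trained with an L2 loss on pairs of independently noisy images inherits the same property, which is the punchline of the talk.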
Another presenter discovered a training regimen that can lead a neural network to a lower-energy, higher-quality state: give it multiple tests of increasing difficulty, train until the two current tests pass, then drop the easiest, add yet another harder test, and continue training.  It works especially well when you provide “training wheels” to the network to show it roughly what you’re asking it to do, then progressively remove those aids.  There were also a couple of great conversations about performance, about the care and feeding of training data and how you train, and about how to construct loss functions that imply the solution you want (or fitness functions, for those of us with other AI backgrounds).
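That sliding-window curriculum can be sketched in a few lines.  All of the names here are my own stand-ins, not the presenter’s code: the “model” is just a number, a test passes when the model reaches its threshold, and a training step nudges the model upward.

```python
def curriculum_train(tests, train_step, passes, model):
    """Train against a sliding window of two tests, easiest first."""
    window = [tests.pop(0), tests.pop(0)]        # start with the two easiest
    while True:
        while not all(passes(model, t) for t in window):
            model = train_step(model, window)    # keep training on current pair
        if not tests:
            return model                         # nothing harder left to add
        window.pop(0)                            # retire the easiest test
        window.append(tests.pop(0))              # bring in the next harder one

# Toy stand-ins for the network, its tests, and a training step.
final = curriculum_train([1, 2, 3, 4, 5],
                         train_step=lambda m, w: m + 1,
                         passes=lambda m, t: m >= t,
                         model=0)
print(final)  # 5 -- the model now passes the hardest test
```

The “training wheels” idea slots in the same way: start each window with extra hints in the input, then strip them as the window advances.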


Life changing?  Maybe.  I have a massive itch to scratch now, and will continue my own investigations.  But I’m equally looking forward to what the very near future brings, because there is much interesting discovery just over the horizon.



Siggraph 2018 here we go!

It’s been a really long time since I’ve been to Siggraph.  Maybe since 1998?  20 years?  Not because I don’t want to go, but because I work in games and rarely get a chance to take time off.  I’m an old man now, though, and can get my way when I want to.  So, this time around, I decided I needed to attend.

Here’s what I’m looking forward to at the conference:  Realtime high-quality water rendering with wakes, using wavelets for the water surface representation.  A course on deep learning neural networks.  A paper on topologizing meshes with a Hessian basis.  A paper on Gaussian material synthesis for shader tech.  A course on realtime DirectX raytracing.  A paper on practical tetrahedralization.  A course on physics-driven animation.  A paper on progressive refinement of parameterizations, which could be used for better UV layouts.

Lots of stuff!  It always gets my creative (and technojunkie) juices flowing to see these kinds of topics and have a chance to ask questions.  If you didn’t know, Siggraph and the ACM make previous years’ content available for free during the month surrounding the conference, so you can check out several previous years’ worth of content through Open Access to the ACM Digital Library.

If you’re going to be there, meet up with me and say hi.  My contact info is everywhere.


I’m on a panel, y’all.


Now’s your chance to head to the pretty swanky downtown Capital Factory incubator space in the Omni and heckle me and my co-panelists as we attempt to explain why we hire who we hire, what makes for good employees, and generally try to hem and haw our way through the presentation so we can get to the free pizza at the end.  Slightly more seriously, it’s a good opportunity to meet and greet both newbie and veteran programmers, get your questions answered, or stump the chumps.

If you can’t make it, sometimes the raw video gets posted online, so you can check it out later.


July 12 at 7pm

Video Game Makers Unite! (Austin, TX)

Panel: Hiring and Managing a Game Technical Team
Thursday, Jul 12, 2018, 7:00 PM