High-quality A/V on a budget
The highlight of almost any demoparty is invariably the compo showings, and in order to keep the attention on the compos themselves, the A/V chain delivering them to the viewers (whether they be at the party or at home, watching the stream) should be as unobtrusive as possible. Sesse from the Solskogen crew is going to walk you through the steps we take to make our A/V chain (and your viewing and streaming experience) as good as possible.
It’s perhaps surprising that in 2017, with high-quality consumer multimedia everywhere, this isn’t a solved problem, but demo compos present a few unique hurdles, so we’ve been hard at work in making Solskogen 2017 our smoothest yet.
Connecting every device directly to the projector would be the simplest solution, but it’s not really ideal; you need to show the information slide for each entry, and switching takes a lot of time on most projectors. (Also, you need some way to forward that signal on to the stream.) Most HDMI switchers have the same problem—long, annoying switching delays while the projector retrains, and besides, having multiple devices on a single HDMI bus tends to confuse a lot of equipment.
Being on a limited budget (both in terms of money and manpower), we can’t afford to buy or rent a hardware video mixer; and in any case, many of them tend to have annoying limitations. For instance, it’s common to require that all inputs have the same resolution and refresh rate, or to lack picture-in-picture, which is important for showing off the live compos. Thus, we’re cobbling together equipment from various people and running our very own software mixer solution, based on Nageru.
Nageru allows us to run multiple inputs, apply high-quality upscaling to 1080p if needed, and switch or fade between them as we’d like; and unlike most other software solutions, it’s low-latency (typically 2–3 frames, or around 50 ms delay) so demos and live compos won’t be perceived as out-of-sync. (We’ll be delaying audio from the compo machines to match the video delay, but for live compos, this isn’t always an option.)
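As a back-of-the-envelope check on those numbers: the audio delay you need to add is simply the video pipeline depth in frames divided by the refresh rate. A minimal sketch (our own illustration, not anything from Nageru itself):

```python
# How much audio delay is needed to match a given video pipeline latency.
def audio_delay_ms(pipeline_frames, refresh_hz):
    """Milliseconds of audio delay matching `pipeline_frames` of video latency."""
    return 1000.0 * pipeline_frames / refresh_hz

# 3 frames of pipeline at 60 Hz comes out to the ~50 ms quoted above:
print(audio_delay_ms(3, 60))
```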
An important property of any A/V chain is transparency; if a coder puts a #ff0000 pixel on-screen, it’d better be #ff0000 on the projector and on people’s computers at home. Surprisingly, a lot of equipment will violate this property; HDMI handshaking is complicated and hard to get right, and there are many ways to mess it up, like using the wrong Y’CbCr transfer function. (Video players and browsers are no better, so you need to be very careful with what encoder parameters you use.)
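To make the Y’CbCr pitfall concrete, here is a small sketch (our own illustration, not Nageru code) of what happens to a pure red pixel when it is encoded with the BT.709 matrix but decoded as if it were BT.601, using the standard luma coefficients for each:

```python
# Encode/decode Y'CbCr with a parameterized matrix (kr, kb are the
# standard luma coefficients; kg follows as 1 - kr - kb).

def rgb_to_ycbcr(r, g, b, kr, kb):
    y = kr * r + (1 - kr - kb) * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return r, g, b

BT709 = (0.2126, 0.0722)  # HD
BT601 = (0.299, 0.114)    # SD

# Encode #ff0000 with BT.709, then decode with the wrong (BT.601) matrix:
y, cb, cr = rgb_to_ycbcr(1.0, 0.0, 0.0, *BT709)
r, g, b = ycbcr_to_rgb(y, cb, cr, *BT601)
print(f"R={r:.3f} G={g:.3f} B={b:.3f}")  # no longer (1, 0, 0)
```

The red channel ends up darkened and a little green leaks out (negative here, so it clips to zero), which is exactly the kind of silent corruption the loopback and probe tests below are meant to catch.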
Our strategy for solving this is two-pronged; first, we run most of the chain on SDI instead of on HDMI, converting as needed, which restricts it to a more limited set of parameters, which is harder to mess up. Second, we have been calibrating and testing every part of our chain extensively. For the main chain, you can simply run a loopback test; connect the output to an input, switch to it and see what happens. If you’ve messed it up, chances are it will look like this:
For inputs where you can’t loop easily, such as analog inputs (which we run through the Framemeister scaler), there’s another option; send a known signal through and use Nageru’s color dropper tool to verify that it ends up being interpreted correctly. You’d think this is paranoia, but it’s actually rare to find a device that doesn’t need some sort of adjustment; it’s easy to get crushed blacks, wrong gamma, or green leaking over into the red and blue channels. Defaults are rarely correct.
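The idea behind such a probe can be sketched in a few lines (a simplified illustration, not Nageru’s actual color dropper): send a known gray ramp through the device, capture what comes back, and flag any value that is off by more than a small tolerance. A device that applies an unwanted full-to-limited range squeeze, for instance, fails immediately:

```python
# Compare a known test signal against what a capture device returns.
def check_levels(sent, captured, tolerance=2):
    """Return (sent, captured) pairs that differ by more than `tolerance`."""
    return [(s, c) for s, c in zip(sent, captured) if abs(s - c) > tolerance]

# 8-bit gray patches on the test image:
sent = [0, 16, 64, 128, 192, 235, 255]

# What a device doing a spurious full->limited range conversion returns
# (black pulled up to 16, white pulled down to 235):
captured = [round(16 + v * (235 - 16) / 255) for v in sent]

print(check_levels(sent, captured))
```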
Finally, we want to bring the best possible experience not only to the partygoers, but also to those who for some reason cannot attend. We use a separate installation of Nageru for this purpose, which mixes in the bigscreen signal (delivered over SDI), scales it down to 720p, mixes in our cameras when needed, and finally encodes it to 5 Mbit/sec H.264. A key feature of Nageru in this context is that it supports variable frame rate (VFR), so when we switch the bigscreen to 50 Hz for oldschool content, the stream automatically follows suit. (We always run at 50 or 60 Hz, as demos look much better than at 25/30.) This way, we don’t have to drop or duplicate frames, and the viewer will get the most authentic signal possible.
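To see why VFR matters, consider what a fixed-rate stream has to do instead. This rough illustration (our own, not Nageru internals) schedules 50 Hz input onto a fixed 60 Hz output by picking, at every output tick, the newest input frame available; the count of inputs shown twice is the judder a VFR stream avoids:

```python
# Count duplicated frames when forcing input_hz content into a
# constant-frame-rate output_hz stream (nearest-past-frame scheduling).
def cfr_duplicates(input_hz, output_hz, seconds=1):
    input_pts = [i / input_hz for i in range(input_hz * seconds)]
    shown = []
    for i in range(output_hz * seconds):
        t = i / output_hz
        # Newest input frame whose timestamp is not in the future:
        shown.append(max(j for j, pts in enumerate(input_pts) if pts <= t))
    return len(shown) - len(set(shown))  # frames shown more than once

print(cfr_duplicates(50, 60))  # 10 duplicated frames every second
```

With VFR, the timestamps simply pass through unchanged and this number is always zero.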
Some demos are notoriously hard to encode, so Nageru applies x264 speed control; simple content is encoded at high-quality settings to get the best possible picture, and difficult content (affectionately known as “streamkillers”) runs on faster presets so that the encoder doesn’t fall behind and need to drop frames. This resolves the typical dilemma of whether to optimize the stream for streamkillers or for more normal content; you can simply have both, and we’ve been running through video content literally for days to make sure both types are presented optimally. (Note that the bitrate stays constant throughout; if we set that free, too, we’d end up hitting 30 Mbit/sec or more on the most difficult content, which would surely be a problem for most viewers.)
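The core idea of speed control can be sketched as a simple feedback loop (heavily simplified from what Nageru actually does, which steps through finer-grained points between the presets): measure how long each frame takes to encode, and move down x264’s preset ladder when you risk falling behind real time, or back up when there is slack:

```python
# Simplified speed-control loop over x264's named preset ladder.
PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast",
           "medium", "slow", "slower", "veryslow"]

def adjust_preset(current_idx, encode_time_ms, frame_budget_ms=20.0):
    """Pick the next preset index given the last frame's encode time.

    frame_budget_ms is 20 ms per frame at 50 Hz; the 0.9/0.5 factors
    are illustrative headroom thresholds, not Nageru's actual tuning.
    """
    if encode_time_ms > 0.9 * frame_budget_ms and current_idx > 0:
        return current_idx - 1  # falling behind: speed up
    if encode_time_ms < 0.5 * frame_budget_ms and current_idx < len(PRESETS) - 1:
        return current_idx + 1  # plenty of slack: raise quality
    return current_idx

idx = PRESETS.index("medium")
for t_ms in [8.0, 9.0, 25.0, 19.0, 7.0]:  # simulated per-frame encode times
    idx = adjust_preset(idx, t_ms)
print(PRESETS[idx])
```

A “streamkiller” drives the measured times up and the loop settles on a fast preset; quiet content lets it climb back toward the slow, high-quality end, all while the rate control keeps the bitrate itself pinned.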
“Through the cracks” by Still (Revision 2016). Despite having no color at all, it thoroughly manages to bungle YouTube’s encoding.
In the end, however, A/V infrastructure is just that; infrastructure. If done right, it steps out of the way, yielding the spotlight to the compo entries themselves. So get those compilers going; we’ll be doing our best to get your stuff to the viewers!