Create 20 game names in the form of “Adjective Noun Verb”
Amazing - this is such an incredible tool for sparking ideas. Writer’s block? HAH!
One of the things that didn’t make the list above is a little debugging gem I added while working on Battle Royale. In Fortnite the team used a fair amount of Blueprint-based widgets, which means you’ll inevitably set a breakpoint in C++ and, when looking for who is calling your code, end up seeing a callstack like this:
[Your C++ Code]
UFunction::Invoke(...)
UObject::ProcessEvent(...)
AActor::ProcessEvent(...)
[Rest of the stack]
I was working with a lot of code in Fortnite I was unfamiliar with, so it wasn’t always obvious where the calls might be coming from. So I added a command you can run from the Immediate Window in Visual Studio that prints the current Blueprint/scripting callstack.
When running an Editor build of your game, you would use the command: {,,UE4Editor-Core}::PrintScriptCallstack()
In a monolithic build you would use: ::PrintScriptCallstack()
This command is available in 4.18.
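The general shape of such a debugger-invocable helper can be sketched in plain C++. This is a hand-written illustration, not the engine’s implementation; `FScriptFrame`, `GScriptStack`, and `GetScriptCallstackString` are my own names. The idea is simply that the script VM pushes a frame per script call onto a thread-local stack, and a no-argument `extern "C"` function is trivially callable from the Immediate Window:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Illustrative only: a stand-in for whatever frame data the real VM tracks.
struct FScriptFrame { std::string FunctionName; };

// The VM would push/pop one entry per script call on this thread-local stack.
static thread_local std::vector<FScriptFrame> GScriptStack;

// Builds the callstack text, most recent call first.
std::string GetScriptCallstackString()
{
    std::string Out;
    for (auto It = GScriptStack.rbegin(); It != GScriptStack.rend(); ++It)
        Out += It->FunctionName + "\n";
    return Out;
}

// extern "C" keeps the symbol unmangled, so the debugger can call it
// by name, e.g. ::PrintScriptCallstack() from the Immediate Window.
extern "C" void PrintScriptCallstack()
{
    std::printf("%s", GetScriptCallstackString().c_str());
}
```

Because the function takes no arguments and touches only its own state, it is safe to invoke while the process is stopped at a breakpoint.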
Spending time improving our automated testing. If image comparison tests fail, we generate failure reports like this #UE4 pic.twitter.com/F43tIZX1Lc
— @NickDarnell@mastodon.gamedev.place 🧙‍♂️ (@NickDarnell) February 13, 2017
However, rendered image comparison can be tricky because of differences caused by the following:
All these things make it difficult to compare rendered output. You could simplify matters by only testing on one kind of machine, but that’s a pretty unrealistic testing environment.
I started by porting the comparison method from Resemble.js, a straightforward JavaScript image comparison library, to C++. It supports per-channel and brightness tolerances, and it also does a neighbor-similarity check to account for anti-aliasing. A similar library that looks like it might have a few more features is Blink-Diff, which I found later.
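A minimal sketch of the per-channel/brightness tolerance test described above. The struct and function names are mine, and I’m assuming the two checks combine as “match if either passes”; the real Resemble.js port may weigh them differently:

```cpp
#include <cmath>
#include <cstdint>

struct FColor8 { uint8_t R, G, B; };

// Two pixels "match" if every channel is within ChannelTolerance,
// or if their overall brightness is within BrightnessTolerance.
bool PixelsMatch(const FColor8& A, const FColor8& B,
                 int ChannelTolerance, int BrightnessTolerance)
{
    const bool bChannelsClose =
        std::abs(A.R - B.R) <= ChannelTolerance &&
        std::abs(A.G - B.G) <= ChannelTolerance &&
        std::abs(A.B - B.B) <= ChannelTolerance;

    // Rec. 601 luma approximation for the brightness comparison.
    const double LumaA = 0.299 * A.R + 0.587 * A.G + 0.114 * A.B;
    const double LumaB = 0.299 * B.R + 0.587 * B.G + 0.114 * B.B;
    const bool bBrightnessClose =
        std::abs(LumaA - LumaB) <= BrightnessTolerance;

    return bChannelsClose || bBrightnessClose;
}
```

The brightness fallback is what lets slightly color-shifted but equally bright pixels (a common driver-to-driver difference) pass without loosening the per-channel tolerance.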
The first mistake I made was comparing the pixels across the whole image and generating a single percent difference. Looking at the picture below, you can immediately see the problem with that approach.
The black pixels represent only a 1.85% difference in the image. The minimum global error I defaulted to was 2% before I considered it a problem. Lowering the required error to 1% would have worked, but I wanted to maintain a large enough margin to avoid false positives from the usual non-deterministic differences. 2% might still be too high, and I may lower it anyway, but even so, if a material effect breaks it may only create localized distortions.
To solve this problem I ended up breaking the image into 100 blocks (a spatial hash). I then accumulated error per block as well as globally, which produces blocks with 30%-40% error in the sample above - plenty to overcome my new maximum allowed block error of 10%.
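The block accumulation can be sketched like this (a hand-rolled illustration, not Epic’s code; the 10x10 grid and all names are my assumptions):

```cpp
#include <cstddef>
#include <vector>

struct FComparisonResult
{
    double GlobalError = 0.0;        // fraction of differing pixels overall
    double BlockError[10][10] = {};  // fraction of differing pixels per block
};

// PixelDiffers holds one bool per pixel, row-major, true if that pixel
// already failed the per-pixel tolerance test.
FComparisonResult CompareWithBlocks(const std::vector<bool>& PixelDiffers,
                                    int Width, int Height)
{
    FComparisonResult Result;
    int BlockTotals[10][10] = {};
    int BlockDiffs[10][10] = {};

    for (int Y = 0; Y < Height; ++Y)
    {
        for (int X = 0; X < Width; ++X)
        {
            // Map the pixel into one cell of the 10x10 grid.
            const int BX = X * 10 / Width;
            const int BY = Y * 10 / Height;
            ++BlockTotals[BY][BX];
            if (PixelDiffers[(size_t)Y * Width + X])
            {
                ++BlockDiffs[BY][BX];
                Result.GlobalError += 1.0;
            }
        }
    }

    Result.GlobalError /= (double)Width * Height;
    for (int BY = 0; BY < 10; ++BY)
        for (int BX = 0; BX < 10; ++BX)
            if (BlockTotals[BY][BX] > 0)
                Result.BlockError[BY][BX] =
                    (double)BlockDiffs[BY][BX] / BlockTotals[BY][BX];
    return Result;
}
```

A localized artifact that is only ~2% of the whole image can still fill one block completely, so checking each block against a 10% limit catches what the global 2% threshold misses.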
This still wasn’t ideal: depending on how the error shows up in the image, it’s possible for it to be spread across enough blocks in just such a way that it never triggers the maximum error in any single block.
The problem with the block error is that it assumes a particular shape, when error could come in any shape. Imagine a particularly faulty outline shader: it might be very broken, but due to the way the error is shaped it might not trigger either the local or global limits.
One idea I’ve been batting around is some kind of clustering error. The idea is to pick a small radius, say 3px, and for every error pixel that can touch another error pixel within that radius, merge them into a cluster. The benefit is that clustering lets me make tighter assumptions about error limits: if I find an error cluster that is smaller than the global limit but not insignificant (maybe 0.05% of total pixels), I can still flag it as a failure.
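A sketch of that clustering idea (my own implementation of the description, not shipped code): error pixels within the merge radius of any pixel already in a cluster join that cluster, via a flood fill, and the caller can then apply a tighter threshold to each cluster’s size.

```cpp
#include <cstdlib>
#include <set>
#include <utility>
#include <vector>

// Groups error pixels into clusters: two pixels belong to the same
// cluster if a chain of pixels connects them, each step within Radius
// (Chebyshev distance). Returns the size of each cluster found.
std::vector<int> ClusterErrorPixels(std::vector<std::pair<int, int>> Pixels,
                                    int Radius)
{
    std::set<std::pair<int, int>> Remaining(Pixels.begin(), Pixels.end());
    std::vector<int> ClusterSizes;

    while (!Remaining.empty())
    {
        // Flood-fill one cluster starting from an arbitrary remaining pixel.
        std::vector<std::pair<int, int>> Stack{ *Remaining.begin() };
        Remaining.erase(Remaining.begin());
        int Size = 0;

        while (!Stack.empty())
        {
            auto [X, Y] = Stack.back();
            Stack.pop_back();
            ++Size;

            // Pull in every unvisited error pixel within the merge radius.
            for (auto It = Remaining.begin(); It != Remaining.end(); )
            {
                if (std::abs(It->first - X) <= Radius &&
                    std::abs(It->second - Y) <= Radius)
                {
                    Stack.push_back(*It);
                    It = Remaining.erase(It);
                }
                else
                {
                    ++It;
                }
            }
        }
        ClusterSizes.push_back(Size);
    }
    return ClusterSizes;
}
```

The linear scan over `Remaining` per pixel is O(n²) in the worst case; a real implementation over megapixel images would want a grid index, but the clustering semantics are the same.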
One of the things the Unreal Engine automation screenshot comparison system supports now is the ability to use any of the G-Buffers as input. The reason is that while the final color matters a lot, performing tests on the individual buffers before they are factored into the final pixel color may detect errors sooner: a difference that is obvious in, say, the Ambient Occlusion buffer in isolation may not show up clearly when comparing final pixel colors.
I’m considering just adding a checkbox that makes the screenshot test take a shot of every G-Buffer and compare them all for a given scene. It would be a real space hog but super handy for testing some advanced rendering features in a lot of dimensions easily.
The part I’m hoping makes this approach long-lasting is the metadata I store for every image and the ability to store alternatives.
So I store the images like this:
CornellBox_Lit\Windows_D3D11_SM5\2806e638aac6982b11cbba723f004bb2.png
CornellBox_Lit\Windows_D3D11_SM5\2806e638aac6982b11cbba723f004bb2.json
Under the test folder, they’re put into a folder made up of PLATFORM_RHI_SHADERMODEL.
This broadly separates the images based on at least the most significant contributors to differences.
The file names themselves are based on a unique identifier for the hardware, so there is an assumption right now that we need stable results for a given piece of hardware - but if the need arises for multiple images for the same hardware, I would hash additional things into the unique ID for the shot.
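The layout described above might be produced by something like the following. All names here are mine, and `std::hash` merely stands in for whatever stable digest the real system uses - `std::hash` is only deterministic within one implementation, so a real pipeline would want a proper content hash like MD5:

```cpp
#include <functional>
#include <string>

// Builds TestName\PLATFORM_RHI_SHADERMODEL\<hardware-hash>.png.
// Hashing extra state (enabled features, etc.) into HardwareId would
// allow multiple ground-truth shots per device, as discussed above.
std::string MakeShotPath(const std::string& TestName,
                         const std::string& Platform,
                         const std::string& RHI,
                         const std::string& ShaderModel,
                         const std::string& HardwareId)
{
    const size_t UniqueId = std::hash<std::string>{}(HardwareId);
    return TestName + "\\" + Platform + "_" + RHI + "_" + ShaderModel +
           "\\" + std::to_string(UniqueId) + ".png";
}
```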
Due to the non-deterministic nature of the shots, one of the features I ended up adding that may or may not prove valuable is the concept of alternatives. In the event two shots are both right, the system permits additional shots to be added as ground truth, and at comparison time it chooses the shot whose metadata is the closest match. We’ll just need to see how that option evolves - it may end up being a quick way to deal with sudden changes that eventually need additional high-level options baked into the rough separation of shot groups.
The thing I’m hoping saves a lot of headaches is having a JSON file per shot containing the shot metadata. In addition to the per-test constraints, it has (and will gain) more information about the features and rendering options currently enabled, plus things like driver version, so that in the diffing tool we can highlight machine changes as a possible cause of differences.
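As an illustration only - I’m guessing at the schema, and every field name and value below is hypothetical apart from the hash taken from the example path above - such a per-shot metadata file might look like:

```json
{
  "testName": "CornellBox_Lit",
  "platform": "Windows",
  "rhi": "D3D11",
  "shaderModel": "SM5",
  "hardwareId": "2806e638aac6982b11cbba723f004bb2",
  "driverVersion": "23.21.13.8813",
  "enabledFeatures": ["Bloom", "AmbientOcclusion"],
  "tolerances": {
    "channel": 8,
    "brightness": 10,
    "maxGlobalError": 0.02,
    "maxBlockError": 0.10
  }
}
```

Storing the driver version alongside the tolerances is what lets the diffing tool point at “this machine changed” rather than “this render changed.”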
I looked at some other comparison approaches, starting with perceptual algorithms like Structural Similarity (SSIM) and perceptual hashing, and even added a prototype SSIM approach to UE4. The problem with these approaches is that they may hide the existence of real errors just because a human couldn’t see them in the examples.
Lame.
The problem is that your enum flags are not flags. You’ve probably got something like what we had in the Unreal Engine 4 codebase: someone innocently defined a flag of flags - in this example, PKG_InMemoryOnly. They probably defined this non-unique semantic flag as a time saver, to avoid bugs by giving a common combination a single definition.
Hopefully the Visual Studio team will improve the check at some point to allow for semantic flags that perfectly overlap some number of other flags.
To get around the problem it’s pretty simple: just extract all your semantic flags and make them #defines, thusly:
Resulting in the much friendlier,
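The pattern looks roughly like this (a reconstruction of what the post describes; the flag values are illustrative, not UE4’s actual EPackageFlags values):

```cpp
#include <cstdint>

enum EPackageFlags : uint32_t
{
    PKG_None         = 0x00000000,
    PKG_CompiledIn   = 0x00000010,
    PKG_PlayInEditor = 0x00000020,
    // Before: a "flag of flags" defined as an enum member. Because it is
    // the union of other members rather than a unique bit, Visual Studio's
    // flag visualization for the enum breaks.
    // PKG_InMemoryOnly = PKG_CompiledIn | PKG_PlayInEditor,
};

// After: the semantic combination becomes a #define. Every remaining enum
// member is a unique single bit, so the debugger view works again, and
// call sites using PKG_InMemoryOnly are unchanged.
#define PKG_InMemoryOnly (PKG_CompiledIn | PKG_PlayInEditor)
```

The same trick applies to any enum where one “flag” is really shorthand for a mask of others.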
Overlord Phil has ordered all of his minions (that means you and up to three of your friends) to bring him cows, lots and lots of cows!
Be the best minion you can be and get Phil the right cow he wants. He’s a very fickle overlord with a very short attention span, so don’t dally.
Paint Can: http://www.blendswap.com/blends/view/14365
Kenny Space UI Theme: http://opengameart.org/content/ui-pack-space-extension
Overlord Phil: http://opengameart.org/content/modular-reticulan-portraits
Lots of assets from the UE4 Content Examples project available on the UE4 Learn Tab
Developed using Unreal Engine 4.6, Audacity, Blender, BFXR
Tested on Windows and Mac, but the packaged release is for Windows only. Multiplayer requires multiple gamepads.
It sounds terribly silly, but give it a shot. They say constraints breed creativity. Prepare to be creative :)
http://nickdarnell.github.io/BrokenMagicJumperJSIL/Game/
It’s just as functional as the desktop version too!
I’ve also faced this problem when jamming, but I didn’t want to invest tons of my own time into developing a native cross-platform solution, compiling it on every platform, and then building installers (ick). So I thought - maybe it will work as a web tool?
So I scoured the net for a good JavaScript implementation of the hq4x algorithm that Morgan referenced, found one on GitHub, and then built a solution around the library. All in all, I’d say I put 8 hours into it, mostly finding the right JavaScript libraries and stitching everything together. The more I mess with modern web tech, the more impressed I am by the kinds of tools that can be built in the browser. It’s not great for a lot of things, but it definitely has advantages for making simple cross-platform tools that would typically be (unfriendly) command line tools.
If you have a chance, check out The Depixelizer!
The theme this year was the sound of a beating heart. So we created Undead Man Lover. You play a wizard trying to transform zombies back into living humans. But zombies love the sound of a beating heart, so they’ll attack anyone you transform. To combat their attempts to devour your new human friends, you have spells. But unfortunately casting a spell is a stressful affair that raises your own heartbeat, making the zombies think you are quite yummy. You’ll have to balance transforming zombies and not attracting too much attention before the time runs out.
I found it extremely handy to use the mad lib database we created for the last local Triangle game jam to come up with a game concept for the jam’s theme. When all you have is the sound of a beating heart, you find yourself creatively stuck in a very local minimum of themes and game mechanics (heartbeat rhythm game, suspense game with a heartbeat, shooting blood as a heart, blood cell racer…etc). But with the jam theme in mind, the series of “Adjective Noun Verb” phrases that the mad lib generator spits out gives you much more creative applications of the jam’s theme and gets you out of the creative rut.
With nearly 100 combinations of titles generated I ended up pitching a concept for #14 – Broken Magic Jumper. Although a lot of people initially voted that they would like to see the game made, I was the only one on my team by the time everyone had finished picking teams.
The original pitch was that you’re a wizard solving a jumping platformer with spells and there are broken magic areas in the world that change a spells effect, and so you have to solve the puzzle by using the right combination of spells.
The game I ended up making was a bit of a departure from the initial concept; it took until early Saturday afternoon to nail down the exact mechanics, and all the while I was just getting the basics of a jumping platformer working and correctly loading levels from Tiled.
The wizard in charge of protecting the Jump Crystal - the source of all jump powers in the universe - has dropped and shattered the crystal. He must collect and use the jump crystal shards to find and piece the crystal back together.
Each type of jump shard grants the possessor a special jump power - but you can only hold one at a time. Use your jumping and problem-solving skills to use the shards in the right order to solve each puzzle. I’ve uploaded the final version of the game to play. Both keyboard and Xbox controller will work. The links are below. Have fun!
Additional content was used under licenses from Oryx, Surt, Qubodup, Kevin MacLeod, and freesound.org.