SLS2026 - Reflections on my talk
Links
You can find my talk at the link below:
Alternatively, I also have a page on it on this site here
The good
Personally, I think I delivered my talk well. I didn’t go too fast or too slow, and I had a bit of time at the end for questions. It is fair to say that mine was probably one of the more technical talks of the symposium: almost all of it involved walking through code. This was intentional. I like to make sure that people can see and understand what I am doing, and code is often the best way to demonstrate that.
The not-so-good
I think the main issue with my talk stemmed from the fact that I only had 30 minutes (25 + 5 for questions) to present it. I had time to talk about the implementation, but very little time to cover the problem I was solving. I was well aware of this, and I tried to work around it by doing two things:
- Pointing people to my GDC talk, which covers the problem I am trying to solve in great detail.
- Littering my talk with links to my documentation and other resources rather than talking about those topics.
My intent here was that this would work fine for people watching on YouTube, who could pause the video and follow the links as they came up. But of course, it did not work well for a live audience, who can’t just follow the links and had likely never seen my GDC talk. Because of this, I don’t think I adequately conveyed when you would use this type of testing framework. This is something I intend to write a blog post about.
Answering Alan’s question
I don’t think I answered Alan’s question well. His question can be found at this timestamp. Alan’s question was as follows:
Have you tried to address the problem at all of determinism? i.e. You run the tests on different cards, you get different values
I’ll admit I got slightly confused by this question, and my answer was basically:
No, not really. There are no determinism problems because all of the tests are run on the GPU… It is as deterministic as the code that you are testing.
So yeah, not a great answer by any stretch. I know what I was trying to say with this, but I did not communicate it well at all. So, let’s try and rectify that!
If I was to answer this question now, my answer would be:
Different GPUs may produce slightly different results if you are testing the exact results of floating-point operations. If you are getting different results for integer-based operations on different cards, then I would worry about the quality of your GPU/driver! But anyway, because all of the assertions happen on the same GPU, determinism issues between GPUs are not a problem.
The main problem that we are solving with this framework is removing the issues of comparing screenshot values taken from one GPU with those from another. Tests that you write with this framework will be as deterministic as the code that you are testing. By that I mean: given a certain GPU, the same test will return the same result given the same inputs to the shader, in exactly the same way a unit testing framework in C++ would.
Now back to floating point. We all know that if you are testing an operation that does math with floating-point numbers, you probably should not be testing for the exact value of the result. Instead, we typically assert that the result is what we expect +/- some epsilon, and that epsilon might need to be tuned based on the range of GPUs that you are testing with. With that being said, I would like to think that my above statement holds: given a certain GPU, the same test should return the same result given the same inputs to the shader. This should apply to floating-point operations as well as anything else… if this doesn’t hold true then we have problems!
I think this is a much better answer. This question has made me realize that I should give much more thought to how I would answer some common questions that might be asked. Alan’s question was something I could easily have foreseen; it is a very obvious question to ask given the problem domain. Given this, I think a little better preparation for the Q&A would be worthwhile for future conferences :)
References to my talk throughout the symposium
There were some nods to my talk throughout the conference which I found quite fun.
I mentioned in my talk that I had a library that was somewhat analogous to the C++ standard library, but I didn’t want to be so bold as to call it the standard library (timestamp). In Abstraction done right, first do no harm, Francisco referenced this part of my talk when he said that they “weren’t as humble as Keith” (timestamp). I would just like to make it clear that I wasn’t referencing Francisco’s talk when I said this!
Lee Mighdoll made a reference to my talk when he was talking about the unit testing capabilities of WESL (timestamp). He even went as far as to ask, during his talk, whether what they were describing was the same as what I covered in mine.
I somewhat enjoy putting the HLSL logo on any slide deck that I can…
Note
All of the images above are clickable and will bring you to the part of the talks shown in the screenshots :)
So, I found it amusing when Chris mentioned that he was amazed that his dog kept showing up in people’s slides (timestamp). I shall be taking this as a reference to my talk :)


