So in the interest of finally pushing past my aging OpenGL knowledge, I sat down and started to learn Vulkan. One day later, I have an instance, I have devices, I have a swapchain, and I'm starting to try to attach images to it. This is slow progress, but I'm trying to ensure I understand what's going on, build a good interface for these things, and I'm also a horrifically distracted individual... (this bodes well for my future working career; I hope I can improve it...) So I'm still a ways off from, like, actually drawing a pixel, but I think I know what's going on, at least.
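For my own notes, the step I'm on — getting at the images the swapchain owns — looks roughly like this. A minimal sketch, assuming `device` and `swapchain` were already created; error handling elided:

```c
#include <vulkan/vulkan.h>
#include <stdlib.h>

// Sketch: retrieve the swapchain's images with the usual Vulkan
// two-call pattern. The implementation may have created more images
// than the minImageCount we asked for, so we have to query the count.
VkImage *get_swapchain_images(VkDevice device, VkSwapchainKHR swapchain,
                              uint32_t *count)
{
    // First call: query how many images there are.
    vkGetSwapchainImagesKHR(device, swapchain, count, NULL);

    VkImage *images = malloc(*count * sizeof(VkImage));

    // Second call: actually fill the array.
    vkGetSwapchainImagesKHR(device, swapchain, count, images);
    return images;
}
```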
I'm also doing this completely pure for the learning experience, since I want to understand how this works before any helper libraries enter the picture. The only third-party thing I'm using is the Volk loader, because I looked at LunarG's and almost gave up right there, since I wasn't sure how to use it cleanly. (In addition, most of the projects were pulling in libraries like SDL, which as mentioned I'd rather avoid.) The ability to just have a project like GZDoom that you can cmake, load in VS or whatever, and hit F5 and it just. fuckin. works. is so nice that I'd rather replicate that. This pure experience does mean I have to do raw Win32 programming and I'm going to have to write a heap allocator, even though I imagine in any practical project I'd just pull in something like VkMemAlloc or whatever.
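The Volk side of this is pleasantly small, for the record. A rough sketch of the bootstrap order as I understand it from Volk's README (error handling and instance extensions elided):

```c
#include <volk.h>

// Sketch: bootstrapping Vulkan through Volk instead of linking the
// LunarG loader directly. Returns 1 on success, 0 on failure.
int init_vulkan(VkInstance *out_instance)
{
    // Loads the platform's Vulkan loader library (vulkan-1.dll on
    // Windows) and fetches vkGetInstanceProcAddr from it.
    if (volkInitialize() != VK_SUCCESS)
        return 0;

    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo ci = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };

    if (vkCreateInstance(&ci, NULL, out_instance) != VK_SUCCESS)
        return 0;

    // Fetches all the instance-level entry points so the rest of the
    // program can call vkXxx functions directly.
    volkLoadInstance(*out_instance);
    return 1;
}
```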
update: I've been reading Dustin Land's I Am Graphics And So Can You blog posts about implementing Vulkan in Doom 3 as a sort of "map" for working this out, but I think it's damaged my understanding of pipeline barriers. This post in particular.
It goes into detail on the potential hazards that pipeline barriers are supposed to handle, but into no detail whatsoever about why such a heavy-handed approach is being taken, and for the life of me I simply cannot understand the hazard here. The part about pipeline barriers being used for image layout transitions makes perfect sense, but the more general barrier does not. Khronos's own samples, as well as this blog post, as well as peeking at some frame captures in RenderDoc (which appears to not like vkQuake very much) have shown some practical examples, but I still cannot for the life of me explain the barrier being used in vkDoom3.
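For reference, the layout-transition use of a barrier — the part that does make sense to me — looks roughly like this. A sketch, not vkDoom3's actual code; `cmd` and `image` are placeholder names:

```c
#include <vulkan/vulkan.h>

// Sketch: a pipeline barrier used purely for an image layout
// transition — from undefined contents to "ready to be written by a
// transfer operation" (e.g. before uploading texture data).
void transition_to_transfer_dst(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = 0,                             // nothing before us to flush
        .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,  // transfer writes come after
        .oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,         // don't care about old contents
        .newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED, // no ownership transfer
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };

    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,  // no earlier work to wait on
        VK_PIPELINE_STAGE_TRANSFER_BIT,     // block transfer stage until done
        0,
        0, NULL,        // no global memory barriers
        0, NULL,        // no buffer barriers
        1, &barrier);   // just this image barrier
}
```

It's the more general, non-transition barriers in vkDoom3 — where old and new layouts are the same and it's apparently there for synchronization alone — that I can't justify.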
update 2: yeah uh that was fucking bullshit and didn't even make it into the first public release of vkDoom3. I suppose I need to look around for educational materials that actually understand the API they're trying to describe.