[QUOTE=Alfonse Reinheart;42375]That is a mischaracterization of the situation of OpenGL and heuristics/hints.
The problem with hints is not that app developers don’t know what they want. It’s that hints are the wrong medium for them to adequately describe what they want. To a developer, usage hints are like trying to have a highly technical conversation in Latin. The language simply doesn’t have the vocabulary to describe a computer, let alone technical aspects of programming.
Oh sure, you can muddle through, most of the time. But the result is going to be extremely awkward and compromised. It is most certainly not going to be an efficient conversation.
The memory “vocabulary” that has proven most effective for achieving performance is the “vocabulary” of the actual hardware. The specific pools of memory that can be allocated and their general performance characteristics. The specific ways of allocating that memory for access and usage.
Is this a perfect solution for all possible hardware? No. But it’s a better solution for prior hardware, current hardware, and predictable-future hardware than usage hints. And that’s good enough.[/QUOTE]We don’t have their general performance characteristics, though. There are no scalar values except size and granularity, and no bandwidths to/from different targets. There’s also no information about sequential-access behavior - although that one is difficult to specify without lots of variables.
[QUOTE=Alfonse Reinheart;42375]And you still ignore the advantages of the low level approach. Being able to know exactly how much storage of different kinds is available allows developers to know what’s there and to adjust their applications accordingly. They can plan for different scenarios and come up with solutions that work best if contention for limited memory is an issue. Usage hints can’t do that; they’re the wrong vocabulary.[/QUOTE]The developer has access to all of this too. If you wanted to go to extreme detail, the hints could actually score each memory type for each hint request. But you already have access to everything in current Vulkan.
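To make the "score each memory type for each hint request" idea concrete, here is a minimal sketch in C. The flag values mirror Vulkan's VkMemoryPropertyFlagBits, but the hint names and the scoring weights are entirely invented for illustration - no such scoring scheme exists in the API.

```c
#include <stdint.h>

/* Property bits, mirroring VkMemoryPropertyFlagBits values. */
#define DEVICE_LOCAL_BIT  0x1  /* fastest for GPU access */
#define HOST_VISIBLE_BIT  0x2  /* CPU can map it */
#define HOST_COHERENT_BIT 0x4  /* no explicit flush needed */
#define HOST_CACHED_BIT   0x8  /* fast CPU reads */

/* Hypothetical usage hints. */
typedef enum { HINT_GPU_ONLY, HINT_CPU_UPLOAD, HINT_CPU_READBACK } UsageHint;

/* Score one memory type for one hint; higher is better, <0 means unusable.
   The weights are made up - a real driver would rank types from its own
   knowledge of the hardware. */
static int score_type(uint32_t flags, UsageHint hint) {
    switch (hint) {
    case HINT_GPU_ONLY:
        return (flags & DEVICE_LOCAL_BIT) ? 100 : 10;
    case HINT_CPU_UPLOAD:
        if (!(flags & HOST_VISIBLE_BIT)) return -1;  /* must be mappable */
        return 50 + ((flags & DEVICE_LOCAL_BIT) ? 40 : 0)
                  + ((flags & HOST_COHERENT_BIT) ? 10 : 0);
    case HINT_CPU_READBACK:
        if (!(flags & HOST_VISIBLE_BIT)) return -1;
        return 50 + ((flags & HOST_CACHED_BIT) ? 40 : 0);
    }
    return -1;
}

/* Pick the best-scoring type index out of `count` types. */
static int pick_type(const uint32_t *flags, int count, UsageHint hint) {
    int best = -1, best_score = -1;
    for (int i = 0; i < count; ++i) {
        int s = score_type(flags[i], hint);
        if (s > best_score) { best_score = s; best = i; }
    }
    return best;
}
```

The point of the sketch: the scoring lives entirely on the application's side, fed by exactly the property flags Vulkan already reports, which is why "you've already got access to everything" - a hint extension would only move this ranking into the driver.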
[QUOTE=Alfonse Reinheart;42375]At the end of the day, the purpose of such a hint would be to allow the implementation the liberty to decide where an allocation comes from. But Vulkan is, by design, an explicit API. If you allocate X bytes from memory pool Y, then X bytes are allocated from pool Y.
So what exactly would a hint accomplish? The memory still has to be allocated from pool Y, since that’s what the user said to do. So the only way a hint could change something is if implementation lies to the user. It would have to expose multiple hardware memory pools as a single memory pool, with the implementation selecting the actual hardware pool based on your usage hint.[/QUOTE]That wasn’t how I was thinking of doing it. I was thinking of letting the programmer request a hint, totally separate from the actual memory interaction. The memory interaction stays explicit; the programmer could simply use the memory type returned by the hint request, or ignore it. It would be a standalone extension that doesn’t change anything else.
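The flow being proposed can be sketched like this. Everything here is hypothetical - no hint-query function of this kind exists in Vulkan - and the two functions are mocked with fixed data purely to show that the advisory query and the explicit allocation are separate steps:

```c
#include <stdint.h>

/* Hypothetical hints for an imagined hint-query extension. */
typedef enum { HINT_UPLOAD_HEAVY, HINT_GPU_RESIDENT } AllocHint;

/* Advisory only: the driver recommends a memory type index for a hint.
   (Mocked with a fixed answer; a real driver would use its knowledge
   of the hardware.) */
static uint32_t get_memory_type_hint(AllocHint hint) {
    return (hint == HINT_GPU_RESIDENT) ? 0u : 2u;  /* mock recommendation */
}

/* The explicit allocation path, unchanged: X bytes from exactly the
   type index the caller names. (Mocked: returns the index it was given,
   standing in for "the driver never second-guesses the caller".) */
static uint32_t allocate_from_type(uint32_t type_index, uint64_t size) {
    (void)size;
    return type_index;  /* allocation really comes from the requested type */
}
```

Usage would be `allocate_from_type(get_memory_type_hint(HINT_GPU_RESIDENT), size)` if you want the recommendation, or `allocate_from_type(5, size)` if you don't - the hint never forces the implementation to lie about where memory comes from, because the allocation call is untouched.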
[QUOTE=Alfonse Reinheart;42375]I said no such thing. I said, “IHVs have not given us tons of improvements on graphics. Since the advent of programmable hardware, IHVs have given us more power; it’s the users who have given us “a ton of improvements” towards better graphics, using the tools IHVs have provided.”
My point is that graphics only gets better by programmers using the tools that GPUs provide. The tools by themselves are not “graphics features”, any more than a hammer is a house.
And the most important tool that IHVs have provided is generality: giving more control over the rendering process to the user. That’s not a “graphics feature”, and yet it is the thing most responsible for the improvements in visual fidelity.[/QUOTE]
I think this is probably just a semantic debate. I think they have given us tons of improvements, because I’m counting things like stencil, TMU and ROP improvements, tessellation, etc. (not saying I like them all) as graphics features. The final real-world improvements are usually a combined effort between the programmer, the IHVs, and the API.
And some (e.g. filtering improvements, Z compression, etc.) didn’t require changes to the APIs or the applications at all.
I’m generally in favor of increasing genericness, but I don’t mind going the other way when the benefits warrant it. (Plus, things like renderpasses are the opposite of generality.)
For the shader code, I don’t see why not. You would need to attach the images differently from regular textures, but you certainly don’t need to know the whole renderpass to do so.
And there are many hardware optimizations you could make with that knowledge - you need less image info, and you can potentially access the data in bulk without all the heavy TMU work.
[QUOTE=Alfonse Reinheart;42375]Who are you to decide what is “extra specific” stuff and what isn’t? You’re talking about an entire class of hardware. My analogy holds because it’s the same thing. AMD and Intel hardware are different; they have “extra specific” stuff that NVIDIA does not. Why not excise all non-NVIDIA hardware from OpenGL’s API and make those the AMD/Intel-only parts an extension?
Every abstraction over a range of hardware is going to have “extra specific” stuff in it, some concession to one piece of hardware or another. Render passes barely register on my radar; Vulkan has far more annoying things than that.
Like the primitive topology being a fixed and immutable part of the pipeline. Why? Because some pieces of hardware out there need that. D3D 12 puts the basic primitive type (point, line, triangle, patch) in the pipeline, but the specific primitive used (triangle strip vs. list, etc) is command buffer state. By contrast, Mantle, Metal, and Vulkan put the entire primitive type in the pipeline. Is that due to tile-based renderer needs? Well, Mantle putting it there seems to suggest that the need is broader than TBRs.
Should we consign any hardware that needed the full primitive type at pipeline building time to an “extension”? No. As much as I personally might want to, the gain from doing so is not significant enough to offset the potential costs. It creates a huge distinction between writing to different kinds of hardware.
Now, if all hardware could do it, that’d be great. But if all hardware couldn’t, I’d rather Vulkan’s core API support the lowest-common denominator, with an extension or optional feature allowing the more specific case.[/QUOTE]
It’s a good point I guess. I do acknowledge that I have a sociological bias against mobile (computers used to be tools owned by their owners, but almost all mobile devices are anticompetitive, exploitative walled gardens, and this means people can’t learn computing by toying around like the previous generation did. They’re filled with software that just wants to harvest their private data for profit. I do find it difficult to care).
So I do get that it comes down to what you decide to consider your lowest common denominator. But the lower you go, the more compromises you need to make. Do potential new shader types that are totally unsuitable for TBRs get overlooked? Or scheduling hints that TBRs can’t follow? I don’t know what’s around the corner, but I know OpenGL ditched many alternate rendering modes when adding new features. APIs have had a higher lowest common denominator for decades, and lowering it - when we could’ve just added an extension instead, without much extra cost - seems bad.
But sure, it’s a line drawn in the sand, and they chose to draw it somewhere I don’t like.
Getting back to specifics, renderpasses to me just stick out as something totally counter to the goals Vulkan otherwise aims for - explicitness and close-to-the-metal-ness. There are also lots of annoyances they inflict that I haven’t mentioned yet, for example:
[ol]
[li]Breaking modularity. You need them when creating framebuffers and pipelines, even though conceptually there’s no need. Wouldn’t most people want to create the basic rendering framework (including framebuffers) before building scene-specific objects? Yes, you can just create dummy renderpasses (compatible ones), but it’s a pain. The same goes for pipelines - you can use a pipeline in multiple renderpasses, yet you need to create a renderpass first: an N-to-1 relationship where you need one of the N before you can build the 1. Again, dummy renderpasses solve it.[/li][li]Inflexible subpass list. You can’t use information generated during previous subpasses to choose which future subpasses need to be done (e.g. running a shader only if a certain surface type isn’t totally occluded) - you can just ignore some, but that has costs.[/li][li]Converting software. Almost everything else should map fairly easily from OpenGL to Vulkan, but if you haven’t built your rendering system around a sequential set of passes, there’s lots of refactoring required; for example, you need to supply textures at different times, which can mean moving the code that finds them. The rest of the API doesn’t really need much refactoring when converting, unless you’re adding parallelization.[/li][/ol]
[QUOTE=Alfonse Reinheart;42375]Like other low-level APIs, Vulkan was not made for those people. Small-time game developers should be focused less on the minutiae of their rendering and more on what players actually care about: gameplay. University graphics courses should be focused less on the low-level details that Vulkan uses and more on high-level concepts of graphics. And so forth.[/QUOTE]I don’t agree, but it is subjective, I guess. Previously, small-time developers became big ones by learning this stuff while making games - Carmack etc. IMHO we want game developers learning this stuff so that they can have big ideas that span both engine and gameplay.
[QUOTE=Alfonse Reinheart;42375]Vulkan exists to serve the needs of big studios and such. Others might be able to benefit, but Vulkan’s primary audience are big developers.[/QUOTE]Do we need another new cross-platform API then for the others? OpenGL is decrepit.
But many don’t care about tile-based renderers. Is that wrong, if they only want to work on non-mobile stuff?
[QUOTE=Alfonse Reinheart;42375]You did not present any evidence for any of the claims you made. Instead, you started describing what is effectively a conspiracy theory. That “TBR manufacturers have money and influence” and thus they have forced non-TBRs to break their API just for the needs of a powerful minority. You offer no evidence that the non-TBRs were forced into anything.[/QUOTE]I should’ve used a question mark; I didn’t mean to state it as fact. And “money and influence” doesn’t necessarily mean strong-arming anyone. Money just gives them a say - membership in Khronos.
It could possibly have been something like: “Hey, do you think you could just squeeze in a little something so that we can get an overview of the whole rendering process? It’s really important for us - it’ll really speed things up.” “Well, OK then - it doesn’t seem to cause us any slowdown, and if you really need it…” Done. No hostility, no bad behavior or shenanigans, but still an outcome based on influence.
The gains to TBRs are big. The losses to non-TBRs are small. And the TBRs aren’t just Imagination and Qualcomm but also (indirectly) Apple and Samsung and Google and many other mobile-focused Khronos members. They’re a huge part of the membership so they’re not even really the minority - it’s their API too.
[QUOTE=Alfonse Reinheart;42375]You talk about this as though Vulkan hasn’t been out for a year or something. If there were any foundation to what you were saying, then we would already have seen some of the effects of this, right? If render passes were really that terrible of a thing, if their complexity was so great that it genuinely inhibited smaller/individual developers from adopting Vulkan, then wouldn’t we already have seen lower use of Vulkan from these smaller users?[/QUOTE]Lower than what though? We don’t have a control group.
If you divided up all the learning required for Vulkan, some percentage would be on renderpasses. When migrating from D3D12 it will be a large percentage. When going from OpenGL it will be, my guess, approx 10%. When learning from scratch, again IMHO, approx 4%.
The more there is to learn, the harder it is to learn, the more its reputation for difficulty grows, and the fewer people try it. Add in a few positive-feedback loops too (support, word of mouth, etc.). Measuring the actual effect is incredibly hard and would require surveying.
I’m not sure what your data was showing. D3D12 vs. Vulkan doesn’t provide much info. What is the null hypothesis (D3D 1-11 vs. OpenGL?)? More “indie” use means more questions; a simpler API and better documentation mean fewer questions; website AI biases - there’s so much bias (and noise) in both directions that I don’t think you can get any info that way.