Sunday, May 11, 2014

Things that drive me nuts about OpenGL

Here's a brain dump of the things that sometimes drive me crazy about OpenGL. (Note these are strictly my own opinions, not those of Valve or my coworkers. I'm also in a ranty mood today after grappling with OpenGL for several years now.) My major motivation for posting this: the GL API needs a reboot, because IMO Mantle/D3D12 are most likely going to eat it for lunch soon, so we should start talking and thinking about this stuff now.

Some are minor issues, and some are specific to tracing the API, but all these issues add up to API "friction" that sometimes makes it difficult to encourage other devs to get into the GL API or ecosystem:

1. 20 years of legacy, needs a reboot and major simplification pass
Circle the wagons around a core-style API only with no compatibility mode cruft.
Simplify, KISS principle, "if in doubt throw it out"!
Mantle and D3D12 are going to thoroughly leave GL behind (again!) on the performance and developer "mindshare" axes very soon.
Global context state and the binding pattern suck. The DSA (direct state access)-style API should be standard/required.

Some bitter medicine/tough love: Most devs will take the easy path and port their PS4/Xbone rendering code to D3D12/Mantle. They will not bother to re-write their entire rendering pipeline to use super-aggressive batching, etc. like the GL community has been recently recommending to get perf up. GL will be treated like a second-class citizen and porting target until the API is modernized and greatly simplified.

2. GL context creation hell:
Creating modern GL contexts can be hair-raisingly and mind-numbingly tricky, and incredibly error prone ("trampoline" contexts, anyone?). The process is so error prone, and so platform- (and occasionally even driver-) specific, that I would almost always recommend never going directly to the glX, wgl, etc. APIs; instead, always use a library such as SDL or GLFW (and something like GLEW to retrieve the function/extension pointers).

The de facto requirement to pick from a small set of large 3rd party libraries just to get a real context rolling sucks. The API should be simplified and standardized so that a 3rd party lib isn't a requirement.

3. The thread's current context may be an implied "this" pointer:
Function pointers returned by GetProcAddress() cannot (or should not - depending on the platform!) be used globally because they may be strongly tied to the context ("context-dependent" vs. "context-independent" in GL-speak). In other words, calling GetProcAddress() on one context and using the returned func pointer on another is either bad form or just crashes.
So is GL a C API or not?
Can we just simplify and standardize all this cruft?
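One way to stay on the safe side of this ambiguity is to resolve entry points once per context and keep them in a per-context dispatch table. Here is a minimal sketch in C; all the names (GLFunc, GetProcFn, GLDispatch, load_dispatch) are invented for illustration, not real GL/WGL API, and a real app would get the loader function from SDL_GL_GetProcAddress or similar:

```c
#include <assert.h>
#include <string.h>

/* Generic function-pointer type, as returned by a wglGetProcAddress-style
 * loader. These typedef names are illustrative, not a real loader API. */
typedef void (*GLFunc)(void);
typedef GLFunc (*GetProcFn)(const char *name);

/* Per-context dispatch table: since a returned pointer may only be valid
 * for the context that resolved it, keep one table per context and fill
 * it right after that context is created/made current. */
typedef struct GLDispatch {
    void (*BindVertexArray)(unsigned array);
    /* ...one slot per entry point the app actually uses... */
} GLDispatch;

static void load_dispatch(GLDispatch *d, GetProcFn getproc)
{
    d->BindVertexArray = (void (*)(unsigned))getproc("glBindVertexArray");
}
```

Loaders like GLEW essentially maintain this table for you, which is one more reason to prefer them over raw wgl/glX calls.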

4. glGet() API deficiencies:
This is probably too tracing-specific, but it impacts regular devs indirectly: if the tools suck or are non-existent because the API is hard to trace, your life as a developer will be harder.
The glGet() series of APIs (glGetIntegerv, glGetTexImage, etc.) don't have a "max_size" parameter, so it's possible for the driver to overrun the passed-in buffer, depending on the passed-in parameters or even the global context state. These functions should accept a "max_size" parameter and fail if the supplied max_size is too small, instead of overwriting memory.
Computing the exact size of texture buffers the driver will read or write depends on various global context state - bad bad bad.
There are hundreds of possible glGet() pname enums, some accepted by only some drivers. If you're writing a tracer or some sort of debug helper, there is no official way to determine how many values the driver will write for a given pname enum. There are no official tables to determine whether the indexed variants of glGet() can be used with a given enum, or the optimal (lossless) type to use for a given enum.
Also, the behavior of indexed vs. non-indexed gets & sets is not always clear to new users of the API.
Alternatively, perhaps just add some glGet() metadata APIs instead of publishing tables.
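To make the complaint concrete, here is a sketch in C of what such a metadata table plus a bounds-checked getter could look like. Everything here is hypothetical - GL offers no GlGetInfo table or checked_get_integerv, and the enum values are arbitrary placeholders, not real GL enums:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical glGet() metadata: for each pname, how many values the
 * driver writes, and whether indexed gets are legal for it. */
typedef struct GlGetInfo {
    unsigned pname;
    int      value_count;   /* values the driver will write */
    int      indexed_ok;    /* usable with glGetIntegeri_v-style gets? */
} GlGetInfo;

/* Placeholder enum values, not real GL constants. */
enum { PNAME_VIEWPORT = 1, PNAME_MAX_VIEWPORT_DIMS = 2 };

static const GlGetInfo k_get_info[] = {
    { PNAME_VIEWPORT,          4, 0 },  /* x, y, width, height */
    { PNAME_MAX_VIEWPORT_DIMS, 2, 0 },  /* width, height       */
};

static const GlGetInfo *find_get_info(unsigned pname)
{
    for (size_t i = 0; i < sizeof(k_get_info) / sizeof(k_get_info[0]); i++)
        if (k_get_info[i].pname == pname)
            return &k_get_info[i];
    return NULL;
}

/* A glGetIntegerv-style call with the max_size parameter the text asks
 * for: it fails cleanly instead of overrunning the caller's buffer.
 * The "driver write" is a stub that just stores zeros. */
static int checked_get_integerv(unsigned pname, int *out, int max_count)
{
    const GlGetInfo *info = find_get_info(pname);
    if (info == NULL || info->value_count > max_count)
        return 0;                        /* error: buffer left untouched */
    for (int i = 0; i < info->value_count; i++)
        out[i] = 0;                      /* stub driver write */
    return info->value_count;            /* success: values written */
}
```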

5. glGetError()
There is no glSetLastError() API like Win32, making tracing needlessly complex.
Some apps never call it, some call it once per frame, some only call it while creating resources. Some call it thousands of times at init, and never again. I've seen major shipped GL apps with per-frame GL errors. (Is this normal? Does the developer even know?)

6. Can't query key things such as texture targets
(I know some of this is being worked on - thanks Cass!) This makes tracing/snapshotting more complex due to shadowing.
Shadowing interacts deeply with glGetError(): we can't update our shadow until we know the call succeeded, which requires a call to glGetError(), which absorbs the context's current GL error, requiring even more fancy footwork to avoid diverging the traced app's view of GL errors.

About the recent talk of getting rid of all glGet()'s: IMO either all state should be queryable (which is almost the case today), or the API should be written with maximum perf and scalability in mind like D3D12/Mantle. The value added by the API is clearly understood in either of these extremes.
Getting rid of glGet()'s will make writing tracers & context snapshotters even trickier.
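To illustrate that footwork: a tracer that shadows state has to consume the context's error itself, then replay it to the app later. Below is a toy C sketch with a stub driver; all the stub_*/traced_* names are invented, the "driver" is a single sticky error code, and real GL error semantics (per-flag, per-context) are subtler than this:

```c
#include <assert.h>

/* Toy model of a GL driver's error state: one sticky error code,
 * returned and cleared by the next glGetError()-equivalent. */
static unsigned g_driver_error;
static unsigned stub_glGetError(void)
{
    unsigned e = g_driver_error;
    g_driver_error = 0;
    return e;
}
/* Stub "bind" call: fails (sets 0x0502, i.e. INVALID_OPERATION) when ok == 0. */
static void stub_glBindSomething(int handle, int ok)
{
    (void)handle;
    if (!ok) g_driver_error = 0x0502;
}

/* Tracer side: a shadow of the binding, plus a shadow of the error the
 * app should still observe, since the tracer's own error queries
 * consume the real one. */
static int      g_shadow_binding = -1;
static unsigned g_app_visible_error;

static void traced_bind(int handle, int ok)
{
    stub_glBindSomething(handle, ok);
    unsigned err = stub_glGetError();   /* absorbs the context's error */
    if (err == 0)
        g_shadow_binding = handle;      /* update shadow only on success */
    else if (g_app_visible_error == 0)
        g_app_visible_error = err;      /* save it to replay to the app */
}

static unsigned traced_glGetError(void) /* what the traced app calls */
{
    unsigned e = g_app_visible_error ? g_app_visible_error : stub_glGetError();
    g_app_visible_error = 0;
    return e;
}
```

A glSetLastError()-style API would make most of this machinery unnecessary.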

7. DSA (Direct State Access) API variants are still not standard and used/supported everywhere
DSA can make a huge difference to call overhead in some apps (such as Source1's GL backend). Just get rid of the reliance on global state, please, and make DSA standard once and for all.
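The overhead is easy to see in miniature. Here is a toy C model contrasting the two styles; the stub_gl* functions are stand-ins for glBindTexture/glTexParameteri/glTextureParameteri (the real calls go to the driver, of course), invented so the call shapes are visible:

```c
#include <assert.h>

#define MAX_TEX 8
static int g_bound_texture;        /* the global "selector" state */
static int g_min_filter[MAX_TEX];  /* stub per-texture parameter store */

/* Classic, bind-to-edit style stubs. */
static void stub_glBindTexture(int tex)     { g_bound_texture = tex; }
static void stub_glTexParameteri(int value) { g_min_filter[g_bound_texture] = value; }

/* DSA-style stub: name the object directly, no selector involved. */
static void stub_glTextureParameteri(int tex, int value) { g_min_filter[tex] = value; }

/* Classic path: to edit a texture without disturbing whatever the app
 * currently has bound, you must save and restore the selector -
 * two extra calls of pure overhead, per edit. */
static void set_filter_classic(int tex, int value)
{
    int prev = g_bound_texture;
    stub_glBindTexture(tex);
    stub_glTexParameteri(value);
    stub_glBindTexture(prev);
}
```

The DSA variant is one call and touches no global state, which is exactly why it matters for call overhead and for middleware that can't know what the app has bound.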

8. Official spec is still not complete in 2014:
The XML spec still lacks strongly typed param information everywhere. For example:

    <proto>void <name>glBindVertexArray</name></proto>
    <param><ptype>GLuint</ptype> <name>array</name></param>
    <glx type="render" opcode="350"/>

apitrace's spec is still the only public, reliable source of this information that I know of:

  GlFunction(Void, "glBindVertexArray", [(GLarray, "array")]),

Notice how it defines the type as "GLarray", while the official spec just has the nondescript "GLuint" type.

Add glGet() info to the official spec: Mentioned above. How many values does each pname enum return? What are the optimal types to use to losslessly retrieve the driver's shadow of this state? Is the pname OK to use with the indexed variants?

9. GLSL version of the week hell:
For early versions, the GLSL version may not sync up with the GL version it was first defined in, making things even more needlessly confusing. And this is before you add in things like GLSL extensions (*not* GL extensions). Can be overwhelming to beginners.

10. No equivalent of standard/official D3DX lib+tools for GL:
Texture/pixel format conversion helpers that don't rely on using the driver or a GL context
KTX format interchange hell: The few tools that read/write the KTX format (GL's equivalent of DDS) can't always read/write each other's files.
Devs just need the equivalent of Direct3D's DXTEX tool, with source.
The KTX examples just show how to load a KTX file into a GL texture. We need tools to convert KTX files to/from other standard formats, such as DDS, PNG, etc.
A GLSL compiler should be part of this lib (just like you can compile HLSL shaders with D3DX).

11. GL extensions are written as diffs vs the official spec
So if you're not an OpenGL Specification Expert, it can be extremely difficult to understand some/many extensions.

Related: The official spec is written for too many audiences. Most consumers of the spec will not be experts in parsing it. The spec should be divided into a developer-friendly spec and a deeper spec for driver writers. Extensions should not be pure deltas vs. the spec - who can really understand that?

12. Documentation hell
We've got 20 years of GL API cruft out there that adds noise to Google searching for GL API information, and beginners can get easily tripped up by bad/legacy documentation/examples.

13. MakeCurrent() hell
Can be extremely expensive, has hidden extra variable costs with some extensions (I'm looking at you, NV bindless texturing!), can crash drivers (or even the GPU!) if called within a glBegin/glEnd bracket, etc.
The behavior and performance of this call needs to be better specified and communicated to devs.

14. Drivers should not crash the GPU or CPU, or lock up when called in undefined ways via the API
Should be obvious by now. Please hire real testers and bang on your drivers!
Better yet: Structure the API to minimize the # of undefined or unsafe patterns that are even possible to express via the API.

15. Object deletion with multiple contexts, cross-context refcounting rules, "zombie" objects:
Good luck if the object being deleted is currently bound on another context.
Trying to call glGet()'s on a deleted object (that is still partially "live" because it's bound or attached somewhere) - behavior can differ between drivers.
All of this is needless overhead/complexity IMO.
Makes 100% reliable snapshotting and restoring GL context state very, very difficult.
I see world-class developers screw this up without knowing it, which is a clear sign that the API and/or tool ecosystem is broken.

16. Shader compiling/program linking hell
Major performance implications to shader compiling/linking.
Tokenized shader programs work. Direct3D is the existence proof that this approach works. The overall amount of pain GLSL has caused developers porting from D3D and end users (due to slow load times) is incredible, yet GL still only accepts textual GLSL shaders.
Performance drastically varies between drivers. Shader compiling can be effectively a no-op on some drivers, but extremely expensive on others.
Program linking can take *huge* amounts of time.
Some drivers cache linked programs, some don't.
Program linking time can be unpredictable: fast if the program is cached, but there's no way to query if the program is already cached or not. Also no way to query if the driver even supports caching.
Some drivers support threaded compilation, some don't. No way to query if the driver supports threaded compilation.
Some drivers just deadlock or have race conditions when you try to exploit threaded compilation.
Just a bad API, making it hard to trace and snapshot: Shaders can be detached after linking. Lots of linked program state is just not queryable at all, requiring link time shadowing by tracers.
Just copy & paste what D3D is doing (again, it works and devs understand it).

17. Difficult API to trace, replay, and snapshot/restore
Hurts tool ecosystem, ultimately impacts all users of API.
API should either be written to be easily traced/replayed/snapshotted, or incredibly performant/scalable like Mantle/D3D12. Right now GL has none of these properties, putting it in a bad spot from a value proposition perspective.
API authors should focus more on VALUE ADDED and less on how things should work, or how we are different from D3D because we're smarter.

18. Endless maze of GL functions (thousands of them!)
Hey - do we really need dozens of glVertexAttrib variants? Who really even uses this API?
API needs a reboot/simplification pass. Boost the "signal to noise" ratio, please.

19. Legacy complexities around v3.x API transition:
"Forward compatible", "compatibility" vs. "core" profiles etc. etc. etc.
Devs should not have to master this stuff to just use the API to render shaded triangles.
"Core" should not even be in the lexicon.

20. Reliably locking a buffer with DISCARD-semantics on all drivers without stalling the pipeline:
Do you use a map flag? BufferData() with NULL? Both, either, etc.?
What lock flag or flags do you use? Or does the driver just completely ignore the flag?
Trivial in D3D, difficult to do reliably in GL without being an expert or having direct access to driver developers.
This stuff is incredibly important!
Special note to driver developers: What separates the REAL driver devs from wannabees is how well you implement and test stuff like this. Pipeline stalling is not an option in 2014!
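For reference, these are the two idioms people reach for, sketched in C against stub entry points so the call shapes are the focus. The stub_gl* functions and enum values are stand-ins for glBufferData/glMapBufferRange and their flags; whether a given driver actually orphans instead of stalling is exactly the problem described above - this only shows the idioms:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for GL buffer calls/flags; the stub "driver"
 * just records what was requested. */
enum { STREAM_DRAW = 1,
       MAP_WRITE = 0x1, MAP_INVALIDATE_BUFFER = 0x2, MAP_UNSYNCHRONIZED = 0x4 };

static int      g_orphan_requested;  /* did we ask for fresh storage? */
static unsigned g_last_map_flags;
static char     g_storage[4096];

static void stub_glBufferData(long size, const void *data, int usage)
{
    (void)size; (void)usage;
    if (data == NULL)
        g_orphan_requested = 1;      /* NULL data: "give me fresh storage" */
}

static void *stub_glMapBufferRange(long offset, long length, unsigned flags)
{
    (void)offset;
    g_last_map_flags = flags;
    return (length <= (long)sizeof(g_storage)) ? g_storage : NULL;
}

/* Idiom 1: orphan the buffer with BufferData(..., NULL, ...), then map. */
static void *lock_discard_via_orphan(long size)
{
    stub_glBufferData(size, NULL, STREAM_DRAW);
    return stub_glMapBufferRange(0, size, MAP_WRITE);
}

/* Idiom 2 (GL 3.0+ style): request discard directly via map flags. */
static void *lock_discard_via_flags(long size)
{
    return stub_glMapBufferRange(0, size, MAP_WRITE | MAP_INVALIDATE_BUFFER);
}
```

Either shape *should* avoid a stall; the point above is that which one actually does is driver-specific and unqueryable.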

21. BufferSubData() stalls when called with "too much" data on threaded drivers
No way to query what "too much" data is. Is it 4k? 8k? 256k?

22. Pipeline stalling
No official (or any) way to get a callback or debug message when the driver decides to throw up its hands and insert a giant pipeline stall into your rendering thread.
This can be the #1 source of rendering bottlenecks, yet we still have almost zero tools (or APIs to help us build those tools) to track them down.

23. Threaded drivers hell
Some manufacturers decide to forcibly auto-enable their buggy multithreaded drivers months after titles have been shipped & thoroughly tested by the developer. (And to make matters worse, they do this without informing the developer of the "app profile" change or additions.)
Some multithreaded drivers have buggy glGet()'s when threading is enabled - makes snapshotting a nightmare.
No official way to query or control whether or not the driver will use multithreading.
No way to specify to the driver that a tracer is active, which may issue a lot of glGet()'s the app would not normally make.
Boneheaded threaded drivers slow to an absolute crawl and stay there when an app or tracer constantly issues glGet()'s (just use a heuristic and automatically turn threading off!).

24. Timestamp queries can stall the pipeline on some drivers
Makes them useless for reliable, cross-platform GPU profiling. The GL spec should strongly define when the driver is allowed to stall on these queries, and unnecessary stalling should be treated as a driver bug (sometimes lazy/incompetent driver developers just don't understand how key these little APIs can be).
For reference, NVidia does this stuff correctly. If you are a driver writer working on pipeline query code, please measure your implementation vs. NVidia's driver before bothering to release it.

25. GL is really X different API's (one per driver, sometimes per platform!) masquerading as a single API.
You can't/shouldn't ship a GL product until after you've thoroughly tested for correctness and performance on all drivers (in both threaded and non-threaded modes). You will be surprised at the driver differences. This came as a big shock to me after working for so long with D3D.
This indicates to me that Khronos needs to be more proactive at testing and validating the drivers. GL needs the equivalent of the WHQL process.

26. Extension hell
One of the touted advantages of GL is its support for extensions. I argue that extensions actually harm the API overall, not help it.

I've been through a large number of major and minor GL callstreams (intricately!) over the previous ~1.5 years. (Before that I was the dev actually making togl work and be shippable on all the drivers; I can't even begin to communicate how difficult and stressful that process was 2+ years ago.) Excluding the driver devs, I've probably worked with more real GL callstreams than most GL devs out there. Sadly, in many cases, some to many of the most advanced "modern" extensions barely work yet (and sometimes vendors will even admit this publicly). Or, if you try to use a cool-sounding extension, you quickly discover that you're pushing a little-used (and little-tested) path down the driver, and the thing is useless for all practical purposes.

From studying and working with the callstreams, it's apparent that devs do a massive MIN() operation across the functionality implemented on the available/targeted drivers. This typically means core profile v3.x, maybe also a 4.x backend with very simple/safe operations. (Or it's a small title that just uses compatibility GL like it was still 1998 or 2003 - because that's all they need.) They don't bother with most extensions (especially the "modern" ones) because they either don't work reliably (because the driver paths that implement them are not tested on real apps at all - the classic chicken-and-egg problem), or they are only supported (sometimes barely) by one driver, or the value add just isn't there to justify expanding the product testing matrix even more.

Additionally, some of these modern extensions are very difficult to trace, which means that whatever debugging tools employed by the developer aren't compatible with them. So you need a fallback anyway, and if the devs must implement a fallback they might as well just ship the fallback (which works everywhere) and not worry about the extension (unless it adds a significant amount of value to the product).

So unless it's non-extended GL, it might as well not be there for a large number of devs who just want to ship a reliable, working product.


  1. I completely agree with this analysis! I have 3 major gripes with OpenGL that I can see impacting everything else (from driver makers to devs):

    1) An utterly cluttered API without properly typed function signatures. The bandwagon of extensions popping up all around doesn't help anyone get a grasp of the clean, standardized way of programming GL today; it just feels like confusion added to an existing mess. Given the state of the current API, all language bindings are fighting and duplicating work against the same problem over and over (each tries to provide a clean typed API, so they are of course forced to duplicate their effort in this area, since it is not done at the core of the API).

    2) Lack of a standardized, shared front-end/middle-end IR bytecode and compiler that would allow end-users/developers and driver makers to agree on a solid common compiler infrastructure. A driver maker should not even be able to ship a driver or claim GL support without getting through an official test suite that checks everything is implemented correctly according to the standard (not the cluttered standard of today, but a clean and skimmed standard).

    3) Drivers and Tools: This part is really suffering hard from both the state of 1) and 2)

    1. I've been working on OpenGL bindings for close to a decade now, and your first point resonates strongly.

      Here [1] is an example of the amount of work it takes to provide proper strong types for OpenGL core (1.0 - 4.4). Every binding out there either duplicates this or simply raises its hands and gives you an untyped C-like soup (Java devs, I'm looking at you.)


      This should have been part of the XML registry from the beginning, not something to be done downstream. It's a sad reflection of the health of Khronos that not a single IHV has seen fit to fund an intern for 6 months to clean this mess.

  2. I agree with everything you wrote. Just FYI though, OpenGL ES is the version of GL with all the cruft removed. It still has all the other problems you mentioned, but it doesn't have the cruft of 20 years (though it does have the influence of a 20-year-old API).

  3. I wholeheartedly agree with your points. The complexity of GL is out of this world. As a personal pet peeve, I wish all the global state would just disappear from my life.

    I worry that OpenGL may be subject to too much bureaucracy to be properly rebooted any time soon.
    What are concrete steps we can take to create a next generation open standard for GPU programming? (LibreGL?)

    1. Convince/beg/coerce the other IHVs into adopting Mantle...

    2. Mantle is a proprietary, vendor-specific API. It's the last thing the gaming industry needs. I shouldn't have to elaborate, as the prospect of a GPU/CPU vendor steering a standard graphics API specification is just ridiculous.

    3. There is nothing in Mantle that would prevent it from being standardized and adopted by other vendors. Khronos could head this up if they wanted to. The hurdles here are political, not technical; see AMD's take on the subject. Also, ironically: it's primarily the CPU/GPU vendors that steer OpenGL.

    4. I concur with Joshua. It also reflects Johan Andersson's vision of spreading Mantle to other vendors / devices / operating systems. He stated that the abstraction in Mantle would be high enough to enable vendor-specific things through extensions, hence Nvidia and Intel could adopt it if they chose to. That's a big IF of course, but it would bring additional value to ISVs, IHVs and the whole ecosystem alike (e.g. a leaner development and support process). And that's what standardization is meant to achieve in the first place!

  4. Also, the error codes aren't unique to the errors, and many of them, e.g. GL_INVALID_OPERATION, are used for many different things. Considering these return codes use 32 bits, which allows for 4,294,967,296 different types of error to be reported, what they do is inexcusable IMO. They might as well call it GL_SOMETHING_WENT_WRONG_GOOD_LUCK_FINDING_OUT_WHAT and make it a 64-bit int, for a laugh.

  5. How could this be fixed? Can OpenGL be recovered, or does a new open spec need to come along to replace it? What's the possibility of, for example, a complete rewrite in OpenGL 5?

    1. Kinda like how the programming language Python got restructured I think. They went from Python 2 to 3 where they removed redundant or old libraries and updated existing or added new libraries. I think this can be done for OpenGL too, but it might be a bit more complicated. A team of experts that consists of both driver vendors and key developers like Valve should come together and work on the new spec to find out what is desired.

  6. I'm surprised you didn't mention the huge amount of function overloading that's done in OpenGL. Things like void pointers as buffer offsets, while a very clever solution to keep backwards compatibility without breaking function signatures, often result in code that's very weird to look at, and in misunderstandings for newcomers.

    Also, buffer binding points. The fact that you have to bind buffers to certain slots in order to upload data, even though those slots say nothing about how the buffer may be used, can be a huge source of confusion for newcomers. Once again, binding points are overloaded with both "implied 'this' pointer of the next buffer operation" and "buffer role for the next drawing operation as array/index/uniform buffer", which has led people to believe that once you bind a buffer to e.g. GL_ELEMENT_ARRAY_BUFFER, you can never use this buffer for anything else but indices.

    One thing I have to admit though is that thanks to ARB extensions, users with only OpenGL 2.0 capable cards but with modern drivers could still run my code even though it makes use of FBOs.

    Unfortunately, I doubt OpenGL will ever get rid of the legacy cruft, seeing as there are companies still running old GL apps that rely on it, and vendors like Nvidia have no intention of dropping them. Khronos tried to introduce core contexts as a solution, but what does that help if everyone just keeps on creating compatibility contexts?

    By the way:
    > glGetError(): Some call it thousands of times at init, and never again.

    I believe that might be due to GLEW still being broken for core contexts.

  7. I'm so glad I chose DirectX for my app...

    1. It might be easier, but if you want any level of portability away from Windows DirectX will make the process much more painful; if you want to port to Mac, Linux or mobile you'll be out of luck. It's one of those "nothing worthwhile is easy" gotchas.

    2. You are correct. Fortunately, my ecosystem (companion apps, customers) is exclusively Windows.

    3. And that is the mark of a good developer. You understand your target, and have chosen the best tool for the job.

  8. Yes, excellent points. From a teaching perspective, the bizarre and ambiguous naming conventions are a major and unnecessary problem - "vertex attribute object" et al.

  9. I agree so hard with all of this. I'm currently developing a cross-platform engine (desktop Linux, Windows, OS X, Android, iOS) and am being hit especially hard by extension hell, GLSL version variation, and even creating contexts (SDL solves some of that problem, but sometimes it successfully initializes a context at a certain version even though the device does not support that version).

    I would add another pain point: error messages for GLSL also vary a lot per driver.

    Choosing D3D is, of course, impossible for my goals, so I have no choice but to work a bit harder. Very much worth it in the long run, but boy does this struggle feel unnecessary.

    One important advantage is that once you solve these things, they are pretty much solved forever: your wrapper code or engine or whatever will now work on a huge range of devices. So, let's remember that despite all these important criticisms, we should all be grateful that OpenGL exists. Considering the state of the industry, it's kind of a miracle that so many vendors came together to support it.

  10. And my favorite pain point: lack of multi-threaded draw submission. DX11 at least attempted this, but in OGL it's impossible by design.

  11. Did you work with the Mesa folks?

    Does that solve your issues? (And to what extent?)

  12. "the GL API needs a reboot because IMO Mantle/D3D12 are going to most likely eat it for lunch soon".

    I think everyone commenting here may disagree with me, but the fact that OpenGL hasn't been rebooted _is_ a feature.

    If you have a huge 3D codebase, and you are willing to rewrite every single line of graphics code without shipping new features, and you are willing to run only on the most modern hardware (that can support clean, modern fast paths), then there's no point in rebooting OpenGL - you can go use Mantle now. :-)

    But that's a pretty huge pill to swallow for all but the largest organizations (organizations so large they can ship on multiple graphics APIs to begin with). So I think practical solutions to some of the more difficult problems with OpenGL need to mitigate the pain within the context of "we're not just going to rewrite every single app so that the API is incrementally less weird."

    1. This is precisely what led to the things that Rich is complaining about.

      If some organization desires OpenGL to stay exactly as is, so that it doesn't need to change its code, then this organization is part of the problem.
      Their inability to keep their code up to date is making a mess of things for the rest of us.

      If all they want is for legacy code to keep working, then they don't need to re-write it. There is enough legacy code kicking around that the hardware vendors will continue to keep the old APIs running on current hardware to avoid losing business.

      If, however, they want to be able to throw in support for a new feature here and there, then they should keep the rest of their code current in order to facilitate this.
      This is just the price of doing business, and it has not been a problem in the D3D world. The future API should not be compromised just to keep people from having to maintain their code.

    2. Hi Joshua,

      "The price of doing business" is spot on - but I think you must recognize that this is a continuum and not a black and white decision. Each decision to throw some stuff out in the API lowers the cost of doing business for the IHVs and anyone else 'maintaining the platform' and raises the price of doing business for app developers with existing code. There may be some 80-20 stuff going on here - that is, cutting some parts of the API may provide a bigger win for IHVs than others.

      I bring this up because one style of GL criticism is (loosely summarized) "state-based APIs suck, the lack of direct object access sucks, binding sucks, let's refactor every single API that uses these idioms to be object- or handle-based." And the problem with that is that pretty much 100% of GL is state-based, binding-based, etc., and therefore you have to eat the entire cost of breaking the API, even if only a smaller set of APIs causes the IHV support headaches, etc.

      I agree with almost everything Timothy posted here.
      In particular, if the higher-level older stuff were a clean layer on top of a leaner core GL, I think that would be a huge step in the right direction. If you want to rewrite to get leaner-meaner-faster-better and I want to keep using my crufty APIs to avoid touching a gajillion lines of code, we could have it both ways. You'd throw out the emulation library layer and I'd use it - and bugs and gnarliness in that library would be my problem, not yours, and not the IHVs'.

      Finally, I'm not sure what lesson we can draw from the opposite direction Apple took compared to NV and AMD. Both NV and AMD have chosen to support ARB compatibility - they opted in to support everything all of the time. By comparison, Apple did exactly what you suggest: they support GL 3 and 4 _only_ in the core profile; cleaning out all use of ARB compatibility is mandatory to get GL 3 and GL 4 core features on OS X. Unfortunately only Apple knows how many GL developers made the trade, cleaning legacy code to get to core profile. I can tell you that in our case we decided to do the modernization, but it has raised the amount of work to get to those new features compared to Windows, and the extra work will be of no direct benefit to our customers. (Although in theory if our modernizing lets Apple ship a better driver with the same dev team, maybe it is a win for our customers.)

    3. I think that what Timothy suggests could work, but in order for it to make sense, both code compatibility and ease of use need to be secondary goals for the lean core. The lean core should only provide enough abstraction to abstract across drivers. If we agree that state-based, etc. are bad for performance, but are unwilling to change them because it makes the compat profile too messy, then the lean core is going to be far less lean than it might otherwise be.

      This wouldn't be a problem if there were other ways at getting at the GPU, but there aren't any. IMO that's the crux of it. OpenGL is trying to be both a high and a low level abstraction at the same time.

    4. Also, the Vendor B and C open-source driver dev teams went for the "core only" option for OpenGL 3.2 and up. From their perspective that is a massive simplification of driver development, saves a ton of time, and allows focusing on future-proof stuff.

  13. Ben: Mantle shares at least one issue with D3D; it is a single-platform API. Although AMD first positioned Mantle as a cross-platform API, actual development and support for the API on Linux has been non-forthcoming. I do believe AMD is on record with a statement to the effect that the company wasn't sure whether or not they *would* bring Mantle to Linux.

    The plain result is that I'm not entirely sold on the idea that Mantle and D3D will win back the marketshare and mindshare that has been going back to OpenGL over the previous few years. Neither Mantle nor D3D will allow developers to work in a platform-neutral fashion; if a developer wants to target non-Microsoft platforms they *will* have to go back to OpenGL.

    * * *

    Rich: If I recall correctly one of the main goals of the long-delayed OpenGL 5.x development was indeed to make a clean break from the accumulated cruft of the full OpenGL. I don't think anybody with membership in Khronos or a seat on the Architecture Review Board would argue against the full OpenGL specification containing far too many legacy layers and non-pertinent functions. Case in point would be the OpenGL ES releases which *do* implement a significant amount of simplification; albeit at the expense of newer features.

    A quick check of the Khronos site also shows a place where Valve could likely have a greater impact in regards to the future development of OpenGL: Currently Valve is only a contributor to Khronos, not a promoter.

    I would echo other posters in suggesting that Valve, and by extension all of the developers Valve serves as a publisher and storefront for, would be better off if Valve had a little more weight in Khronos.

    1. From the sounds of it, OpenGL ES is in a better state than OpenGL itself - would it be plausible to bolt all the missing features onto an extended version of ES and slowly deprecate the "classic" version of OpenGL, or would that be impossible/pointless/more work than fixing OpenGL 5?

    2. Yes, no, not really.

      The ES versions are designed for embedded systems, which typically use the lower-resource processors found in non-desktop applications. Non-desktop GPU vendors like ARM (Mali), Samsung (Exynos), Qualcomm (Snapdragon/Imageon), and Imagination Technologies (PowerVR) have very little interest in designing graphics processors with the throughput or rendering capabilities of a desktop GPU. Even Intel isn't all that interested, but the push into GPGPU and parallelization (OpenCL) has had the unintended backwards effect of forcing Intel to design architectures that are at least decent at graphics.

      Given that the majority of vendors designing chips and drivers around OpenGL ES are not interested in traditional gaming applications, extending that specification would likely draw no real adoption...

      There also is the issue that recent updates to OpenGL 4.x have ensured that the current OpenGL ES functions are a full subset of the 4.x specification... Simply adding extensions into OpenGL ES turns it back into something resembling full OpenGL...

      Then there is the issue, as noted by Rich, that Nvidia *LOVES* to use non-core-profile extensions. I call it code-level sabotage; the most infamous example to date possibly being Nvidia's UltraShadow techniques. Given the amount of trouble caused by OpenGL accelerators outside the core profile as it is... one can only imagine the heartache that would be generated by more and more vendor-specific OpenGL ES extensions... which already *ARE* a problem.

      To quote myself in regards to the solution, and in respect to AMD and EA's mantle: "An outside viewpoint then suggests that there is room for a hardware vendor-driven graphics API that has a lower barrier of entry to developers and publishers for feedback and development; is updated and developed on a rapid schedule; and is under a full Open-Source license across all platforms. So there is some logic to AMD and Electronic Arts having gone through the trouble of making the Mantle API in the first place."

      ... and thanks to a misclick I lost what else I was going to say after this... There were some bits about AMD's retreat on pushing Mantle as a cross-platform API, e.g.: Does AMD have the development resources to push a third graphics API in addition to legacy DirectX support and contemporary OpenGL support into the Catalyst Driver set?

      There was also something else about software vendors like Valve, Epic, and Sony working with a vendor like Nvidia to push a "GameGL" version of OpenGL. Well, maybe not Nvidia, since they don't grok open-source development models. I'll remember what all I had written eventually...

    3. AMD was still evaluating the feasibility at that time, and indeed it seems to be a resource problem for them.

      But that is a problem which could be overcome if other vendors or Khronos chose to adopt Mantle and put more effort behind it. As Mantle is said to be relatively close to Sony's PS4 API, such a rumored "GameGL" and Mantle could have much in common.

    4. Mantle is definitely designed as a cross-platform and cross-vendor API, using a GL-like extension mechanism (but better, of course).

      But: it isn't even out yet, so it's silly to point at it not working on Linux yet and claim that it's doomed to a life in MS chains!

      It's still undergoing initial development by AMD and a small group of industry partners before its official release. Some of those partners have shipped Mantle support on Windows as part of that process... but if you or I want to use it right now, we have to wait for the actual release.

  14. I mostly agree, but I think OpenGL also suffers from years of lack of interest, because the biggest game companies always used Microsoft's DirectX and never contributed to OpenGL or the open source platforms.
    By choosing DirectX, their games became, de facto, hard to port.
    The nice thing is that now that the market is moving toward mobile (iPhone, Android, ...), OpenGL's importance is growing. You're sad that it didn't evolve enough over the last 10 years, but this is what happens when nobody helps (open-source contributions, paid work, writing articles like this one, ...).

    I am a game developer at a small company that mainly targets mobile devices (from Pocket PC in 2006, to Nintendo DS, iOS, and Android), and we have always used OpenGL exclusively. I can't tell you how hard it was some days, but our small size didn't allow us to help move things along much.
    But we did help ImgTec fix a lot of bugs in their tools for PowerVR chips, mainly around compressed textures (PVRTC).

    I can't tell you how happy we are to see Valve, Crytek, Epic, ... making tools around OpenGL; it will certainly help us a lot one day.

    I dream of seeing OpenGL come close to something like Mantle while staying highly portable (mobile, console, web, ...).
    Driver quality will follow, driven by game-developer demand and the necessity to provide something that just works for users.

    Thank you for your positive step.

  15. And the solution?

    I have a suggestion:
    have two versions of OpenGL,
    one legacy version that would be the last released (4.4), and a new OpenGL 5.0 created in partnership with Nvidia, Intel, AMD, Apple, Google, Valve, etc.
    This new OpenGL would keep only the good features of OpenGL, while adding new tools, better documentation, and preparation for the future.
    Having two versions of OpenGL would not be a problem: one legacy, and the other for new games.
    A viable solution that does not create problems.

    1. That was the plan with OpenGL "Longs Peak". They caved to pressure at the last minute and put backwards compatibility back in... :-(

  16. After all, the industry talks and puts its mouth in front of developers wherever the money was put. Although I'm barely an expert here, I can say a few words about this topic.

    First of all, I like Mantle. It's leaner, modern, and, as marketed, close to the hardware. But one thing is really making me anxious: is it that hard to develop tools, an API, and... forget that, proper software engineering maybe? One day?

    Just engineer and develop the core, but don't let it be hardware bound. Let the hardware vendors do the bottom-layer implementation of the core (main layer). I know it is hard to engineer such an API and tools around it, but it's feasible.

    The current state of these developer-unfriendly tools makes me anxious. Very! But what can I do when all this is just good old business, nothing more. Nonetheless, this is not a healthy business.

  17. This comment has been removed by the author.

  18. My thoughts:
    Up until recently, desktop OpenGL was a bit of a neglected child. The CAD programs that used it used very old-school OpenGL, and new programs, like games, were few and far between. Additionally, if something used high-end GL, it was a high-price, low-volume product, so it was acceptable for the software vendor to specify the GPU vendor. Now we are in a world where consumer, high-volume software is trying to target OpenGL, and we find that it is a hard ride. There was an attempt years and years ago to make a leaner, meaner GL API: Longs Peak. That attempt failed, likely over backwards compatibility. What we are left with is just Core vs. Compatibility profile, which in no way deals with the core issues of the GL API: lots of state, lots of binding, and so on.

    On the other hand, OpenGL ES is the mobile graphics API. To be frank, it sucks too. It suffers from all the downfalls of desktop GL: bind-to-use and lots of state. In fact, one can view OpenGL ES2 as basically a subset of GL2 + FBO, and OpenGL ES3 as (mostly) a subset of OpenGL 3. It suffers from all the same issues as an API. The main saving grace for OpenGL ES has been Apple: namely, the iPhone. There, one has one driver and one platform. That is it. It does not matter if it follows the spec exactly. It does not matter if it might act weird, because all iPhones of a fixed generation (and really even across generations) do the same things the same way, mostly, so no surprises. Android OpenGL ES development, on the other hand, is a horror, far, far worse than desktop GL because of the massive variety of hardware and drivers.

    And now a crazy thought which has issues of its own: Gallium. Usually Gallium is pitched as a way to write a GL driver, but it is actually an API: a relatively simple and direct API that is expressive enough to do everything OpenGL 3 and D3D10 require. There are warts: I am not convinced that TGSI, the shader instruction set of Gallium, is expressive enough, and the API has changed some over time. On the other hand, it has potential, and it is structured such that making tools on top of Gallium is not so bad. The Gallium API lacks some important, exciting features like bindless resources, but that could be addressed.

  19. This calls for an OpenGL cleanup project.

    ValveGL? :)

  20. In the printer world we just got our first "nice" open API in ages, including a self-certification test suite everyone is testing against. We got it because Apple leveraged the iPhone as an incentive and got every printer manufacturer to the table.

    I think rewriting the API would be a mistake, at least at this point in time. Instead, Rich, have you considered creating an automated test suite? From the sounds of it, the issues you've hit relating to pipeline stalls go beyond the OpenGL spec. Getting extra constraints into OpenGL would be an uphill battle, but benchmarks and test suites might have better success.

    Users and the media will not understand the issues, but they will understand when a test they trust says there is a problem. A fantastic showcase of this was the Acid series of browser conformance tests. The original Acid test was written by a Mozilla developer. The test got popular with users and the media, which forced even Microsoft to play ball. We're now up to Acid3, and even Microsoft has released a standards-compliant browser.

    Rich, writing a brand new standard might not be in the cards but OpenGL can still improve.

  21. There are a ton of reasons why having a GL conformance test would be totally awesome and solve a lot of the real problems of GL dev...
    - It's something the community can collaborate on - easily open sourced and scalable. Centralization is not needed - the spec is already centralized.
    - IHVs are clearly playing the 80-20 rule with the matrix of possible GL interactions (and none of us can blame them, especially if we're calling for a smaller API). Conformance test results would give developers insight into what was considered heavily tested or reliable.
    - IF backward compatibility is really a cost driver for IHVs, a conformance test would make it apparent - test count would be a good proxy for how much weird stuff compatibility is inducing.
    - App developers could protect correct functionality of their code by building off well-tested cases in the conformance test.

  22. OpenGL already tried a reboot with OpenGL 3.1, and people are STILL whining about the features that were removed; it also caused point 19. A second reboot is probably not going to help anything.

    A completely new, redesigned API would be cool, but would require massive vendor buy-in to be useful, which I doubt will happen anytime soon.

  23. This comment has been removed by the author.

  24. This comment has been removed by the author.

  25. This comment has been removed by the author.

  26. I think OpenGL definitely needs a big reboot.

    What I'd really like to see is a migration of OpenGL code to OpenCL; i.e., you write your renderer in a form of OpenCL using OpenGL calls, and the whole thing then runs on the GPU, leaving your program to just send updates or queries (depending upon what you want to get out of it).

    If this were to happen, we could get a clean OpenGL within an OpenCL-based API, but keep older OpenGL calls for legacy applications (or people who hate change) and map them onto a default OpenCL program. By basing things within OpenCL in this way, we shouldn't even need much in the way of driver support, since all of the actual OpenGL calls would take place on the GPU itself (so firmware-supported), while the driver would just need to support loading the renderer and passing messages (though granted, some of these would necessarily need to carry content, but that should be a lot easier for driver vendors to optimise).

    Do something like that and OpenGL could potentially leap well ahead of Mantle and DirectX12 and finally be something that's a privilege to use, rather than something you only feel you should use because it's open, or you are forced to use because your target platform doesn't support anything else.

  27. I was trying to port my game from SDL's bad and slow immediate mode to more optimized vertex arrays and VBOs, but gave up after a few days of working on it. OpenGL is just absurdly, uselessly complicated for doing simple things.

    1. Did you turn on GL debug contexts and error logging? Also, have you tried RenderDoc to debug the problem?