ATI driver issue?

I draw a large number of tristrips; each contains somewhere between 4000 and 8000 vertices.

When I first render the strips using normal vertex arrays, it runs at 85 Hz. But when I change the modelview to one different from the first, it runs at only 15 Hz after that. This is not the case on NVidia HW, where it runs smoothly all the time.
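Roughly, the draw path looks like this (just a sketch, not the actual code; the function name and data layout are illustrative):

#include <GL/gl.h>

/* Sketch of the draw path: plain client-side vertex arrays,
   one glDrawArrays call per triangle strip. */
void drawStrips(const GLfloat *verts, int stripCount, int vertsPerStrip)
{
    int i;
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);

    /* each strip is a contiguous run of 4000-8000 vertices */
    for (i = 0; i < stripCount; ++i)
        glDrawArrays(GL_TRIANGLE_STRIP, i * vertsPerStrip, vertsPerStrip);

    glDisableClientState(GL_VERTEX_ARRAY);
}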

If I decrease the length of the strips, the phenomenon gets much rarer, but it can still occur…

My HW is an ATI Fire GL X1 with driver 1032.

Any clues?

Does the graphics card support hardware T&L?
If not, that would explain why the identity modelview matrix helps the computations so much.
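One quick way to test that theory is to toggle between an identity and a non-identity modelview and watch the frame rate. A sketch (the function name and transform values here are only illustrative):

#include <GL/gl.h>

/* Sketch: switch between identity and non-identity modelview at
   runtime (e.g. on a key press) and watch the frame rate.
   The transform values below are arbitrary. */
int useIdentity = 1;

void setModelview(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    if (!useIdentity) {
        glTranslatef(0.0f, 0.0f, -5.0f);     /* any non-identity  */
        glRotatef(30.0f, 0.0f, 1.0f, 0.0f);  /* transform will do */
    }
}

If the frame rate only drops in the non-identity case, the driver is probably transforming the vertices on the CPU.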

Originally posted by ToolTech:
I draw a large number of tristrips; each contains somewhere between 4000 and 8000 vertices.

When I first render the strips using normal vertex arrays, it runs at 85 Hz. But when I change the modelview to one different from the first, it runs at only 15 Hz after that. This is not the case on NVidia HW, where it runs smoothly all the time.

If I decrease the length of the strips, the phenomenon gets much rarer, but it can still occur…

My HW is an ATI Fire GL X1 with driver 1032.

Any clues?

Not really. I did run into a case which I put in a GLUT test framework and sent off to atidev@ati.com, or what the heck the address is. My problem was that when I made a lot of strips of length 224, put them in VBAs and rendered them with a light, I got N frames per second, and when I made the strip length 220 (i.e. shorter), I got N/2 fps, i.e. half the frame rate. The problem disappeared when I turned off lighting. I suppose there are a ton of tiny, annoying bugs in all the drivers, optimization issues that fail in certain cases, and so on.

My suggestion is that you try to create a small test app. If you can recreate the problem there, you should send it off to ATI to have them look at it.
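Something along these lines is usually enough. This is only a sketch of the shape such a test can take, not my actual test app; the geometry, the constants and the light setup are all illustrative (my case used VBAs, but plain vertex arrays keep the sketch short):

/* Minimal GLUT repro sketch: lit triangle strips drawn from
   client-side vertex arrays. Vary STRIP_LEN (e.g. 224 vs 220)
   and watch the frame rate printed once per second. */
#include <GL/glut.h>
#include <stdio.h>

#define STRIPS    100
#define STRIP_LEN 224                 /* vertices per strip */

static GLfloat verts[STRIPS * STRIP_LEN * 3];
static GLfloat norms[STRIPS * STRIP_LEN * 3];
static int frames;

static void buildData(void)
{
    int s, v, i = 0;
    for (s = 0; s < STRIPS; ++s)
        for (v = 0; v < STRIP_LEN; ++v, i += 3) {
            verts[i]   = (v / 2) * 0.01f;        /* x marches forward */
            verts[i+1] = (v & 1) ? 0.01f : 0.0f; /* y zig-zags        */
            verts[i+2] = s * -0.05f;             /* one row per strip */
            norms[i] = 0.0f; norms[i+1] = 0.0f; norms[i+2] = 1.0f;
        }
}

static void display(void)
{
    int s;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(-1.0f, 0.0f, -4.0f);
    for (s = 0; s < STRIPS; ++s)
        glDrawArrays(GL_TRIANGLE_STRIP, s * STRIP_LEN, STRIP_LEN);
    glutSwapBuffers();
    ++frames;
}

static void timer(int unused)
{
    printf("%d fps\n", frames);
    frames = 0;
    glutTimerFunc(1000, timer, 0);
}

static void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)w / (h ? h : 1), 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("strip test");
    buildData();
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, norms);
    glEnable(GL_LIGHTING);            /* my case involved lighting */
    glEnable(GL_LIGHT0);
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutIdleFunc(glutPostRedisplay);  /* render as fast as possible */
    glutTimerFunc(1000, timer, 0);
    glutMainLoop();
    return 0;
}

If 224 versus 220 gives wildly different frame rates here too, that is exactly the kind of thing to zip up and mail off.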

Good luck!

I have helped ATI on the Mac driver side (Jason), but when I have tried to get in contact with ATI dev rel for Win32 I have not been getting any answers.

I must say I am surprised that their professional card HW has these kinds of anomalies.

If someone from ATI would like to get in contact with me, I would be happy to help with some samples…

I’m not sure it can be called an “anomaly”, since it’s not a bug.

ToolTech, you may contact devrel@ati.com (they may reply to Mac as well as Win32 messages).


I have contacted ATI several times over the last month about it but have never gotten any reply!

I have also asked to participate in driver testing, as recommended by Humus, but never got any reply from ATI about that either…

A bit disappointed…

/starting rant mode

I had a guy from ATI's driver development team on the phone the other day, for professional reasons. The discussion turned to performance, and when I told him about these issues and strange slowdowns, he seemed really surprised…

It's not the first time we've seen this kind of post on this board. I even made one myself, something like a year ago… I'm guessing there have been hundreds of people mailing them about it.

Yet nothing is done. Under strange conditions, performance suffers horribly. I mean, I have seen for myself a Radeon 8500 run a standard glDrawElements scene at around 2 fps for 25,000 triangles. With the latest drivers.

/end rant mode

Y.

I don’t get it – I’ve found devrel@ati.com to be very responsive. Maybe you guys just aren’t being friendly enough?

– Tom

I wonder if they work better with personal contacts. I have contacted them through ToolTech, my company. I found them very good when fixing a bug on Mac OS X, but when I have tried to get bugs fixed for the Fire GL X1 on Win32 I have gotten no response at all…

All in all, I think everyone would gain if we could help ATI with driver bugs, so if anyone at ATI hears this, I would love to help you make the ATI drivers faster and better… I have loads of test programs that can be used to find performance bottlenecks.

I remember reading that devrel@ati.com does not necessarily reply, but that doesn't mean the problem is not filed. In other words, if they have nothing interesting to say they won't reply, but don't take for granted that your mail went to the trash.

Here’s my suggestion for helping ATI devrel help you, from having worked with them for two years:

I have found ATI devrel to be very courteous, and willing to help, within their abilities. They also seem to have either more people or more time now than two years ago – I think firing on all twelve with the R300 did good things for their internal morale.

However, I have found that it's really hard to get them to replicate and address a problem from a description alone. Trying to describe bugs and have them come to a conclusion has very seldom worked. Sending them a minidump and/or a stack trace sometimes gets progress, sometimes not. Sending them a program of any size WILL get the problem fixed, fairly quickly. I've sent single 2 kB source snippets, and a CD full of development data requiring a custom MySQL install; sending them the actual product seems to be the absolute key.

The frustration comes when there's a bug that hits once a week, on a random machine out of 40 in the QA lab, with no clear reproduction path, and it's maybe not even clear whether the problem is in our code or in the driver. If anyone has ideas about how to get further on this, without flying their dev team to our place for a week, I'd appreciate it :)

By the way: ATI is not the only graphics company that seems to tick in this particular manner – I have had very similar results from three others.

>>>When I first render the strips using normal vertex arrays, it runs at 85 Hz. But when I change the modelview to one different from the first, it runs at only 15 Hz after that. This is not the case on NVidia HW, where it runs smoothly all the time.<<<

If I were you, I would first make a small demo and have people here test it.
Or try it yourself with different drivers.

In this case, changing the modelview and getting a huge change in FPS is ridiculous. What are these two modelviews? If the first was the identity, then it's possible, I suppose.

And you don’t have a bug really. Just some oddity. That makes it low priority.

I'll second what jwatte said. A sample app really helps, both because it's so much easier to find the problem when you can step through the code in a debugger and see where things go wrong, and because the devrel guys can see the problem themselves and confirm that the bug is on their side. If you just pass along a description, the driver guys have to write a test app themselves, and that in itself will push you down the priority list. If they are unsure about the person who sent the bug description, they may even assume the problem is most likely on the application side, for instance if you use poor language and sound like a 13-year-old, or if you seem to confuse concepts.
Also, you don't necessarily need to write an extremely simple little app; you can just send the app you found the problem in. That's what I usually do. Just hint at where the important code is.

I have uploaded a demo of the driver issue.
http://www.tooltech-software.com/downloads/gizmo3d/binaries/win32/performance.zip

When run on a PIII 800 MHz with 256 MB RAM and an ATI Fire GL X1, it initially runs at 87 FPS.

When I hit the left arrow key and adjust the initial modelview transform a bit, the frame rate drops to 9 FPS.

I would be happy if you guys downloaded it and told me your performance on your HW, even if it is not ATI HW…

Thanx!

I get about 24 FPS constantly.

Edit:
With a Radeon 9700 Pro BTW.


Constant 24-25 fps on an R8500.

Y.

39 fps on a GF4 Ti4200, Athlon 1400 MHz, 384 MB RAM

There's something strange about your app. I tested it on a GeForce2 Go. With my desktop set to 16-bit, your app runs at less than 1 fps; with a 32-bit desktop it runs at around 9-10 fps.

Can you check if you get similar behaviour and, if so, explain why?

Thanks,

Heath.

This is hardly strange; my guess is he's using the stencil buffer, which is not available in 16-bit modes, which in turn makes OpenGL fall back to software rendering.
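An easy way to confirm that is to check which pixel format you actually got after creating the context. On Win32 a generic format means Microsoft's software implementation. A sketch (the function name is illustrative):

/* Sketch (Win32): check whether the pixel format OpenGL gave us is
   hardware accelerated. A generic, non-accelerated format means the
   Microsoft software renderer, e.g. when a stencil buffer was
   requested on a 16-bit desktop that cannot provide one in HW. */
#include <windows.h>
#include <stdio.h>

void reportAcceleration(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int fmt = GetPixelFormat(hdc);

    DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);

    if (!(pfd.dwFlags & PFD_GENERIC_FORMAT))
        printf("ICD: fully hardware accelerated\n");
    else if (pfd.dwFlags & PFD_GENERIC_ACCELERATED)
        printf("MCD: partially accelerated\n");
    else
        printf("generic software format - expect very low fps\n");
}

Checking glGetString(GL_RENDERER) for "GDI Generic" after context creation tells you the same thing.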

Y.