Blaming the hardware anymore is a bit of a load. Back in the day, when there were no standards for graphics and you were writing everything in assembly to squeeze out every last bit of juice, when there was no plug and play and no APIs, and you were tweaking your config.sys and autoexec.bat just to play a game, that was a hardware situation. Sure, some graphics cards occasionally fail to implement every aspect of DirectX properly, creating some problems, but that is rather rare, and on the whole it only affects new, state-of-the-art features; you are bringing it on yourself when you try to use them (leave an option to shut them off so you can easily check them). If you write for DirectX, it will work on compliant graphics cards.
The complexities of the engine are implementing a level-of-detail system, putting in a memory manager and a resource manager, handling multithreading safely, sorting DirectX calls to optimize performance, properly releasing resources, sending network messages and keeping clients synchronized, the game logic itself, and getting the components you wrote to work together, or to work with some third-party (and well-established and tested) API. In short: writing code.
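To make the "sorting DirectX calls" part concrete, here's a minimal sketch; the struct and its field names are hypothetical, not from any real engine, but grouping draws by expensive state is the standard idea:

```cpp
#include <algorithm>
#include <cstdint>
#include <tuple>
#include <vector>

// Hypothetical draw-call record; field names are made up for
// illustration, not taken from any real engine or from D3D itself.
struct DrawCall {
    uint32_t shaderId;   // which shader/pipeline state it needs
    uint32_t textureId;  // which texture it binds
    uint32_t meshId;     // which vertex/index buffers it uses
};

// Sort so draws sharing expensive state end up adjacent: shader
// switches cost the most, then texture binds, then mesh changes.
void SortDrawList(std::vector<DrawCall>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const DrawCall& a, const DrawCall& b) {
                  return std::tie(a.shaderId, a.textureId, a.meshId) <
                         std::tie(b.shaderId, b.textureId, b.meshId);
              });
    // The renderer can then walk the list and only change state
    // when an id actually differs from the previous draw.
}
```

Notice the hard part is plain software engineering (picking a key, walking the list, skipping redundant state changes), not coping with whatever card happens to be installed.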
It's not trying to make it work for a million different hardware combinations. Yeah, there will be anomalies, but they aren't the norm. You don't write for an ATI and an Nvidia, because generally speaking all you do is make it work for DirectX; you don't write for an AMD and an Intel, or a Kingston and a PNY, you just make it work for Windows. If the problems really and truly were hardware (or at the very least, someone else's software), they wouldn't be patching the game, they'd be telling you to go complain to someone else. People would be having almost exactly the same problems across the various games if it really were hardware configurations. This game is using shader model 2.0, and that game is using shader model 2.0; if it works for one and not the other, how likely is it really hardware at that point?
When Civilization 4 came out, one of the big problems was a massive, massive memory buildup that caused an eventual crash. The cause: a very silly rookie mistake, in which each and every unit was duplicating resources that should have been shared. It was solved by a Russian hacker long before the programmers even acknowledged a problem (which is the only reason we actually know what the cause was). They claimed it was hardware, then said huge maps were unsupported (like it didn't happen on medium and large). Sure it was hardware, a lack of infinite memory.
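For illustration, here's roughly what that kind of rookie mistake looks like; the types and names are made up, not the actual Civ4 code, but the pattern (per-unit copies versus a shared cache) is the one described above:

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for a big asset (mesh, texture, animation set).
struct UnitArt { /* ... megabytes of data ... */ };

// The rookie-mistake version: every unit loads its own private copy,
// so memory grows with the unit count instead of the asset count.
struct LeakyUnit {
    std::unique_ptr<UnitArt> art;
    explicit LeakyUnit(const std::string& /*file*/)
        : art(std::make_unique<UnitArt>()) {}  // fresh copy every time
};

// The shared version: a cache hands out the same instance to every
// unit that asks for the same asset.
class ArtCache {
    std::map<std::string, std::shared_ptr<UnitArt>> cache_;
public:
    std::shared_ptr<UnitArt> Get(const std::string& file) {
        auto& slot = cache_[file];
        if (!slot) slot = std::make_shared<UnitArt>();  // load once
        return slot;  // the hundredth tank reuses the first tank's art
    }
};

struct Unit {
    std::shared_ptr<UnitArt> art;
    Unit(ArtCache& cache, const std::string& file) : art(cache.Get(file)) {}
};
```

On a huge map with hundreds of units, the leaky version is exactly the kind of massive memory buildup described above, and no amount of RAM in the test lab makes it a hardware problem.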
Most of what gets called "configuration bugs" these days seems to be doing something that you never should have done in the first place, but that one vendor allowed, let you get away with, even though it is not part of the standard. The biggest problems in programming (just take a look at the Civ4 SDK) are a complete lack of documentation and useful commenting, along with doing stupid things like nesting if statements 20 deep with massive else-statement branching, leaving people too afraid to touch anything, even when it's painfully obvious that it's redundant and messy, or broken, for fear they might be missing something in that mess.
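Here's a small, made-up example of that nesting problem and the obvious cleanup; nothing here is from the actual Civ4 SDK, it just shows why deep if/else pyramids scare people off:

```cpp
// Minimal stand-in type, just enough to make the example compile.
struct Unit {
    bool canAttack() const { return true; }
    bool isAlive() const { return true; }
    int strength() const { return 10; }
    int defense() const { return 4; }
};

// The scary version: the real logic is buried four levels deep and
// every else sits far away from the if it belongs to.
int ResolveAttackNested(const Unit* u, const Unit* target) {
    if (u) {
        if (target) {
            if (u->canAttack()) {
                if (target->isAlive()) {
                    return u->strength() - target->defense();
                } else { return 0; }
            } else { return -1; }
        } else { return -1; }
    } else { return -1; }
}

// The same logic with guard clauses: each precondition bails out
// immediately, and the interesting line sits at the top level.
int ResolveAttackFlat(const Unit* u, const Unit* target) {
    if (!u || !target) return -1;
    if (!u->canAttack()) return -1;
    if (!target->isAlive()) return 0;
    return u->strength() - target->defense();
}
```

Same behavior in both, but nobody is afraid to touch the second one.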
You're dealing with hardware that generally has a very specific purpose and a very finite set of requirements to meet to qualify as compliant, and there is a fixed set of tests to gauge that compliance. Your graphics card doesn't talk to your keyboard, and your mouse doesn't talk to your monitor; they talk to Windows, and only in a set number of ways.
My point: there is a very finite amount of blame you can put on a lack of QA machine configurations. At least nine times out of ten, it's a convenient scapegoat that everyone is far too eager to believe, because things are just "sooo" complex.
Btw, I had a Trash-80; those were the good old days.