Since in theory changing either the brightness or the contrast control ought not to change the gamma for reasonable values of all three, there really isn't a hard-and-fast rule. A properly engineered display will, if you compress the dynamic range too much, lower the effective gamma to maintain evenness between black and white. In other words, the gamma will fall as black and white get closer together (e.g., brightness too high, contrast too low). That being said, with current technologies, a well-engineered display ought never to get that far short of "stupid human tricks".

OK. So your responses made me think so hard it took way longer to ask the next question than I had intended... Thanks!
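Side note on the "effective gamma" mentioned above: here's a minimal sketch of how it could be put to a number, assuming you can take luminance readings at 0%, 50%, and 100% video. The function name, the 50% stimulus point, and the sample readings are illustrative assumptions, not measurements from this thread:

[code]
import math

def effective_gamma(l_black, l_mid, l_white, stimulus=0.5):
    """Estimate effective gamma from three luminance readings (cd/m^2):
    full black, a mid-level stimulus (default 50% video), and full white.
    Normalizes the mid reading, then solves y_norm = stimulus ** gamma."""
    y_norm = (l_mid - l_black) / (l_white - l_black)
    return math.log(y_norm) / math.log(stimulus)

# Hypothetical meter readings: black 0.05, 50% gray 22.0, white 120.0 cd/m^2
print(round(effective_gamma(0.05, 22.0, 120.0), 2))  # -> 2.45

# Re-measure after moving brightness or contrast; if the normalized mid
# point shifts, the control actually changed the effective gamma.
[/code]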
I appreciate the detail about not all machines operating the same way, some being "stiff" at certain levels, and the like; I hadn't thought about that... It's my hope to better understand how things should work in the theoretical world, as well as how often (or not) that applies in the real world...
To that end, I think you both said, yes, that's a good theoretical model even though it holds up only to some degree in the real world...
So the next question is: I believe I've read somewhere that on some displays that don't have gamma adjustments per se, you can sometimes slightly tweak gamma by adjusting brightness and contrast. So I ask, using the rope analogy: if you increase contrast, then looking at a luminance curve (not yet normalized), you would expect the high end of the curve to rise....
The question is, as it rises, what happens to the length of the rope? Does it stay the same, giving less contour to the curve? Or does it elongate, but not proportionately, so there's less contour to the curve? Or does it elongate to the point where the contour stays the same?
I'm sure you can guess the next question, which is what then happens when looking at the normalized luminance curve, but I think I've answered that one already... I think the answer will be that in the ideal world, since you're not adjusting it specifically, the gamma would stay the same, and the before/after curves would overlap, assuming you weren't driving one into run-out or the like...?
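To make that concrete, here's a small sketch of the idealized model being assumed: L(v) = black + (white - black) * v^gamma, with the contrast control treated as simply moving the white point. Both are simplifying assumptions for illustration, not a claim about any real display:

[code]
GAMMA = 2.2  # assumed display gamma for the sketch

def luminance(v, black, white, gamma=GAMMA):
    """Absolute luminance (cd/m^2) at video level v in [0, 1]."""
    return black + (white - black) * v ** gamma

def normalized(v, black, white, gamma=GAMMA):
    """The same curve rescaled so black -> 0.0 and white -> 1.0."""
    return (luminance(v, black, white, gamma) - black) / (white - black)

for v in (0.25, 0.50, 0.75):
    raw_lo = luminance(v, 0.1, 100.0)    # before: white at 100 cd/m^2
    raw_hi = luminance(v, 0.1, 140.0)    # after: contrast raised white 40%
    print(f"v={v}: raw {raw_lo:7.2f} -> {raw_hi:7.2f}, "
          f"normalized {normalized(v, 0.1, 100.0):.4f} == "
          f"{normalized(v, 0.1, 140.0):.4f}")
[/code]

In this ideal model, the rope elongates just enough that the contour stays the same: the raw curve scales up proportionately, and the normalized before/after curves land exactly on top of each other (both are simply v^gamma). A real display that clips at the ends (run-out) would deviate from that.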
Good video engineering isn't necessarily cost-prohibitive on the BOM, AFAIK. What might push it higher into the food chain is the cost of the people and giving them the time to do it right.

And how many modern displays are "properly engineered" in this way? As in, how high-end do you have to go to find this behavior, vs. how low-end do you have to go to find these controls affecting gamma more than they should, or in ways they shouldn't?
Well, this is getting off-topic, but if MFRs can't get the volume they need at the price they need to recoup the cost of the development itself, the business model isn't viable. The return must recoup the investment. Whether it's reflected in the BOM is immaterial. The BOM is just the beginning of costing a product.

Selling price is whatever the market agrees to pay, and time-to-market is a critical piece of that puzzle. As a result, many manufacturers may not be able to spend the incremental cycles trying to get the engineering right while they are busy pushing next year's model through the development pipe.
As I said, the increased cost usually isn't reflected in the BOM (i.e., the components and assembly costs), but in the cost of development itself.
It's very much material, pardon the pun. BOM costs are roughly equivalent to variable costs, whereas development engineering costs are fixed (insensitive to volume). Given that a lot of display development is done as assembly and integration, especially at the low-end of the market, if you can trace the components back to their source, you can probably find a vendor who is "not doing it right" and then extrapolate to other, similar displays using the same chipsets.
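To put rough numbers on the fixed-vs.-variable distinction, here's a toy calculation; all figures are invented for illustration:

[code]
def unit_cost(bom, nre, volume):
    """Per-unit cost: variable (BOM) cost plus the fixed development
    (NRE) cost amortized over the production volume."""
    return bom + nre / volume

BOM = 150.0        # hypothetical per-unit parts/assembly cost ($)
NRE = 2_000_000.0  # hypothetical one-time engineering cost ($)

for volume in (10_000, 100_000, 1_000_000):
    print(f"{volume:>9,} units: ${unit_cost(BOM, NRE, volume):.2f} each")
# 10,000 units -> $350.00 each; 1,000,000 units -> $152.00 each:
# the same engineering effort nearly vanishes per unit at high volume.
[/code]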
This thread has been relatively lightly traveled, so I don't mind going into a bit more detail. The higher-end companies have, often, done their own in-house engineering work, while the lower-end companies basically buy "kits" and assemble them (this is a gross oversimplification, but work with me...). If I'm a Sanyo or Westinghouse or Vizio, then I buy a package from someone like Sigma, who then gets to spread its own engineering talent around a lot of different models from a lot of different OEMs. Conversely, if I'm someone like Sharp or Panasonic, then I (might) use internally-developed sub-assemblies (e.g., Uniphier) and spread those over large swathes of my product range. At that point, developing a specific model is about setting options in a firmware platform and tying it to the hardware.

Very punny. You do make some good points. But even so, I was asking more about the higher end of the market. Also, would the chipsets from companies "doing it right" not cost more? As in, why else would chipsets "doing it wrong" get bought? Unless the lower-cost MFRs don't bother testing whether it's right or wrong, but that again implies the ones who do would have higher development costs...
I suppose at some point, to avoid this going further OT, I'll have to simply accept what you're saying even though it doesn't make sense to me... :huh:
Maybe I should start a new thread to discuss this further. I really don't want to cloud the otherwise-very-useful "calibration" thread with this talk...
Now we really do start going off the reservation. Don't necessarily mistake a generalized firmware process as either a) the only way to develop a TV set, or b) necessarily being solely a software process. In many cases, you not only "turn off" something in software, but entire sub-assemblies are then not included in hardware. If you pull the boards out of a TV, you can often find "blank" soldering pads and even connectors where something might have gone to create a higher-end TV.

Ah. So what makes the difference (sometimes, at least) is the MFR making a conscious decision to disable or not use certain things they've developed, in order to offer a lower-grade product at a lower price point; basically positioning. I have no problem buying that (pardon my own pun). But don't they tend to roll out improved processing/features in the higher-end sets first, and after a while (a year or more) bring those down to the lower-end models once they have something better for their high-end models? In effect, they're charging more for the best or most up-to-date stuff, and isn't that equivalent to the "early adopters" of said improvements paying for the proprietary costs?
But that would bring us back to a real BOM difference. :bigsmile:
Ah, similar to the precision resistors I try to avoid designing in where I can...

As for early adopters paying higher prices, that's basically a given. The question is how much for what features. I am distinctly not a fan of the 240Hz LCDs, but for other people they are a must-have item. The way semiconductor manufacturing works, those panels might undergo exactly the same manufacturing steps to create "the glass" as a panel that only targets a 60Hz refresh rate, but since it passes a tighter quality check at the end, it gets "binned" into the 240Hz stack. The economic cost is the same, but because of the miracle of the bell curve and supply/demand, people pay more for it. The company producing it may then load more of the "costs" of production onto that panel, rather than another one, but the economic processing through a certain point would be similar.
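And for the bell-curve point, here's a toy simulation of how panels from identical process steps end up in different bins; the thresholds and the quality distribution are invented for illustration:

[code]
import random

random.seed(42)

def bin_panel(quality):
    """Toy binning: same glass, same process; the final quality check
    decides which speed grade a panel gets sold as."""
    if quality >= 1.0:
        return "240Hz"
    if quality >= 0.0:
        return "120Hz"
    return "60Hz"

# Panel "quality" as a standard normal draw -- the bell curve at work.
panels = [random.gauss(0.0, 1.0) for _ in range(10_000)]
counts = {}
for q in panels:
    grade = bin_panel(q)
    counts[grade] = counts.get(grade, 0) + 1

print(counts)  # roughly 16% land in the 240Hz bin, ~34% in 120Hz, ~50% in 60Hz
[/code]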
Bravo, Maestro! :T I actually saw that coming just a few seconds early, but kudos nonetheless!

Such a concept doesn't hold up anywhere along the production chain, which is why you have "binning" and why, wait for it, copying settings from one display to another is more likely to be wrong than it is to be right.
How's that for getting us back on topic? :help: :sneeky: :paddle: :T