PCI: ignore bit0 of _OSC return code
Currently, acpi_run_osc() checks all the bits in the _OSC return code (the first DWORD in the capabilities buffer) to detect error conditions. But bit 0, which does not indicate any error, must be ignored: it is used as the query flag at _OSC invocation time. Some platforms clear it during _OSC evaluation, but others do not. On the latter platforms, acpi_run_osc() misdetects an error whenever _OSC is evaluated with the query flag set, because it does not ignore bit 0. As a result, __acpi_query_osc() always fails on such platforms. This is the cause of pci_osc_control_set() not working since commit 4e39432f, which changed pci_osc_control_set() to use __acpi_query_osc().

Tested-by: Tomasz Czernecki <czernecki@gmail.com>
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
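For illustration, the masking the message describes might look like the following. This is a minimal sketch, not the actual patch: the function name and the simplified buffer handling are assumptions made for clarity; only the bit-0 semantics come from the commit text.

```c
#include <stdint.h>

#define OSC_QUERY_ENABLE	(1U << 0)	/* bit 0: query flag, not an error bit */

/*
 * Check the first DWORD of the _OSC capabilities buffer for errors.
 * Bit 0 is the query flag; some firmware leaves it set after
 * evaluation, so it must be masked off before testing for errors.
 * Returns 0 on success, nonzero if a real error bit is set.
 * (Hypothetical helper for illustration; not the kernel's code.)
 */
static uint32_t osc_check_status(const uint32_t *capbuf)
{
	return capbuf[0] & ~OSC_QUERY_ENABLE;
}
```

Without the mask, a platform that leaves the query flag set would make the status word nonzero even on success, which is exactly the misdetection described above.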