    PCI: ignore bit0 of _OSC return code · 2485b867
    Kenji Kaneshige authored
    Currently acpi_run_osc() checks all the bits in the _OSC result code (the
    first DWORD in the capabilities buffer) to detect error conditions. But
    bit 0, which doesn't indicate any error, must be ignored.
    
    Bit 0 is used as the query flag at _OSC invocation time. Some platforms
    clear it during _OSC evaluation, but others don't. On the latter
    platforms, the current acpi_run_osc() mis-detects an error when _OSC is
    evaluated with the query flag set, because it doesn't ignore bit 0.
    Because of this, __acpi_query_osc() always fails on such platforms.
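    
    Conceptually, the fix is to mask off the query bit before testing the
    returned DWORD for errors. A minimal sketch in plain C, assuming bit 0
    is the query flag as described above (OSC_QUERY_ENABLE mirrors the
    kernel's name for that bit; osc_result_has_error is an illustrative
    helper, not the actual pci-acpi.c code):
    
        #include <stdint.h>
    
        /* Bit 0 of the first DWORD returned by _OSC is the query flag,
         * not an error indication. */
        #define OSC_QUERY_ENABLE (1U << 0)
    
        /* Report an error only if bits other than the query flag are set.
         * Masking off bit 0 avoids the false failure on platforms that
         * leave the query flag set in the returned buffer. */
        static int osc_result_has_error(uint32_t result_dword)
        {
                return (result_dword & ~OSC_QUERY_ENABLE) != 0;
        }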
    
    And this is the cause of the problem that pci_osc_control_set() doesn't
    work since commit 4e39432f, which changed pci_osc_control_set() to use
    __acpi_query_osc().
    Tested-by: "Tomasz Czernecki" <czernecki@gmail.com>
    Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
    Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>