- 17 Feb, 2015 1 commit
-
-
Scot Doyle authored
Remove unusual pr_info() visual emphasis introduced in ad479e7f "ACPI / EC: Introduce STARTED/STOPPED flags to replace BLOCKED flag". Signed-off-by: Scot Doyle <lkml14@scotdoyle.com> [ rjw: Change pr_info() to pr_debug() too in those places. ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 11 Feb, 2015 2 commits
-
-
Rafael J. Wysocki authored
Revert commit f252cb09 (ACPI / EC: Add query flushing support), because it breaks system suspend on Acer Aspire S5. The machine just hangs solid at the last stage of suspend (after taking non-boot CPUs offline). Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Rafael J. Wysocki authored
Revert commit b5bca896 (ACPI / EC: Add GPE reference counting debugging messages), because it depends on commit f252cb09 (ACPI / EC: Add query flushing support) which breaks system suspend on Acer Aspire S5 and needs to be reverted. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 06 Feb, 2015 5 commits
-
-
Lv Zheng authored
This patch enhances debugging by adding GPE reference count messages. No functional changes. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
This patch implements QR_EC flushing support. Grace periods are implemented from the detection of an SCI_EVT to the submission/completion of the QR_EC transaction. During this period, all EC command transactions are allowed to be submitted. Note that query periods and event periods are intentionally distinguished to allow further improvements.
1. Query period: from the detection of an SCI_EVT to the submission of the QR_EC command. This period is used for storm prevention: since QR_EC is currently deferred to a work queue rather than issued directly from the IRQ context, even when there are no other transactions pending, a malicious SCI_EVT GPE can act like a "level triggered" GPE and trigger a GPE storm. We need to be prepared for this. In the future we may make this part of advance_transaction(), where QR_EC submission can be attempted at appropriate positions to avoid such GPE storms.
2. Event period: from the detection of an SCI_EVT to the completion of the QR_EC command. We may extend it to the completion of _Qxx evaluation. This is actually a grace period for event flushing, but we only flush queries for the reason stated in known issue 1. That is also why the flags are named EC_FLAGS_EVENT_xxx. During this period, QR_EC transactions need to pass the flushable submission check.
In this patch, the following flags are implemented:
1. EC_FLAGS_EVENT_ENABLED: derived from the old EC_FLAGS_QUERY_PENDING flag, which could block SCI_EVT handling. With this flag, the logic implemented by the original flag is extended: 1. Old logic: unless both of the flags are set, the event poller will not be scheduled; 2. New logic: as soon as both of the flags are set, the event poller will be scheduled.
2. EC_FLAGS_EVENT_DETECTED: also derived from the old EC_FLAGS_QUERY_PENDING flag, which could block SCI_EVT detection. It can thus be used to indicate the storm prevention period for query submission. acpi_ec_submit_request()/acpi_ec_complete_request() are invoked to implement this period so that acpi_set_gpe() can be invoked under the "reference count > 0" condition.
3. EC_FLAGS_EVENT_PENDING: newly added to indicate the grace period for event flushing (query flushing for now). acpi_ec_submit_request()/acpi_ec_complete_request() are invoked to implement this period so that the flushing process can wait until the event handling (the query transaction for now) has completed.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=82611
Link: https://bugzilla.kernel.org/show_bug.cgi?id=77431
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Tested-by: Ortwin Glück <odi@odi.ch> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
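For illustration, a minimal sketch of the "both flags set" scheduling rule described above; the flag names come from the commit text, while the struct layout and the work item name are assumptions:

    /* Sketch only: schedule the event poller (which will issue QR_EC) as soon
     * as an event has been detected AND event handling has been enabled.
     */
    static void acpi_ec_maybe_schedule_event_poller(struct acpi_ec *ec)
    {
            /* Old logic: unless both flags are set, the poller is not scheduled.
             * New logic: as soon as both flags are set, the poller is scheduled.
             */
            if (test_bit(EC_FLAGS_EVENT_ENABLED, &ec->flags) &&
                test_bit(EC_FLAGS_EVENT_DETECTED, &ec->flags))
                    schedule_work(&ec->event_work);  /* assumed work item name */
    }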
-
Lv Zheng authored
This patch refines EC command storm prevention support. The current command storm code is wrong: when the storm condition is detected, it only flags the condition without doing anything for the current command, and instead performs storm prevention for the follow-up commands. So:
1. The first command, which triggered the storm, still suffers from it.
2. The follow-up commands, which may not suffer from the storm, are unconditionally forced into storm prevention mode.
Ideally, we should enable storm prevention immediately after detection, for the current command only, so that the next command can try the power/performance efficient interrupt mode again. This patch improves command storm prevention by disabling the GPE right after detection and re-enabling it right before completing the command transaction, using the GPE storm prevention APIs. This deploys the following GPE handling model:
1. acpi_enable_gpe()/acpi_disable_gpe() for reference count changes: this set of APIs is used for EC usage reference counting.
2. acpi_set_gpe(ACPI_GPE_ENABLE)/acpi_set_gpe(ACPI_GPE_DISABLE): this set of APIs is used for preventing GPE storms. They must be invoked only while the reference count is > 0. Note that storm prevention should always happen while there is an outstanding request, otherwise the GPE enable value can be messed up by races. This patch also adds BUG_ON() to enforce this rule and prevent future bugs.
The msleep(1) used after completing a transaction is useless now; it looks like a guard time only useful for platforms that need the EC_FLAGS_MSI quirks, while the GPE race issues have been fixed by the earlier raw handler mode enabling. It is kept to avoid regressions. A separate patch which deletes the EC_FLAGS_MSI quirks should take care of deleting it. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
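A hedged sketch of the GPE handling model listed above; acpi_enable_gpe()/acpi_disable_gpe()/acpi_set_gpe() are the real APIs, while ec->gpe, ec->reference_count and the helper names are assumptions:

    /* Reference counting: one acpi_enable_gpe()/acpi_disable_gpe() transition
     * per first/last outstanding EC request (called with ec->lock held).
     */
    static void acpi_ec_submit_request(struct acpi_ec *ec)
    {
            if (ec->reference_count++ == 0)
                    acpi_enable_gpe(NULL, ec->gpe);
    }

    static void acpi_ec_complete_request(struct acpi_ec *ec)
    {
            if (--ec->reference_count == 0)
                    acpi_disable_gpe(NULL, ec->gpe);
    }

    /* Storm prevention: toggle the IRQ without touching the reference count.
     * Only legal while reference_count > 0, hence the BUG_ON().
     */
    static void acpi_ec_set_storm(struct acpi_ec *ec)
    {
            BUG_ON(ec->reference_count < 1);
            acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE);  /* right after detection */
    }

    static void acpi_ec_clear_storm(struct acpi_ec *ec)
    {
            BUG_ON(ec->reference_count < 1);
            acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_ENABLE);   /* right before completion */
    }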
-
Lv Zheng authored
This patch implements EC command flushing support. During the grace period indicated by EC_FLAGS_STARTED and EC_FLAGS_STOPPED, all submitted EC command transactions can be completed while new submissions are prevented before suspending, so that the EC hardware can be ensured to be in the idle state when the system is resumed. There is a good indicator for flush support: except for the first one, every acpi_ec_submit_request() is invoked after checking the driver state with acpi_ec_started(). This means all code paths can be flushed as fast as possible by discarding requests that occur after the flush operation. The reference increase for such code paths is wrapped by acpi_ec_submit_flushable_request(). Signed-off-by: Lv Zheng <lv.zheng@intel.com> Tested-by: Ortwin Glück <odi@odi.ch> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
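A minimal sketch of the flushable-submission wrapper and the flush wait described above; only acpi_ec_started() and acpi_ec_submit_flushable_request() are named in the text, the wait queue and the other helpers are assumptions:

    /* Called with ec->lock held: accept the request only while the driver is
     * started, so anything submitted after the flush begins is discarded.
     */
    static bool acpi_ec_submit_flushable_request(struct acpi_ec *ec)
    {
            if (!acpi_ec_started(ec))
                    return false;
            acpi_ec_submit_request(ec);             /* reference_count++ */
            return true;
    }

    /* Suspend path: wait until every accepted request has completed. */
    static void acpi_ec_flush_work(struct acpi_ec *ec)
    {
            wait_event(ec->flush_wait, acpi_ec_flushed(ec));   /* assumed helpers */
    }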
-
Lv Zheng authored
By using the 2 flags, we can indicate an intermediate state where the current transactions should be completed while new transactions should be dropped. Comparison of the old flag and the new flags:
    Old                     New
    about to set BLOCKED    STOPPED set / STARTED set
    BLOCKED set             STOPPED clear / STARTED clear
    BLOCKED clear           STOPPED clear / STARTED set
A new period can be indicated by the 2 flags: it runs from the point where we are about to set BLOCKED to the point where BLOCKED is actually set. The new flags, together with the acpi_ec_started() check, allow EC transactions to be submitted during this period, which can therefore be used as a grace period for EC transaction flushing. The only functional change after applying this patch is: 1. The GPE enabling/disabling is protected by the EC specific lock. We can do this because of the recent ACPICA GPE API enhancements. This is reasonable as the GPE disabling/enabling state should only be determined by the EC driver's state machine, which is protected by the EC spinlock. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Tested-by: Ortwin Glück <odi@odi.ch> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
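The table above maps onto a simple two-bit test; a sketch with the flag names from the commit and everything else assumed:

    /* "Started" == STARTED set and not in the stop grace period. */
    static bool acpi_ec_started(struct acpi_ec *ec)
    {
            return test_bit(EC_FLAGS_STARTED, &ec->flags) &&
                   !test_bit(EC_FLAGS_STOPPED, &ec->flags);
    }

    static void acpi_ec_stop(struct acpi_ec *ec)
    {
            set_bit(EC_FLAGS_STOPPED, &ec->flags);   /* "about to set BLOCKED": grace period opens */
            acpi_ec_flush_work(ec);                  /* current transactions may still finish */
            clear_bit(EC_FLAGS_STARTED, &ec->flags); /* old "BLOCKED set" state */
            clear_bit(EC_FLAGS_STOPPED, &ec->flags);
    }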
-
- 05 Feb, 2015 17 commits
-
-
Lv Zheng authored
The bug fixes around GPE races have been done to the EC driver by the previous commits. This patch increases the revision to 3 to indicate the behavior differences between the old and the new drivers. The copyright/authorship notices are also updated. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
The timeout in ec_poll() doesn't refer to the last register access time, so it can win the competition against acpi_ec_gpe_handler() if a transaction takes longer than 1ms while the individual register accesses each take less than 1ms. In some cases this makes the following silicon bug easier to trigger: GPE EN is not wired to the GPE trigger line, so when GPE STS is already set when 1 is written to GPE EN, no GPE can be triggered. This patch adds register access timestamp reference support for ec_poll() to reduce the number of ec_poll() invocations. Reported-by: Venkat Raghavulu <venkat.raghavulu@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
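For illustration, a sketch of a timestamp-referenced poll loop; ec->timestamp (refreshed by the register accessors) and ACPI_EC_DELAY are assumed names, and the real loop has more structure than shown here:

    static int ec_poll_once(struct acpi_ec *ec)
    {
            do {
                    /* Reading EC_SC in advance_transaction() refreshes ec->timestamp,
                     * so the timeout below is measured from the last register access
                     * rather than from the start of the transaction.
                     */
                    advance_transaction(ec);
                    if (ec_transaction_completed(ec))
                            return 0;
                    usleep_range(500, 1000);
            } while (time_before(jiffies,
                                 ec->timestamp + msecs_to_jiffies(ACPI_EC_DELAY)));

            return -ETIME;
    }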
-
Lv Zheng authored
This patch switches the EC driver into ACPI_GPE_DISPATCH_RAW_HANDLER mode, where the GPE lock is not held for acpi_ec_gpe_handler() and the ACPICA internal GPE enabling/disabling/clearing operations are bypassed so that further improvements are possible with the GPE APIs. There are 2 strong reasons for deploying raw GPE handler mode in the EC driver:
1. Some hardware logic can control its interrupts via its own registers, so its interrupts can be disabled/enabled/acknowledged without using the functions provided by the super IRQ controller. The EC driver, however, has no means (no EC commands) to achieve this.
2. During suspend, the EC driver keeps working for a while to complete the platform firmware provided functionality using ec_poll() after all GPEs are disabled (see acpi_ec_block_transactions()), which means the EC driver will drive the EC GPE outside of the GPE core's control.
Without deploying the raw GPE handler mode, we can see many races between the EC driver and the GPE core due to the above restrictions:
1. There is a race condition due to the ACPICA internal GPE disabling/clearing/enabling logic in acpi_ev_gpe_dispatch(): Originally the EC GPE is disabled (EN=0) and cleared (STS=0) before invoking a GPE handler, and re-enabled (EN=1) after invoking a GPE handler, in acpi_ev_gpe_dispatch(). When the re-enabling happens, the GPE may be flagged (STS=1).
=================================================================
(event pending A)
=================================================================
acpi_ev_gpe_dispatch()           ec_poll()
EN=0
STS=0
acpi_ec_gpe_handler()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
EC_SC read
=================================================================
(event pending B)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
*****************************************************************
(event handling B)
Lock(EC)
advance_transaction()
EC_SC read
=================================================================
(event pending C)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
EN=1
This race condition is the root cause of different issues on different silicon variations.
A. Silicon variation A: On some platforms, the GPE will be triggered by "writing 1 to EN when STS=1", because both the EN and STS lines are wired to the GPE trigger line.
1. Issue 1: We can see no-op acpi_ec_gpe_handler() invocations on such platforms. This is because:
 a. event pending B: An event can arrive after ACPICA's GPE clearing performed in acpi_ev_gpe_dispatch(); this event may fail to be detected by the EC_SC read performed before its arrival;
 b. event handling B: The event can be handled in ec_poll() because the EC lock is released after the acpi_ec_gpe_handler() invocation;
 c. There is no code in ec_poll() to clear STS, but the GPE can still be triggered by the EN=1 write performed in acpi_ev_finish_gpe(), which leads to a no-op EC GPE handler invocation;
 d. As no-op GPE handler invocations are counted by the EC driver to detect command storm conditions, these spurious no-op GPE handler invocations can easily trigger false command storm conditions.
Note 1: If we removed the GPE disabling/enabling code from acpi_ev_gpe_dispatch(), we could still see no-op GPE handlers triggered by an event arriving after the GPE clearing and before the GPE handling, on both silicon variations A and B. This can only occur if the CPU is very slow (the window between the STS=0 write and the EC_SC read is normally too short for the hardware to set another GPE indication within it). Thus this is very rare and is not what we need to fix.
B. Silicon variation B: On other platforms, the GPE may not be triggered by "writing 1 to EN when STS=1", because only the STS line is wired to the GPE trigger line.
2. Issue 2: We can see GPE loss on such platforms. This is because:
 a. event pending B vs. event handling A: An event can arrive after ACPICA's GPE handling performed in acpi_ev_gpe_dispatch(), or event pending C vs. event handling B: An event can arrive after Linux's GPE handling performed in ec_poll(); these events may fail to be detected by the EC_SC read performed before their arrival;
 b. The GPE cannot be triggered by the EN=1 write performed in acpi_ev_finish_gpe();
 c. If no polling mechanism is implemented in the driver for the pending event (for example, SCI_EVT), the event is lost because no GPE is triggered.
Note 2: On most platforms, there might be another rule that the GPE may not be triggered by "writing 1 to STS when STS=1 and EN=1". Then on silicon variation B an even worse case exists: if the issue 2 event loss happens, further events may never trigger a GPE again on such platforms because they are blocked by the current STS=1. Unless someone clears STS, all events have to be polled.
2. There is a race condition due to the lack of GPE status checking in the EC driver: Originally, the GPE status is checked in the ACPICA core but not in the GPE handler. Since the status checking and the handling are not performed under one lock, they can be interleaved with another handling path.
=================================================================
(event pending A)
=================================================================
acpi_ev_gpe_detect()             ec_poll()
if (EN==1 && STS==1)
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
EC_SC read
EC_SC handled
Unlock(EC)
*****************************************************************
acpi_ev_gpe_dispatch()
EN=0
STS=0
acpi_ec_gpe_handler()
*****************************************************************
(event handling B)
Lock(EC)
advance_transaction()
EC_SC read
Unlock(EC)
*****************************************************************
3. Issue 3: We can see no-op acpi_ec_gpe_handler() invocations on both silicon variations A and B. This is because:
 a. event pending A: An event can arrive to trigger an EC GPE; ACPICA checks it and is about to invoke the EC GPE handler;
 b. event handling A: The event can be handled in ec_poll() because the EC lock is not held after the GPE status checking;
 c. event handling B: When the EC GPE handler is then invoked, it becomes a no-op GPE handler invocation;
 d. As no-op GPE handler invocations are counted by the EC driver to detect command storm conditions, these spurious no-op GPE handler invocations can easily trigger false command storm conditions.
Note 3: This no-op GPE handler invocation is rare because the time between the IRQ arrival and the acpi_ec_gpe_handler() invocation is less than the timeout waited in ec_poll(). So most of the no-op GPE handler invocations are caused by the reason described in issue 1.
3. There is a race condition due to the ACPICA internal GPE clearing logic in acpi_enable_gpe(): During runtime, acpi_enable_gpe() can be invoked by the EC storm prevention code. When it is invoked, the GPE may be flagged (STS=1).
=================================================================
(event pending A)
=================================================================
acpi_ev_gpe_dispatch()           acpi_ec_transaction()
EN=0
STS=0
acpi_ec_gpe_handler()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
EC_SC read
EC_SC handled
Unlock(EC)
*****************************************************************
EN=1 ?
Lock(EC)
Unlock(EC)
=================================================================
(event pending B)
=================================================================
acpi_enable_gpe()
STS=0
EN=1
4. Issue 4: We can see GPE loss on both silicon variation A and B platforms. This is because:
 a. event pending B: An event can arrive right before ACPICA's GPE clearing performed in acpi_enable_gpe();
 b. If the GPE is cleared while the GPE is disabled, the EN=1 write in acpi_enable_gpe() cannot trigger this GPE;
 c. If no polling mechanism is implemented in the driver for this event (for example, SCI_EVT), the event is lost because no GPE is triggered.
Note 4: Currently we don't have this issue, but after we switch the EC driver into ACPI_GPE_DISPATCH_RAW_HANDLER mode, we need to take care of handling this because the EN=1 write in acpi_ev_gpe_dispatch() will be abandoned.
There might be more race issues for the current GPE handler usages. This is because the EC IRQ's enabling/disabling/checking/clearing/handling operations should be serialized by a single lock that is under the EC driver's control. This means we need to invoke the GPE APIs with the EC driver's lock held, and all ACPICA internal GPE operations related to the GPE handler should be abandoned. Invoking the GPE APIs inside of the EC driver lock and bypassing the ACPICA internal GPE operations requires the ACPI_GPE_DISPATCH_RAW_HANDLER mode, where the same lock used by the APIs is released prior to invoking the handlers. Otherwise, we can see deadlocks due to circular locking dependencies (see Reference below). This patch then switches the EC driver into the ACPI_GPE_DISPATCH_RAW_HANDLER mode so that it can perform correct GPE operations using the GPE APIs:
1. Bypasses the EN modifications performed in acpi_ev_gpe_dispatch() by using acpi_install_gpe_raw_handler() and invoking all GPE APIs with the EC spinlock held. This can fix issue 1 as it results in an environment without frequent GPE enabling/disabling.
2. Bypasses the STS clearing performed in acpi_enable_gpe() by replacing acpi_enable_gpe()/acpi_disable_gpe() with acpi_set_gpe(). This can fix issue 4, and it can also help to fix issue 1 as it removes sudden GPE clearing when the GPE is frequently enabled/disabled.
3. Ensures STS is acknowledged before handling by invoking acpi_clear_gpe() in advance_transaction(). This finally fixes issue 1 even in a frequent GPE enabling/disabling environment, and it also finally fixes issue 3 once issue 2 is fixed.
Note 3: GPE clearing is edge triggered W1C, which means we can clear it right before handling it. Since all EC GPE indications are handled in advance_transaction() by previous commits, we can now move the GPE clearing into it to implement correct GPE clearing.
Note 4: We can use acpi_set_gpe(), which is not safe for shared GPEs, instead of acpi_enable_gpe()/acpi_disable_gpe(), because the EC GPE is not shared with other hardware, as mentioned in the ACPI specification 5.0, 12.6 Interrupt Model: "OSPM driver treats this as an edge event (the EC SCI cannot be shared)". So we can stop using the shared-GPE-safe APIs acpi_enable_gpe()/acpi_disable_gpe() in the EC driver. Otherwise, cleanups would need to be made in acpi_ev_enable_gpe() to bypass the GPE clearing logic while keeping acpi_enable_gpe().
This patch also invokes advance_transaction() when the GPE is re-enabled in the task context, which:
1. Ensures EN=1 can trigger the GPE by checking and handling the EC status register right after the EN=1 write. This can fix issue 2.
After applying this patch, without frequent GPE enabling considered:
=================================================================
(event pending A)
=================================================================
acpi_ec_gpe_handler()            ec_poll()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending B)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
*****************************************************************
(event handling B)
Lock(EC)
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending C)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
The event pending for issue 1 (event pending B) can arrive as a next GPE thanks to the previous IRQ context STS=0 write. And if it is handled by ec_poll() (event handling B), as it is also acknowledged by ec_poll(), the event pending for issue 2 (event pending C) can properly arrive as a next GPE after the task context STS=0 write. So no GPE will be lost, or left forever untriggered, due to GPE clearing performed in the wrong position. And since all GPE handling is performed after a locked GPE status check, we can hardly see no-op GPE handler invocations due to issues 1 and 3. We may still see no-op GPE handler invocations due to "Note 1", but as that is inevitable, it needn't be fixed.
After applying this patch, with frequent GPE enabling considered:
=================================================================
(event pending A)
=================================================================
acpi_ec_gpe_handler()            acpi_ec_transaction()
*****************************************************************
(event handling A)
Lock(EC)
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending B)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
*****************************************************************
(event handling B)
Lock(EC)
EN=1
if STS==1
advance_transaction()
if STS==1
STS=0
EC_SC read
=================================================================
(event pending C)
=================================================================
EC_SC handled
Unlock(EC)
*****************************************************************
The event pending for issue 2 can be manually handled by advance_transaction(), and after the STS=0 write performed in the manually triggered advance_transaction(), a GPE can always arrive. So no GPE will be lost due to frequent GPE disabling/enabling performed in the driver, as in issue 4.
Note 5: Ideally, when the EN=1 write occurs, an IRQ thread should be woken up to handle the GPE if the GPE has been raised. But this requires the IRQ thread to contain the poller code for all EC GPE indications, while currently some of the indications are handled in user tasks. It is then very hard for the code to determine whether a user task should be invoked or the poller work item should be scheduled. So we have to invoke advance_transaction() directly for now, which leaves us with the following restriction on GPE re-enabling: it must be performed in the task context to avoid starving the GPEs.
In conclusion, the EC GPE is always handled serially after deploying the raw GPE handler mode:
Lock(EC)
if (STS==1)
STS=0
EC_SC read
EC_SC handled
Unlock(EC)
The EC driver specific lock serializes the EC GPE handling so that the EC can handle its GPE from both the IRQ and task contexts, and the next IRQ is guaranteed to arrive after this process.
Note 6: We have many EC_FLAGS_MSI quirk users in the current driver. They all seem to suffer from unexpectedly lost GPE triggering sources, and they were falsely root-caused to a timing issue. Since the EC communication protocol already has flow control defined, timing shouldn't be the root cause, while this fix might be fixing the root cause of those old bugs.
Link: https://lkml.org/lkml/2014/11/4/974
Link: https://lkml.org/lkml/2014/11/18/316
Link: https://www.spinics.net/lists/linux-acpi/msg54340.html
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
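A condensed sketch of the resulting raw-handler deployment: acpi_install_gpe_raw_handler() and acpi_clear_gpe() are the APIs named above, while the surrounding driver code and helper names are assumptions:

    static void advance_transaction(struct acpi_ec *ec)
    {
            /* Edge-triggered W1C: acknowledge STS right before handling, with the
             * EC lock held, so status checking and handling cannot interleave.
             */
            if (acpi_ec_gpe_status_set(ec))          /* assumed helper */
                    acpi_clear_gpe(NULL, ec->gpe);
            /* ... read EC_SC and drive the transaction/event state machine ... */
    }

    static u32 acpi_ec_gpe_handler(acpi_handle gpe_device, u32 gpe_number, void *data)
    {
            struct acpi_ec *ec = data;
            unsigned long flags;

            /* In raw handler mode ACPICA's GPE lock is NOT held here and no
             * internal EN/STS manipulation happens around this call.
             */
            spin_lock_irqsave(&ec->lock, flags);
            advance_transaction(ec);
            spin_unlock_irqrestore(&ec->lock, flags);

            return ACPI_INTERRUPT_HANDLED;
    }

    static int ec_install_gpe_handler(struct acpi_ec *ec)
    {
            acpi_status status;

            status = acpi_install_gpe_raw_handler(NULL, ec->gpe,
                                                  ACPI_GPE_EDGE_TRIGGERED,
                                                  &acpi_ec_gpe_handler, ec);
            return ACPI_SUCCESS(status) ? 0 : -ENODEV;
    }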
-
Rafael J. Wysocki authored
-
Lv Zheng authored
ACPICA commit da9a83e1a845f2d7332bdbc0632466b2595e5424
For acpi_set_gpe()/acpi_enable_gpe(), our target is to purify them into APIs that can be used for various GPE handling models, so we need them to be pure GPE enabling APIs. GPE enabling/disabling has 2 use cases:
1. A driver may permanently enable/disable GPEs according to usage counts.
 1. When upper layers (the users of the driver) submit requests to the driver, it means they care about the underlying hardware. The GPE needs to be enabled on the first request submission and disabled on the last request completion.
 2. When the GPE is shared between 2+ pieces of silicon logic, the GPE needs to be enabled when either silicon logic's driver needs it and disabled when none of the drivers are started.
 For these cases, acpi_enable_gpe()/acpi_disable_gpe() should be used. The GPE is enabled when the usage count increases from 0 to 1 and disabled when the usage count decreases from 1 to 0.
2. A driver may temporarily disable the GPE to enter a GPE polling mode and want to re-enable it later.
 1. Prevent GPE storms: when a driver cannot fully resolve the condition that triggered the GPE from within the GPE context, then in order not to trigger a GPE storm, the driver has to disable the GPE to switch into the polling mode and re-enable it in non-interrupt context after the storm condition is cleared.
 2. Meet throughput requirements: some IO drivers need to poll the hardware again and again until nothing is indicated, instead of handling just once per interrupt; this needs to be done in the polling mode or the IO flood may prevent the GPE handler from returning.
 3. Meet realtime requirements: in order not to block the CPU from handling higher priority realtime GPEs, lower priority GPEs can be handled in the polling mode.
 For these cases, acpi_set_gpe() should be used to switch to/from the polling mode.
This patch adds unconditional GPE enabling support to acpi_set_gpe() so that this API can be used by drivers to switch back from the GPE polling mode unconditionally. Originally this function included GPE clearing logic. First, GPE clearing is typically used in the GPE handling code to:
1. Acknowledge the GPE when we know an edge triggered GPE has been raised and we are about to handle it; otherwise the unexpected clearing may lead to a GPE loss;
2. Issue actions after we have handled a level triggered GPE; otherwise the unexpected clearing may trigger unwanted OSPM actions on the hardware (for example, clocking in outdated write FIFO data).
Thus GPE clearing is not suitable for use in the GPE enabling APIs. Second, the combination of acknowledging and enabling may also not be what hardware drivers expect. For GPE clearing, we have a separate API, acpi_clear_gpe(). There are cases where drivers do want the 2 operations to be split, so splitting these 2 operations gives drivers the maximum flexibility to get things right. For a combined one, we already have acpi_finish_gpe() ready to be invoked. Given that drivers should complete all outstanding requests before putting themselves into sleep states, and this API is executed for outstanding requests, it should also have nothing to do with the "RUN"/"WAKE" distinction. That's why acpi_set_gpe(ACPI_GPE_ENABLE) should not be implemented via acpi_hw_low_set_gpe(ACPI_GPE_CONDITIONAL_ENABLE). This patch thus converts acpi_set_gpe(ACPI_GPE_ENABLE) into acpi_hw_low_set_gpe(ACPI_GPE_ENABLE) to provide a separate GPE enabling API.
Drivers are then encouraged to use this API when they need to switch to/from the GPE polling mode. Note that acpi_set_gpe()/acpi_finish_gpe() should first be introduced to Linux via a divergence reduction patch before sending a linuxized version of this patch. Lv Zheng. Link: https://github.com/acpica/acpica/commit/da9a83e1 Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
This can help to reduce source code differences between Linux and ACPICA upstream. Further driver cleanups also require these APIs to eliminate GPE storms. 1. acpi_set_gpe(): an API the driver should invoke when it wants to disable/enable the IRQ without honoring the reference count implemented by acpi_disable_gpe() and acpi_enable_gpe(). Note that this API should only be invoked inside an acpi_enable_gpe()/acpi_disable_gpe() pair. 2. acpi_finish_gpe(): drivers that return from the GPE handler without ACPI_REENABLE_GPE set should use this API, instead of invoking acpi_set_gpe()/acpi_enable_gpe(), to re-enable the GPE. The GPE APIs can be invoked inside a GPE handler or in the task context with a driver provided lock held. This driver provided lock is safe to hold in the GPE handler. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
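A hypothetical (non-EC) driver fragment illustrating the intended pairing; the device struct and helpers are invented for the example, and only the acpi_*_gpe() calls are the real API:

    struct example_dev {
            u32 gpe;
            struct work_struct poll_work;
    };

    static void example_start(struct example_dev *dev)
    {
            acpi_enable_gpe(NULL, dev->gpe);        /* takes a usage reference */
    }

    static void example_enter_polling_mode(struct example_dev *dev)
    {
            /* Legal only inside the acpi_enable_gpe()/acpi_disable_gpe() pair. */
            acpi_set_gpe(NULL, dev->gpe, ACPI_GPE_DISABLE);
            schedule_work(&dev->poll_work);
    }

    static void example_poll_work(struct work_struct *work)
    {
            struct example_dev *dev = container_of(work, struct example_dev, poll_work);

            /* ... drain the hardware in polling mode ... */

            /* Re-enable (and re-evaluate) the GPE after a handler that returned
             * without ACPI_REENABLE_GPE set.
             */
            acpi_finish_gpe(NULL, dev->gpe);
    }

    static void example_stop(struct example_dev *dev)
    {
            acpi_disable_gpe(NULL, dev->gpe);       /* drops the usage reference */
    }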
-
Lv Zheng authored
ACPICA commit 199cad16530a45aea2bec98e528866e20c5927e1
Whether the GPE should be disabled/enabled/cleared should only be determined by the GPE driver's state machine:
1. The GPE should be disabled if the driver wants to switch to the GPE polling mode when a GPE storm condition is indicated, and should be enabled if the driver wants to switch back to the GPE interrupt mode when all of the storm conditions are cleared. The conditions should be protected by the driver's specific lock.
2. The GPE should be enabled if the driver has accepted more than one request and should be disabled if the driver has completed all of the requests. The request count should be protected by the driver's specific lock.
3. The GPE should be cleared either when the driver is about to handle an edge triggered GPE or when the driver has finished handling a level triggered GPE. The handling code should be protected by the driver's specific lock.
Thus the GPE enabling/disabling/clearing operations are likely to be performed with the driver's specific lock held, which we currently cannot do. This is because:
1. We have the acpi_gbl_gpe_lock held before invoking the GPE driver's handler. The driver's specific lock is likely to be held inside of the handler, thus we can see deadlock issues due to the reversed locking order or recursive locking. In order to solve such deadlock issues, we need to unlock the acpi_gbl_gpe_lock before invoking the handler. BZ 1100.
2. Since GPE disabling/enabling/clearing should be determined by the GPE driver's state machine, we shouldn't perform such operations inside of ACPICA for a GPE handler, to avoid messing up the driver's state machine. BZ 1101.
Originally this patch included logic to flush GPE handlers; it was dropped for the following reasons: 1. This is a different issue; 2. The Linux OSL has fixed this by flushing SCI in acpi_os_wait_events_complete(). We will pick up this topic again if the Linux OSL fix turns out to be insufficient.
Note that currently the internal operations and the acpi_gbl_gpe_lock are also used by ACPI_GPE_DISPATCH_METHOD and ACPI_GPE_DISPATCH_NOTIFY. In order not to introduce regressions, we add a new ACPI_GPE_DISPATCH_RAW_HANDLER type, distinguished from ACPI_GPE_DISPATCH_HANDLER, for which the acpi_gbl_gpe_lock is unlocked before invoking the GPE handler and the internal enabling/disabling operations are bypassed to allow drivers to perform them at a proper position using the GPE APIs; ACPI_GPE_DISPATCH_RAW_HANDLER users should also invoke acpi_set_gpe() instead of acpi_enable_gpe()/acpi_disable_gpe() to bypass the internal GPE clearing code in acpi_enable_gpe(). Lv Zheng.
Known issues:
1. Edge-triggered GPE lost for frequent enablings: On some buggy silicon platforms, the GPE enable line may not be directly wired to the GPE trigger line. In that case, when GPE enabling is frequently performed for edge-triggered GPEs, the GPE status may stay set without a GPE being triggered. This patch may magnify this problem as it allows GPE enabling to be performed in parallel with GPE handling. This is an existing issue, because:
 1. For the task context: current ACPI_GPE_DISPATCH_METHOD practice has proven that this isn't a real issue - we can re-enable an edge-triggered GPE in a work queue where the GPE status bit might already be set.
 2. For the IRQ context: this can even happen when the GPE enabling occurs before returning from the GPE handler and after unlocking the GPE lock.
Thus currently no code is included to protect against this.
Link: https://github.com/acpica/acpica/commit/199cad16
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
David E. Box authored
ACPICA commit e06b1624b02dc8317d144e9a6fe9d684c5fa2f90 Version 20150204. Link: https://github.com/acpica/acpica/commit/e06b1624
Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
David E. Box authored
ACPICA commit 8990e73ab2aa15d6a0068b860ab54feff25bee36 Link: https://github.com/acpica/acpica/commit/8990e73a
Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
David E. Box authored
ACPICA commit 490ec7f7839bf7ee5e8710a34d1d1a78d54a49b6 In function acpi_hw_low_set_gpe(), cast enable_mask to u8 before storing. The mask was read from a 32-bit register but is an 8-bit value. Fixes a Visual Studio compiler warning. Link: https://github.com/acpica/acpica/commit/490ec7f7
Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
ACPICA commit 7926d5ca9452c87f866938dcea8f12e1efb58f89
There is an issue in acpi_install_gpe_handler() and acpi_remove_gpe_handler(). The code to obtain the GPE dispatcher type from Handler->original_flags is wrong:
    if (((Handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
         (Handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) &&
ACPI_GPE_DISPATCH_NOTIFY is 0x03 and ACPI_GPE_DISPATCH_METHOD is 0x02, thus this statement is TRUE for the following dispatcher types:
    0x01 (ACPI_GPE_DISPATCH_HANDLER): not expected
    0x02 (ACPI_GPE_DISPATCH_METHOD): expected
    0x03 (ACPI_GPE_DISPATCH_NOTIFY): expected
There is no functional issue due to this, because Handler->original_flags is only set in acpi_install_gpe_handler(), and an earlier check has already excluded ACPI_GPE_DISPATCH_HANDLER:
    if ((gpe_event_info->Flags & ACPI_GPE_DISPATCH_MASK) ==
        ACPI_GPE_DISPATCH_HANDLER) {
        Status = AE_ALREADY_EXISTS;
        goto free_and_exit;
    }
    ...
    Handler->original_flags = (u8) (gpe_event_info->Flags &
        (ACPI_GPE_XRUPT_TYPE_MASK | ACPI_GPE_DISPATCH_MASK));
We need to clean this up before modifying the GPE dispatcher type values. In order to prevent such an issue from happening in the future, this patch introduces the ACPI_GPE_DISPATCH_TYPE() macro to be used to obtain the GPE dispatcher type. Lv Zheng.
Link: https://github.com/acpica/acpica/commit/7926d5ca
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
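A small, self-contained demonstration of why the old bit-wise test also matches ACPI_GPE_DISPATCH_HANDLER and how an extraction macro avoids it. The dispatcher values are the ones quoted above; the mask value and the macro body mirror what such a helper would look like and may differ from the exact ACPICA definition:

    #include <stdio.h>

    typedef unsigned char u8;

    #define ACPI_GPE_DISPATCH_NONE     0x00
    #define ACPI_GPE_DISPATCH_HANDLER  0x01
    #define ACPI_GPE_DISPATCH_METHOD   0x02
    #define ACPI_GPE_DISPATCH_NOTIFY   0x03
    #define ACPI_GPE_DISPATCH_MASK     0x07   /* assumed mask value */

    /* The helper the commit introduces (exact ACPICA definition may differ). */
    #define ACPI_GPE_DISPATCH_TYPE(flags)  ((u8)((flags) & ACPI_GPE_DISPATCH_MASK))

    int main(void)
    {
            for (u8 flags = 0; flags <= ACPI_GPE_DISPATCH_NOTIFY; flags++) {
                    /* Old, buggy test: also true for DISPATCH_HANDLER (0x01),
                     * because 0x01 & ACPI_GPE_DISPATCH_NOTIFY (0x03) != 0.
                     */
                    int old_match = (flags & ACPI_GPE_DISPATCH_METHOD) ||
                                    (flags & ACPI_GPE_DISPATCH_NOTIFY);
                    /* Fixed test: compare the extracted dispatcher type. */
                    int new_match = (ACPI_GPE_DISPATCH_TYPE(flags) == ACPI_GPE_DISPATCH_METHOD) ||
                                    (ACPI_GPE_DISPATCH_TYPE(flags) == ACPI_GPE_DISPATCH_NOTIFY);
                    printf("flags=0x%02x old=%d new=%d\n", flags, old_match, new_match);
            }
            return 0;
    }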
-
Lv Zheng authored
ACPICA: Events: Cleanup to move acpi_gbl_global_event_handler invocation out of acpi_ev_gpe_dispatch()
ACPICA commit 04f25acdd4f655ae33f83de789bb5f4b7790171c
Following acpi_ev_fixed_event_detect(), which invokes acpi_gbl_global_event_handler instead of invoking it in acpi_ev_fixed_event_dispatch(), this patch moves the acpi_gbl_global_event_handler invocation from acpi_ev_gpe_dispatch() to acpi_ev_gpe_detect(). This makes further cleanups around acpi_ev_gpe_dispatch() simpler. Lv Zheng.
Link: https://github.com/acpica/acpica/commit/04f25acd
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
ACPICA commit b2b18bb38045404e253f10787b8a4ae6e94cdee6 This patch prevents acpi_remove_gpe_handler() from leaking the stale gpe_event_info->Dispatch.Handler to the caller to avoid possible NULL pointer references. Lv Zheng. Link: https://github.com/acpica/acpica/commit/b2b18bb3
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
David E. Box authored
ACPICA commit 8e21180050270897499652e922c6a41b8eb388b6 Recent changes to acpi_ev_asynch_execute_gpe_method left the Status variable uninitialized before use. Initialize it to AE_OK. Link: https://github.com/acpica/acpica/commit/8e211800
Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
ACPICA commit 8823b44ff53859ab24ecfcfd3fba8cc56b17d223 Currently we rely on the logic that GPE blocks will never be deleted; otherwise we can be broken by the race between acpi_ev_create_gpe_block(), acpi_ev_delete_gpe_block() and acpi_ev_gpe_detect(). On the other hand, if we want to protect GPE block creation/deletion, we need to use a different synchronization facility to protect the period between acpi_ev_gpe_dispatch() and acpi_ev_asynch_enable_gpe(). This leaves us no choice but to abandon the ACPI_MTX_EVENTS used during this period. This patch removes the ACPI_MTX_EVENTS used during this period and acpi_ev_valid_gpe_event() to reflect the current restriction. Lv Zheng. Link: https://github.com/acpica/acpica/commit/8823b44f
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
ACPICA commit ca10324788bc9bdaf47fa9e51145129c1299144d This patch deletes a sanity check from acpi_ev_enable_gpe(). This kind of check is already done in acpi_enable_gpe()/acpi_remove_gpe_handler()/acpi_update_all_gpes() before invoking acpi_ev_enable_gpe():
1. acpi_enable_gpe(): the same check (skip if DISPATCH_NONE) is now implemented.
2. acpi_remove_gpe_handler(): a stricter check (skip if !DISPATCH_HANDLER) is implemented.
3. acpi_update_all_gpes(): a stricter check (skip if DISPATCH_NONE || DISPATCH_HANDLER || CAN_WAKE) is implemented.
4. acpi_set_gpe(): since it is invoked by the OSPM driver where the GPE handler is known to be available, such a check isn't needed.
So we can simply remove this duplicated check from acpi_ev_enable_gpe(). Lv Zheng. Link: https://github.com/acpica/acpica/commit/ca103247
Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
This is the result of back-porting the following Linux commit:
Commit: c50f13c6
Subject: ACPICA: Save current masks of enabled GPEs after enable register writes
Besides the indentation divergences, only a missing prototype was added, per the ACPICA internal coding style. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 26 Jan, 2015 2 commits
-
-
Lv Zheng authored
struct acpi_resource_address and struct acpi_resource_extended_address64 share common substructs, just at different offsets. To unify the parsing functions, OSPMs like Linux need a new ACPI_ADDRESS64_ATTRIBUTE as their substruct, so they can extract the shared data. This patch also synchronizes the structure changes to the Linux kernel. The usages were found by searching for the following keywords:
1. acpi_resource_address
2. acpi_resource_extended_address
3. ACPI_RESOURCE_TYPE_ADDRESS
4. ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS
And we found and fixed the usages in the following files:
arch/ia64/kernel/acpi-ext.c
arch/ia64/pci/pci.c
arch/x86/pci/acpi.c
arch/x86/pci/mmconfig-shared.c
drivers/xen/xen-acpi-memhotplug.c
drivers/acpi/acpi_memhotplug.c
drivers/acpi/pci_root.c
drivers/acpi/resource.c
drivers/char/hpet.c
drivers/pnp/pnpacpi/rsparser.c
drivers/hv/vmbus_drv.c
Build tests pass with defconfig/allnoconfig/allyesconfig and defconfig+CONFIG_ACPI=n. Original-by: Thomas Gleixner <tglx@linutronix.de> Original-by: Jiang Liu <jiang.liu@linux.intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
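For illustration, a hedged sketch of how the shared substruct lets one helper serve both resource types; the field and union member names are as they appear in the ACPICA headers after this change, while the helpers themselves are hypothetical:

    /* Hypothetical helper: print the window carried by the shared substruct. */
    static void example_show_window(struct acpi_address64_attribute *addr)
    {
            pr_info("window [%#llx-%#llx], offset %#llx, len %#llx\n",
                    (unsigned long long)addr->minimum,
                    (unsigned long long)addr->maximum,
                    (unsigned long long)addr->translation_offset,
                    (unsigned long long)addr->address_length);
    }

    static void example_parse_resource(struct acpi_resource *res)
    {
            switch (res->type) {
            case ACPI_RESOURCE_TYPE_ADDRESS64:
                    example_show_window(&res->data.address64.address);
                    break;
            case ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64:
                    example_show_window(&res->data.ext_address64.address);
                    break;
            }
    }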
-
Lv Zheng authored
ACPICA has implemented acpi_unload_parent_table(), which can exactly replace the acpi_get_id()/acpi_unload_table_id() implemented in the Linux kernel. acpi_unload_parent_table() has been unit tested in the ACPICA simulation environment. This patch also helps to reduce the source code differences between Linux and ACPICA. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Octavian Purdila <octavian.purdila@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 23 Jan, 2015 6 commits
-
-
Lv Zheng authored
The QR_EC related code has redundant pieces; this patch merges them into acpi_ec_query(), which invokes acpi_ec_transaction(), where the EC mutex and the global lock are already held. After doing so, the query handler traversal still needs to be protected by the EC mutex after invoking acpi_ec_transaction(). Note that EC event handling is sequential: we fetch one event from the firmware event queue and process it until 0x00 or an error is returned. So we don't need to hold the mutex for the whole acpi_ec_clear() process to determine whether we should continue to drain. And for the same reason, we don't need to hold the mutex for the whole procedure from the QR_EC transaction to the query handler traversal. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Lv Zheng authored
This patch fixes 2 issues related to the draining behavior. It doesn't implement the draining support itself; it only cleans up code so that further draining support is possible.
The draining behavior is expected by some platforms (for example, Samsung) where SCI_EVT is set only once for a set of events and might be cleared by the very first QR_EC command issued after SCI_EVT is set. EC firmware on such platforms will return 0x00 to indicate "no outstanding event". Thus after seeing an SCI_EVT indication, the EC driver needs to fetch events until 0x00 is returned (see acpi_ec_clear()).
Issue 1 - acpi_ec_submit_query(): It's reported on Samsung laptops that SCI_EVT isn't checked when the transactions are advanced in ec_poll(), which leads to the SCI_EVT triggering source being lost: if no EC GPE IRQs arrive after that, the EC driver cannot detect this event and handle it. See comments 244/247 of kernel bugzilla 44161. This patch fixes the issue by moving the SCI_EVT checks into advance_transaction(), so that SCI_EVT is checked each time we are going to handle the EC firmware indications, and this check happens in both the IRQ context and the task context. Since, after doing that, SCI_EVT is also checked after completing a transaction, ec_check_sci() and ec_check_sci_sync() can be removed.
Issue 2 - acpi_ec_complete_query(): We expect to clear EC_FLAGS_QUERY_PENDING to allow queuing another draining QR_EC after writing a QR_EC command and before reading the event. After reading the event, SCI_EVT might be cleared by the firmware, so it may not be possible to queue such a draining QR_EC at that time. But putting the EC_FLAGS_QUERY_PENDING clearing code after start_transaction() is wrong, as there are chances that QR_EC can fail to be sent after start_transaction(). If this happens, EC_FLAGS_QUERY_PENDING will be cleared earlier than intended and, as a consequence, the draining QR_EC will also be queued earlier than expected. This patch moves this code into advance_transaction(), to the point where QR_EC has just been sent (ACPI_EC_COMMAND_POLL flagged), to fix this issue.
Notes:
1. After introducing the 2 SCI_EVT related handlings into advance_transaction(), the next QR_EC can be queued right after writing the current QR_EC command and before reading the event. But this still doesn't implement the draining behavior, as the draining support requires: if a previously returned event value isn't 0x00, a draining QR_EC needs to be issued even when SCI_EVT isn't set.
2. In this patch, acpi_os_execute() is also converted into a separate work item to avoid invoking kmalloc() in the atomic context. We can do this because of the previous global lock fix.
3. Originally, EC_FLAGS_EVENT_PENDING was also used to avoid queuing up multiple work items (created by acpi_os_execute()); this can be covered by using only a single work item. But this patch still keeps this flag as there are different usages in the driver initialization steps relying on it.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=44161
Reported-by: Kieran Clancy <clancy.kieran@gmail.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
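A sketch of where the two checks land inside advance_transaction() after this change; the status/flag macro names are the driver's existing ones as far as known, and the surrounding structure is simplified and partly assumed:

    static void advance_transaction(struct acpi_ec *ec)
    {
            u8 status = acpi_ec_read_status(ec);
            struct transaction *t = ec->curr;

            /* Issue 1: check SCI_EVT on every indication, from both the GPE
             * handler and ec_poll(), so an event cannot be missed when no
             * further GPE arrives.
             */
            if (status & ACPI_EC_FLAG_SCI)
                    acpi_ec_submit_query(ec);        /* assumed helper */

            if (t && !(t->flags & ACPI_EC_COMMAND_POLL) &&
                !(status & ACPI_EC_FLAG_IBF)) {
                    acpi_ec_write_cmd(ec, t->command);
                    t->flags |= ACPI_EC_COMMAND_POLL;
                    /* Issue 2: only once QR_EC has actually been written is it
                     * safe to allow the next (draining) query to be queued
                     * (done only for query transactions in the real flow).
                     */
                    acpi_ec_complete_query(ec);      /* clears EC_FLAGS_QUERY_PENDING */
            }
            /* ... remainder of the transaction state machine ... */
    }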
-
Lv Zheng authored
Currently QR_EC is queued up on CPU 0 to be safe with SMM, because no global lock is held for acpi_ec_gpe_query(). As we are about to move QR_EC to a work queue not bound to CPU 0, to avoid invoking kmalloc() in advance_transaction(), we have to acquire the global lock for the new QR_EC work item to avoid regressions. Known issue: 1. Global lock for acpi_ec_clear(): this is an existing issue; acpi_ec_clear(), which invokes acpi_ec_sync_query(), suffers from the same problem. But this patch's target is only the code that invokes acpi_ec_sync_query() in the CPU 0 bound work queue item, and acpi_ec_clear() will automatically be fixed by a further patch that merges the redundant code, so it is left unchanged. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
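A sketch of the work item with the global lock taken around the query, as described above; acpi_acquire_global_lock()/acpi_release_global_lock() are the real API, while the wrapper and field names are assumptions:

    static void acpi_ec_query_work_fn(struct work_struct *work)
    {
            struct acpi_ec *ec = container_of(work, struct acpi_ec, query_work);
            acpi_status status;
            u32 glk;

            /* The work item may now run on any CPU, so take the firmware global
             * lock explicitly before touching the EC for the query (the real
             * driver only does this when the EC declares _GLK).
             */
            status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk);
            if (ACPI_FAILURE(status))
                    return;

            acpi_ec_sync_query(ec, NULL);

            acpi_release_global_lock(glk);
    }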
-
Lv Zheng authored
The return value of acpi_os_execute() is erroneously handled as an errno. This patch corrects it by returning EBUSY to indicate work queue item creation failure. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
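A sketch of the corrected translation; the wrapper name is assumed, while acpi_os_execute() and its acpi_status return type are the real API:

    static int acpi_ec_schedule_query(struct acpi_ec *ec)
    {
            acpi_status status;

            status = acpi_os_execute(OSL_GPE_HANDLER, acpi_ec_gpe_query, ec);

            /* acpi_os_execute() returns an acpi_status (AE_*), not an errno, so
             * it must not be passed through; map failure to -EBUSY instead.
             */
            return ACPI_SUCCESS(status) ? 0 : -EBUSY;
    }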
-
Lv Zheng authored
This patch adds reference counting for query handlers in order to eliminate kmalloc()/kfree() usage. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Tested-by: Steffen Weber <steffen.weber@gmail.com> Tested-by: Ortwin Glück <odi@odi.ch> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
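A sketch of kref-based reference counting for the query handlers; the struct shape follows what such a driver would plausibly need, and the field names are assumptions:

    struct acpi_ec_query_handler {
            struct list_head node;
            acpi_ec_query_func func;
            acpi_handle handle;
            void *data;
            u8 query_bit;
            struct kref kref;
    };

    static struct acpi_ec_query_handler *
    acpi_ec_get_query_handler(struct acpi_ec_query_handler *handler)
    {
            kref_get(&handler->kref);
            return handler;
    }

    static void acpi_ec_query_handler_release(struct kref *kref)
    {
            struct acpi_ec_query_handler *handler =
                    container_of(kref, struct acpi_ec_query_handler, kref);

            kfree(handler);
    }

    static void acpi_ec_put_query_handler(struct acpi_ec_query_handler *handler)
    {
            kref_put(&handler->kref, acpi_ec_query_handler_release);
    }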
-
Lv Zheng authored
This patch moves transaction wakeup code into advance_transaction(). No functional changes. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 22 Jan, 2015 1 commit
-
-
Octavian Purdila authored
acpi_tb_delete_namespace_by_owner() expects ACPI_MTX_INTERPRETER to be taken. This fixes the following issue:
ACPI Error: Mutex [0x0] is not acquired, cannot release (20141107/utmutex-322)
Call Trace:
 [<ffffffff81b0bd28>] dump_stack+0x4f/0x7b
 [<ffffffff81546bfc>] acpi_ut_release_mutex+0x47/0x67
 [<ffffffff81542cf1>] acpi_tb_delete_namespace_by_owner+0x57/0x8d
 [<ffffffff81543ef1>] acpi_unload_table_id+0x3a/0x5e
Signed-off-by: Octavian Purdila <octavian.purdila@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 18 Jan, 2015 4 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Linus Torvalds authored
Pull ARM SoC fixes from Olof Johansson: "We've been sitting on our fixes branch for a while, so this batch is unfortunately on the large side. A lot of these are tweaks and fixes to device trees, fixing various bugs around clocks, reg ranges, etc. There's also a few defconfig updates (which are on the late side, no more of those). All in all the diffstat is bigger than ideal at this time, but nothing in here seems particularly risky"
* tag 'armsoc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (31 commits)
  reset: sunxi: fix spinlock initialization
  ARM: dts: disable CCI on exynos5420 based arndale-octa
  drivers: bus: check cci device tree node status
  ARM: rockchip: disable jtag/sdmmc autoswitching on rk3288
  ARM: nomadik: fix up leftover device tree pins
  ARM: at91: board-dt-sama5: add phy_fixup to override NAND_Tree
  ARM: at91/dt: sam9263: Add missing clocks to lcdc node
  ARM: at91: sama5d3: dt: correct the sound route
  ARM: at91/dt: sama5d4: fix the timer reg length
  ARM: exynos_defconfig: Enable LM90 driver
  ARM: exynos_defconfig: Enable options for display panel support
  arm: dts: Use pmu_system_controller phandle for dp phy
  ARM: shmobile: sh73a0 legacy: Set .control_parent for all irqpin instances
  ARM: dts: berlin: correct BG2Q's SM GPIO location.
  ARM: dts: berlin: add broken-cd and set bus width for eMMC in Marvell DMP DT
  ARM: dts: berlin: fix io clk and add missing core clk for BG2Q sdhci2 host
  ARM: dts: Revert disabling of smc91x for n900
  ARM: dts: imx51-babbage: Fix ULPI PHY reset modelling
  ARM: dts: dra7-evm: fix qspi device tree partition size
  ARM: omap2plus_defconfig: use CONFIG_CPUFREQ_DT
  ...
-
git://git.linaro.org/people/mike.turquette/linux
Linus Torvalds authored
Pull clock driver fixes from Mike Turquette: "Small number of fixes for clock drivers and a single null pointer dereference fix in the framework core code. The driver fixes vary from fixing section mismatch warnings to preventing machines from hanging (and preventing developers from crying)"
* tag 'clk-fixes-for-linus' of git://git.linaro.org/people/mike.turquette/linux:
  clk: fix possible null pointer dereference
  Revert "clk: ppc-corenet: Fix Section mismatch warning"
  clk: rockchip: fix deadlock possibility in cpuclk
  clk: berlin: bg2q: remove non-exist "smemc" gate clock
  clk: at91: keep slow clk enabled to prevent system hang
  clk: rockchip: fix rk3288 cpuclk core dividers
  clk: rockchip: fix rk3066 pll lock bit location
  clk: rockchip: Fix clock gate for rk3188 hclk_emem_peri
  clk: rockchip: add CLK_IGNORE_UNUSED flag to fix rk3066/rk3188 USB Host
-
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds authored
Pull SCSI fixes from James Bottomley: "This is one fix for a Multiqueue sleeping in invalid context problem and a MAINTAINER file update for Qlogic"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: ->queue_rq can't sleep
  MAINTAINERS: Update maintainer list for qla4xxx
-
- 17 Jan, 2015 2 commits
-
-
Stanimir Varbanov authored
The commit 646cafc6 (clk: Change clk_ops->determine_rate to return a clk_hw as the best parent) opens up the possibility of a null pointer dereference; fix this. Signed-off-by: Stanimir Varbanov <svarbanov@mm-sol.com> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Michael Turquette <mturquette@linaro.org>
-
Kevin Hao authored
This reverts commit da788acb. That commit tried to fix the section mismatch warning by moving the ppc_corenet_clk_driver struct to the init section. This is definitely wrong because the kernel would free the memory occupied by this struct after boot, while the driver is still registered with the driver core. The kernel would panic when accessing this driver struct. Cc: stable@vger.kernel.org # 3.17 Signed-off-by: Kevin Hao <haokexin@gmail.com> Acked-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Michael Turquette <mturquette@linaro.org>
-