- 03 Feb, 2012 23 commits
-
-
Cliff Wickman authored
commit d2ebc71d upstream. Initialize two spinlocks in tlb_uv.c and also properly define/initialize the uv_irq_lock. The lack of explicit initialization seems to be functionally harmless, but it is diagnosed when these are turned on: CONFIG_DEBUG_SPINLOCK=y CONFIG_DEBUG_MUTEXES=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_LOCKDEP=y Signed-off-by: Cliff Wickman <cpw@sgi.com> Cc: Dimitri Sivanich <sivanich@sgi.com> Link: http://lkml.kernel.org/r/E1RnXd1-0003wU-PM@eag09.americas.sgi.com [ Added the uv_irq_lock initialization fix by Dimitri Sivanich ] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
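For context, uninitialized spinlocks are only diagnosed when lock debugging is enabled; a minimal sketch of the two usual initialization styles follows (the lock names are illustrative, not the ones actually used in tlb_uv.c):

```c
#include <linux/init.h>
#include <linux/spinlock.h>

/* Statically initialized lock: no further setup needed. */
static DEFINE_SPINLOCK(example_static_lock);

/* Dynamically initialized lock: must be set up before first use,
 * otherwise DEBUG_SPINLOCK/LOCKDEP report it as uninitialized. */
static spinlock_t example_dynamic_lock;

static int __init example_init(void)
{
	spin_lock_init(&example_dynamic_lock);
	return 0;
}
```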
-
Stefan Berger authored
commit a927b813 upstream. This patch adds a delay after aborting a command. Some TPMs need this and will not process the subsequent command correctly otherwise. It's worth noting that a TPM randomly failing to process a command, maps to randomly failing suspend/resume operations. Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com> Signed-off-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexey Dobriyan authored
commit 51fc6dc8 upstream. For rounds 16--79, W[i] only depends on W[i - 2], W[i - 7], W[i - 15] and W[i - 16]. Consequently, keeping the whole W[80] array on the stack is unnecessary; only 16 values are really needed. Using W[16] instead of W[80] greatly reduces stack usage (~750 bytes to ~340 bytes on x86_64). Line by line explanation: * the BLEND_OP array is "circular" now, so all indexes have to be taken modulo 16. The round number is positive, so the remainder operation should be without surprises. * the initial full message scheduling is trimmed to the first 16 values, which come from the data block; the rest is calculated before it's needed. * the original loop body is an unrolled version of the new SHA512_0_15 and SHA512_16_79 macros; unrolling avoids explicit variable renaming. Otherwise it's the very same code after preprocessing. See the sha1_transform() code, which does the same trick. The patch survives the in-tree crypto test and the original bug report test (ping flood with hmac(sha512)). See FIPS 180-2 for the SHA-512 definition: http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
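To make the indexing trick concrete, here is a standalone sketch (not the kernel code itself) of how rounds 16..79 can compute each new W value in place in a 16-entry circular buffer:

```c
#include <stdint.h>

#define ROR64(x, n)  (((x) >> (n)) | ((x) << (64 - (n))))
#define SIGMA0(x)    (ROR64(x, 1)  ^ ROR64(x, 8)  ^ ((x) >> 7))
#define SIGMA1(x)    (ROR64(x, 19) ^ ROR64(x, 61) ^ ((x) >> 6))

/* For t in 16..79: W[t] = s1(W[t-2]) + W[t-7] + s0(W[t-15]) + W[t-16].
 * Slot W[t & 15] still holds W[t-16], so the new value can be accumulated
 * in place and only 16 slots are ever live at once. */
static inline uint64_t sha512_next_w(uint64_t W[16], unsigned int t)
{
	W[t & 15] += SIGMA1(W[(t - 2) & 15]) + W[(t - 7) & 15] +
		     SIGMA0(W[(t - 15) & 15]);
	return W[t & 15];
}
```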
-
Alexey Dobriyan authored
commit 84e31fdb upstream. Commit f9e2bca6, aka "crypto: sha512 - Move message schedule W[80] to static percpu area", created a global message schedule area. If sha512_update() is ever entered twice, the hash will be silently calculated incorrectly. Probably the easiest way to notice incorrect hashes being calculated is to run 2 ping floods over AH with hmac(sha512): #!/usr/sbin/setkey -f flush; spdflush; add IP1 IP2 ah 25 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000025; add IP2 IP1 ah 52 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000052; spdadd IP1 IP2 any -P out ipsec ah/transport//require; spdadd IP2 IP1 any -P in ipsec ah/transport//require; XfrmInStateProtoError will start ticking with -EBADMSG being returned from ah_input(). This never happens with, say, hmac(sha1). With the patch applied (on BOTH sides), XfrmInStateProtoError does not tick with multiple bidirectional ping flood streams, just as it doesn't tick with SHA-1. After this patch sha512_transform() will start using ~750 bytes of stack on x86_64. This is OK for simple loads; for something heavier, stack reduction will be done separately. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit 353b67d8 upstream. When we reach cleanup_journal_tail(), there is no guarantee that checkpointed buffers are on stable storage - especially if buffers were written out by log_do_checkpoint(), they are likely to be only in the disk's caches. Thus when we update the journal superblock, effectively removing the old transaction from the journal, this write of the superblock can reach stable storage before those checkpointed buffers, which can result in filesystem corruption after a crash. A similar problem can happen if we replay the journal and wipe it before flushing the disk's caches. Thus we must unconditionally issue a cache flush before we update the journal superblock in these cases. The fix is slightly complicated by the fact that we have to get the log tail before we issue the cache flush, but we can store it in the journal superblock only after the cache flush. Otherwise we risk races where the new tail is written before the appropriate cache flush is finished. I managed to reproduce the corruption using a somewhat tweaked version of Chris Mason's barrier-test scheduler. This should also fix occasional reports of 'Bit already freed' filesystem errors which are totally unreproducible, but inspection of several fs images I've gathered over time points to a problem like this. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
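The required ordering can be sketched as follows; the journal type and helper names here are hypothetical stand-ins for the jbd code paths described above, and only blkdev_issue_flush() is a real kernel API:

```c
#include <linux/blkdev.h>

/* Hypothetical journal type and helpers, used only to show the ordering;
 * the real code lives in the jbd checkpoint/recovery paths. */
struct example_journal {
	struct block_device *fs_dev;
};

void example_get_log_tail(struct example_journal *j,
			  unsigned int *tid, unsigned long *block);
void example_write_superblock(struct example_journal *j,
			      unsigned int tid, unsigned long block);

static void example_update_journal_tail(struct example_journal *j)
{
	unsigned int tail_tid;
	unsigned long tail_block;

	/* 1. Compute the new tail first ... */
	example_get_log_tail(j, &tail_tid, &tail_block);

	/* 2. ... make sure checkpointed buffers reached stable storage ... */
	blkdev_issue_flush(j->fs_dev, GFP_KERNEL, NULL);

	/* 3. ... and only then publish the new tail in the superblock. */
	example_write_superblock(j, tail_tid, tail_block);
}
```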
-
Johannes Berg authored
commit bc4934bc upstream. When deauth is requested while an auth or assoc work item is in progress, we currently delete it without regard for any state it might need to clean up. Fix it by cleaning up for those items. In the case Pontus found, the problem manifested itself as such: authenticate with 00:23:69:aa:dd:7b (try 1) authenticated failed to insert Dummy STA entry for the AP (error -17) deauthenticating from 00:23:69:aa:dd:7b by local choice (reason=2) It could also happen differently if the driver uses the tx_sync callback. We can't just call the ->done() method of the work items because that will lock up due to the locking in cfg80211. This fix isn't very clean, but that seems acceptable since I have patches pending to remove this code completely. Reported-by: Pontus Fuchs <pontus.fuchs@gmail.com> Tested-by: Pontus Fuchs <pontus.fuchs@gmail.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stanislaw Gruszka authored
commit f96b08a7 upstream. This patch works around a live deadlock caused by an infinite loop in brcms_c_wait_for_tx_completion(). I do not consider the patch to be the proper fix, which should address the real reason for the tx queue flush failure, but it helps with the system lockup. Reference: https://bugzilla.kernel.org/show_bug.cgi?id=42576 Reported-and-tested-by: Patrick <ragamuffin@datacomm.ch> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark Brown authored
commit a14304ed upstream. We should be allowing a 5ms delay after the charge pump is started in order to ensure it has finished ramping. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark Brown authored
commit 495174a8 upstream. These are all to either uncached registers or fixes to register defaults, in the former case the cache won't do anything and in the latter case we're fixing things so the cache sync will do the right thing. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark Brown authored
commit fed22007 upstream. With a low frequency SYSCLK and a fast I2C clock register synchronisation may occasionally take too long to take effect, causing I/O issues. Disable synchronisation in order to avoid any issues. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark Brown authored
commit e53e4173 upstream. Writing to the registers won't work if we do actually manage to hit a fully powered off state. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit 9b025eb3 upstream. Commit b52a360b forgot to call xfs_iunlock() when it detected a corrupted symlink and bailed out. Fix it by jumping to 'out' instead of doing a return. CC: Carlos Maiolino <cmaiolino@redhat.com> Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Alex Elder <elder@kernel.org> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Thomas Hellstrom authored
commit 598781d7 upstream. If the master tries to authenticate a client using drm_authmagic and that client has already closed its drm file descriptor, either wilfully or because it was terminated, the call to drm_authmagic will dereference a stale pointer into kmalloc'ed memory and corrupt it. Typically this results in a hard system hang. This patch fixes that problem by removing any authentication tokens (struct drm_magic_entry) open for a file descriptor when that file descriptor is closed. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 3a47824d upstream. dig transmitter control table only has ENABLE/DISABLE actions on DCE4.1/DCE5. Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=44955 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 386d4d75 upstream. Needs to happen earlier in the mode set. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 44517c44 upstream. Interrupts only work with MSIs. https://bugs.freedesktop.org/show_bug.cgi?id=37679 Reported-by: Dmitry Podgorny <pasis.uax@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tyler Hicks authored
commit 58ded24f upstream. If pages passed to the eCryptfs extent-based crypto functions are not mapped and the module parameter ecryptfs_verbosity=1 was specified at loading time, a NULL pointer dereference will occur. Note that this wouldn't happen on a production system, as you wouldn't pass ecryptfs_verbosity=1 on a production system. It leaks private information to the system logs and is for debugging only. The debugging info printed in these messages is no longer very useful and rather than doing a kmap() in these debugging paths, it will be better to simply remove the debugging paths completely. https://launchpad.net/bugs/913651 Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tyler Hicks authored
commit a261a039 upstream. Most filesystems call inode_change_ok() very early in ->setattr(), but eCryptfs didn't call it at all. It allowed the lower filesystem to make the call in its ->setattr() function. Then, eCryptfs would copy the appropriate inode attributes from the lower inode to the eCryptfs inode. This patch changes that and actually calls inode_change_ok() on the eCryptfs inode, fairly early in ecryptfs_setattr(). Ideally, the call would happen earlier in ecryptfs_setattr(), but there are some possible inode initialization steps that must happen first. Since the call was already being made on the lower inode, the change in functionality should be minimal, except for the case of a file extending truncate call. In that case, inode_newsize_ok() was never being called on the eCryptfs inode. Rather than inode_newsize_ok() catching maximum file size errors early on, eCryptfs would encrypt zeroed pages and write them to the lower filesystem until the lower filesystem's write path caught the error in generic_write_checks(). This patch introduces a new function, called ecryptfs_inode_newsize_ok(), which checks if the new lower file size is within the appropriate limits when the truncate operation will be growing the lower file. In summary this change prevents eCryptfs truncate operations (and the resulting page encryptions), which would exceed the lower filesystem limits or FSIZE rlimits, from ever starting. Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Reviewed-by: Li Wang <liwang@nudt.edu.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
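The shape of the added check can be sketched as below; this is a simplified illustration, not the actual eCryptfs function, and the size-conversion helper is a hypothetical stand-in. Only inode_newsize_ok() is a real VFS API:

```c
#include <linux/fs.h>

/* Hypothetical helper standing in for eCryptfs's upper-to-lower size
 * calculation (metadata header plus per-extent overhead). */
loff_t example_upper_to_lower_size(struct inode *inode, loff_t upper_size);

static int example_inode_newsize_ok(struct inode *inode, loff_t new_upper_size)
{
	loff_t lower_size = example_upper_to_lower_size(inode, new_upper_size);

	/* inode_newsize_ok() enforces s_maxbytes and the FSIZE rlimit
	 * before any zeroed page is encrypted and written out. */
	return inode_newsize_ok(inode, lower_size);
}
```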
-
Tyler Hicks authored
commit 5e6f0d76 upstream. ecryptfs_write() handles the truncation of eCryptfs inodes. It grabs a page, zeroes out the appropriate portions, and then encrypts the page before writing it to the lower filesystem. It was unkillable and, due to the lack of sparse file support, could result in tying up a large portion of system resources while encrypting pages of zeros, with no way for the truncate operation to be stopped from userspace. This patch adds the ability for ecryptfs_write() to detect a pending fatal signal and return as gracefully as possible. The intent is to leave the lower file in a usable state, while still allowing a user to break out of the encryption loop. If a pending fatal signal is detected, the eCryptfs inode size is updated to reflect the modified inode size and then -EINTR is returned. Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
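A minimal sketch of the loop shape; the per-page helper is a hypothetical placeholder for the zero/encrypt/write work ecryptfs_write() really does, while fatal_signal_pending() and i_size_write() are real kernel APIs:

```c
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/errno.h>

/* Hypothetical per-page helper: zero, encrypt and write one page. */
int example_encrypt_zeroed_page(struct inode *inode, loff_t pos);

static int example_zero_extend(struct inode *inode, loff_t start, loff_t end)
{
	loff_t pos;
	int rc = 0;

	for (pos = start; pos < end; pos += PAGE_SIZE) {
		/* Let a fatal signal break out of a huge truncate. */
		if (fatal_signal_pending(current)) {
			i_size_write(inode, pos);	/* record progress */
			rc = -EINTR;
			break;
		}
		rc = example_encrypt_zeroed_page(inode, pos);
		if (rc)
			break;
	}
	return rc;
}
```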
-
Tim Gardner authored
commit 30373dc0 upstream. Print inode on metadata read failure. The only real way of dealing with metadata read failures is to delete the underlying file system file. Having the inode allows one to `find . -inum INODE`. [tyhicks@canonical.com: Removed some minor not-for-stable parts] Signed-off-by: Tim Gardner <tim.gardner@canonical.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tyler Hicks authored
commit db10e556 upstream. A malicious count value specified when writing to /dev/ecryptfs may result in a very large kernel memory allocation. This patch peeks at the specified packet payload size, adds that to the size of the packet headers and compares the result with the write count value. The resulting maximum memory allocation size is approximately 532 bytes. Signed-off-by: Tyler Hicks <tyhicks@canonical.com> Reported-by: Sasha Levin <levinsasha928@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
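Conceptually the check looks like the following sketch; the constant and names are illustrative only, and the real parsing lives in eCryptfs's miscdev write handler:

```c
#include <linux/types.h>
#include <linux/errno.h>

#define EXAMPLE_MAX_PKT_HDR_SIZE	32	/* illustrative header bound */

/* Peek at the payload length declared in the packet header and refuse a
 * write whose count exceeds header size plus declared payload, so the
 * subsequent allocation is bounded by what the packet can legally hold. */
static int example_check_miscdev_count(size_t declared_payload, size_t count)
{
	if (count > EXAMPLE_MAX_PKT_HDR_SIZE + declared_payload)
		return -EINVAL;
	return 0;
}
```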
-
Takashi Iwai authored
commit b4ead019 upstream. The recent change of the power-widget handling for IDT codecs caused silent output from the docking-station line-out jack. This was partially fixed by commit f2cbba76 "ALSA: hda - Fix the lost power-setup of secondary pins after PM resume". But the line-out on the docking station is still silent when booted with the jack plugged, even with this fix. The remaining bug is that the power-widget is set off in stac92xx_init() because the pins in cfg->line_out_pins[] aren't checked there properly; only hp_pins[] are checked in is_nid_hp_pin(). This patch fixes the problem by checking both HP and line-out pins and leaving the power-map in the correct state. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42637 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 52409aa6 upstream. The commit 2ae66c26 "ALSA: hda: option to enable arbitrary buffer/period sizes" introduced a regression on machines with an Intel controller and Nvidia HDMI. The reason is that the driver modifies the global variable align_buffer_size when an Intel controller is found, and the Nvidia HDMI controller is probed after the Intel one, although Nvidia chips require the aligned buffers. This patch fixes the problem by moving the flag into the local struct so that it's not affected by other controllers. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42567 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 26 Jan, 2012 17 commits
-
-
Greg Kroah-Hartman authored
-
Hugh Dickins authored
commit 24513264 upstream. Commit cc39c6a9 ("mm: account skipped entries to avoid looping in find_get_pages") correctly fixed an infinite loop; but left a problem that find_get_pages() on shmem would return 0 (appearing to callers to mean end of tree) when it meets a run of nr_pages swap entries. The only uses of find_get_pages() on shmem are via pagevec_lookup(), called from invalidate_mapping_pages(), and from shmctl SHM_UNLOCK's scan_mapping_unevictable_pages(). The first is already commented, and not worth worrying about; but the second can leave pages on the Unevictable list after an unusual sequence of swapping and locking. Fix that by using shmem_find_get_pages_and_swap() (then ignoring the swap) instead of pagevec_lookup(). But I don't want to contaminate vmscan.c with shmem internals, nor shmem.c with LRU locking. So move scan_mapping_unevictable_pages() into shmem.c, renaming it shmem_unlock_mapping(); and rename check_move_unevictable_page() to check_move_unevictable_pages(), looping down an array of pages, oftentimes under the same lock. Leave out the "rotate unevictable list" block: that's a leftover from when this was used for /proc/sys/vm/scan_unevictable_pages, whose flawed handling involved looking at pages at tail of LRU. Was there significance to the sequence first ClearPageUnevictable, then test page_evictable, then SetPageUnevictable here? I think not, we're under LRU lock, and have no barriers between those. Signed-off-by: Hugh Dickins <hughd@google.com> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Minchan Kim <minchan.kim@gmail.com> Cc: Rik van Riel <riel@redhat.com> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michel Lespinasse <walken@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Hugh Dickins authored
commit 85046579 upstream. scan_mapping_unevictable_pages() is used to make SysV SHM_LOCKed pages evictable again once the shared memory is unlocked. It does this with pagevec_lookup()s across the whole object (which might occupy most of memory), and takes 300ms to unlock 7GB here. A cond_resched() every PAGEVEC_SIZE pages would be good. However, KOSAKI-san points out that this is called under shmem.c's info->lock, and it's also under shm.c's shm_lock(), both spinlocks. There is no strong reason for that: we need to take these pages off the unevictable list soonish, but those locks are not required for it. So move the call to scan_mapping_unevictable_pages() from shmem.c's unlock handling up to shm.c's unlock handling. Remove the recently added barrier, not needed now we have spin_unlock() before the scan. Use get_file(), with subsequent fput(), to make sure we have a reference to mapping throughout scan_mapping_unevictable_pages(): that's something that was previously guaranteed by the shm_lock(). Remove shmctl's lru_add_drain_all(): we don't fault in pages at SHM_LOCK time, and we lazily discover them to be Unevictable later, so it serves no purpose for SHM_LOCK; and serves no purpose for SHM_UNLOCK, since pages still on pagevec are not marked Unevictable. The original code avoided redundant rescans by checking VM_LOCKED flag at its level: now avoid them by checking shp's SHM_LOCKED. The original code called scan_mapping_unevictable_pages() on a locked area at shm_destroy() time: perhaps we once had accounting cross-checks which required that, but not now, so skip the overhead and just let inode eviction deal with them. Put check_move_unevictable_page() and scan_mapping_unevictable_pages() under CONFIG_SHMEM (with stub for the TINY case when ramfs is used), more as comment than to save space; comment them used for SHM_UNLOCK. Signed-off-by: Hugh Dickins <hughd@google.com> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Minchan Kim <minchan.kim@gmail.com> Cc: Rik van Riel <riel@redhat.com> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michel Lespinasse <walken@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Stanislaw Gruszka authored
commit 68acc4af upstream. This patch fixes a firmware error on "iw dev wlan0 scan passive" for hardware scanning (with the disable_hw_scan=0 module parameter). iwl3945 0000:03:00.0: Microcode SW error detected. Restarting 0x82000008. iwl3945 0000:03:00.0: Loaded firmware version: 15.32.2.9 iwl3945 0000:03:00.0: Start IWL Error Log Dump: iwl3945 0000:03:00.0: Status: 0x0002A2E4, count: 1 iwl3945 0000:03:00.0: Desc Time asrtPC blink2 ilink1 nmiPC Line iwl3945 0000:03:00.0: SYSASSERT (0x5) 0041263900 0x13756 0x0031C 0x00000 764 iwl3945 0000:03:00.0: Error Reply type 0x000002FC cmd C_SCAN (0x80) seq 0x443E ser 0x00340000 iwl3945 0000:03:00.0: Command C_SCAN failed: FW Error iwl3945 0000:03:00.0: Can't stop Rx DMA. We disable the ability to change passive scanning to active on a particular channel when traffic is detected on that channel. Otherwise the firmware will report an error when we try to do a passive scan on radar channels. Reported-and-debugged-by: Pedro Francisco <pedrogfrancisco@gmail.com> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Wey-Yi Guy authored
commit b2ccccdc upstream. Check and report the WARN only when it's invalid. Resolves: https://bugzilla.kernel.org/show_bug.cgi?id=42621 https://bugzilla.redhat.com/show_bug.cgi?id=766071 Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Michal Hocko authored
commit 687875fb upstream. Fix the following NULL ptr dereference caused by cat /sys/devices/system/memory/memory0/removable Pid: 13979, comm: sed Not tainted 3.0.13-0.5-default #1 IBM BladeCenter LS21 -[7971PAM]-/Server Blade RIP: __count_immobile_pages+0x4/0x100 Process sed (pid: 13979, threadinfo ffff880221c36000, task ffff88022e788480) Call Trace: is_pageblock_removable_nolock+0x34/0x40 is_mem_section_removable+0x74/0xf0 show_mem_removable+0x41/0x70 sysfs_read_file+0xfe/0x1c0 vfs_read+0xc7/0x130 sys_read+0x53/0xa0 system_call_fastpath+0x16/0x1b We are crashing because we are trying to dereference NULL zone which came from pfn=0 (struct page ffffea0000000000). According to the boot log this page is marked reserved: e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved) and early_node_map confirms that: early_node_map[3] active PFN ranges 1: 0x00000010 -> 0x0000009c 1: 0x00000100 -> 0x000bffa3 1: 0x00100000 -> 0x00240000 The problem is that memory_present works in PAGE_SECTION_MASK aligned blocks so the reserved range sneaks into the section as well. This also means that free_area_init_node will not take care of those reserved pages and they stay uninitialized. When we try to read the removable status we walk through all available sections and hope that the zone is valid for all pages in the section. But this is not true in this case as the zone and nid are not initialized. We have only one node in this particular case and it is marked as node=1 (rather than 0) and that made the problem visible because page_to_nid will return 0 and there are no zones on the node. Let's check that the zone is valid and that the given pfn falls into its boundaries, and otherwise mark the section not removable. This might cause some false positives, probably, but we do not have any sane way to find out whether the page is reserved by the platform or it is just not used for whatever other reasons. Signed-off-by: Michal Hocko <mhocko@suse.cz> Acked-by: Mel Gorman <mgorman@suse.de> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
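The added validity check amounts to something like the following sketch (simplified from the memory-hotplug path; the function name is illustrative, the zone fields are real):

```c
#include <linux/mm.h>
#include <linux/mmzone.h>

/* For an uninitialized struct page the derived zone is bogus, so only
 * trust it if the pfn really lies inside that zone's span; otherwise
 * the caller treats the section as not removable. */
static bool example_pfn_in_valid_zone(struct page *page)
{
	struct zone *zone = page_zone(page);
	unsigned long pfn = page_to_pfn(page);

	if (zone->zone_start_pfn > pfn ||
	    zone->zone_start_pfn + zone->spanned_pages <= pfn)
		return false;

	return true;
}
```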
-
Will Deacon authored
commit 85e72aa5 upstream. /proc/pid/clear_refs is used to clear the Referenced and YOUNG bits for pages and corresponding page table entries of the task with PID pid, which includes any special mappings inserted into the page tables in order to provide things like vDSOs and user helper functions. On ARM this causes a problem because the vectors page is mapped as a global mapping and since ec706dab ("ARM: add a vma entry for the user accessible vector page"), a VMA is also inserted into each task for this page to aid unwinding through signals and syscall restarts. Since the vectors page is required for handling faults, clearing the YOUNG bit (and subsequently writing a faulting pte) means that we lose the vectors page *globally* and cannot fault it back in. This results in a system deadlock on the next exception. To see this problem in action, just run: $ echo 1 > /proc/self/clear_refs on an ARM platform (as any user) and watch your system hang. I think this has been the case since 2.6.37 This patch avoids clearing the aforementioned bits for reserved pages, therefore leaving the vectors page intact on ARM. Since reserved pages are not candidates for swap, this change should not have any impact on the usefulness of clear_refs. Signed-off-by: Will Deacon <will.deacon@arm.com> Reported-by: Moussa Ba <moussaba@micron.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: David Rientjes <rientjes@google.com> Cc: Russell King <rmk@arm.linux.org.uk> Acked-by: Nicolas Pitre <nico@linaro.org> Cc: Matt Mackall <mpm@selenic.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Ananth N Mavinakayanahalli authored
commit d496aab5 upstream. Commit ef53d9c5 ("kprobes: improve kretprobe scalability with hashed locking") introduced a bug where we can potentially leak kretprobe_instances since we initialize a hlist head after having used it. Initialize the hlist head before using it. Reported by: Jim Keniston <jkenisto@us.ibm.com> Acked-by: Jim Keniston <jkenisto@us.ibm.com> Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Srinivasa D S <srinivasa@in.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
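The bug class is purely an ordering one; a minimal sketch with illustrative names:

```c
#include <linux/list.h>

/* An hlist head must be initialized before anything is added to it;
 * doing the INIT afterwards overwrites the head and silently leaks
 * whatever had already been added. */
static void example_collect(struct hlist_node *node)
{
	struct hlist_head free_list;

	INIT_HLIST_HEAD(&free_list);		/* initialize first ...   */
	hlist_add_head(node, &free_list);	/* ... then use the list. */
}
```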
-
Jeff Layton authored
commit ce91acb3 upstream. We've had some reports of servers (namely, the Solaris in-kernel CIFS server) that don't deal properly with writes that are "too large" even though they set CAP_LARGE_WRITE_ANDX. Change the default to better mirror what windows clients do. Cc: Pavel Shilovsky <piastry@etersoft.ru> Reported-by: Nick Davis <phireph0x@yahoo.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Dan Rosenberg authored
commit c25a785d upstream. If the provided system call number is equal to __NR_syscalls, the current check will pass and a function pointer just after the system call table may be called, since sys_call_table is an array with total size __NR_syscalls. Whether or not this is a security bug depends on what the compiler puts immediately after the system call table. It's likely that this won't do anything bad because there is an additional NULL check on the syscall entry, but if there happens to be a non-NULL value immediately after the system call table, this may result in local privilege escalation. Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com> Cc: Chen Liqin <liqin.chen@sunplusct.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Eugene Teo <eugeneteo@kernel.sg> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
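The off-by-one can be illustrated with a small standalone sketch; the table size and dispatch signature are illustrative, not the score architecture's actual ones:

```c
#include <linux/errno.h>

#define EXAMPLE_NR_SYSCALLS	338	/* illustrative table size */

typedef long (*example_syscall_fn)(long, long, long);
static example_syscall_fn example_sys_call_table[EXAMPLE_NR_SYSCALLS];

static long example_dispatch(unsigned long nr, long a, long b, long c)
{
	/* "nr > EXAMPLE_NR_SYSCALLS" would let nr == EXAMPLE_NR_SYSCALLS
	 * through and index one entry past the end of the table. */
	if (nr >= EXAMPLE_NR_SYSCALLS)
		return -ENOSYS;
	if (!example_sys_call_table[nr])
		return -ENOSYS;
	return example_sys_call_table[nr](a, b, c);
}
```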
-
Toshiharu Okada authored
commit ff35e8b1 upstream. This patch modifies the setting value of the I2C Bus Transfer Rate Setting Counter register. Signed-off-by: Toshiharu Okada <toshiharu-linux@dsn.okisemi.com> Signed-off-by: Ben Dooks <ben-linux@fluff.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Dave Chinner authored
commit b1c770c2 upstream. When finding the longest extent in an AG, we read the value directly out of the AGF buffer without endian conversion. This will give an incorrect length, resulting in FITRIM operations potentially not trimming everything that they should. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ben Myers <bpm@sgi.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
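This is a classic missing endian conversion; a minimal sketch (the struct is a stand-in for the on-disk AGF, not the real xfs definition):

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Stand-in for the on-disk AGF: fields are stored big-endian. */
struct example_agf {
	__be32 agf_longest;	/* longest free extent, in blocks */
};

static u32 example_longest_extent(const struct example_agf *agf)
{
	/* Reading agf->agf_longest directly gives a byte-swapped length
	 * on little-endian CPUs; convert to CPU order first. */
	return be32_to_cpu(agf->agf_longest);
}
```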
-
Stanislaw Gruszka authored
commit dfd00c4c upstream. Some devices can generate an interrupt without properly setting a bit in the INT_SOURCE_CSR register (a spurious interrupt), which causes the IRQ line to be disabled by the interrupt controller driver. We discovered that clearing INT_MASK_CSR stops such behaviour. We previously read that register first, then cleared all known interrupt source bits and did not touch the reserved bits. After this patch, we write the whole register content (I believe writing to the reserved bits of that register will not cause any problems; I tested that on my rt2800pci device). This fixes a very bad performance problem, which practically made the device unusable (since it worked without interrupts), reported in: https://bugzilla.redhat.com/show_bug.cgi?id=658451 We previously tried to work around that issue in commit 4ba7d999 "rt2800pci: handle spurious interrupts", but it was reverted in commit 82e5fc2a as something that would prevent detection of real spurious interrupts. Reported-and-tested-by: Amir Hedayaty <hedayaty@gmail.com> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Acked-by: Gertjan van Wingerde <gwingerde@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Felix Fietkau authored
commit 7a532fe7 upstream. Documentation states that the KeyMiss flag is only valid if RxFrameOK is unset; however, empirical evidence has shown that this is false. When KeyMiss is set (and RxFrameOK is 1), the hardware passes a valid frame which has not been decrypted. The driver then falsely marks the frame as decrypted, and when using CCMP this corrupts the rx CCMP PN, leading to connection hangs. Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Cliff Wickman authored
commit c5d35d39 upstream. This patch implements a workaround for a UV2 hardware bug. The bug is a non-atomic update of a memory-mapped register. When hardware message delivery and software message acknowledge occur simultaneously the pending message acknowledge for the arriving message may be lost. This causes the sender's message status to stay busy. Part of the workaround is to not acknowledge a completed message until it is verified that no other message is actually using the resource that is mistakenly recorded in the completed message. Part of the workaround is to test for long elapsed time in such a busy condition, then handle it by using a spare sending descriptor. The stay-busy condition is eventually timed out by hardware, and then the original sending descriptor can be re-used. Most of that logic change is in keeping track of the current descriptor and the state of the spares. The occurrences of the workaround are added to the BAU statistics. Signed-off-by: Cliff Wickman <cpw@sgi.com> Link: http://lkml.kernel.org/r/20120116211947.GC5767@sgi.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Cliff Wickman authored
commit d059f9fa upstream. Move the call to enable_timeouts() forward so that BAU_MISC_CONTROL is initialized before using it in calculate_destination_timeout(). Fix the calculation of a BAU destination timeout for UV2 (in calculate_destination_timeout()). Signed-off-by: Cliff Wickman <cpw@sgi.com> Link: http://lkml.kernel.org/r/20120116211848.GB5767@sgi.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Cliff Wickman authored
commit da87c937 upstream. Update the use of the Broadcast Assist Unit on SGI Altix UV2 to the use of native UV2 mode on new hardware (not the legacy mode). UV2 native mode has a different format for a broadcast message. We also need quick differentiation between UV1 and UV2. Signed-off-by: Cliff Wickman <cpw@sgi.com> Link: http://lkml.kernel.org/r/20120116211750.GA5767@sgi.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-