- 03 Feb, 2012 23 commits
-
-
Srinidhi KASAGAR authored
commit d65015f7 upstream. This applies ARM errata 764369 for all ux500 platforms. Signed-off-by:
Srinidhi Kasagar <srinidhi.kasagar@stericsson.com> Signed-off-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jonathan Nieder authored
commit f2c0d026 upstream. syslog-ng versions before 3.3.0beta1 (2011-05-12) assume that CAP_SYS_ADMIN is sufficient to access syslog, so ever since CAP_SYSLOG was introduced (2010-11-25) they have triggered a warning. Commit ee24aebf ("cap_syslog: accept CAP_SYS_ADMIN for now") improved matters a little by making syslog-ng work again, just keeping the WARN_ONCE(). But still, this is a warning that writes a stack trace we don't care about to syslog, sets a taint flag, and alarms sysadmins when nothing worse has happened than use of an old userspace with a recent kernel. Convert the WARN_ONCE to a printk_once to avoid that while continuing to give userspace developers a hint that this is an unwanted backward-compatibility feature and won't be around forever. Reported-by:
Ralf Hildebrandt <ralf.hildebrandt@charite.de> Reported-by:
Niels <zorglub_olsen@hotmail.com> Reported-by:
Paweł Sikora <pluto@agmk.net> Signed-off-by:
Jonathan Nieder <jrnieder@gmail.com> Liked-by:
Gergely Nagy <algernon@madhouse-project.org> Acked-by:
Serge Hallyn <serge@hallyn.com> Acked-by:
James Morris <jmorris@namei.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Cc: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
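A minimal sketch of the resulting check, assuming the surrounding capability logic (not the verbatim upstream diff):

    #include <linux/capability.h>
    #include <linux/printk.h>

    /* Sketch: keep the backward-compat CAP_SYS_ADMIN path, but warn with a
     * one-shot printk instead of WARN_ONCE(), so no stack trace is dumped
     * and no taint flag is set. */
    static int check_syslog_cap_sketch(void)
    {
            if (capable(CAP_SYSLOG))
                    return 0;
            if (capable(CAP_SYS_ADMIN)) {
                    printk_once(KERN_WARNING "Attempt to access syslog with "
                                "CAP_SYS_ADMIN but no CAP_SYSLOG (deprecated)\n");
                    return 0;
            }
            return -EPERM;
    }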
-
Paulo Zanoni authored
commit ba68e086 upstream. This is a revert of 81a14b46. We already set the mode polarity using the SDVO commands with struct intel_sdvo_dtd. We have at least 3 bugs that get fixed with this patch. The documentation, despite not being clear, can also be interpreted in a way that suggests this patch is needed. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=15766 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=42174 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=43333 Reviewed-by:
Eric Anholt <eric@anholt.net> Reviewed-by:
Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by:
Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by:
Keith Packard <keithp@keithp.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit b3a81520 upstream. The very same problem seen on the ASUS A6Rp, which was fixed by commit 3b25eb69, also appears on the Haier W18 laptop with ALC861. Now we just need to add a new SSID entry pointing to the same fixup. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42656 Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
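For context, such an SSID addition is a one-line entry in the driver's PCI quirk table; a hedged sketch using the SND_PCI_QUIRK() helper from <sound/core.h> (the subsystem IDs and the fixup enum name below are illustrative, not taken from the patch):

    /* hypothetical fixup table fragment: SND_PCI_QUIRK() maps a subsystem
     * vendor/device ID pair onto an already existing fixup */
    static const struct snd_pci_quirk alc861_fixup_tbl_sketch[] = {
            SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
            SND_PCI_QUIRK(0x1584, 0x2b01, "Haier W18", ALC861_FIXUP_ASUS_A6RP), /* new entry */
            {}
    };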
-
Takashi Iwai authored
commit 3b25eb69 upstream. The refactoring of Realtek codec driver in 3.2 kernel caused a regression for ASUS A6Rp laptop; it doesn't give any output. The reason was that this machine has a secret master mute (or EAPD) control via NID 0x0f VREF. Setting VREF50 on this node makes the sound working again. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42588 Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andreas Herrmann authored
commit 5b68edc9 upstream. We've decided to provide CPU family specific container files (starting with CPU family 15h). E.g. for family 15h we have to load microcode_amd_fam15h.bin instead of microcode_amd.bin. The rationale is that, starting with family 15h, the patch size is larger than 2KB, which was hard-coded as the maximum patch size in various microcode loaders (not just Linux). Container files which include patches larger than 2KB cause different kinds of trouble with such old patch loaders. Thus we have to ensure that the default container file provides only patches with size less than 2KB. Signed-off-by:
Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: <stable@kernel.org> Link: http://lkml.kernel.org/r/20120120164412.GD24508@alberich.amd.com [ documented the naming convention and tidied the code a bit. ] Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
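A sketch of the resulting naming convention, assuming c points at the boot CPU's struct cpuinfo_x86 (simplified, not the exact upstream hunk):

    char fw_name[36] = "amd-ucode/microcode_amd.bin";

    /* families 15h and later get a per-family container so the default
     * file only ever carries patches small enough for old (<= 2KB) loaders */
    if (c->x86 >= 0x15)
            snprintf(fw_name, sizeof(fw_name),
                     "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);

    if (request_firmware(&fw, fw_name, device))
            pr_err("failed to load %s\n", fw_name);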
-
Russ Anderson authored
commit 5a51467b upstream. uv_gpa_to_soc_phys_ram() was inadvertently ignoring the shift values. This fix takes the shift into account. Signed-off-by:
Russ Anderson <rja@sgi.com> Link: http://lkml.kernel.org/r/20120119020753.GA7228@sgi.com Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Chinner authored
commit b1c770c2 upstream. When finding the longest extent in an AG, we read the value directly out of the AGF buffer without endian conversion. This will give an incorrect length, resulting in FITRIM operations potentially not trimming everything that they should. Note, for 3.0-stable this has been modified to apply to fs/xfs/linux-2.6/xfs_discard.c instead of fs/xfs/xfs_discard.c. -bpm Signed-off-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Ben Myers <bpm@sgi.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
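The fix amounts to converting the on-disk big-endian field before use; a short sketch (AGF accessors as in the xfs headers of that era):

    struct xfs_agf *agf = XFS_BUF_TO_AGF(agbp);

    /* agf_longest is stored big-endian on disk; convert it before
     * comparing against the FITRIM minimum extent length */
    xfs_extlen_t longest = be32_to_cpu(agf->agf_longest);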
-
Nick Bowler authored
commit 4b90a603 upstream. When the ahash driver returns -EBUSY, AH4/6 input functions return NET_XMIT_DROP, presumably copied from the output code path. But returning transmit codes on input doesn't make a lot of sense. Since NET_XMIT_DROP is a positive int, this gets interpreted as the next header type (i.e., success). As that can only end badly, remove the check. Signed-off-by:
Nick Bowler <nbowler@elliptictech.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
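A sketch of the input-path completion handling without the remap (simplified; labels and surrounding context are illustrative):

    err = crypto_ahash_digest(req);
    if (err == -EINPROGRESS)
            goto out;

    /* previously: if (err == -EBUSY) err = NET_XMIT_DROP;
     * NET_XMIT_DROP is a positive transmit code and would be taken as a
     * "next header" value on the input path, so just propagate err */
    if (err)
            goto out_free;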
-
Jiri Olsa authored
commit 30fb6aa7 upstream. Multiple users of the function tracer can register their functions with the ftrace_ops structure. The accounting within ftrace will update the counter on each function record that is being traced. When the ftrace_ops filtering adds or removes functions, the function records will be updated accordingly if the ftrace_ops is still registered. When a ftrace_ops is removed, the counters of the function records that the ftrace_ops traces are decremented. When they reach zero the functions that they represent are modified to stop calling the mcount code. When changes are made, the code is updated via stop_machine() with a command passed to the function to tell it what to do. There is an ENABLE and DISABLE command that tells the called function to enable or disable the functions. But ENABLE is really a misnomer as it should just update the records, as records that have been enabled and now have a count of zero should be disabled. The DISABLE command is used to disable all functions regardless of their counter values. This is the big off switch and is not the complement of the ENABLE command. To make matters worse, when a ftrace_ops is unregistered and there is another ftrace_ops registered, neither the DISABLE nor the ENABLE command is set when calling into the stop_machine() function and the records will not be updated to match their counter. A command is passed to that function that will update the mcount code to call the registered callback directly if it is the only one left. This means that the ftrace_ops that is still registered will have its callback called by all functions that have been set for it as well as the ftrace_ops that was just unregistered.

Here's a way to trigger this bug. Compile the kernel with CONFIG_FUNCTION_PROFILER set and with CONFIG_FUNCTION_GRAPH not set:

  CONFIG_FUNCTION_PROFILER=y
  # CONFIG_FUNCTION_GRAPH is not set

This will force the function profiler to use the function tracer instead of the function graph tracer.

  # cd /sys/kernel/debug/tracing
  # echo schedule > set_ftrace_filter
  # echo function > current_tracer
  # cat set_ftrace_filter
  schedule
  # cat trace
  # tracer: nop
  #
  # entries-in-buffer/entries-written: 692/68108025   #P:4
  #
  #                          _-----=> irqs-off
  #                         / _----=> need-resched
  #                        | / _---=> hardirq/softirq
  #                        || / _--=> preempt-depth
  #                        ||| /     delay
  #      TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #         | |      |    ||||       |         |
     kworker/0:2-909   [000] ....   531.235574: schedule <-worker_thread
          <idle>-0     [001] .N..   531.235575: schedule <-cpu_idle
     kworker/0:2-909   [000] ....   531.235597: schedule <-worker_thread
            sshd-2563  [001] ....   531.235647: schedule <-schedule_hrtimeout_range_clock
  # echo 1 > function_profile_enabled
  # echo 0 > function_profile_enabled
  # cat set_ftrace_filter
  schedule
  # cat trace
  # tracer: function
  #
  # entries-in-buffer/entries-written: 159701/118821262   #P:4
  #
  #                          _-----=> irqs-off
  #                         / _----=> need-resched
  #                        | / _---=> hardirq/softirq
  #                        || / _--=> preempt-depth
  #                        ||| /     delay
  #      TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #         | |      |    ||||       |         |
          <idle>-0     [002] ...1   604.870655: local_touch_nmi <-cpu_idle
          <idle>-0     [002] d..1   604.870655: enter_idle <-cpu_idle
          <idle>-0     [002] d..1   604.870656: atomic_notifier_call_chain <-enter_idle
          <idle>-0     [002] d..1   604.870656: __atomic_notifier_call_chain <-atomic_notifier_call_chain

The same problem could have happened with the trace_probe_ops, but they are modified with the set_ftrace_filter file which does the update at closure of the file.
The simple solution is to change ENABLE to UPDATE and call it every time an ftrace_ops is unregistered. Link: http://lkml.kernel.org/r/1323105776-26961-3-git-send-email-jolsa@redhat.com Signed-off-by:
Jiri Olsa <jolsa@redhat.com> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
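A toy user-space model (not kernel code) of the bookkeeping above: each traced record carries a reference count, and an "update" pass run on every unregister resynchronizes the enabled state with that count; all names here are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    struct rec { int count; bool enabled; };       /* one per traced function */

    /* the UPDATE idea: enable records that still have users, disable the rest */
    static void update_records(struct rec *r, int n)
    {
            for (int i = 0; i < n; i++)
                    r[i].enabled = r[i].count > 0;
    }

    int main(void)
    {
            /* an ops that traced both records is being unregistered */
            struct rec r[2] = { { 2, true }, { 1, true } };

            for (int i = 0; i < 2; i++)
                    r[i].count--;          /* drop its references ... */
            update_records(r, 2);          /* ... and resync immediately */

            printf("rec0=%d rec1=%d\n", r[0].enabled, r[1].enabled); /* 1 0 */
            return 0;
    }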
-
Steven Rostedt authored
commit 072126f4 upstream. Currently, if set_ftrace_filter() is called when the ftrace_ops is active, the function filters will not be updated. They will only be updated when tracing is disabled and re-enabled. Update the functions immediately during set_ftrace_filter(). Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steven Rostedt authored
commit 41fb61c2 upstream. Whenever the hash of the ftrace_ops is updated, the record counts must be balanced. This requires disabling the records that are set in the original hash, and then enabling the records that are set in the updated hash. Moving the update into ftrace_hash_move() removes the bug where the hash was updated but the records were not; that bug made ftrace trigger a warning and disable itself when the ftrace_ops filter was updated while the ftrace_ops was registered, with the failure then surfacing when the ftrace_ops was unregistered. The current code will not trigger this bug, but new code will. Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexey Dobriyan authored
commit 51fc6dc8 upstream. For rounds 16--79, W[i] only depends on W[i - 2], W[i - 7], W[i - 15] and W[i - 16]. Consequently, keeping the whole W[80] array on the stack is unnecessary; only 16 values are really needed. Using W[16] instead of W[80] greatly reduces stack usage (~750 bytes to ~340 bytes on x86_64). Line by line explanation: * the BLEND_OP array is "circular" now, so all indexes have to be taken modulo 16. The round number is positive, so the remainder operation should be without surprises. * the initial full message scheduling is trimmed to the first 16 values, which come from the data block; the rest is calculated before it is needed. * the original loop body is an unrolled version of the new SHA512_0_15 and SHA512_16_79 macros; unrolling was done to avoid explicit variable renaming. Otherwise it's the very same code after preprocessing. See the sha1_transform() code, which does the same trick. The patch survives the in-tree crypto test and the original bug report test (ping flood with hmac(sha512)). See FIPS 180-2 for the SHA-512 definition: http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf Signed-off-by:
Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
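A standalone sketch of the circular 16-word schedule described above (it mirrors FIPS 180-2, not the kernel source verbatim):

    #include <stdint.h>

    static inline uint64_t ror64(uint64_t x, unsigned n)
    {
            return (x >> n) | (x << (64 - n));
    }

    /* small sigma functions from FIPS 180-2 */
    static inline uint64_t s0(uint64_t x) { return ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7); }
    static inline uint64_t s1(uint64_t x) { return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6); }

    /* W[] holds only the last 16 schedule words; for 16 <= i <= 79 the slot
     * i & 15 still contains W[i-16], so updating it in place yields W[i] */
    static inline uint64_t sha512_sched(uint64_t W[16], unsigned i)
    {
            W[i & 15] += s1(W[(i - 2) & 15]) + W[(i - 7) & 15] + s0(W[(i - 15) & 15]);
            return W[i & 15];
    }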
-
Alexey Dobriyan authored
commit 84e31fdb upstream. commit f9e2bca6 aka "crypto: sha512 - Move message schedule W[80] to static percpu area" created a global message schedule area. If sha512_update() is ever entered twice, the hash will be silently calculated incorrectly. Probably the easiest way to notice incorrect hashes being calculated is to run 2 ping floods over AH with hmac(sha512): #!/usr/sbin/setkey -f flush; spdflush; add IP1 IP2 ah 25 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000025; add IP2 IP1 ah 52 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000052; spdadd IP1 IP2 any -P out ipsec ah/transport//require; spdadd IP2 IP1 any -P in ipsec ah/transport//require; XfrmInStateProtoError will start ticking with -EBADMSG being returned from ah_input(). This never happens with, say, hmac(sha1). With the patch applied (on BOTH sides), XfrmInStateProtoError does not tick with multiple bidirectional ping flood streams, just like it doesn't tick with SHA-1. After this patch sha512_transform() will start using ~750 bytes of stack on x86_64. This is OK for simple loads; for something heavier, stack reduction will be done separately. Signed-off-by:
Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Kara authored
commit 9b025eb3 upstream. Commit b52a360b forgot to call xfs_iunlock() when it detected a corrupted symlink and bailed out. Fix it by jumping to 'out' instead of doing a direct return. CC: Carlos Maiolino <cmaiolino@redhat.com> Signed-off-by:
Jan Kara <jack@suse.cz> Reviewed-by:
Alex Elder <elder@kernel.org> Reviewed-by:
Dave Chinner <dchinner@redhat.com> Signed-off-by:
Ben Myers <bpm@sgi.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
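A hedged sketch of the corrected error-path shape; the names follow xfs_readlink() but the body is abbreviated and illustrative:

    int xfs_readlink_sketch(struct xfs_inode *ip, char *link)
    {
            int error = 0;
            int pathlen;

            xfs_ilock(ip, XFS_ILOCK_SHARED);
            pathlen = ip->i_d.di_size;
            if (pathlen <= 0 || pathlen > MAXPATHLEN) {
                    /* corrupted symlink: leave through the unlock path
                     * instead of returning with the ilock still held */
                    error = XFS_ERROR(EFSCORRUPTED);
                    goto out;
            }
            /* ... copy the symlink target into 'link' ... */
    out:
            xfs_iunlock(ip, XFS_ILOCK_SHARED);
            return error;
    }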
-
Thomas Hellstrom authored
commit 598781d7 upstream. If the master tries to authenticate a client using drm_authmagic and that client has already closed its drm file descriptor, either wilfully or because it was terminated, the call to drm_authmagic will dereference a stale pointer into kmalloc'ed memory and corrupt it. Typically this results in a hard system hang. This patch fixes that problem by removing any authentication tokens (struct drm_magic_entry) open for a file descriptor when that file descriptor is closed. Signed-off-by:
Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by:
Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by:
Dave Airlie <airlied@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alex Deucher authored
commit 44517c44 upstream. Interrupts only work with MSIs. https://bugs.freedesktop.org/show_bug.cgi?id=37679 Reported-by:
Dmitry Podgorny <pasis.uax@gmail.com> Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Signed-off-by:
Dave Airlie <airlied@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tyler Hicks authored
commit 58ded24f upstream. If pages passed to the eCryptfs extent-based crypto functions are not mapped and the module parameter ecryptfs_verbosity=1 was specified at loading time, a NULL pointer dereference will occur. Note that this wouldn't happen on a production system, as you wouldn't pass ecryptfs_verbosity=1 on a production system. It leaks private information to the system logs and is for debugging only. The debugging info printed in these messages is no longer very useful and rather than doing a kmap() in these debugging paths, it will be better to simply remove the debugging paths completely. https://launchpad.net/bugs/913651 Signed-off-by:
Tyler Hicks <tyhicks@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tyler Hicks authored
commit a261a039 upstream. Most filesystems call inode_change_ok() very early in ->setattr(), but eCryptfs didn't call it at all. It allowed the lower filesystem to make the call in its ->setattr() function. Then, eCryptfs would copy the appropriate inode attributes from the lower inode to the eCryptfs inode. This patch changes that and actually calls inode_change_ok() on the eCryptfs inode, fairly early in ecryptfs_setattr(). Ideally, the call would happen earlier in ecryptfs_setattr(), but there are some possible inode initialization steps that must happen first. Since the call was already being made on the lower inode, the change in functionality should be minimal, except for the case of a file extending truncate call. In that case, inode_newsize_ok() was never being called on the eCryptfs inode. Rather than inode_newsize_ok() catching maximum file size errors early on, eCryptfs would encrypt zeroed pages and write them to the lower filesystem until the lower filesystem's write path caught the error in generic_write_checks(). This patch introduces a new function, called ecryptfs_inode_newsize_ok(), which checks if the new lower file size is within the appropriate limits when the truncate operation will be growing the lower file. In summary this change prevents eCryptfs truncate operations (and the resulting page encryptions), which would exceed the lower filesystem limits or FSIZE rlimits, from ever starting. Signed-off-by:
Tyler Hicks <tyhicks@canonical.com> Reviewed-by:
Li Wang <liwang@nudt.edu.cn> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
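A simplified sketch of the ordering described above, assuming the in-tree helpers inode_change_ok(), inode_newsize_ok() and ecryptfs_inode_to_lower(); ecryptfs_lower_size() is a hypothetical stand-in for the real lower-size calculation:

    static int ecryptfs_setattr_sketch(struct dentry *dentry, struct iattr *ia)
    {
            struct inode *inode = dentry->d_inode;
            int rc;

            /* validate against the eCryptfs inode itself, early on */
            rc = inode_change_ok(inode, ia);
            if (rc)
                    return rc;

            /* for a size-growing truncate, check the resulting lower file
             * size before encrypting and writing a single zeroed page */
            if ((ia->ia_valid & ATTR_SIZE) && ia->ia_size > i_size_read(inode)) {
                    loff_t lower_size = ecryptfs_lower_size(ia->ia_size); /* hypothetical helper */

                    rc = inode_newsize_ok(ecryptfs_inode_to_lower(inode), lower_size);
                    if (rc)
                            return rc;
            }
            /* ... proceed with the truncate / attribute copy ... */
            return rc;
    }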
-
Tyler Hicks authored
commit 5e6f0d76 upstream. ecryptfs_write() handles the truncation of eCryptfs inodes. It grabs a page, zeroes out the appropriate portions, and then encrypts the page before writing it to the lower filesystem. It was unkillable and due to the lack of sparse file support could result in tying up a large portion of system resources, while encrypting pages of zeros, with no way for the truncate operation to be stopped from userspace. This patch adds the ability for ecryptfs_write() to detect a pending fatal signal and return as gracefully as possible. The intent is to leave the lower file in a useable state, while still allowing a user to break out of the encryption loop. If a pending fatal signal is detected, the eCryptfs inode size is updated to reflect the modified inode size and then -EINTR is returned. Signed-off-by:
Tyler Hicks <tyhicks@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
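A sketch of the interruptible zero-fill loop; pos, end, rc and ecryptfs_inode are illustrative names, not the exact ones in ecryptfs_write():

    /* zero-fill and encrypt one page at a time, but stop cleanly if the
     * task has been fatally signalled (e.g. killed from userspace) */
    while (pos < end) {
            if (fatal_signal_pending(current)) {
                    i_size_write(ecryptfs_inode, pos); /* keep the lower file usable */
                    rc = -EINTR;
                    break;
            }
            /* ... grab the page, zero the range, encrypt, write it out ... */
            pos += PAGE_CACHE_SIZE;
    }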
-
Tim Gardner authored
commit 30373dc0 upstream. Print inode on metadata read failure. The only real way of dealing with metadata read failures is to delete the underlying file system file. Having the inode allows one to `find . -inum INODE`. [tyhicks@canonical.com: Removed some minor not-for-stable parts] Signed-off-by:
Tim Gardner <tim.gardner@canonical.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Tyler Hicks <tyhicks@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tyler Hicks authored
commit db10e556 upstream. A malicious count value specified when writing to /dev/ecryptfs may result in a very large kernel memory allocation. This patch peeks at the specified packet payload size, adds that to the size of the packet headers and compares the result with the write count value. The resulting maximum memory allocation size is approximately 532 bytes. Signed-off-by:
Tyler Hicks <tyhicks@canonical.com> Reported-by:
Sasha Levin <levinsasha928@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
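A sketch of that validation; ecryptfs_parse_packet_length() is the existing eCryptfs parser (signature assumed from the headers of that era), while PKT_HDR_SIZE and the surrounding variables are illustrative:

    size_t packet_size, packet_size_len;

    /* peek at the length encoded in the packet header and make sure the
     * caller-supplied count is consistent before allocating anything */
    rc = ecryptfs_parse_packet_length(data + PKT_HDR_SIZE, &packet_size,
                                      &packet_size_len);
    if (rc)
            goto out;
    if (PKT_HDR_SIZE + packet_size_len + packet_size != count) {
            rc = -EINVAL;        /* malicious or corrupt count value */
            goto out;
    }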
-
Takashi Iwai authored
commit b4ead019 upstream. The recent change of the power-widget handling for IDT codecs caused silent output from the docking-station line-out jack. This was partially fixed by the commit f2cbba76 "ALSA: hda - Fix the lost power-setup of secondary pins after PM resume". But the line-out on the docking-station is still silent when booted with the jack plugged, even with this fix. The remaining bug is that the power-widget is set off in stac92xx_init() because the pins in cfg->line_out_pins[] aren't checked there properly but only hp_pins[] are checked in is_nid_hp_pin(). This patch fixes the problem by checking both HP and line-out pins and leaving the power-map correctly. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42637 Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 26 Jan, 2012 17 commits
-
-
Greg Kroah-Hartman authored
-
Artem Bityutskiy authored
commit 1f5d78dc upstream. We switched to dynamic debugging in commit 56e46742 but did not take into account that we no longer control whether a specific message is enabled or not. So now we lock "dbg_lock" and release it in every debugging macro, which makes them not so light-weight. This commit removes the "dbg_lock" protection from the debugging macros to fix the issue. The downside is that our DBGKEY() stuff is now broken, but this is not critical at all and will be fixed later. Signed-off-by:
Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Stanislaw Gruszka authored
commit 68acc4af upstream. This patch fixes a firmware error on "iw dev wlan0 scan passive" for hardware scanning (with the disable_hw_scan=0 module parameter).

  iwl3945 0000:03:00.0: Microcode SW error detected. Restarting 0x82000008.
  iwl3945 0000:03:00.0: Loaded firmware version: 15.32.2.9
  iwl3945 0000:03:00.0: Start IWL Error Log Dump:
  iwl3945 0000:03:00.0: Status: 0x0002A2E4, count: 1
  iwl3945 0000:03:00.0: Desc Time asrtPC blink2 ilink1 nmiPC Line
  iwl3945 0000:03:00.0: SYSASSERT (0x5) 0041263900 0x13756 0x0031C 0x00000 764
  iwl3945 0000:03:00.0: Error Reply type 0x000002FC cmd C_SCAN (0x80) seq 0x443E ser 0x00340000
  iwl3945 0000:03:00.0: Command C_SCAN failed: FW Error
  iwl3945 0000:03:00.0: Can't stop Rx DMA.

We disable the ability to change passive scanning to active on a particular channel when traffic is detected on that channel. Otherwise the firmware will report an error when we try to do a passive scan on radar channels. Reported-and-debugged-by:
Pedro Francisco <pedrogfrancisco@gmail.com> Signed-off-by:
Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by:
John W. Linville <linville@tuxdriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Wey-Yi Guy authored
commit b2ccccdc upstream. Check and report the WARN only when it is actually invalid. Resolves: https://bugzilla.kernel.org/show_bug.cgi?id=42621 https://bugzilla.redhat.com/show_bug.cgi?id=766071 Signed-off-by:
Wey-Yi Guy <wey-yi.w.guy@intel.com> Signed-off-by:
John W. Linville <linville@tuxdriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Michal Hocko authored
commit 687875fb upstream. Fix the following NULL ptr dereference caused by

  cat /sys/devices/system/memory/memory0/removable

  Pid: 13979, comm: sed Not tainted 3.0.13-0.5-default #1 IBM BladeCenter LS21 -[7971PAM]-/Server Blade
  RIP: __count_immobile_pages+0x4/0x100
  Process sed (pid: 13979, threadinfo ffff880221c36000, task ffff88022e788480)
  Call Trace:
    is_pageblock_removable_nolock+0x34/0x40
    is_mem_section_removable+0x74/0xf0
    show_mem_removable+0x41/0x70
    sysfs_read_file+0xfe/0x1c0
    vfs_read+0xc7/0x130
    sys_read+0x53/0xa0
    system_call_fastpath+0x16/0x1b

We are crashing because we are trying to dereference a NULL zone which came from pfn=0 (struct page ffffea0000000000). According to the boot log this page is marked reserved:

  e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)

and early_node_map confirms that:

  early_node_map[3] active PFN ranges
    1: 0x00000010 -> 0x0000009c
    1: 0x00000100 -> 0x000bffa3
    1: 0x00100000 -> 0x00240000

The problem is that memory_present works in PAGE_SECTION_MASK aligned blocks, so the reserved range sneaks into the section as well. This also means that free_area_init_node will not take care of those reserved pages and they stay uninitialized. When we try to read the removable status we walk through all available sections and hope that the zone is valid for all pages in the section. But this is not true in this case as the zone and nid are not initialized. We have only one node in this particular case and it is marked as node=1 (rather than 0), and that made the problem visible because page_to_nid will return 0 and there are no zones on that node. Let's check that the zone is valid and that the given pfn falls into its boundaries, and mark the section not removable otherwise. This might cause some false positives, probably, but we do not have any sane way to find out whether the page is reserved by the platform or it is just not used for whatever other reasons. Signed-off-by:
Michal Hocko <mhocko@suse.cz> Acked-by:
Mel Gorman <mgorman@suse.de> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
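A hedged sketch of the added validation, close in spirit to the description above (struct zone field names as in that era's mm code):

    struct zone *zone;
    unsigned long pfn;

    /* the section may contain pfns that were never initialized; make sure
     * the node is online and the pfn really lies inside the zone's span */
    if (!node_online(page_to_nid(page)))
            return false;

    zone = page_zone(page);
    pfn = page_to_pfn(page);
    if (pfn < zone->zone_start_pfn ||
        pfn >= zone->zone_start_pfn + zone->spanned_pages)
            return false;

    return __count_immobile_pages(zone, page, 0);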
-
Will Deacon authored
commit 85e72aa5 upstream. /proc/pid/clear_refs is used to clear the Referenced and YOUNG bits for pages and corresponding page table entries of the task with PID pid, which includes any special mappings inserted into the page tables in order to provide things like vDSOs and user helper functions. On ARM this causes a problem because the vectors page is mapped as a global mapping and since ec706dab ("ARM: add a vma entry for the user accessible vector page"), a VMA is also inserted into each task for this page to aid unwinding through signals and syscall restarts. Since the vectors page is required for handling faults, clearing the YOUNG bit (and subsequently writing a faulting pte) means that we lose the vectors page *globally* and cannot fault it back in. This results in a system deadlock on the next exception. To see this problem in action, just run: $ echo 1 > /proc/self/clear_refs on an ARM platform (as any user) and watch your system hang. I think this has been the case since 2.6.37. This patch avoids clearing the aforementioned bits for reserved pages, therefore leaving the vectors page intact on ARM. Since reserved pages are not candidates for swap, this change should not have any impact on the usefulness of clear_refs. Signed-off-by:
Will Deacon <will.deacon@arm.com> Reported-by:
Moussa Ba <moussaba@micron.com> Acked-by:
Hugh Dickins <hughd@google.com> Cc: David Rientjes <rientjes@google.com> Cc: Russell King <rmk@arm.linux.org.uk> Acked-by:
Nicolas Pitre <nico@linaro.org> Cc: Matt Mackall <mpm@selenic.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
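A sketch of the relevant part of the pte walk after the change (simplified; variable names follow clear_refs_pte_range()):

    page = vm_normal_page(vma, addr, ptent);
    if (!page)
            continue;

    /* never clear the young/referenced state of reserved pages such as
     * the ARM vectors page: they are not swap candidates, and the global
     * vectors mapping cannot be faulted back in */
    if (PageReserved(page))
            continue;

    ptep_test_and_clear_young(vma, addr, pte);
    ClearPageReferenced(page);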
-
Ananth N Mavinakayanahalli authored
commit d496aab5 upstream. Commit ef53d9c5 ("kprobes: improve kretprobe scalability with hashed locking") introduced a bug where we can potentially leak kretprobe_instances since we initialize a hlist head after having used it. Initialize the hlist head before using it. Reported by: Jim Keniston <jkenisto@us.ibm.com> Acked-by:
Jim Keniston <jkenisto@us.ibm.com> Signed-off-by:
Ananth N Mavinakayanahalli <ananth@in.ibm.com> Acked-by:
Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Srinivasa D S <srinivasa@in.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
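A sketch of the corrected initialization order in register_kretprobe() (abbreviated; locking and other setup omitted):

    /* the free-instance list must be initialized *before* instances are
     * added to it; initializing it afterwards silently leaks them all */
    rp->nmissed = 0;
    INIT_HLIST_HEAD(&rp->free_instances);
    for (i = 0; i < rp->maxactive; i++) {
            inst = kmalloc(sizeof(struct kretprobe_instance) + rp->data_size,
                           GFP_KERNEL);
            if (!inst) {
                    free_rp_inst(rp);
                    return -ENOMEM;
            }
            INIT_HLIST_NODE(&inst->hlist);
            hlist_add_head(&inst->hlist, &rp->free_instances);
    }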
-
Dan Rosenberg authored
commit c25a785d upstream. If the provided system call number is equal to __NR_syscalls, the current check will pass and a function pointer just after the system call table may be called, since sys_call_table is an array with total size __NR_syscalls. Whether or not this is a security bug depends on what the compiler puts immediately after the system call table. It's likely that this won't do anything bad because there is an additional NULL check on the syscall entry, but if there happens to be a non-NULL value immediately after the system call table, this may result in local privilege escalation. Signed-off-by:
Dan Rosenberg <drosenberg@vsecurity.com> Cc: Chen Liqin <liqin.chen@sunplusct.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Eugene Teo <eugeneteo@kernel.sg> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
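The actual check lives in the score architecture's syscall entry assembly; a rough C equivalent of the corrected bound (variable names illustrative):

    /* sys_call_table has __NR_syscalls entries, so the last valid index is
     * __NR_syscalls - 1; an index equal to __NR_syscalls reads past the end */
    if (syscall_nr >= __NR_syscalls)
            return -ENOSYS;

    syscall_fn = sys_call_table[syscall_nr];
    if (!syscall_fn)
            return -ENOSYS;
    return syscall_fn(a0, a1, a2, a3, a4, a5);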
-
Toshiharu Okada authored
commit ff35e8b1 upstream. This patch modifies the setting value of the I2C Bus Transfer Rate Setting Counter register. Signed-off-by:
Toshiharu Okada <toshiharu-linux@dsn.okisemi.com> Signed-off-by:
Ben Dooks <ben-linux@fluff.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Dirk Eibach authored
commit f42af6c4 upstream. Since commit "74888760... dt/net: Eliminate users of of_platform_{,un}register_driver" there are two platform drivers named "mdio-gpio" registered. I renamed the of variant to "mdio-ofgpio". Signed-off-by:
Dirk Eibach <eibach@gdsys.de> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Boaz Harrosh authored
commit fe0fe835 upstream. As mandated by the standard, in case of an IO error a pNFS objects layout driver must return its layout. This is because all device errors are reported to the server as part of the layout return buffer. This is implemented the same way PNFS_LAYOUTRET_ON_SETATTR is done, through a bit flag on the pnfs_layoutdriver_type->flags member. The flag is set by the layout driver that wants a layout_return performed at pnfs_ld_{write,read}_done in case of an error. (Though I have not defined a wrapper like pnfs_ld_layoutret_on_setattr because this code is never called outside of pnfs.c and the pnfs IO paths.) Without this patch 3.[0-2] kernels leak memory and hit an annoying WARN_ON after every IO error utilizing the pnfs-obj driver. Signed-off-by:
Boaz Harrosh <bharrosh@panasas.com> Signed-off-by:
Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Boaz Harrosh authored
commit 5c0b4129 upstream. Some time along the way pNFS IO errors were switched to communicate through a special iodata->pnfs_error member instead of the regular RPC members, but objlayout was not switched over. Fix that! Without this fix any IO error results in a hang, because IO is never switched to the MDS and pages are never cleared or read. Signed-off-by:
Boaz Harrosh <bharrosh@panasas.com> Signed-off-by:
Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Stanislaw Gruszka authored
commit dfd00c4c upstream. Some devices can generate an interrupt without properly setting a bit in the INT_SOURCE_CSR register (a spurious interrupt), which causes the IRQ line to be disabled by the interrupt controller driver. We discovered that clearing INT_MASK_CSR stops this behaviour. We previously read that register first, then cleared all known interrupt source bits and did not touch reserved bits. After this patch, we write the full register content (I believe writing to the reserved bits of that register will not cause any problems; I tested that on my rt2800pci device). This fixes a very bad performance problem that practically made the device unusable (since it worked without interrupts), reported in: https://bugzilla.redhat.com/show_bug.cgi?id=658451 We previously tried to work around this issue in commit 4ba7d999 "rt2800pci: handle spurious interrupts", but it was reverted in commit 82e5fc2a as something that would prevent detecting real spurious interrupts. Reported-and-tested-by:
Amir Hedayaty <hedayaty@gmail.com> Signed-off-by:
Stanislaw Gruszka <sgruszka@redhat.com> Acked-by:
Gertjan van Wingerde <gwingerde@gmail.com> Signed-off-by:
John W. Linville <linville@tuxdriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Cliff Wickman authored
commit d059f9fa upstream. Move the call to enable_timeouts() forward so that BAU_MISC_CONTROL is initialized before using it in calculate_destination_timeout(). Fix the calculation of a BAU destination timeout for UV2 (in calculate_destination_timeout()). Signed-off-by:
Cliff Wickman <cpw@sgi.com> Link: http://lkml.kernel.org/r/20120116211848.GB5767@sgi.com Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Alexander Aring authored
commit 2727b175 upstream. Correct OMAP_I2C_SYSC_REG offset in omap4 register map. Offset 0x20 is reserved and OMAP_I2C_SYSC_REG has 0x10 as offset. Signed-off-by:
Alexander Aring <a.aring@phytec.de> [khilman@ti.com: minor changelog edits] Signed-off-by:
Kevin Hilman <khilman@ti.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Roland Dreier authored
commit 895f3022 upstream. The target code was not setting the additional sense length field in the sense data it returned, which meant that at least the Linux stack ignored the ASC/ASCQ fields. For example, without this patch, on a tcm_loop device:

  # sg_raw -v /dev/sda 2 0 0 0 0 0

gives

      cdb to send: 02 00 00 00 00 00
  SCSI Status: Check Condition
  Sense Information:
   Fixed format, current; Sense key: Illegal Request
  Raw sense data (in hex):
          70 00 05 00 00 00 00 00

while after the patch we correctly get the following (which matches what a regular disk returns):

      cdb to send: 02 00 00 00 00 00
  SCSI Status: Check Condition
  Sense Information:
   Fixed format, current; Sense key: Illegal Request
  Additional sense: Invalid command operation code
  Raw sense data (in hex):
          70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00

Signed-off-by:
Roland Dreier <roland@purestorage.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Roland Dreier authored
commit ce136176 upstream. Current SCSI specs say that the "response format" field in the standard INQUIRY response should be set to 2, and all the real SCSI devices I have do put 2 here. So let's do that too. Signed-off-by:
Roland Dreier <roland@purestorage.com> Signed-off-by:
Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-