- 12 Jan, 2012 13 commits
-
-
Lars-Peter Clausen authored
In ancient times it was necessary to manually initialize the bus field of an spi_driver to spi_bus_type. These days this is done in spi_register_driver(), so we can drop the manual assignment.

The patch was generated using the following coccinelle semantic patch:

// <smpl>
@@
identifier _driver;
@@
struct spi_driver _driver = {
    .driver = {
-       .bus = &spi_bus_type,
    },
};
// </smpl>

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Chris Ball <cjb@laptop.org>
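For illustration, a minimal sketch of the kind of driver struct the semantic patch rewrites; the driver name is invented and the probe/remove hookup is elided:

#include <linux/module.h>
#include <linux/spi/spi.h>

static struct spi_driver foo_spi_driver = {
    .driver = {
        .name  = "foo",
        .owner = THIS_MODULE,
        /* .bus = &spi_bus_type,  <- dropped; spi_register_driver() sets it */
    },
    /* .probe / .remove as usual */
};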
-
Guennadi Liakhovetski authored
Replace ilog2(__rounddown_pow_of_two(x)) with the equivalent but much simpler fls(x) - 1. Reported-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Chris Ball <cjb@laptop.org>
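A quick standalone check of the identity being relied on here, in plain C with a naive stand-in for the kernel's fls() (which returns the 1-based index of the highest set bit):

#include <stdio.h>

/* Naive userspace stand-in for the kernel's fls() (find last set, 1-based). */
static int fls_approx(unsigned int x)
{
    int r = 0;

    while (x) {
        x >>= 1;
        r++;
    }
    return r;
}

int main(void)
{
    /* ilog2(rounddown_pow_of_two(x)) is floor(log2(x)); so is fls(x) - 1. */
    for (unsigned int x = 1; x < (1u << 20); x++) {
        unsigned int p = 1;
        int log = 0;

        while (2 * p <= x) {    /* floor(log2(x)) the slow way */
            p *= 2;
            log++;
        }
        if (log != fls_approx(x) - 1) {
            printf("mismatch at %u\n", x);
            return 1;
        }
    }
    printf("fls(x) - 1 == floor(log2(x)) for all tested x\n");
    return 0;
}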
-
Johan Rudholm authored
If the read or write buffer size associated with the command sent through mmc_blk_ioctl is zero, do not prepare a data buffer. This enables an ioctl(2) call to, for instance, send an MMC_SWITCH to set a byte in the ext_csd. Signed-off-by: Johan Rudholm <johan.rudholm@stericsson.com> Signed-off-by: Chris Ball <cjb@laptop.org>
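A hedged userspace sketch of the kind of call this enables, assuming the MMC_IOC_CMD interface from <linux/mmc/ioctl.h>; the command-flag constants below mirror the kernel's internal definitions, and the EXT_CSD index/value and device node are purely illustrative, so treat this as a shape rather than a recipe:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/major.h>
#include <linux/mmc/ioctl.h>

/* Mirrors of kernel-internal command/response flags (not exported to uapi). */
#define MMC_SWITCH      6
#define MMC_RSP_PRESENT (1 << 0)
#define MMC_RSP_CRC     (1 << 2)
#define MMC_RSP_BUSY    (1 << 3)
#define MMC_RSP_OPCODE  (1 << 4)
#define MMC_RSP_R1B     (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE | MMC_RSP_BUSY)
#define MMC_CMD_AC      (0 << 5)

int main(void)
{
    struct mmc_ioc_cmd cmd;
    int fd = open("/dev/mmcblk0", O_RDWR);      /* device node is an example */

    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&cmd, 0, sizeof(cmd));
    cmd.opcode = MMC_SWITCH;
    /* arg = [access][index][value][cmd set]; index/value are placeholders. */
    cmd.arg = (0x03 << 24) | (191 << 16) | (0x00 << 8) | 0;
    cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
    cmd.write_flag = 1;
    /* blksz and blocks stay 0: with this patch no data buffer is set up. */

    if (ioctl(fd, MMC_IOC_CMD, &cmd))
        perror("MMC_IOC_CMD");

    return 0;
}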
-
Tony Lin authored
1ms is enough for the hardware to settle to a stable clock; 100ms is too long to wait in the tasklet. Signed-off-by: Tony Lin <tony.lin@freescale.com> CC: Xiaobo Xie <X.Xie@freescale.com> CC: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
If max_seg_size is unaligned, mmc_test_map_sg() may create sg element sizes that are not aligned to 512 bytes. Fix this by aligning max_seg_size in mmc_test_area_init(). Signed-off-by: Per Forlin <per.forlin@stericsson.com> Signed-off-by: Chris Ball <cjb@laptop.org>
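The fix boils down to rounding the host's maximum segment size down to a 512-byte multiple before the test uses it; a trivial standalone illustration (the starting value is made up):

#include <stdio.h>

int main(void)
{
    unsigned int max_seg_sz = 65000;    /* example unaligned host limit */

    /* Round down to a multiple of 512 so sg element sizes stay sector-aligned. */
    max_seg_sz = max_seg_sz / 512 * 512;

    printf("aligned max_seg_sz = %u\n", max_seg_sz);    /* 64512 */
    return 0;
}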
-
Per Forlin authored
Signed-off-by: Per Forlin <per.forlin@stericsson.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Philip Rakity authored
This patch adds support for SDIO UHS cards per the version 3.0 spec. UHS mode is only enabled for version 3.0 cards when both the host and the controller support UHS modes. 1.8v signaling support is removed if both the card and the host do not support UHS. This is done to maintain compatibility; some system/card combinations break when 1.8v signaling is enabled while the host does not support UHS. Signed-off-by: Philip Rakity <prakity@marvell.com> Signed-off-by: Aaron Lu <Aaron.lu@amd.com> Reviewed-by: Arindam Nath <arindam.nath@amd.com> Tested-by: Bing Zhao <bzhao@marvell.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Viresh Kumar authored
Suspend/resume support is missing from the sdhci-spear driver; this patch adds it. Signed-off-by: Viresh Kumar <viresh.kumar@st.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Sujit Reddy Thumma authored
The current clock gating framework disables the MCI clock as soon as the request is completed and enables it when a request arrives. This aggressive clock gating framework, when enabled, causes the following issues:

When there are back-to-back requests from the queue layer, we unnecessarily end up disabling and enabling the clocks between these requests, since 8 MCLK clock cycles is a very short duration compared to the time delay between back-to-back requests reaching the MMC layer. This overhead can affect the overall performance depending on how long the clock enable and disable calls take, which is platform dependent. For example, on some platforms clock control is not on the local processor but on a different subsystem, and the time taken to perform the clock enable/disable can add significant overhead. Also, if the host controller driver decides to disable the host clock too when mmc_set_ios is called with ios.clock=0, it adds additional delay, and it is highly possible that the next request has already arrived and is unnecessarily blocked on enabling the clocks. This is seen frequently when the processor is executing at high speeds and on multi-core platforms, and it reduces the overall throughput compared to when clock gating is disabled.

Fix this by delaying turning off the clocks, by posting the request on a delayed workqueue. Also cancel the unscheduled pending work, if any, when there is access to the card. A sysfs entry is provided to tune the delay as needed; the default value is set to 200ms. Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
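A hedged sketch of the delayed-gating pattern described above, using the generic workqueue API; the host structure and helper names are illustrative stand-ins, not the actual mmc core code:

#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

/* Illustrative host; the real struct mmc_host carries more state. */
struct gated_host {
    struct delayed_work clk_gate_work;
    unsigned long       clkgate_delay_ms;   /* sysfs-tunable, default 200 */
};

static void clk_gate_worker(struct work_struct *work)
{
    struct gated_host *host = container_of(work, struct gated_host,
                                           clk_gate_work.work);

    pr_debug("gating MCI clock after %lums of idleness\n",
             host->clkgate_delay_ms);
    /* ... actually gate the MCI clock here ... */
}

static void host_init_gating(struct gated_host *host)
{
    host->clkgate_delay_ms = 200;
    INIT_DELAYED_WORK(&host->clk_gate_work, clk_gate_worker);
}

/* Request completed: don't gate immediately, post delayed work instead. */
static void host_request_done(struct gated_host *host)
{
    schedule_delayed_work(&host->clk_gate_work,
                          msecs_to_jiffies(host->clkgate_delay_ms));
}

/* New request or any other card access: cancel pending gating and ungate. */
static void host_request_start(struct gated_host *host)
{
    cancel_delayed_work_sync(&host->clk_gate_work);
    /* ... ungate the MCI clock if it was gated ... */
}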
-
Chris Ball authored
No functional change; adds macros for card manufacturer IDs. Signed-off-by: Chris Ball <cjb@laptop.org> Cc: Andrei E. Warkentin <andrey.warkentin@gmail.com> Cc: Stefan Nilsson XK <stefan.xk.nilsson@stericsson.com>
-
Giuseppe CAVALLARO authored
This patch exposes the actual SDCLK frequency in the /sys/kernel/debug/mmcX/ios entry. For example, if the maximum clock for a normal speed card is 20MHz, this is reported in /sys/kernel/debug/mmcX/ios, but the actual SDCLK frequency (i.e. base clock / divisor) is not reported at all: in that case, on the Arasan HC, it should be 48/4 = 12 MHz. Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Stefan Nilsson XK authored
This patch allows any block size to be set on the SDIO link, and still have an arbitrarily sized packet (adjusted in size by using sdio_align_size) transferred in an optimal way (preferably one transfer).

Previously, if the block size was larger than the default of 512 bytes and the transfer size was exactly one block size (possibly thanks to using sdio_align_size to get an optimal transfer size), it was sent as a number of byte transfers instead of one block transfer. Also, if the number of blocks was (max_blocks * N) + 1, the transfer would be conducted with a number of blocks and finished off with a number of byte transfers.

When doing this change it was also possible to break out the quirk for broken byte mode in a much cleaner way, and to collect the logic of when to do byte or block transfer in one function instead of two. Signed-off-by: Stefan Nilsson XK <stefan.xk.nilsson@stericsson.com> Signed-off-by: Ulf Hansson <ulf.hansson@stericsson.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
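A hedged sketch of what such a single decision point can look like; the helper is illustrative and does not claim to be the actual mmc_io_rw_extended() logic:

/*
 * Sketch only: size of the next SDIO transfer chunk, preferring block mode
 * whenever at least one full block remains.  'blksz' is the configured block
 * size (possibly larger than 512), 'max_blocks' the host's per-command limit.
 */
static unsigned int sdio_next_chunk(unsigned int remaining, unsigned int blksz,
                                    unsigned int max_blocks)
{
    if (remaining >= blksz) {
        unsigned int blocks = remaining / blksz;

        if (blocks > max_blocks)
            blocks = max_blocks;
        return blocks * blksz;          /* one block-mode command */
    }

    /* Less than one full block left: finish with a byte-mode transfer. */
    return remaining;
}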
-
Qiang Liu authored
Add new macros for the high speed 50MHz case, rather than having a confusing reuse of the value for UHS SDR50, which is 100MHz. Reported-by: Aaron Lu <aaron.lu@amd.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
- 24 Dec, 2011 8 commits
-
-
Greg Ungerer authored
We have a duplicate name and definition for the offset of the thread.info struct within the task struct in our asm-offsets.c code. Remove one of them, and consolidate to use a single define, TASK_INFO. Signed-off-by: Greg Ungerer <gerg@uclinux.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
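For context, a hedged sketch of what a consolidated asm-offsets.c entry of this kind looks like; the exact member path is assumed here, not quoted from the patch:

#include <linux/kbuild.h>
#include <linux/sched.h>

int main(void)
{
    /* One name, usable by both mmu and non-mmu assembly code. */
    DEFINE(TASK_INFO, offsetof(struct task_struct, thread.info));
    return 0;
}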
-
Greg Ungerer authored
The init_task code can be the same for both mmu and non-mmu targets. None of the alignment carried out in the current init_task code is necessary. The linker script takes care of aligning the init_thread structure to a THREAD_SIZE boundary, and that is all we need. So use the init_task.c code for all target types; this makes the m68k code consistent with what most other architectures do. Signed-off-by: Greg Ungerer <gerg@uclinux.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
-
Greg Ungerer authored
The fasthandler code was removed long ago. Remove the now unused declaration of it. Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
Mark Brown authored
gpiolib provides __gpio_to_irq() to map gpiolib gpios to interrupts - hook that up on m68k. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Greg Ungerer <gerg@uclinux.org>
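The hookup itself is typically a one-line wrapper in the architecture's gpio header; a hedged sketch of the shape, not necessarily the literal m68k change:

#include <asm-generic/gpio.h>

/* Route the arch's gpio_to_irq() through gpiolib's generic mapping. */
static inline int gpio_to_irq(unsigned gpio)
{
    return __gpio_to_irq(gpio);
}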
-
john stultz authored
Updated to merge the valid bits of the two m68k patches. This converts the m68k clocksources to use clocksource_register_hz/khz. This is untested, so any assistance in testing would be appreciated! CC: Geert Uytterhoeven <geert@linux-m68k.org> CC: Greg Ungerer <gerg@uclinux.org> Signed-off-by: John Stultz <johnstul@us.ibm.com> Signed-off-by: Greg Ungerer <gerg@uclinux.org>
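The conversion pattern is the same everywhere: stop computing mult/shift by hand and hand the core the input clock frequency. A hedged sketch with an invented clocksource:

#include <linux/init.h>
#include <linux/clocksource.h>

/* Hypothetical hardware counter read; a real driver reads its timer register. */
static cycle_t example_cs_read(struct clocksource *cs)
{
    return 0;
}

static struct clocksource example_cs = {
    .name   = "example-timer",
    .rating = 250,
    .read   = example_cs_read,
    .mask   = CLOCKSOURCE_MASK(32),
    .flags  = CLOCK_SOURCE_IS_CONTINUOUS,
};

static void __init example_cs_init(unsigned long timer_hz)
{
    /* Previously: fill in .mult/.shift manually, then clocksource_register().
     * Now: let the core derive them from the timer's clock rate. */
    clocksource_register_hz(&example_cs, timer_hz);
}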
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs - Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: VFS: Fix race between CPU hotplug and lglocks
-
git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux - Linus Torvalds authored
for linus: writeback reason binary tracing format fix
* tag 'writeback' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux:
  writeback: show writeback reason with __print_symbolic
-
- 23 Dec, 2011 14 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild - Linus Torvalds authored
* 'rc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild: kconfig: adapt update-po-config to new UML layout
-
git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media - Linus Torvalds authored
* 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: [media] omap3isp: Fix crash caused by subdevs now having a pointer to devnodes
-
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs - Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
  Btrfs: call d_instantiate after all ops are setup
  Btrfs: fix worker lock misuse in find_worker
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc - Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc: sparc64: Fix MSIQ HV call ordering in pci_sun4v_msiq_build_irq().
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/net - Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
  netfilter: xt_connbytes: handle negation correctly
  net: relax rcvbuf limits
  rps: fix insufficient bounds checking in store_rps_dev_flow_table_cnt()
  net: introduce DST_NOPEER dst flag
  mqprio: Avoid panic if no options are provided
  bridge: provide a mtu() method for fake_dst_ops
-
git://1984.lsi.us.es/net - David S. Miller authored
-
Florian Westphal authored
"! --connbytes 23:42" should match if the packet/byte count is not in range. As there is no explict "invert match" toggle in the match structure, userspace swaps the from and to arguments (i.e., as if "--connbytes 42:23" were given). However, "what <= 23 && what >= 42" will always be false. Change things so we use "||" in case "from" is larger than "to". This change may look like it breaks backwards compatibility when "to" is 0. However, older iptables binaries will refuse "connbytes 42:0", and current releases treat it to mean "! --connbytes 0:42", so we should be fine. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Al Viro authored
This closes races where btrfs is calling d_instantiate too soon during inode creation. All of the callers of btrfs_add_nondir are updated to instantiate after the inode is fully set up in memory. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Dan Carpenter noticed that we were doing a double unlock on the worker lock, and sometimes picking a worker thread without the lock held. This fixes both errors. Signed-off-by: Chris Mason <chris.mason@oracle.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
-
Eric Dumazet authored
skb->truesize might be big even for a small packet. It's even bigger after commit 87fb4b7b (net: more accurate skb truesize) and with a big MTU. We should allow queueing at least one packet per receiver, even with a low RCVBUF setting. Reported-by: Michal Simek <monstr@monstr.eu> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xi Wang authored
Setting a large rps_flow_cnt like (1 << 30) on a 32-bit platform will cause a kernel oops due to insufficient bounds checking:

    if (count > 1<<30) {
        /* Enforce a limit to prevent overflow */
        return -EINVAL;
    }
    count = roundup_pow_of_two(count);
    table = vmalloc(RPS_DEV_FLOW_TABLE_SIZE(count));

Note that the macro RPS_DEV_FLOW_TABLE_SIZE(count) is defined as:

    ... + (count * sizeof(struct rps_dev_flow))

where sizeof(struct rps_dev_flow) is 8. (1 << 30) * 8 will overflow 32 bits. This patch replaces the magic number (1 << 30) with a symbolic bound. Suggested-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
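To make the overflow concrete, a standalone sketch of the arithmetic with 32-bit size math; the 8-byte flow entry matches the commit text, while the table header size is just a placeholder:

#include <stdio.h>
#include <stdint.h>

struct rps_dev_flow { uint32_t a; uint32_t b; };    /* 8 bytes, as stated above */

int main(void)
{
    uint32_t count = 1u << 30;      /* passes the old "count > 1<<30" check */

    /* On a 32-bit size_t, (1 << 30) * 8 == 2^33 wraps around to 0. */
    uint32_t sz32 = 16 /* placeholder header */ +
                    count * (uint32_t)sizeof(struct rps_dev_flow);
    uint64_t sz64 = 16 + (uint64_t)count * sizeof(struct rps_dev_flow);

    printf("32-bit computed size: %u bytes (wrapped)\n", sz32);
    printf("64-bit computed size: %llu bytes\n", (unsigned long long)sz64);
    return 0;
}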
-
Eric Dumazet authored
Chris Boot reported crashes occurring in ipv6_select_ident().

[ 461.457562] RIP: 0010:[<ffffffff812dde61>] [<ffffffff812dde61>] ipv6_select_ident+0x31/0xa7
[ 461.578229] Call Trace:
[ 461.580742] <IRQ>
[ 461.582870] [<ffffffff812efa7f>] ? udp6_ufo_fragment+0x124/0x1a2
[ 461.589054] [<ffffffff812dbfe0>] ? ipv6_gso_segment+0xc0/0x155
[ 461.595140] [<ffffffff812700c6>] ? skb_gso_segment+0x208/0x28b
[ 461.601198] [<ffffffffa03f236b>] ? ipv6_confirm+0x146/0x15e [nf_conntrack_ipv6]
[ 461.608786] [<ffffffff81291c4d>] ? nf_iterate+0x41/0x77
[ 461.614227] [<ffffffff81271d64>] ? dev_hard_start_xmit+0x357/0x543
[ 461.620659] [<ffffffff81291cf6>] ? nf_hook_slow+0x73/0x111
[ 461.626440] [<ffffffffa0379745>] ? br_parse_ip_options+0x19a/0x19a [bridge]
[ 461.633581] [<ffffffff812722ff>] ? dev_queue_xmit+0x3af/0x459
[ 461.639577] [<ffffffffa03747d2>] ? br_dev_queue_push_xmit+0x72/0x76 [bridge]
[ 461.646887] [<ffffffffa03791e3>] ? br_nf_post_routing+0x17d/0x18f [bridge]
[ 461.653997] [<ffffffff81291c4d>] ? nf_iterate+0x41/0x77
[ 461.659473] [<ffffffffa0374760>] ? br_flood+0xfa/0xfa [bridge]
[ 461.665485] [<ffffffff81291cf6>] ? nf_hook_slow+0x73/0x111
[ 461.671234] [<ffffffffa0374760>] ? br_flood+0xfa/0xfa [bridge]
[ 461.677299] [<ffffffffa0379215>] ? nf_bridge_update_protocol+0x20/0x20 [bridge]
[ 461.684891] [<ffffffffa03bb0e5>] ? nf_ct_zone+0xa/0x17 [nf_conntrack]
[ 461.691520] [<ffffffffa0374760>] ? br_flood+0xfa/0xfa [bridge]
[ 461.697572] [<ffffffffa0374812>] ? NF_HOOK.constprop.8+0x3c/0x56 [bridge]
[ 461.704616] [<ffffffffa0379031>] ? nf_bridge_push_encap_header+0x1c/0x26 [bridge]
[ 461.712329] [<ffffffffa037929f>] ? br_nf_forward_finish+0x8a/0x95 [bridge]
[ 461.719490] [<ffffffffa037900a>] ? nf_bridge_pull_encap_header+0x1c/0x27 [bridge]
[ 461.727223] [<ffffffffa0379974>] ? br_nf_forward_ip+0x1c0/0x1d4 [bridge]
[ 461.734292] [<ffffffff81291c4d>] ? nf_iterate+0x41/0x77
[ 461.739758] [<ffffffffa03748cc>] ? __br_deliver+0xa0/0xa0 [bridge]
[ 461.746203] [<ffffffff81291cf6>] ? nf_hook_slow+0x73/0x111
[ 461.751950] [<ffffffffa03748cc>] ? __br_deliver+0xa0/0xa0 [bridge]
[ 461.758378] [<ffffffffa037533a>] ? NF_HOOK.constprop.4+0x56/0x56 [bridge]

This is caused by the bridge netfilter special dst_entry (fake_rtable), a special shared entry, where attaching an inetpeer makes no sense. The problem has been present since commit 87c48fa3 (ipv6: make fragment identifications less predictable).

Introduce a DST_NOPEER dst flag and make sure ipv6_select_ident() and __ip_select_ident() fall back to the 'no peer attached' handling. Reported-by: Chris Boot <bootc@bootc.net> Tested-by: Chris Boot <bootc@bootc.net> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Graf authored
Userspace may not provide TCA_OPTIONS; in fact tc currently does not do so if no arguments are specified on the command line. Return EINVAL instead of panicking. Signed-off-by: Thomas Graf <tgraf@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
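The defensive check amounts to validating the netlink attribute before dereferencing it; a hedged sketch with the qdisc init shape simplified:

#include <linux/pkt_sched.h>
#include <net/netlink.h>
#include <net/sch_generic.h>

/* Sketch of the guard: a qdisc init handler must tolerate a missing
 * TCA_OPTIONS attribute instead of dereferencing a NULL pointer. */
static int example_qdisc_init(struct Qdisc *sch, struct nlattr *opt)
{
    struct tc_mqprio_qopt *qopt;

    if (!opt || nla_len(opt) < sizeof(*qopt))
        return -EINVAL;     /* no or truncated options from userspace */

    qopt = nla_data(opt);
    pr_debug("requested %u traffic classes\n", (unsigned int)qopt->num_tc);
    /* ... validate and apply qopt ... */
    return 0;
}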
-
Eric Dumazet authored
Commit 618f9bc7 (net: Move mtu handling down to the protocol depended handlers) forgot the bridge netfilter case, adding a NULL dereference in ip_fragment(). Reported-by: Chris Boot <bootc@bootc.net> CC: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Dec, 2011 5 commits
-
-
git://neil.brown.name/md - Linus Torvalds authored
* 'for-linus' of git://neil.brown.name/md:
  md/bitmap: It is OK to clear bits during recovery.
  md: don't give up looking for spares on first failure-to-add
  md/raid5: ensure correct assessment of drives during degraded reshape.
  md/linear: fix hot-add of devices to linear arrays.
-
NeilBrown authored
commit d0a4bb49 introduced a regression which is annoying but fairly harmless. When writing to an array that is undergoing recovery (a spare is being integrated into the array), the writes will set bits in the bitmap, but they will not be cleared when the write completes. For bits covering areas that have not been recovered yet this is not a problem, as the recovery will clear the bits. However, bits set in an already-recovered region will stay set and never be cleared. This doesn't risk data integrity. The only negatives are:

- next time there is a crash, more resyncing than necessary will be done.
- the bitmap doesn't look clean, which is confusing.

While an array is recovering we don't want to update the 'events_cleared' setting in the bitmap, but we do still want to clear bits that have very recently been set - providing they were written to the recovering device. So split those two needs - which previously both depended on 'success' - and always clear the bit if the write went to all devices. Signed-off-by: NeilBrown <neilb@suse.de>
-
NeilBrown authored
Before performing a recovery we try to remove any spares that might not be working, then add any that might have become relevant. Currently we abort on the first spare that cannot be added. This is a false optimisation. It is conceivable that - depending on rules in the personality - a subsequent spare might be accepted. Also, the loop does other things like count the available spares and reset the 'recovery_offset' value. If we abort early these might not happen properly. So remove the early abort. In particular, if you have an array that is undergoing recovery and which has extra spares, then the recovery may not restart after a reboot, as the count of 'spares' might end up as zero. Reported-by: Anssi Hannula <anssi.hannula@iki.fi> Signed-off-by: NeilBrown <neilb@suse.de>
-
NeilBrown authored
While reshaping a degraded array (as when reshaping a RAID0 by first converting it to a degraded RAID4) we currently get confused about which devices are in_sync. In most cases we get it right, but in the region that is being reshaped we need to treat non-failed devices as in-sync when we have the data but haven't actually written it out yet. Reported-by: Adam Kwolek <adam.kwolek@intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
NeilBrown authored
commit d70ed2e4 broke hot-add to a linear array. After that commit, metadata is not written to devices until they have been fully integrated into the array, as determined by saved_raid_disk. That patch arranged to clear that field after a recovery completed. However, for linear arrays there is no recovery - the integration is instantaneous. So we need to explicitly clear the saved_raid_disk field. Signed-off-by: NeilBrown <neilb@suse.de>
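The shape of the fix, as a hedged sketch; the structure below is a heavily simplified stand-in for the md rdev state, not the real drivers/md code:

/* Sketch only: the relevant bit of the md rdev state, heavily simplified. */
struct rdev_sketch {
    int saved_raid_disk;    /* -1 once the device is fully integrated */
};

/* Linear has no recovery pass to clear this later, so the hot-add path
 * clears it right away; otherwise metadata writes to the new device
 * would keep being skipped. */
static void linear_hot_add_done(struct rdev_sketch *rdev)
{
    rdev->saved_raid_disk = -1;
}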
-