- 15 Mar, 2011 25 commits
-
-
Aneesh Kumar K.V authored
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
We should not mark the file system synchronous if it is mounted with a cache=* option Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
Update the comment to indicate that we don't want to cache negative dentries. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
We can now support writeable mmaps. Based on the original patch from Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
The fid attached to the inode will be opened in O_RDWR mode and is used for dirty page writeback only. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
We add read/write helper functions here, which will be used later by the mmap patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
We need to call fscache_wait_on_page_write in launder_page for fscache Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
We need to ihold even in cached mode Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
We need to call v9fs_cache_inode_set_cookie in create path also Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Aneesh Kumar K.V authored
With the old code we were not setting the file->f_op with cached file operations during creat. (format correction by jvrao@linux.vnet.ibm.com) Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
If a transport prefers the payload to be sent separately from the PDU (P9_TRANS_PREF_PAYLOAD_SEP), there is no need to allocate msize PDU buffers (struct p9_fcall). This patch allocates only up to 4k buffers for this kind of transport, and there is no change to the legacy transports. Hence, on top of the zero copy changes, this patch allows users to specify higher msizes through the mount option without hogging the kernel heap. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
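A minimal sketch of the allocation decision described above, assuming the transport preference field is called pref (as elsewhere in this series); the helper name and the 4k cap macro are illustrative, not the exact identifiers in the patch:

    #include <linux/kernel.h>
    #include <net/9p/9p.h>
    #include <net/9p/client.h>
    #include <net/9p/transport.h>

    #define P9_ZC_HDR_SZ 4096   /* illustrative cap for zero-copy transports */

    static size_t p9_fcall_alloc_size(struct p9_client *clnt)
    {
        /*
         * A transport that carries the payload outside the PDU only needs
         * room for the protocol header; legacy transports still need the
         * full negotiated msize.
         */
        if (clnt->trans_mod->pref == P9_TRANS_PREF_PAYLOAD_SEP)
            return min_t(size_t, clnt->msize, P9_ZC_HDR_SZ);
        return clnt->msize;
    }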
-
Venkateswararao Jujjuri (JV) authored
This takes care of copying out error buffers from user buffer payloads when we are using zero copy. This happens because the only payload buffer the server has to respond to the request with is the user buffer given for the zero copy read. Because we only use zero copy when the amount of data to transfer is greater than a certain size (currently 4K), and error strings are limited to ERRMAX (currently 128), we don't need to worry about whether there is sufficient space for the error to fit in the payload. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
Modify p9_client_readdir() to check the transport preference and act accordingly. If the preference is P9_TRANS_PREF_PAYLOAD_SEP, send the payload separately instead of putting it directly on the PDU. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
Modify p9_client_write() to check the transport preference and act accordingly. If the preference is P9_TRANS_PREF_PAYLOAD_SEP, send the payload separately instead of putting it directly on the PDU. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
Modify p9_client_read() to check the transport preference and act accordingly. If the preference is P9_TRANS_PREF_PAYLOAD_SEP, send the payload separately instead of putting it directly on the PDU. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
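A sketch of the kind of check the read/write/readdir paths can share; the helper itself is illustrative, and the mask macro name is assumed from this series:

    #include <net/9p/client.h>
    #include <net/9p/transport.h>

    /* true if the transport wants the payload outside the PDU */
    static inline int p9_payload_sep(struct p9_client *clnt)
    {
        return (clnt->trans_mod->pref & P9_TRANS_PREF_PAYLOAD_MASK) ==
               P9_TRANS_PREF_PAYLOAD_SEP;
    }

    /*
     * In p9_client_read()/p9_client_write():
     *
     *   if (p9_payload_sep(clnt))
     *           hand the user buffer down with the request (zero copy);
     *   else
     *           marshal the data into / out of the PDU as before.
     */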
-
Venkateswararao Jujjuri (JV) authored
This patch adds a preferences field to the p9_trans_module. Through this, the transport layer can now express its preference about the payload, i.e. whether the payload needs to be part of the PDU or it prefers the payload to be sent separately so that the transport layer can handle it in a better way. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
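Roughly, the new knob looks like this (abridged struct; the field and flag names follow the log, the flag values and the exact struct layout are illustrative):

    /* payload handling preferences a transport can announce */
    #define P9_TRANS_PREF_PAYLOAD_MASK  0x1
    #define P9_TRANS_PREF_PAYLOAD_DEF   0x0  /* payload carried inside the PDU */
    #define P9_TRANS_PREF_PAYLOAD_SEP   0x1  /* payload handed over separately */

    struct p9_trans_module {
        struct list_head list;
        char *name;          /* name of transport */
        int maxsize;         /* max message size the transport supports */
        int pref;            /* payload preference, see flags above */
        int def;             /* this transport should be the default */
        struct module *owner;
        int (*create)(struct p9_client *, const char *, char *);
        void (*close)(struct p9_client *);
        int (*request)(struct p9_client *, struct p9_req_t *);
        int (*cancel)(struct p9_client *, struct p9_req_t *);
    };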
-
Venkateswararao Jujjuri (JV) authored
Modify p9_virtio_request() and req_done() functions to support additional payload sent down to the transport layer through tc->pubuf and tc->pkbuf. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
This will be used by the transport layer to determine the out going request type. Transport layer uses this information to correctly place the mapped pages in the PDU. Patches following this will make use of this to achieve zero copy. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
This patch prepares the p9_fcall structure for zero copy. It adds fields to send the payload buffer information to the transport layer. In addition it adds a 'private' field for the transport layer to store mapped/pinned page information so that it can be freed/unpinned during req_done. This patch also creates trans_common.[ch] to house helper functions. It adds the following helper functions. p9_release_req_pages - Release pages after the transaction. p9_nr_pages - Return the number of pages needed to accommodate the payload. payload_gup - Translates a user buffer into kernel pages. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
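For example, the page-count helper can be as small as this; the field names pubuf/pbuf_size follow the series, and the exact shape is a sketch of what lives in net/9p/trans_common.c:

    #include <linux/mm.h>
    #include <net/9p/9p.h>
    #include <net/9p/client.h>

    /* number of pages needed to cover the (possibly unaligned) user buffer */
    int p9_nr_pages(struct p9_req_t *req)
    {
        unsigned long start_page, end_page;

        start_page = (unsigned long)req->tc->pubuf >> PAGE_SHIFT;
        end_page = ((unsigned long)req->tc->pubuf + req->tc->pbuf_size +
                    PAGE_SIZE - 1) >> PAGE_SHIFT;
        return end_page - start_page;
    }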
-
Venkateswararao Jujjuri (JV) authored
Current code sets access=user as the default for all protocol versions. This patch changes it to "client" only for dotl. Users can always specify a particular access mode with the -o access= option. No change there. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
The mount option access=client is overloaded as it assumes ACLs too. Adding a posixacl option to enable POSIX ACLs makes it explicit and clear. It also makes it convenient to add other types of ACLs, like richacls, in the future. Ideally, the access mode 'client' should be just like V9FS_ACCESS_USER except that it underscores the location of the access check. The traditional 9P protocol lets the server perform access checks, but with this mode all the access checks will be performed on the client itself; the server just follows the client's directive. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
-
Venkateswararao Jujjuri (JV) authored
If the kernel is not compiled with CONFIG_9P_FS_POSIX_ACL and the mount option is specified to enable ACLs, the current code fails the mount. This patch brings the behavior in line with other filesystems like ext3 by proceeding with the mount and logging a warning to syslog. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
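The intended behaviour, sketched with the usual kernel option-parsing pattern; the option token, the flag name and the message wording here are assumptions, not necessarily the exact code:

    /* inside the mount-option switch in v9fs_parse_options() */
    case Opt_posixacl:
    #ifdef CONFIG_9P_FS_POSIX_ACL
            v9ses->flags |= V9FS_POSIX_ACL;
    #else
            /* accept the option, warn, and keep going instead of failing */
            pr_warn("9p: CONFIG_9P_FS_POSIX_ACL is not enabled, ignoring posixacl option\n");
    #endif
            break;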
-
Venkateswararao Jujjuri (JV) authored
With create/mkdir/mknod in non-cached mode we initialize the inode using v9fs_get_inode. v9fs_get_inode doesn't initialize the inode's cached ACL values to NULL. This causes us to trip the BUG_ON in v9fs_get_cached_acl. The fix is to initialize the ACLs to NULL and not leave them in the ACL_NOT_CACHED state. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
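A sketch of the shape of the fix; the helper name is hypothetical, and set_cached_acl() is simply the generic VFS way to record "no ACL" instead of leaving ACL_NOT_CACHED:

    #include <linux/fs.h>
    #include <linux/posix_acl.h>

    /* hypothetical helper: cache "no ACL" explicitly on a fresh inode */
    static void v9fs_init_acl_cache(struct inode *inode)
    {
        set_cached_acl(inode, ACL_TYPE_ACCESS, NULL);
        set_cached_acl(inode, ACL_TYPE_DEFAULT, NULL);
    }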
-
Venkateswararao Jujjuri (JV) authored
In v9fs_get_acl(), if __v9fs_get_acl() gets only one of the dacl/pacl pair, we are not releasing it. Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com> Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
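The error path in question, in outline (variable names follow typical fs/9p/acl.c usage and are illustrative); posix_acl_release() tolerates NULL, so releasing whichever one was obtained is safe:

    if (!IS_ERR(dacl) && !IS_ERR(pacl)) {
        set_cached_acl(inode, ACL_TYPE_DEFAULT, dacl);
        set_cached_acl(inode, ACL_TYPE_ACCESS, pacl);
        posix_acl_release(dacl);
        posix_acl_release(pacl);
    } else {
        retval = -EIO;
        /* don't leak whichever of the two we did manage to get */
        if (!IS_ERR(dacl))
            posix_acl_release(dacl);
        if (!IS_ERR(pacl))
            posix_acl_release(pacl);
    }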
-
Linus Torvalds authored
-
- 14 Mar, 2011 15 commits
-
-
Linus Torvalds authored
Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-2.6-mn10300 * 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-2.6-mn10300: MN10300: atomic_read() should ensure it emits a load MN10300: The SMP_ICACHE_INV_FLUSH_RANGE IPI command does not exist MN10300: Proper use of macros get_user() in the case of incremented pointers
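For the atomic_read() item, the usual fix is to force the access through a volatile read so the compiler cannot reuse a previously loaded value; one common form is shown below (the exact mn10300 change may differ in detail):

    #include <linux/compiler.h>

    /* force a real load of v->counter on every call */
    #define atomic_read(v)  (ACCESS_ONCE((v)->counter))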
-
Linus Torvalds authored
Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/upstream-linus * 'upstream' of git://git.linux-mips.org/pub/scm/upstream-linus: (26 commits) MIPS: Alchemy: Fix reset for MTX-1 and XXS1500 MIPS: MTX-1: Make au1000_eth probe all PHY addresses MIPS: Jz4740: Add HAVE_CLK MIPS: Move idle task creation to work queue MIPS, Perf-events: Use unsigned delta for right shift in event update MIPS, Perf-events: Work with the new callchain interface MIPS, Perf-events: Fix event check in validate_event() MIPS, Perf-events: Work with the new PMU interface MIPS, Perf-events: Work with irq_work MIPS: Fix always CONFIG_LOONGSON_UART_BASE=y MIPS: Loongson: Fix potentially wrong string handling MIPS: Fix GCC-4.6 'set but not used' warning in arch/mips/mm/init.c MIPS: Fix GCC-4.6 'set but not used' warning in ieee754int.h MIPS: Remove unused code from arch/mips/kernel/syscall.c MIPS: Fix GCC-4.6 'set but not used' warning in signal*.c MIPS: MSP: Fix MSP71xx bpci interrupt handler return value MIPS: Select R4K timer lib for all MSP platforms MIPS: Loongson: Remove ad-hoc cmdline default MIPS: Clear the correct flag in sysmips(MIPS_FIXADE, ...). MIPS: Add an unreachable return statement to satisfy buggy GCCs. ...
-
Linus Torvalds authored
Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86: ce4100: Set pci ops via callback instead of module init x86/mm: Fix pgd_lock deadlock x86/mm: Handle mm_fault_error() in kernel space x86: Don't check for BIOS corruption in first 64K when there's no need to
-
Linus Torvalds authored
This reverts the parent commit. I hate doing that, but it's generating some discussion ("half of it is right"), and since I am planning on doing the 2.6.38 release later today we can punt it to stable if required. Let's not rock the boat right now. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Oleg Nesterov authored
oom_kill_process() starts with victim_points == 0. This means that (most likely) any child has more points and can be killed erroneously. Also, "children has a different mm" doesn't match the reality; we should check child->mm != t->mm. This check is not exactly correct if t->mm == NULL, but this doesn't really matter; oom_kill_task() will kill them anyway. Note: "Kill all processes sharing p->mm" in oom_kill_task() is wrong too. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
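In outline, the corrected child scan looks like the sketch below (simplified from mm/oom_kill.c of that era; p, t, mem, nodemask and totalpages are the function's existing parameters and locals, and the surrounding locking and thread iteration are omitted):

    unsigned int victim_points = oom_badness(p, mem, nodemask, totalpages);
    struct task_struct *victim = p;
    struct task_struct *child;

    list_for_each_entry(child, &t->children, sibling) {
        unsigned int child_points;

        /* a child sharing the victim's mm cannot be a substitute victim */
        if (child->mm == t->mm)
            continue;

        child_points = oom_badness(child, mem, nodemask, totalpages);
        if (child_points > victim_points) {
            victim = child;
            victim_points = child_points;
        }
    }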
-
Florian Fainelli authored
Since commit 32fd6901 (MIPS: Alchemy: get rid of common/reset.c) Alchemy-based boards use their own reset function. For MTX-1 and XXS1500, the reset function pokes at the BCSR.SYSTEM_RESET register, but this does not work. According to Bruno Randolf, this was not tested when written. Previously, the generic au1000_restart() routine called the board specific reset function, which for MTX-1 and XXS1500 did not work, but finally made a jump to the reset vector, which really triggers a system restart. Fix reboot for both targets by jumping to the reset vector. Signed-off-by: Florian Fainelli <florian@openwrt.org> To: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/2093/ Acked-by: Bruno Randolf <br1@einfach.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Florian Fainelli authored
When au1000_eth probes the MII bus for PHY address, if we do not set au1000_eth platform data's phy_search_highest_address, the MII probing logic will exit early and will assume a valid PHY is found at address 0. For MTX-1, the PHY is at address 31, and without this patch, the link detection/speed/duplex would not work correctly. CC: stable@kernel.org Signed-off-by: Florian Fainelli <florian@openwrt.org> To: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/2111/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Maurus Cuelenaere authored
Jz4740 supports the clock framework but doesn't have HAVE_CLK defined, so define it! Signed-off-by: Maurus Cuelenaere <mcuelenaere@gmail.com> To: linux-mips@linux-mips.org To: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/2112/ Acked-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Maksim Rayskiy authored
To avoid forking a usermode thread when creating an idle task, move fork_idle to a work queue. If the kernel starts with a maxcpus= option which does not bring all available cpus online at boot time, idle tasks for offline cpus are not created. If later offline cpus are hotplugged through sysfs, __cpu_up is called in the context of the user task, and fork_idle copies its non-zero mm pointer. This causes a BUG() in per_cpu_trap_init. This also avoids issues with resource limits of the CPU writing to sysfs, containers, maybe others. Signed-off-by: Maksim Rayskiy <mrayskiy@broadcom.com> To: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/2070/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Deng-Cheng Zhu authored
Leverage the commit for ARM by Will Deacon: - 446a5a8b ARM: 6205/1: perf: ensure counter delta is treated as unsigned Hardware performance counters on ARM are 32-bits wide but atomic64_t variables are used to represent counter data in the hw_perf_event structure. The armpmu_event_update function right-shifts a signed 64-bit delta variable and adds the result to the event count. This can lead to shifting in sign-bits if the MSB of the 32-bit counter value is set. This results in perf output such as: Performance counter stats for 'sleep 20': 18446744073460670464 cycles <-- 0xFFFFFFFFF12A6000 7783773 instructions # 0.000 IPC 465 context-switches 161 page-faults 1172393 branches 20.154242147 seconds time elapsed This patch ensures that the delta value is treated as unsigned so that the right shift sets the upper bits to zero. Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: David Daney <ddaney@caviumnetworks.com> Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> To: a.p.zijlstra@chello.nl To: fweisbec@gmail.com To: will.deacon@arm.com Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: wuzhangjin@gmail.com Cc: paulus@samba.org Cc: mingo@elte.hu Cc: acme@redhat.com Cc: matt@console-pimps.org Cc: sshtylyov@mvista.com Patchwork: http://patchwork.linux-mips.org/patch/2015/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
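The essential change, as in the referenced ARM commit: the 32-bit counter value is shifted up into the top half of a 64-bit word and back down, and keeping the delta unsigned makes the right shift logical (sketch, as a fragment of the event-update path):

    u64 prev_raw_count, new_raw_count;
    u64 delta;
    int shift = 64 - 32;   /* hardware counters are 32 bits wide */

    /* ... prev_raw_count/new_raw_count read from the hardware counter ... */

    delta = (new_raw_count << shift) - (prev_raw_count << shift);
    delta >>= shift;       /* zero-extends: no sign bits shifted in */

    local64_add(delta, &event->count);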
-
Deng-Cheng Zhu authored
This is the MIPS part of the following commits by Frederic Weisbecker: - f72c1a93 perf: Factorize callchain context handling Store the kernel and user contexts from the generic layer instead of archs, this gathers some repetitive code. - 56962b44 perf: Generalize some arch callchain code - Most archs use one callchain buffer per cpu, except x86 that needs to deal with NMIs. Provide a default perf_callchain_buffer() implementation that x86 overrides. - Centralize all the kernel/user regs handling and invoke new arch handlers from there: perf_callchain_user() / perf_callchain_kernel() That avoid all the user_mode(), current->mm checks and so... - Invert some parameters in perf_callchain_*() helpers: entry to the left, regs to the right, following the traditional (dst, src). - 70791ce9 perf: Generalize callchain_store() callchain_store() is the same on every archs, inline it in perf_event.h and rename it to perf_callchain_store() to avoid any collision. This removes repetitive code. - c1a65932 perf: Drop unappropriate tests on arch callchains Drop the TASK_RUNNING test on user tasks for callchains as this check doesn't seem to make any sense. Also remove the tests for !current that is not supposed to happen and current->pid as this should be handled at the generic level, with exclude_idle attribute. Reported-by: Wu Zhangjin <wuzhangjin@gmail.com> Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: David Daney <ddaney@caviumnetworks.com> Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> To: a.p.zijlstra@chello.nl To: will.deacon@arm.com Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: paulus@samba.org Cc: mingo@elte.hu Cc: acme@redhat.com Cc: dengcheng.zhu@gmail.com Cc: matt@console-pimps.org Cc: sshtylyov@mvista.com Patchwork: http://patchwork.linux-mips.org/patch/2014/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
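For reference, the generic helper that replaces the per-arch callchain_store() is essentially this inline in include/linux/perf_event.h:

    static inline void
    perf_callchain_store(struct perf_callchain_entry *entry, u64 ip)
    {
        if (entry->nr < PERF_MAX_STACK_DEPTH)
            entry->ip[entry->nr++] = ip;
    }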
-
Deng-Cheng Zhu authored
Ignore events that are in off/error state or belong to a different PMU. This patch originates from the following commit for ARM by Will Deacon: - 65b4711f ARM: 6352/1: perf: fix event validation The validate_event function in the ARM perf events backend has the following problems: 1.) Events that are disabled count towards the cost. 2.) Events associated with other PMUs [for example, software events or breakpoints] do not count towards the cost, but do fail validation, causing the group to fail. This patch changes validate_event so that it ignores events in the PERF_EVENT_STATE_OFF state or that are scheduled for other PMUs. Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: David Daney <ddaney@caviumnetworks.com> Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> To: a.p.zijlstra@chello.nl To: fweisbec@gmail.com To: will.deacon@arm.com Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: wuzhangjin@gmail.com Cc: paulus@samba.org Cc: mingo@elte.hu Cc: acme@redhat.com Cc: dengcheng.zhu@gmail.com Cc: matt@console-pimps.org Cc: sshtylyov@mvista.com Cc: ddaney@caviumnetworks.com Patchwork: http://patchwork.linux-mips.org/patch/2013/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
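The corrected check, sketched after the ARM version; the MIPS identifiers used here (pmu, mipspmu, alloc_counter) are approximations of the backend's names, not guaranteed to match exactly:

    static int validate_event(struct cpu_hw_events *cpuc,
                              struct perf_event *event)
    {
        struct hw_perf_event fake_hwc = event->hw;

        /* off-state events and events of other PMUs pass validation */
        if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF)
            return 1;

        return mipspmu->alloc_counter(cpuc, &fake_hwc) >= 0;
    }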
-
Deng-Cheng Zhu authored
This is the MIPS part of the following commits by Peter Zijlstra: - a4eaf7f1 perf: Rework the PMU methods Replace pmu::{enable,disable,start,stop,unthrottle} with pmu::{add,del,start,stop}, all of which take a flags argument. The new interface extends the capability to stop a counter while keeping it scheduled on the PMU. We replace the throttled state with the generic stopped state. This also allows us to efficiently stop/start counters over certain code paths (like IRQ handlers). It also allows scheduling a counter without it starting, allowing for a generic frozen state (useful for rotating stopped counters). The stopped state is implemented in two different ways, depending on how the architecture implemented the throttled state: 1) We disable the counter: a) the pmu has per-counter enable bits, we flip that b) we program a NOP event, preserving the counter state 2) We store the counter state and ignore all read/overflow events For MIPSXX, the stopped state is implemented in the way of 1.b as above. - 33696fc0 perf: Per PMU disable Changes perf_disable() into perf_pmu_disable(). - 24cd7f54 perf: Reduce perf_disable() usage Since the current perf_disable() usage is only an optimization, remove it for now. This eases the removal of the __weak hw_perf_enable() interface. - b0a873eb perf: Register PMU implementations Simple registration interface for struct pmu, this provides the infrastructure for removing all the weak functions. - 51b0fe39 perf: Deconstify struct pmu sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"` Reported-by: Wu Zhangjin <wuzhangjin@gmail.com> Acked-by: David Daney <ddaney@caviumnetworks.com> Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> To: a.p.zijlstra@chello.nl To: fweisbec@gmail.com To: will.deacon@arm.com Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: wuzhangjin@gmail.com Cc: paulus@samba.org Cc: mingo@elte.hu Cc: acme@redhat.com Cc: dengcheng.zhu@gmail.com Cc: matt@console-pimps.org Cc: sshtylyov@mvista.com Cc: ddaney@caviumnetworks.com Patchwork: http://patchwork.linux-mips.org/patch/2012/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
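An abridged sketch of the reworked struct pmu the MIPS backend now has to implement (many fields of the real include/linux/perf_event.h definition are omitted; the flags argument carries PERF_EF_* bits that tell add/stop whether to also start or update the counter):

    struct pmu {
        /* whole-PMU disable/enable around state changes */
        void (*pmu_enable)(struct pmu *pmu);
        void (*pmu_disable)(struct pmu *pmu);

        int  (*event_init)(struct perf_event *event);

        /* replaces enable/disable/start/stop/unthrottle */
        int  (*add)(struct perf_event *event, int flags);    /* schedule on the PMU */
        void (*del)(struct perf_event *event, int flags);    /* unschedule */
        void (*start)(struct perf_event *event, int flags);  /* start counting */
        void (*stop)(struct perf_event *event, int flags);   /* stop, stay scheduled */
        void (*read)(struct perf_event *event);
    };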
-
Deng-Cheng Zhu authored
This is the MIPS part of the following commit by Peter Zijlstra: - e360adbe irq_work: Add generic hardirq context callbacks Provide a mechanism that allows running code in IRQ context. It is most useful for NMI code that needs to interact with the rest of the system -- like wakeup a task to drain buffers. Perf currently has such a mechanism, so extract that and provide it as a generic feature, independent of perf so that others may also benefit. The IRQ context callback is generated through self-IPIs where possible, or on architectures like powerpc the decrementer (the built-in timer facility) is set to generate an interrupt immediately. Architectures that don't have anything like this get to do with a callback from the timer tick. These architectures can call irq_work_run() at the tail of any IRQ handlers that might enqueue such work (like the perf IRQ handler) to avoid undue latencies in processing the work. For MIPSXX, we need to call irq_work_run() at the tail of the perf IRQ handler as described above. Reported-by: Wu Zhangjin <wuzhangjin@gmail.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: David Daney <ddaney@caviumnetworks.com> Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com> To: fweisbec@gmail.com To: will.deacon@arm.com Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: paulus@samba.org Cc: mingo@elte.hu Cc: acme@redhat.com Cc: matt@console-pimps.org Cc: sshtylyov@mvista.com Patchwork: http://patchwork.linux-mips.org/patch/2011/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
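What the MIPS change amounts to, in outline (illustrative handler skeleton, not the exact code):

    #include <linux/interrupt.h>
    #include <linux/irq_work.h>

    static irqreturn_t pmu_handle_irq(int irq, void *dev)
    {
        irqreturn_t handled = IRQ_NONE;

        /* ... read overflow status, push samples to perf, set handled ... */

        /*
         * No self-IPI here, so run the deferred irq_work callbacks
         * (e.g. perf wakeups) at the tail of the handler.
         */
        if (handled == IRQ_HANDLED)
            irq_work_run();

        return handled;
    }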
-
Yoichi Yuasa authored
Signed-off-by: Yoichi Yuasa <yuasa@linux-mips.org> Cc: linux-mips <linux-mips@linux-mips.org> Patchwork: https://patchwork.linux-mips.org/patch/2055/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-