- 30 Oct, 2002 37 commits
-
-
Andrew Morton authored
remove inline from the cache poison checks: the functions are not performance-critical.
-
Andrew Morton authored
- enable the cpu array for all caches
- remove the optimized implementations for quick list access: with cpu arrays in all caches, list access is now rare
- make the cpu arrays mandatory; this removes 50% of the conditional branches from the hot path of kmem_cache_alloc [1]
- poisoning for objects with constructors
The patch got a bit longer... One thing I forgot to mention: head arrays mean that some pages can be blocked by objects sitting in the head arrays and not returned to page_alloc.c. The current kernel never flushes the head arrays, which might worsen the behaviour of low-memory systems. The hunk that flushes the arrays regularly comes next.
Detailed changelog [to be read side by side with the patch]:
* docu update
* "growing" is not really needed: races between grow and shrink are handled by retrying. [Additionally, the current kernel never shrinks.]
* move the batchcount into the cpu array: the old code contained a race during cpu cache tuning (update batchcount [in cachep] before or after the IPI?), and NUMA will need it anyway
* bootstrap support: the cpu arrays are really mandatory, nothing works without them. Thus a statically allocated cpu array is needed for starting the allocators.
* move the full, partial & free lists into a separate structure, as a preparation for NUMA
* structure reorganization: now the cpu arrays are the most important part, not the lists
* dead code elimination: remove "failures", which is nowhere read
* dead code elimination: remove "OPTIMIZE": not implemented. The idea is to skip the virt_to_page lookup for caches with on-slab slab structures and use (ptr & PAGE_MASK) instead. The details are in Bonwick's paper. Not fully implemented.
* remove GROWN: the kernel never shrinks a cache, thus GROWN is meaningless
* bootstrap: starting the slab allocator is now a 3-stage process:
  - nothing works, use the statically allocated cpu arrays
  - the smallest kmalloc allocator works, use it to allocate cpu arrays
  - all kmalloc allocators work, use the default cpu array size
* register a cpu notifier callback and allocate the needed head arrays if a new cpu arrives
* always enable head arrays, even for DEBUG builds. Poisoning and red-zoning now happen before an object is added to the arrays. Fold enable_all_cpucaches into cpucache_init; there is no need for a separate function.
* modifications to the debug checks due to the earlier calls of the dtor for caches with poisoning enabled
* poison+ctor is now supported
* squeezing 3 objects into a cacheline is hopeless; the FIXME is not solvable and can be removed
* add additional debug tests: check_irq_off(), check_irq_on(), check_spinlock_acquired()
* move do_ccupdate_local nearer to do_tune_cpucache. Should have been part of -04-drain.
* additional object checks. Red-zoning is tricky: it's implemented by increasing the object size by 2*BYTES_PER_WORD, so BYTES_PER_WORD must be added to objp before calling the destructor or constructor, and before returning the object from alloc. The poison functions add BYTES_PER_WORD internally.
* create a flagcheck function; right now the tests are duplicated in cache_grow [always] and alloc_debugcheck_before [DEBUG only]
* modify slab list updates: all allocs are now bulk allocs that try to get multiple objects at once; update the list pointers only at the end of a bulk alloc, not once per alloc
* might_sleep was moved into kmem_flagcheck
* major hotpath change:
  - cc always exists, no fallback
  - cache_alloc_refill is called with interrupts disabled and does everything needed to recover from an empty cpu array
  The result is a far shorter & simpler __cache_alloc [inlined in both kmalloc and kmem_cache_alloc].
* __free_block, free_block, cache_flusharray: the main implementation of returning objects to the lists. No big changes; the diff lost track.
* new debug check: too-early kmalloc or kmem_cache_alloc
* slightly reduce the sizes of the cpu arrays: keep the size (including batchcount, avail and now limit) just below a power of 2, for optimal kmalloc memory efficiency
That's it. I even found 2 bugs while reading: dtors and ctors for verify were called with the wrong parameters when RED_ZONE was enabled, and some checks still assumed that POISON and ctor are incompatible.
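To make the new hot path concrete, here is a minimal userspace-style sketch of the scheme described above: a per-cpu LIFO array with a batched refill from the shared lists. The structure layout and helper bodies are simplified stand-ins for illustration, not the kernel's actual slab code.

#include <stdlib.h>

struct array_cache {
	unsigned int avail;		/* objects currently in entries[] */
	unsigned int limit;		/* capacity actually used (<= 16 here) */
	unsigned int batchcount;	/* how many objects one refill grabs */
	void *entries[16];		/* LIFO stack of free objects */
};

/* Slow path: pull a batch of objects from the shared slab lists (stubbed). */
static int cache_alloc_refill(struct array_cache *ac)
{
	unsigned int i;

	for (i = 0; i < ac->batchcount && ac->avail < ac->limit; i++)
		ac->entries[ac->avail++] = malloc(128);	/* stand-in for a slab object */
	return ac->avail != 0;
}

/* Hot path: no "does the cpu array exist?" branch -- it always exists. */
static void *cache_alloc(struct array_cache *ac)
{
	if (ac->avail == 0 && !cache_alloc_refill(ac))
		return NULL;
	return ac->entries[--ac->avail];	/* LIFO pop: likely cache-warm */
}

/* Freeing pushes back onto the same array, giving LIFO reuse. */
static void cache_free(struct array_cache *ac, void *obj)
{
	if (ac->avail < ac->limit)
		ac->entries[ac->avail++] = obj;
	else
		free(obj);	/* stand-in for flushing a batch back to the lists */
}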
-
Andrew Morton authored
From Manfred Spraul: remove the space from the names of the DMA caches. The space makes it impossible to tune those caches through /proc/slabinfo and makes parsing /proc/slabinfo difficult.
-
Andrew Morton authored
In 2.5, local_irq_disable() provides protection against smp_call_function() on all architectures. (Or it will, not sure. But davem says this is OK). So a spin_lock() within the smp_call_function() callback is now permitted, and we can remove/clean up the workaround.
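As an illustration of what is now allowed, a kernel-style sketch (not taken from the patch): a spin_lock() taken inside an smp_call_function() callback. The lock and function names are illustrative, and the four-argument 2.5-era smp_call_function() prototype is assumed.

#include <linux/spinlock.h>
#include <linux/smp.h>

static spinlock_t example_lock = SPIN_LOCK_UNLOCKED;

/* Runs in IPI context on the other CPUs. */
static void example_ipi_callback(void *info)
{
	/* Previously this could deadlock against a caller holding the lock
	 * with interrupts disabled; with local_irq_disable() now excluding
	 * smp_call_function() on all architectures, it is permitted. */
	spin_lock(&example_lock);
	/* ... touch shared state ... */
	spin_unlock(&example_lock);
}

static void example_broadcast(void)
{
	/* Ask every other CPU to run the callback and wait for completion. */
	smp_call_function(example_ipi_callback, NULL, 1, 1);
}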
-
Andrew Morton authored
From Manfred Spraul: if an object is freed from a slab, move the slab to the tail of the partial list. This should increase the probability that the other objects from the same page are freed too, and that the page can be returned to gfp later. In other words: if we just freed an object from this page, make this page the *last* page eligible for new allocations, under the assumption that other objects in that same page are about to be freed up as well. The cpu arrays are now always in front of the lists, i.e. cache hit rates should not matter.
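A hedged, kernel-style sketch of the policy; the structure and field names are illustrative stand-ins for the slab internals, not the actual mm/slab.c definitions.

#include <linux/list.h>

/* Hypothetical stand-ins for the slab-internal structures. */
struct example_slab {
	struct list_head list;		/* links the slab into a cache list */
	unsigned int inuse;		/* objects still allocated from it */
};

struct example_cache_lists {
	struct list_head slabs_partial;
	struct list_head slabs_free;
};

/* Called after an object has been returned to @slabp. */
static void example_fixup_slab_list(struct example_cache_lists *lists,
				    struct example_slab *slabp)
{
	list_del(&slabp->list);
	if (slabp->inuse == 0) {
		/* Fully free: eligible to go back to the page allocator. */
		list_add(&slabp->list, &lists->slabs_free);
	} else {
		/* Tail, not head: this page just saw a free, so its remaining
		 * objects are likely to be freed soon as well.  Keeping it at
		 * the back of the partial list makes it the last choice for
		 * new allocations and improves the odds that the whole page
		 * drains and can be handed back to gfp. */
		list_add_tail(&slabp->list, &lists->slabs_partial);
	}
}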
-
Andrew Morton authored
From Manfred Spraul: always enable the cpu arrays, even on uniprocessor. They provide LIFO ordering, which should improve cache hit rates. And the array allocator is slightly faster than the list operations.
-
Andrew Morton authored
From Manfred Spraul: remove kmem_ from all static functions that are only used in slab.c. The exception is kmem_cache_slabmgmt, which I've renamed to alloc_slabmgmt().
-
Andrew Morton authored
add_timer_on is like add_timer, except it takes a target CPU on which to add the timer. The slab code needs per-cpu timers for shrinking the per-cpu caches.
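A kernel-style usage sketch, assuming the interface is void add_timer_on(struct timer_list *timer, int cpu) as in later kernels; the timer array and handler names are illustrative.

#include <linux/sched.h>
#include <linux/timer.h>
#include <linux/smp.h>

static struct timer_list reap_timers[NR_CPUS];	/* one timer per cpu */

static void reap_handler(unsigned long cpu)
{
	/* Runs on the cpu the timer was added on, so it may safely shrink
	 * that cpu's private cache without cross-cpu locking. */

	/* Re-arm on the same cpu. */
	reap_timers[cpu].expires = jiffies + HZ;
	add_timer_on(&reap_timers[cpu], cpu);
}

static void start_reap_timer(int cpu)
{
	struct timer_list *t = &reap_timers[cpu];

	init_timer(t);
	t->function = reap_handler;
	t->data = cpu;
	t->expires = jiffies + HZ;
	add_timer_on(t, cpu);	/* like add_timer(), but pinned to @cpu */
}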
-
Andrew Morton authored
Patch from Dipankar Sarma <dipankar@in.ibm.com>: This is Manfred's patch, which provides a CPU_UP_PREPARE cpu notifier to allow initialization of per-cpu data just before the cpu becomes fully functional. It also provides a facility for the CPU_UP_PREPARE handler to return NOTIFY_BAD to signify that the CPU is not permitted to come up. If that happens, a CPU_UP_CANCELLED message is passed to all the handlers. The patch also fixes a bogus NOTIFY_BAD return from the softirq setup code. The patch has been acked by Rusty. We need this mechanism in slab for starting per-cpu timers and for allocating the per-cpu slab head arrays *before* the CPU has come up and started using slab.
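A kernel-style sketch of how such a notifier might look; the callback and the per-cpu allocation stub are hypothetical, and the 2.5-era notifier prototypes are assumed.

#include <linux/notifier.h>
#include <linux/cpu.h>

/* Stand-in for whatever per-cpu setup the subsystem needs. */
static int example_alloc_percpu_data(long cpu)
{
	return 0;
}

static int example_cpu_callback(struct notifier_block *nb,
				unsigned long action, void *hcpu)
{
	long cpu = (long)hcpu;

	switch (action) {
	case CPU_UP_PREPARE:
		/* Allocate per-cpu data (head arrays, timers, ...) before the
		 * cpu starts running.  Returning NOTIFY_BAD vetoes the
		 * bring-up, and the cancellation message is then sent to all
		 * registered handlers. */
		if (example_alloc_percpu_data(cpu) < 0)
			return NOTIFY_BAD;
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block example_cpu_nb = {
	.notifier_call = example_cpu_callback,
};

/* In init code: register_cpu_notifier(&example_cpu_nb); */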
-
Matthew Wilcox authored
- Remove obsolete documentation - Update arch/parisc/lib - Remove arch/parisc/tools, we use asm-offsets.c these days - Update arch/parisc/Makefile, defconfig & vmlinux.lds.S
-
Matthew Wilcox authored
Add support for the parisc64 architecture.
-
Matthew Wilcox authored
Performance monitor support for PA8000+ processors.
-
Matthew Wilcox authored
Update arch/parisc/kernel.
-
Matthew Wilcox authored
Update arch/parisc/mm
-
Matthew Wilcox authored
Update include/asm-parisc
-
Matthew Wilcox authored
Add support for unimplemented FP ops on PA processors.
-
Stelian Pop authored
This patch adds some new events to the sonypi driver (Fn key pressed alone, jogdial turned fast or very fast) and cleans up the code a little. Thanks to Christian Gennerat for this contribution.
-
Linus Torvalds authored
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Davide Libenzi authored
Latest version of the epoll interfaces.
-
Linus Torvalds authored
bk://ldm.bkbits.net/linux-2.5-kobject
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Patrick Mochel authored
into osdl.org:/home/mochel/src/kernel/devel/linux-2.5-kobject
-
Patrick Mochel authored
struct subsystem may now contain a pointer to a NULL-terminated array of default attributes to be exported when an object is registered with the subsystem. kobject registration will check the return values of the directory creation and the creation of each file, and handle failures appropriately. The documentation has also been updated.
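A hedged sketch of the idea: a NULL-terminated array of attributes that a subsystem could export for every registered object. The attribute names and the default_attrs hookup shown in the comment are assumptions for illustration, not the exact kernel API.

#include <linux/sysfs.h>

static struct attribute example_name_attr = {
	.name = "name",
	.mode = 0444,
};

static struct attribute example_power_attr = {
	.name = "power",
	.mode = 0644,
};

static struct attribute *example_default_attrs[] = {
	&example_name_attr,
	&example_power_attr,
	NULL,	/* terminator: registration stops walking here */
};

/* Hooked up roughly as: example_subsys.default_attrs = example_default_attrs;
 * kobject_register() then creates one sysfs file per attribute for each
 * object and checks each creation's return value. */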
-
Patrick Mochel authored
Previously, sysfs read() and write() calls looked for sysfs_ops in the struct sysfs_dir in the kobject. Since an object belongs to a subsystem and is a member of a group of like devices, the sysfs_ops have been moved to struct subsystem and are referenced from there. The only remaining member of struct sysfs_dir is the dentry of the object's directory. That is moved out of the dir struct and directly into struct kobject, which saves us 4 bytes/object. All of the sysfs functions that referenced the struct have been changed to just reference the dentry.
-
Linus Torvalds authored
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Roman Zippel authored
This fixes "choice" behaviour - it sets the correct default and fixes oldconfig.
-
Linus Torvalds authored
http://mdomsch.bkbits.net/linux-2.5-edd-tolinus
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Matt Domsch authored
into dell.com:/home/mdomsch/bk/linux-2.5-edd-tolinus
-
Patrick Mochel authored
A struct subsystem is basically a collection of objects of a certain type, and some callbacks to operate on objects of that type. Subsystems contain embedded kobjects themselves, and have a similar set of library routines to kobjects, which are mostly just wrappers around the corresponding kobject routines. kobjects are inserted in depth-first order into their subsystem's list of objects. Orphan kobjects are also given foster parents that point to their subsystem. This provides a bit more rigidity in the hierarchy and disallows any orphan kobjects. When an object is unregistered, it is removed from its subsystem's list. When an object's refcount hits 0, the subsystem's ->release() callback is called. Documentation describing the objects and the interfaces has also been added.
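The lifetime rule is the classic refcount-plus-release pattern; here is a minimal, self-contained userspace sketch of it, with all names invented for illustration rather than taken from the kobject code.

#include <stdio.h>

struct example_object;

struct example_subsystem {
	void (*release)(struct example_object *obj);
};

struct example_object {
	int refcount;
	struct example_subsystem *subsys;	/* also the foster parent for orphans */
};

static void example_get(struct example_object *obj)
{
	obj->refcount++;
}

static void example_put(struct example_object *obj)
{
	if (--obj->refcount == 0)
		obj->subsys->release(obj);	/* the subsystem knows how to free its type */
}

static void example_release(struct example_object *obj)
{
	printf("last reference dropped, object can be freed\n");
}

int main(void)
{
	struct example_subsystem subsys = { .release = example_release };
	struct example_object obj = { .refcount = 1, .subsys = &subsys };

	example_get(&obj);	/* a second user takes a reference */
	example_put(&obj);	/* first put: the object stays alive */
	example_put(&obj);	/* last put: subsys.release() runs */
	return 0;
}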
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/newconfig
-
Linus Torvalds authored
bk://linuxusb.bkbits.net/linus-2.5
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Greg Kroah-Hartman authored
-
Roman Zippel authored
Add new configs to match changes done lately.
-
Linus Torvalds authored
http://linux-isdn.bkbits.net/linux-2.5.make
into home.transmeta.com:/home/torvalds/v2.5/kconfig
-
Kai Germaschewski authored
If we are to build menuconfig/xconfig, we may not have a .config yet, so we shouldn't try to include it. Set MODVERDIR before including the subdir Makefile; drivers/scsi/53c700 needs it.
-
Alexander Viro authored
Got it. Breakage happened when Jens was switching to partial completions - !uptodate is not quite the same as !err ;-) With this fixed everything seems to work nicely.
-
Linus Torvalds authored
http://linux-isdn.bkbits.net/linux-2.5.isdn
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Greg Kroah-Hartman authored
into kroah.com:/home/linux/linux/BK/gregkh-2.5
-
- 29 Oct, 2002 3 commits
-
-
Patrick Mochel authored
It's now: int sysfs_create_link(struct kobject * kobj, struct kobject * target, char * name). So the caller doesn't have to determine the path of the target, nor the depth of the object we're creating the symlink for; it's all taken care of.
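A small usage sketch built on the signature quoted above; the kobjects and the "driver" link name are hypothetical.

#include <linux/sysfs.h>
#include <linux/kobject.h>

static int example_link_device_to_driver(struct kobject *dev_kobj,
					 struct kobject *drv_kobj)
{
	/* Creates a symlink named "driver" in dev_kobj's sysfs directory
	 * pointing at drv_kobj's directory; the relative path and the
	 * directory depths are worked out internally. */
	return sysfs_create_link(dev_kobj, drv_kobj, "driver");
}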
-
David Brownell authored
This mentions the web page with information about how to use the 'usbtest' driver.
-
David Brownell authored
This is a version of a patch I sent out last Friday to help address some of the "bad entry" errors that some folk were seeing, seemingly only with control requests. The fix is just to not try being clever: remove one TD at a time and patch the ED as if that TD had completed normally, then do the next ... don't try to patch just once in this fault case. (And it nukes some debug info I accidentally submitted.) I've gotten preliminary feedback that this helps.
-