- 13 Mar, 2004 17 commits
-
-
Jaroslav Kysela authored
EMU10K1/EMU10K2 driver: disabled the Dell OEM Emu10k1x entry in the PCI ID list; the board isn't compatible with the normal emu10k1.
-
Jaroslav Kysela authored
au88x0 driver: removed EXPORT_NO_SYMBOLS.
-
Jaroslav Kysela authored
PPC Tumbler driver: fixed the info callback of the mixer input source (enum type).
-
Jaroslav Kysela authored
USB generic driver: added a fix and workaround for the mixer problem on the SB Extigy.
-
Jaroslav Kysela authored
Documentation: fixed the list of files to include.
-
Jaroslav Kysela authored
Documentation: updated the description of the buffer allocation routines to match the newly designed functions.
-
Jaroslav Kysela authored
PPC Tumbler driver: added an input source switch to select mic/line-in.
-
Jaroslav Kysela authored
Documentation, PCI drivers, au88x0 driver: added the au88x0 drivers for Aureal soundcards, by Manuel Jander <mjander@embedded.cl>.
-
Jaroslav Kysela authored
VIA82xx driver: the previous patch was applied wrongly; fixed the rate restriction of SPDIF output again.
-
Jaroslav Kysela authored
DT019x driver: fixed warnings.
-
Jaroslav Kysela authored
VIA82xx driver: restrict the PCM sample rates to 32, 44.1, and 48 kHz when the SPDIF switch is on.
-
Jaroslav Kysela authored
USB generic driver: prevent a twenty-second wait when unplugging a USB MIDI device with a port subscription.
-
Jaroslav Kysela authored
USB generic driver: show one decimal place of the momentary frequency in the proc file.
-
Jaroslav Kysela authored
USB generic driver: use MIN_PACKS_URB as the lower bound for the nrpacks parameter.
-
Jaroslav Kysela authored
ALSA sequencer, ALSA<-OSS sequencer: use a wrapper function for DELETE_PORT ioctl calls.
-
Jaroslav Kysela authored
ALSA sequencer: remove a superfluous call to snd_seq_event_port_detach.
-
Jaroslav Kysela authored
Merge into suse.cz:/home/perex/bk/linux-sound/linux-sound
-
- 12 Mar, 2004 23 commits
-
-
Linus Torvalds authored
Merge bk://gkernel.bkbits.net/libata-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux
-
Jeff Garzik authored
Merge into redhat.com:/spare/repo/libata-2.5
-
Linus Torvalds authored
Merge bk://kernel.bkbits.net/jgarzik/netconsole-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux
-
Jeff Garzik authored
Merge into redhat.com:/spare/repo/netconsole-2.5
-
Jeff Garzik authored
-
Linus Torvalds authored
Merge bk://gkernel.bkbits.net/prism54-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux
-
Jeff Garzik authored
-
Jeff Garzik authored
-
Linus Torvalds authored
Merge bk://linux-scsi.bkbits.net/scsi-for-linus-2.6 into ppc970.osdl.org:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
Merge http://lia64.bkbits.net/to-linus-2.5 into ppc970.osdl.org:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
Cset exclude: akpm@osdl.org|ChangeSet|20040312161945|47751
-
Andrew Morton authored
From: Manfred Spraul <manfred@colorfullife.com>

At present slab is using order-2 allocations for the size-2048 cache. Of course, this can affect networking quite seriously. The patch ensures that slab will never use more than an order-1 allocation for objects which have a size of less than 2*PAGE_SIZE.
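A minimal user-space sketch of the cap described above (the function name and surrounding scaffolding are illustrative assumptions, not the actual slab code):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* Whatever order the slab packing heuristic wanted, objects smaller
     * than 2*PAGE_SIZE are capped at an order-1 (two-page) allocation. */
    static unsigned int cap_slab_order(unsigned int wanted_order,
                                       unsigned long obj_size)
    {
        if (obj_size < 2 * PAGE_SIZE && wanted_order > 1)
            return 1;
        return wanted_order;
    }

    int main(void)
    {
        /* The size-2048 cache no longer gets an order-2 allocation. */
        printf("size-2048: order %u\n", cap_slab_order(2, 2048));
        return 0;
    }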
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au>

Add a little helper macro for a common list-extraction operation in vmscan.c.
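The helper is presumably along the lines of the sketch below; the macro name, the stand-in types, and the tail-of-list choice are assumptions based on the description:

    #include <stddef.h>

    /* Minimal stand-ins for the kernel types, so this compiles alone. */
    struct list_head { struct list_head *next, *prev; };
    struct page { struct list_head lru; };

    #define list_entry(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    /* Pull the struct page sitting at the tail of an LRU list. */
    #define lru_to_page(head) list_entry((head)->prev, struct page, lru)

    int main(void)
    {
        struct page p;
        struct list_head lru = { &p.lru, &p.lru };

        p.lru.next = p.lru.prev = &lru;
        return lru_to_page(&lru) == &p ? 0 : 1;
    }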
-
Andrew Morton authored
The current refill logic in refill_inactive_zone() takes an arbitrarily large number of pages and chops it down to SWAP_CLUSTER_MAX*4, regardless of the size of the zone. This has the effect of reducing the amount of refilling of large zones proportionately much more than of small zones. We made this change in May 2003 and I'm damned if I remember why. Let's put it back, so we don't truncate the refill count, and see what happens.
-
Andrew Morton authored
- prevent nr_scan_inactive from going negative
- compare `count' with SWAP_CLUSTER_MAX, not `max_scan'
- use ">= SWAP_CLUSTER_MAX", not "> SWAP_CLUSTER_MAX"
-
Andrew Morton authored
From: Nick Piggin <piggin@cyberone.com.au>

Use a "refill_counter" for inactive list scanning, similar to the one used for active list scanning. This batches up the scanning now that we precisely balance ratios and don't round up the amount to be done. No observed benefits, but I imagine it would lower the acquisition frequency of the lru locks in some cases and make code paths more efficient in general, due to cache niceness.
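A rough user-space illustration of the batching pattern described; the function and variable names are assumptions, not the patch's code:

    #include <stdio.h>

    #define SWAP_CLUSTER_MAX 32UL

    static unsigned long refill_counter;   /* accumulated scan debt */

    /* Small scan requests build up as debt; the scanner (and hence the
     * lru lock) is only exercised once a full batch is owed. */
    static void request_inactive_scan(unsigned long pages)
    {
        refill_counter += pages;
        if (refill_counter >= SWAP_CLUSTER_MAX) {
            printf("scanning %lu pages in one batch\n", refill_counter);
            refill_counter = 0;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 10; i++)
            request_inactive_scan(7);   /* ten small requests, two batches */
        return 0;
    }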
-
Andrew Morton authored
This is just a random, unsubstantiated tuning tweak: don't immediately throttle page allocators and kswapd when the going is getting heavier; scan a bit more of the LRU before throttling.
-
Andrew Morton authored
This removes a vestige of the old algorithm. We don't want to skip zones if all_zones_ok is true: we've already precalculated which zones need scanning and this just stops us from ever performing kswapd reclaim from the DMA zone.
-
Andrew Morton authored
As kswapd is now scanning zones in the highmem->normal->dma direction it can get into competition with the page allocator: kswapd keeps on trying to free pages from highmem, then moves on to lowmem. By the time kswapd has done proportional scanning in lowmem, someone has come in and allocated a few pages from highmem. So kswapd goes back and frees some highmem, then some lowmem again. But nobody has allocated any lowmem yet. So we keep on and on scanning lowmem in response to highmem page allocations.

With a simple `dd' on a 1G box we get:

 r  b swpd   free  buff  cache  si  so  bi    bo   in   cs us sy wa id
 0  3    0  59340  4628 922348   0   0   4 28188 1072  808  0 10 46 44
 0  3    0  29932  4660 951760   0   0   0 30752 1078  441  1  6 30 64
 0  3    0  57568  4556 924052   0   0   0 30748 1075  478  0  8 43 49
 0  3    0  29664  4584 952176   0   0   0 30752 1075  472  0  6 34 60
 0  3    0   5304  4620 976280   0   0   4 40484 1073  456  1  7 52 41
 0  3    0 104856  4508 877112   0   0   0 18452 1074   97  0  7 67 26
 0  3    0  70768  4540 911488   0   0   0 35876 1078  746  0  7 34 59
 1  2    0  42544  4568 939680   0   0   0 21524 1073  556  0  5 43 51
 0  3    0   5520  4608 976428   0   0   4 37924 1076  836  0  7 41 51
 0  2    0   4848  4632 976812   0   0  32 12308 1092   94  0  1 33 66

Simple fix: go back to scanning the zones in the dma->normal->highmem direction so we meet the page allocator in the middle somewhere.

 r  b swpd   free  buff  cache  si  so  bi    bo   in   cs us sy wa id
 1  3    0   5152  3468 976548   0   0   4 37924 1071  650  0  8 64 28
 1  2    0   4888  3496 976588   0   0   0 23576 1075  726  0  6 66 27
 0  3    0   5336  3532 976348   0   0   0 31264 1072  708  0  8 60 32
 0  3    0   6168  3560 975504   0   0   0 40992 1072  683  0  6 63 31
 0  3    0   4560  3580 976844   0   0   0 18448 1073  233  0  4 59 37
 0  3    0   5840  3624 975712   0   0   4 26660 1072  800  1  8 46 45
 0  3    0   4816  3648 976640   0   0   0 40992 1073  526  0  6 47 47
 0  3    0   5456  3672 976072   0   0   0 19984 1070  320  0  5 60 35
-
Andrew Morton authored
Currently kswapd walks across all zones in dma->normal->highmem order, performing proportional scanning until all zones are OK. This means that pressure against ZONE_NORMAL causes unnecessary reclaim of ZONE_HIGHMEM.

To fix that up we change kswapd so that it walks the zones in the high->normal->dma direction, skipping zones which are OK. Once it encounters a zone which needs some reclaim, kswapd will perform proportional scanning against that zone as well as all the succeeding lower zones. We scan the lower zones even if they have sufficient free pages. This is because:

a) the lower zone may be above pages_high, but, because of the incremental min, the lower zone may still not be eligible for allocations. That's bad, because cache in that lower zone would then not be scanned at the correct rate.

b) pages in this lower zone are usable for allocations against the higher zone.

So we do want to scan all the relevant zones at an equal rate.
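A hedged sketch of that walk, with illustrative zone stubs rather than the real balance_pgdat() code:

    #include <stdbool.h>
    #include <stdio.h>

    enum { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM, MAX_NR_ZONES };

    /* Stub: pretend only ZONE_NORMAL is below its watermark. */
    static bool zone_needs_reclaim(int zone) { return zone == ZONE_NORMAL; }

    int main(void)
    {
        int first = -1;

        /* Walk high -> low, skipping zones that are OK. */
        for (int z = MAX_NR_ZONES - 1; z >= 0; z--) {
            if (zone_needs_reclaim(z)) {
                first = z;
                break;
            }
        }

        /* Scan the first zone needing work and every lower zone too,
         * even those above pages_high, for reasons a) and b) above. */
        for (int z = first; z >= 0; z--)
            printf("proportional scan of zone %d\n", z);
        return 0;
    }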
-
Andrew Morton authored
- If max_scan evaluates to zero due to a very small inactive list and high `priority' numbers, we don't want to throttle yet.
- In balance_pgdat(), we may end up not scanning any pages because all zones happened to be above pages_high. Avoid throttling in this case too.
-
Andrew Morton authored
When page reclaim is working out how many pages to scan in a zone (max_scan), it presently rounds that number up if it looks too small, for work batching. Problem is, this can result in excessive scanning against small zones which have few inactive pages. So remove it. Note that it is now possible for max_scan to be zero. That's OK - it'll become non-zero as the priority increases.
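The round-up being removed presumably looked something like the clamp below (an illustrative, self-contained demo, not the exact kernel code):

    #include <stdio.h>

    #define SWAP_CLUSTER_MAX 32UL

    int main(void)
    {
        unsigned long max_scan = 3;   /* tiny inactive list, high priority */

        /* The work-batching round-up being removed: it inflates the
         * scan count against small zones with few inactive pages. */
        if (max_scan < SWAP_CLUSTER_MAX)
            max_scan = SWAP_CLUSTER_MAX;

        printf("max_scan inflated from 3 to %lu\n", max_scan);
        return 0;
    }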
-
Andrew Morton authored
Page reclaim is currently a bit schizophrenic: sometimes we say "go and scan this many pages and tell me how many pages were freed", and at other times we say "go and scan this many pages, but stop if you freed this many". It makes the logic harder to control and to understand. This patch converts everything to the "go and scan this many pages and tell me how many pages were freed" model. It doesn't seem to affect performance much either way.
-