    staging: ramster: place ramster codebase on top of new zcache2 codebase · 14c9fda5
    Dan Magenheimer authored
    [V2: rebased to apply to 20120905 staging-next, no other changes]
    
    This slightly modified ramster codebase is now built entirely on zcache2
    and all ramster-specific code is fully contained in a subdirectory.
    
    Ramster extends zcache2 to allow pages compressed via zcache2 to be
    "load-balanced" across machines in a cluster.  Control and data communication
    is done via kernel sockets, and cluster configuration and management is
    heavily leveraged from the ocfs2 cluster filesystem.
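
    For illustration only, here is a minimal sketch (not taken from the ramster
    sources) of how a compressed page might be pushed to a peer over a kernel
    socket.  The function name and parameters are hypothetical; ramster's real
    messaging layer is the ocfs2-derived cluster code mentioned above.

    #include <linux/net.h>
    #include <linux/socket.h>
    #include <linux/uio.h>

    /* Hypothetical helper: send one compressed page image to a peer.
     * 'sock' would be a connected kernel socket owned by the cluster layer. */
    static int example_send_compressed_page(struct socket *sock,
                                            void *cdata, size_t clen)
    {
            struct kvec vec = { .iov_base = cdata, .iov_len = clen };
            struct msghdr msg = { .msg_flags = MSG_DONTWAIT };

            /* kernel_sendmsg() transmits directly from kernel buffers. */
            return kernel_sendmsg(sock, &msg, &vec, 1, clen);
    }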
    
    There are no new features since the codebase was introduced into staging
    at 3.4.  Some cleanup was performed, though:
     1) Interfaces directly with the new zbud allocator.
     2) Debugfs is now used instead of sysfs where possible.  Sysfs is still
        used where necessary for userland cluster configuration.  (A small
        debugfs sketch follows this list.)
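
    For flavor, a minimal sketch of the debugfs pattern referred to in item 2,
    assuming a hypothetical counter name and directory; this is not the ramster
    code itself:

    #include <linux/debugfs.h>
    #include <linux/module.h>

    static u64 eph_pages_remoted;               /* hypothetical statistic */
    static struct dentry *ramster_debugfs_root;

    static int __init example_debugfs_init(void)
    {
            /* Expose the counter read-only under <debugfs>/ramster/ */
            ramster_debugfs_root = debugfs_create_dir("ramster", NULL);
            debugfs_create_u64("eph_pages_remoted", 0444,
                               ramster_debugfs_root, &eph_pages_remoted);
            return 0;
    }

    static void __exit example_debugfs_exit(void)
    {
            debugfs_remove_recursive(ramster_debugfs_root);
    }

    module_init(example_debugfs_init);
    module_exit(example_debugfs_exit);
    MODULE_LICENSE("GPL");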
    
    Ramster is very much a work in progress, but it does really work!
    
    RAMSTER HIGH LEVEL OVERVIEW (from original V5 posting in Feb 2012)
    
    RAMster implements peer-to-peer transcendent memory, allowing a "cluster" of
    kernels to dynamically pool their RAM so that a RAM-hungry workload on one
    machine can temporarily and transparently utilize RAM on another machine which
    is presumably idle or running a non-RAM-hungry workload.  Other than the
    already-merged cleancache patchset and frontswap patchset, no core kernel
    changes are currently required.
    
    (Note that, unlike previous public descriptions of RAMster, this implementation
    does NOT require synchronous "gets" or core networking changes. As of V5,
    it also co-exists with ocfs2.)
    
    RAMster combines a clustering and messaging foundation based on the ocfs2
    cluster layer with the in-kernel compression implementation of zcache2, and
    adds code to glue them together.  When a page is "put" to RAMster, it is
    compressed and stored locally.  Periodically, a thread will "remotify" these
    pages by sending them via messages to a remote machine.  When the page is
    later needed, as indicated by a page fault, a "get" is issued.  If the data
    is local, it is uncompressed and the fault is resolved.  If the data is
    remote, a message is sent to fetch the data and the faulting thread sleeps;
    when the data arrives, the thread awakens, the data is decompressed and
    the fault is resolved.
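
    The sleep-and-wakeup step is the interesting part of the remote "get".
    Below is a minimal sketch of that pattern using a plain struct completion
    and hypothetical names; ramster's actual wait machinery differs.

    #include <linux/completion.h>
    #include <linux/types.h>

    struct example_remote_get {
            struct completion done; /* signaled when remote data arrives */
            void *cdata;            /* compressed data filled in by handler */
            size_t clen;
    };

    /* Called from the faulting thread (the "get" path). */
    static int example_get_remote(struct example_remote_get *req)
    {
            init_completion(&req->done);
            /* ...send a fetch message to the remote node here... */
            wait_for_completion(&req->done); /* sleep until the reply lands */
            /* ...decompress req->cdata into the faulting page here... */
            return 0;
    }

    /* Called from the message receive path when the remote reply arrives. */
    static void example_get_reply(struct example_remote_get *req,
                                  void *cdata, size_t clen)
    {
            req->cdata = cdata;
            req->clen = clen;
            complete(&req->done);   /* wake the faulting thread */
    }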
    
    As of V5, clusters up to eight nodes are supported; each node can remotify
    pages to one specified node, so clusters can be configured as clients to
    a "memory server".  Some simple policy is in place that will need to be
    refined over time.  Larger clusters and fault-resistant protocols can also
    be added over time.
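
    A sketch, with hypothetical names, of the "one remotify target per node"
    policy described above; in the simplest configuration every client points
    at the same memory server:

    #include <linux/types.h>

    #define EXAMPLE_MAX_NODES 8     /* V5 supports clusters of up to 8 nodes */

    /* Node number of the single peer this node remotifies to, or -1 if none.
     * In a "memory server" setup, every client sets this to the server node. */
    static int example_remotify_target = -1;

    static int example_choose_remote_node(void)
    {
            /* Trivial policy: always the one configured peer, if any. */
            return example_remotify_target;
    }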
    
    A HOW-TO is available at:
    http://oss.oracle.com/projects/tmem/dist/files/RAMster/HOWTO-120817

    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>