Commit 919e2bb8 authored by Jonathan Corbet

Merge branch 'mauro' into docs-next

Mauro says:

This is the second part of a series I wrote some time ago in which I manually
convert lots of files to be properly parsed by Sphinx as ReST files.

As it touches on lot of stuff, this series is based on today's linux-next,
at tag next-20190617.

The first version of this series had 57 patches. The first part, with 28 patches,
has already been merged. Right now, there are still ~76 patches pending application
(including this series), because I opted to do ~1 patch per converted
directory.

That sounds like too much to be sent in a single round, so I'm opting to split
it into 3 parts for the conversion, plus a final patch adding orphaned books
to existing ones.

Those patches should be fine to merge either via subsystem
maintainers or via the docs tree.

I opted to mark new files not yet included in the main index.rst (directly or
indirectly) with the :orphan: tag, in order to avoid adding warnings to the
documentation build. The tag should be removed once we find a "home" for all
the converted files within the new document tree arrangement, after I
submit the third part.
parents ec43a27f 98264991
@@ -895,7 +895,7 @@ this sysctl interface anymore.
pty
===
See Documentation/filesystems/devpts.txt.
See Documentation/filesystems/devpts.rst.
randomize_va_space
......
.. SPDX-License-Identifier: GPL-2.0
=================
Automount Support
=================
Support is available for filesystems that wish to do automounting
(such as kAFS, which can be found in fs/afs/, and NFS in
fs/nfs/). This facility includes allowing in-kernel mounts to be
@@ -5,13 +12,12 @@ performed and mountpoint degradation to be requested. The latter can
also be requested by userspace.
======================
IN-KERNEL AUTOMOUNTING
In-Kernel Automounting
======================
See section "Mount Traps" of Documentation/filesystems/autofs.rst
Then from userspace, you can just do something like:
Then from userspace, you can just do something like::
[root@andromeda root]# mount -t afs \#root.afs. /afs
[root@andromeda root]# ls /afs
@@ -21,7 +27,7 @@ Then from userspace, you can just do something like:
[root@andromeda root]# ls /afs/cambridge/afsdoc/
ChangeLog html LICENSE pdf RELNOTES-1.2.2
And then if you look in the mountpoint catalogue, you'll see something like:
And then if you look in the mountpoint catalogue, you'll see something like::
[root@andromeda root]# cat /proc/mounts
...
@@ -30,8 +36,7 @@ And then if you look in the mountpoint catalogue, you'll see something like:
#afsdoc. /afs/cambridge.redhat.com/afsdoc afs rw 0 0
===========================
AUTOMATIC MOUNTPOINT EXPIRY
Automatic Mountpoint Expiry
===========================
Automatic expiration of mountpoints is easy, provided you've mounted the
@@ -43,7 +48,8 @@ To do expiration, you need to follow these steps:
hung.
(2) When a new mountpoint is created in the ->d_automount method, add
the mnt to the list using mnt_set_expiry()
the mnt to the list using mnt_set_expiry()::
mnt_set_expiry(newmnt, &afs_vfsmounts);
(3) When you want mountpoints to be expired, call mark_mounts_for_expiry()
@@ -70,8 +76,7 @@ and the copies of those that are on an expiration list will be added to the
same expiration list.
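For illustration only (this sketch is not part of the original text, and the
list and work-item names are assumptions), a filesystem could drive step (3)
from a periodic delayed work item::

    /* Hedged sketch: expire unused automounts periodically.
     * Needs <linux/list.h>, <linux/mount.h>, <linux/workqueue.h>.
     * The delayed work is assumed to have been set up elsewhere with
     * INIT_DELAYED_WORK(&example_expiry_work, example_expiry). */
    static LIST_HEAD(example_vfsmounts);
    static struct delayed_work example_expiry_work;

    static void example_expiry(struct work_struct *work)
    {
            /* Anything on the list not used since the previous pass
             * gets unmounted. */
            mark_mounts_for_expiry(&example_vfsmounts);
            schedule_delayed_work(&example_expiry_work, 10 * HZ);
    }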
=======================
USERSPACE DRIVEN EXPIRY
Userspace Driven Expiry
=======================
As an alternative, it is possible for userspace to request expiry of any
......
.. SPDX-License-Identifier: GPL-2.0
Filesystem Caching
==================
.. toctree::
:maxdepth: 2
fscache
object
backend-api
cachefiles
netfs-api
operations
====================================================
IN-KERNEL CACHE OBJECT REPRESENTATION AND MANAGEMENT
====================================================
.. SPDX-License-Identifier: GPL-2.0
====================================================
In-Kernel Cache Object Representation and Management
====================================================
By: David Howells <dhowells@redhat.com>
Contents:
.. Contents:
(*) Representation
@@ -18,8 +20,7 @@ Contents:
(*) The set of events.
==============
REPRESENTATION
Representation
==============
FS-Cache maintains an in-kernel representation of each object that a netfs is
@@ -38,7 +39,7 @@ or even by no objects (it may not be cached).
Furthermore, both cookies and objects are hierarchical. The two hierarchies
correspond, but the cookies tree is a superset of the union of the object trees
of multiple caches:
of multiple caches::
NETFS INDEX TREE : CACHE 1 : CACHE 2
: :
@@ -89,8 +90,7 @@ pointers to the cookies. The cookies themselves and any objects attached to
those cookies are hidden from it.
===============================
OBJECT MANAGEMENT STATE MACHINE
Object Management State Machine
===============================
Within FS-Cache, each active object is managed by its own individual state
@@ -124,7 +124,7 @@ is not masked, the object will be queued for processing (by calling
fscache_enqueue_object()).
PROVISION OF CPU TIME
Provision of CPU Time
---------------------
The work to be done by the various states was given CPU time by the threads of
@@ -141,7 +141,7 @@ because:
workqueues don't necessarily have the right numbers of threads.
LOCKING SIMPLIFICATION
Locking Simplification
----------------------
Because only one worker thread may be operating on any particular object's
@@ -151,8 +151,7 @@ from the cache backend's representation (fscache_object) - which may be
requested from either end.
=================
THE SET OF STATES
The Set of States
=================
The object state machine has a set of states that it can be in. There are
@@ -275,19 +274,17 @@ memory and potentially deletes stuff from disk:
this state.
THE SET OF EVENTS
The Set of Events
-----------------
There are a number of events that can be raised to an object state machine:
(*) FSCACHE_OBJECT_EV_UPDATE
FSCACHE_OBJECT_EV_UPDATE
The netfs requested that an object be updated. The state machine will ask
the cache backend to update the object, and the cache backend will ask the
netfs for details of the change through its cookie definition ops.
(*) FSCACHE_OBJECT_EV_CLEARED
FSCACHE_OBJECT_EV_CLEARED
This is signalled in two circumstances:
(a) when an object's last child object is dropped and
@@ -296,20 +293,16 @@ There are a number of events that can be raised to an object state machine:
This is used to proceed from the dying state.
(*) FSCACHE_OBJECT_EV_ERROR
FSCACHE_OBJECT_EV_ERROR
This is signalled when an I/O error occurs during the processing of some
object.
(*) FSCACHE_OBJECT_EV_RELEASE
(*) FSCACHE_OBJECT_EV_RETIRE
FSCACHE_OBJECT_EV_RELEASE, FSCACHE_OBJECT_EV_RETIRE
These are signalled when the netfs relinquishes a cookie it was using.
The event selected depends on whether the netfs asks for the backing
object to be retired (deleted) or retained.
(*) FSCACHE_OBJECT_EV_WITHDRAW
FSCACHE_OBJECT_EV_WITHDRAW
This is signalled when the cache backend wants to withdraw an object.
This means that the object will have to be detached from the netfs's
cookie.
......
================================
ASYNCHRONOUS OPERATIONS HANDLING
================================
.. SPDX-License-Identifier: GPL-2.0
================================
Asynchronous Operations Handling
================================
By: David Howells <dhowells@redhat.com>
Contents:
.. Contents:
(*) Overview.
@@ -17,8 +19,7 @@ Contents:
(*) Asynchronous callback.
========
OVERVIEW
Overview
========
FS-Cache has an asynchronous operations handling facility that it uses for its
@@ -33,11 +34,10 @@ backend for completion.
To make use of this facility, <linux/fscache-cache.h> should be #included.
===============================
OPERATION RECORD INITIALISATION
Operation Record Initialisation
===============================
An operation is recorded in an fscache_operation struct:
An operation is recorded in an fscache_operation struct::
struct fscache_operation {
union {
@@ -50,7 +50,7 @@ An operation is recorded in an fscache_operation struct:
};
Someone wanting to issue an operation should allocate something with this
struct embedded in it. They should initialise it by calling:
struct embedded in it. They should initialise it by calling::
void fscache_operation_init(struct fscache_operation *op,
fscache_operation_release_t release);
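As a minimal sketch (not from the original text; ``struct my_read_op`` and
``my_op_release()`` are illustrative names, not part of the API), that might
look like::

    struct my_read_op {
            struct fscache_operation op;    /* must be embedded */
            /* ... caller-specific state ... */
    };

    static void my_op_release(struct fscache_operation *op)
    {
            /* Called when the last reference to the operation is put. */
            kfree(container_of(op, struct my_read_op, op));
    }

    struct my_read_op *rop = kzalloc(sizeof(*rop), GFP_KERNEL);

    if (rop)
            fscache_operation_init(&rop->op, my_op_release);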
@@ -67,8 +67,7 @@ FSCACHE_OP_WAITING may be set in op->flags prior to each submission of the
operation and waited for afterwards.
==========
PARAMETERS
Parameters
==========
There are a number of parameters that can be set in the operation record's flag
@@ -87,7 +86,7 @@ operations:
If this option is to be used, FSCACHE_OP_WAITING must be set in op->flags
before submitting the operation, and the operating thread must wait for it
to be cleared before proceeding:
to be cleared before proceeding::
wait_on_bit(&op->flags, FSCACHE_OP_WAITING,
TASK_UNINTERRUPTIBLE);
@@ -101,7 +100,7 @@ operations:
page to a netfs page after the backing fs has read the page in.
If this option is used, op->fast_work and op->processor must be
initialised before submitting the operation:
initialised before submitting the operation::
INIT_WORK(&op->fast_work, do_some_work);
@@ -114,7 +113,7 @@ operations:
pages that have just been fetched from a remote server.
If this option is used, op->slow_work and op->processor must be
initialised before submitting the operation:
initialised before submitting the operation::
fscache_operation_init_slow(op, processor)
@@ -132,8 +131,7 @@ Furthermore, operations may be one of two types:
operations running at the same time.
=========
PROCEDURE
Procedure
=========
Operations are used through the following procedure:
@@ -143,7 +141,7 @@ Operations are used through the following procedure:
generic op embedded within.
(2) The submitting thread must then submit the operation for processing using
one of the following two functions:
one of the following two functions::
int fscache_submit_op(struct fscache_object *object,
struct fscache_operation *op);
@@ -164,7 +162,7 @@ Operations are used through the following procedure:
operation of conflicting exclusivity is in progress on the object.
If the operation is asynchronous, the manager will retain a reference to
it, so the caller should put their reference to it by passing it to:
it, so the caller should put their reference to it by passing it to::
void fscache_put_operation(struct fscache_operation *op);
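Putting the two calls together, a hedged sketch of an asynchronous submission
(error handling trimmed; ``rop`` is the illustrative wrapper from the earlier
sketch) could look like::

    /* 'object' is the struct fscache_object * the operation is aimed at. */
    ret = fscache_submit_op(object, &rop->op);
    if (ret == 0) {
            /* For an asynchronous op the operation manager now holds its
             * own reference, so drop the submitter's reference. */
            fscache_put_operation(&rop->op);
    }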
@@ -179,12 +177,12 @@ Operations are used through the following procedure:
(4) The operation holds an effective lock upon the object, preventing other
exclusive ops conflicting until it is released. The operation can be
enqueued for further immediate asynchronous processing by adjusting the
CPU time provisioning option if necessary, eg:
CPU time provisioning option if necessary, eg::
op->flags &= ~FSCACHE_OP_TYPE;
op->flags |= FSCACHE_OP_FAST;
and calling:
and calling::
void fscache_enqueue_operation(struct fscache_operation *op)
@@ -192,13 +190,12 @@ Operations are used through the following procedure:
pools.
=====================
ASYNCHRONOUS CALLBACK
Asynchronous Callback
=====================
When used in asynchronous mode, the worker thread pool will invoke the
processor method with a pointer to the operation. This should then get at the
container struct by using container_of():
container struct by using container_of()::
static void fscache_write_op(struct fscache_operation *_op)
{
......
.. SPDX-License-Identifier: GPL-2.0
===========================================
Mounting root file system via SMB (cifs.ko)
===========================================
Written 2019 by Paulo Alcantara <palcantara@suse.de>
Written 2019 by Aurelien Aptel <aaptel@suse.com>
The CONFIG_CIFS_ROOT option enables experimental root file system
@@ -32,7 +36,7 @@ Server configuration
====================
To enable SMB1+UNIX extensions you will need to set these global
settings in Samba smb.conf:
settings in Samba smb.conf::
[global]
server min protocol = NT1
@@ -41,12 +45,16 @@ settings in Samba smb.conf:
Kernel command line
===================
root=/dev/cifs
::
root=/dev/cifs
This is just a virtual device that basically tells the kernel to mount
the root file system via SMB protocol.
cifsroot=//<server-ip>/<share>[,options]
::
cifsroot=//<server-ip>/<share>[,options]
Tells the kernel to mount the root file system via SMB from the
<server-ip> and <share> specified in this option.
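For illustration, a complete command line combining both options (the address
and credentials are placeholders) could look like::

    root=/dev/cifs rw ip=dhcp cifsroot=//10.0.2.2/linux,username=foo,password=bar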
@@ -65,33 +73,33 @@ options
Examples
========
Export root file system as a Samba share in smb.conf file.
Export root file system as a Samba share in smb.conf file::
...
[linux]
path = /path/to/rootfs
read only = no
guest ok = yes
force user = root
force group = root
browseable = yes
writeable = yes
admin users = root
public = yes
create mask = 0777
directory mask = 0777
...
...
[linux]
path = /path/to/rootfs
read only = no
guest ok = yes
force user = root
force group = root
browseable = yes
writeable = yes
admin users = root
public = yes
create mask = 0777
directory mask = 0777
...
Restart smb service.
Restart smb service::
# systemctl restart smb
# systemctl restart smb
Test it under QEMU on a kernel built with CONFIG_CIFS_ROOT and
CONFIG_IP_PNP options enabled.
CONFIG_IP_PNP options enabled::
# qemu-system-x86_64 -enable-kvm -cpu host -m 1024 \
-kernel /path/to/linux/arch/x86/boot/bzImage -nographic \
-append "root=/dev/cifs rw ip=dhcp cifsroot=//10.0.2.2/linux,username=foo,password=bar console=ttyS0 3"
# qemu-system-x86_64 -enable-kvm -cpu host -m 1024 \
-kernel /path/to/linux/arch/x86/boot/bzImage -nographic \
-append "root=/dev/cifs rw ip=dhcp cifsroot=//10.0.2.2/linux,username=foo,password=bar console=ttyS0 3"
1: https://wiki.samba.org/index.php/UNIX_Extensions
.. SPDX-License-Identifier: GPL-2.0
=====================
The Devpts Filesystem
=====================
Each mount of the devpts filesystem is now distinct such that ptys
and their indices allocated in one mount are independent from ptys
and their indices in all other mounts.
All mounts of the devpts filesystem now create a ``/dev/pts/ptmx`` node
with permissions ``0000``.
To retain backwards compatibility, a ptmx device node (aka any node
created with ``mknod name c 5 2``), when opened, will look for an instance
of devpts under the name ``pts`` in the same directory as the ptmx device
node.
As an option, instead of placing a device node at ``/dev/ptmx``,
it is possible to place a symlink to ``/dev/pts/ptmx`` at ``/dev/ptmx`` or
to bind mount ``/dev/pts/ptmx`` to ``/dev/ptmx``. If you opt for using
the devpts filesystem in this manner, devpts should be mounted with
``ptmxmode=0666``, or ``chmod 0666 /dev/pts/ptmx`` should be called.
Total count of pty pairs in all instances is limited by sysctls::
kernel.pty.max = 4096 - global limit
kernel.pty.reserve = 1024 - reserved for filesystems mounted from the initial mount namespace
kernel.pty.nr - current count of ptys
A per-instance limit can be set by adding the mount option ``max=<count>``.
This feature was added in kernel 3.4 together with
``sysctl kernel.pty.reserve``.
In kernels older than 3.4, sysctl ``kernel.pty.max`` works as a per-instance limit.
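As an illustrative example (the mode and count are arbitrary, not taken from
the original text), an instance with its own limit could be mounted and the
current pty count checked with::

    # mount -t devpts -o ptmxmode=0666,max=512 devpts /dev/pts
    # cat /proc/sys/kernel/pty/nr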
Each mount of the devpts filesystem is now distinct such that ptys
and their indicies allocated in one mount are independent from ptys
and their indicies in all other mounts.
All mounts of the devpts filesystem now create a /dev/pts/ptmx node
with permissions 0000.
To retain backwards compatibility the a ptmx device node (aka any node
created with "mknod name c 5 2") when opened will look for an instance
of devpts under the name "pts" in the same directory as the ptmx device
node.
As an option instead of placing a /dev/ptmx device node at /dev/ptmx
it is possible to place a symlink to /dev/pts/ptmx at /dev/ptmx or
to bind mount /dev/ptx/ptmx to /dev/ptmx. If you opt for using
the devpts filesystem in this manner devpts should be mounted with
the ptmxmode=0666, or chmod 0666 /dev/pts/ptmx should be called.
Total count of pty pairs in all instances is limited by sysctls:
kernel.pty.max = 4096 - global limit
kernel.pty.reserve = 1024 - reserved for filesystems mounted from the initial mount namespace
kernel.pty.nr - current count of ptys
Per-instance limit could be set by adding mount option "max=<count>".
This feature was added in kernel 3.4 together with sysctl kernel.pty.reserve.
In kernels older than 3.4 sysctl kernel.pty.max works as per-instance limit.
Linux Directory Notification
============================
.. SPDX-License-Identifier: GPL-2.0
============================
Linux Directory Notification
============================
Stephen Rothwell <sfr@canb.auug.org.au>
@@ -12,6 +15,7 @@ being delivered using signals.
The application decides which "events" it wants to be notified about.
The currently defined events are:
========= =====================================================
DN_ACCESS A file in the directory was accessed (read)
DN_MODIFY A file in the directory was modified (write,truncate)
DN_CREATE A file was created in the directory
@@ -19,6 +23,7 @@ The currently defined events are:
DN_RENAME A file in the directory was renamed
DN_ATTRIB A file in the directory had its attributes
changed (chmod,chown)
========= =====================================================
Usually, the application must reregister after each notification, but
if DN_MULTISHOT is or'ed with the event mask, then the registration will
@@ -36,7 +41,7 @@ especially important if DN_MULTISHOT is specified. Note that SIGRTMIN
is often blocked, so it is better to use (at least) SIGRTMIN + 1.
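For illustration (this sample is an assumption on my part, not part of the
converted document; error handling is omitted), a minimal dnotify user
registering for events with a real-time signal might look like::

    #define _GNU_SOURCE             /* for F_SETSIG, F_NOTIFY and DN_* */
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t changed_fd;

    static void handler(int sig, siginfo_t *si, void *data)
    {
            changed_fd = si->si_fd;         /* fd of the watched directory */
    }

    int main(void)
    {
            struct sigaction act;
            int fd;

            memset(&act, 0, sizeof(act));
            act.sa_sigaction = handler;
            act.sa_flags = SA_SIGINFO;
            sigaction(SIGRTMIN + 1, &act, NULL);

            fd = open(".", O_RDONLY);
            fcntl(fd, F_SETSIG, SIGRTMIN + 1);
            fcntl(fd, F_NOTIFY, DN_MODIFY | DN_CREATE | DN_MULTISHOT);

            for (;;) {
                    pause();
                    printf("event in watched directory (fd %d)\n",
                           (int)changed_fd);
            }
    }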
Implementation expectations (features and bugs :-))
---------------------------
---------------------------------------------------
The notification should work for any local access to files even if the
actual file system is on a remote server. This implies that remote
......
.. SPDX-License-Identifier: GPL-2.0
===================================
File management in the Linux kernel
-----------------------------------
===================================
This document describes how locking for files (struct file)
and file descriptor table (struct files) works.
@@ -34,7 +37,7 @@ appear atomic. Here are the locking rules for
the fdtable structure -
1. All references to the fdtable must be done through
the files_fdtable() macro :
the files_fdtable() macro::
struct fdtable *fdt;
@@ -61,7 +64,8 @@ the fdtable structure -
4. To look up the file structure given an fd, a reader
must use either fcheck() or fcheck_files() APIs. These
take care of barrier requirements due to lock-free lookup.
An example :
An example::
struct file *file;
@@ -77,7 +81,7 @@ the fdtable structure -
of the fd (fget()/fget_light()) are lock-free, it is possible
that look-up may race with the last put() operation on the
file structure. This is avoided using atomic_long_inc_not_zero()
on ->f_count :
on ->f_count::
rcu_read_lock();
file = fcheck_files(files, fd);
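For illustration, a hedged sketch of the complete lock-free lookup pattern
(the exact code is elided in the hunk above; this is a reconstruction, not a
quote) might be::

    struct file *file;

    rcu_read_lock();
    file = fcheck_files(files, fd);
    if (file) {
            /* The last reference may be dropped concurrently; only keep
             * the file if a new reference can still be taken. */
            if (!atomic_long_inc_not_zero(&file->f_count))
                    file = NULL;
    }
    rcu_read_unlock();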
@@ -106,7 +110,8 @@ the fdtable structure -
holding files->file_lock. If ->file_lock is dropped, then
another thread may expand the files, thereby creating a new
fdtable and making the earlier fdtable pointer stale.
For example :
For example::
spin_lock(&files->file_lock);
fd = locate_fd(files, file, start);
......
.. SPDX-License-Identifier: GPL-2.0
==============
Fuse I/O Modes
==============
Fuse supports the following I/O modes:
- direct-io
......