Commit 05fc0321 authored by Daniel Vetter

drm/mm: Some doc polish

Added some boilerplate for the structs, documented members where they
are relevant, and added plenty of markup for hyperlinks all over. Plus a
few small wording polishes.

Note that the intro needs some more love after the DRM_MM_INSERT_*
patch from Chris has landed.

v2: Spelling fixes (Chris).

v3: Use &struct foo instead of &foo structure (Chris).

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1483044517-5770-3-git-send-email-daniel.vetter@ffwll.ch
parent a8182863
...@@ -449,7 +449,7 @@ LRU Scan/Eviction Support ...@@ -449,7 +449,7 @@ LRU Scan/Eviction Support
------------------------- -------------------------
.. kernel-doc:: drivers/gpu/drm/drm_mm.c .. kernel-doc:: drivers/gpu/drm/drm_mm.c
:doc: lru scan roaster :doc: lru scan roster
DRM MM Range Allocator Function References DRM MM Range Allocator Function References
------------------------------------------ ------------------------------------------
......
...@@ -59,8 +59,8 @@ ...@@ -59,8 +59,8 @@
* *
* The main data struct is &drm_mm, allocations are tracked in &drm_mm_node. * The main data struct is &drm_mm, allocations are tracked in &drm_mm_node.
* Drivers are free to embed either of them into their own suitable * Drivers are free to embed either of them into their own suitable
* datastructures. drm_mm itself will not do any allocations of its own, so if * datastructures. drm_mm itself will not do any memory allocations of its own,
* drivers choose not to embed nodes they need to still allocate them * so if drivers choose not to embed nodes they need to still allocate them
* themselves. * themselves.
* *
* The range allocator also supports reservation of preallocated blocks. This is * The range allocator also supports reservation of preallocated blocks. This is
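For illustration, a minimal sketch of the embedding pattern described in this paragraph. The driver structures (my_gpu, my_buffer) and the managed VRAM range are hypothetical; only drm_mm_init() and the two drm_mm structs come from the API documented here:

    #include <drm/drm_mm.h>

    /* Hypothetical driver structures embedding the allocator and a node. */
    struct my_gpu {
            struct drm_mm vram_mm;          /* manages a VRAM aperture */
    };

    struct my_buffer {
            struct drm_mm_node vram_node;   /* this buffer's VRAM range */
    };

    static void my_gpu_vram_init(struct my_gpu *gpu, u64 vram_size)
    {
            /* drm_mm does no memory allocation of its own here, it merely
             * starts tracking the range [0, vram_size). */
            drm_mm_init(&gpu->vram_mm, 0, vram_size);
    }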
...@@ -78,7 +78,7 @@ ...@@ -78,7 +78,7 @@
* steep cliff not a real concern. Removing a node again is O(1). * steep cliff not a real concern. Removing a node again is O(1).
* *
* drm_mm supports a few features: Alignment and range restrictions can be * drm_mm supports a few features: Alignment and range restrictions can be
* supplied. Further more every &drm_mm_node has a color value (which is just an * supplied. Furthermore every &drm_mm_node has a color value (which is just an
* opaque unsigned long) which in conjunction with a driver callback can be used * opaque unsigned long) which in conjunction with a driver callback can be used
* to implement sophisticated placement restrictions. The i915 DRM driver uses * to implement sophisticated placement restrictions. The i915 DRM driver uses
* this to implement guard pages between incompatible caching domains in the * this to implement guard pages between incompatible caching domains in the
...@@ -296,11 +296,11 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node, ...@@ -296,11 +296,11 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
* @mm: drm_mm allocator to insert @node into * @mm: drm_mm allocator to insert @node into
* @node: drm_mm_node to insert * @node: drm_mm_node to insert
* *
* This functions inserts an already set-up drm_mm_node into the allocator, * This function inserts an already set-up &drm_mm_node into the allocator,
* meaning that start, size and color must be set by the caller. This is useful * meaning that start, size and color must be set by the caller. All other
* to initialize the allocator with preallocated objects which must be set-up * fields must be cleared to 0. This is useful to initialize the allocator with
* before the range allocator can be set-up, e.g. when taking over a firmware * preallocated objects which must be set-up before the range allocator can be
* framebuffer. * set-up, e.g. when taking over a firmware framebuffer.
* *
* Returns: * Returns:
* 0 on success, -ENOSPC if there's no hole where @node is. * 0 on success, -ENOSPC if there's no hole where @node is.
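A sketch of the reservation flow this comment describes, e.g. for taking over a firmware framebuffer; the my_gpu/my_buffer structures and the fb_base/fb_size values are hypothetical:

    static int my_gpu_reserve_fw_fb(struct my_gpu *gpu, struct my_buffer *buf,
                                    u64 fb_base, u64 fb_size)
    {
            /* start, size and color must be set; everything else zeroed. */
            memset(&buf->vram_node, 0, sizeof(buf->vram_node));
            buf->vram_node.start = fb_base;
            buf->vram_node.size = fb_size;
            buf->vram_node.color = 0;

            /* Returns -ENOSPC if the requested range is not a free hole. */
            return drm_mm_reserve_node(&gpu->vram_mm, &buf->vram_node);
    }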
...@@ -375,7 +375,7 @@ EXPORT_SYMBOL(drm_mm_reserve_node); ...@@ -375,7 +375,7 @@ EXPORT_SYMBOL(drm_mm_reserve_node);
* @sflags: flags to fine-tune the allocation search * @sflags: flags to fine-tune the allocation search
* @aflags: flags to fine-tune the allocation behavior * @aflags: flags to fine-tune the allocation behavior
* *
* The preallocated node must be cleared to 0. * The preallocated @node must be cleared to 0.
* *
* Returns: * Returns:
* 0 on success, -ENOSPC if there's no suitable hole. * 0 on success, -ENOSPC if there's no suitable hole.
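A hedged usage sketch, assuming the pre-DRM_MM_INSERT_* parameter order drm_mm_insert_node_generic(mm, node, size, alignment, color, sflags, aflags) that these parameters belong to; my_gpu/my_buffer are the hypothetical structures from the sketch above:

    static int my_gpu_alloc_vram(struct my_gpu *gpu, struct my_buffer *buf,
                                 u64 size, u64 align)
    {
            int ret;

            /* The preallocated node must be cleared to 0 before insertion. */
            memset(&buf->vram_node, 0, sizeof(buf->vram_node));

            ret = drm_mm_insert_node_generic(&gpu->vram_mm, &buf->vram_node,
                                             size, align, 0 /* color */,
                                             DRM_MM_SEARCH_DEFAULT,
                                             DRM_MM_CREATE_DEFAULT);
            if (ret)
                    return ret;     /* -ENOSPC: no suitable hole found */

            /* buf->vram_node.start now holds the allocated offset. */
            return 0;
    }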
...@@ -537,7 +537,7 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new) ...@@ -537,7 +537,7 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
EXPORT_SYMBOL(drm_mm_replace_node); EXPORT_SYMBOL(drm_mm_replace_node);
/** /**
* DOC: lru scan roaster * DOC: lru scan roster
* *
* Very often GPUs need to have continuous allocations for a given object. When * Very often GPUs need to have continuous allocations for a given object. When
* evicting objects to make space for a new one it is therefore not most * evicting objects to make space for a new one it is therefore not most
...@@ -549,9 +549,11 @@ EXPORT_SYMBOL(drm_mm_replace_node); ...@@ -549,9 +549,11 @@ EXPORT_SYMBOL(drm_mm_replace_node);
* The DRM range allocator supports this use-case through the scanning * The DRM range allocator supports this use-case through the scanning
* interfaces. First a scan operation needs to be initialized with * interfaces. First a scan operation needs to be initialized with
* drm_mm_scan_init() or drm_mm_scan_init_with_range(). The driver adds * drm_mm_scan_init() or drm_mm_scan_init_with_range(). The driver adds
* objects to the roster (probably by walking an LRU list, but this can be * objects to the roster, probably by walking an LRU list, but this can be
* freely implemented) (using drm_mm_scan_add_block()) until a suitable hole * freely implemented. Eviction candidates are added using
* is found or there are no further evictable objects. * drm_mm_scan_add_block() until a suitable hole is found or there are no
* further evictable objects. Eviction roster metadata is tracked in struct
* &drm_mm_scan.
* *
* The driver must walk through all objects again in exactly the reverse * The driver must walk through all objects again in exactly the reverse
* order to restore the allocator state. Note that while the allocator is used * order to restore the allocator state. Note that while the allocator is used
...@@ -559,7 +561,7 @@ EXPORT_SYMBOL(drm_mm_replace_node); ...@@ -559,7 +561,7 @@ EXPORT_SYMBOL(drm_mm_replace_node);
* *
* Finally the driver evicts all objects selected (drm_mm_scan_remove_block() * Finally the driver evicts all objects selected (drm_mm_scan_remove_block()
* reported true) in the scan, and any overlapping nodes after color adjustment * reported true) in the scan, and any overlapping nodes after color adjustment
* (drm_mm_scan_evict_color()). Adding and removing an object is O(1), and * (drm_mm_scan_color_evict()). Adding and removing an object is O(1), and
* since freeing a node is also O(1) the overall complexity is * since freeing a node is also O(1) the overall complexity is
* O(scanned_objects). So like the free stack which needs to be walked before a * O(scanned_objects). So like the free stack which needs to be walked before a
* scan operation even begins this is linear in the number of objects. It * scan operation even begins this is linear in the number of objects. It
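To make the flow concrete, a hedged sketch of the first half of an eviction scan (candidate collection). The drm_mm_scan_init() parameter order (scan state, allocator, size, alignment, color, flags) is assumed, and my_gpu, my_buffer, gpu->lru, lru_link and evict_link are hypothetical driver names; the second half of the flow is sketched after drm_mm_scan_remove_block() below:

    /* Hypothetical helper: collect eviction candidates from the driver LRU
     * until the roster can assemble a suitable hole. */
    static bool my_gpu_scan_for_hole(struct my_gpu *gpu,
                                     struct drm_mm_scan *scan,
                                     struct list_head *eviction_list,
                                     u64 size, u64 align)
    {
            struct my_buffer *buf;

            /* Assumed parameter order: scan state, allocator, size,
             * alignment, color, flags. */
            drm_mm_scan_init(scan, &gpu->vram_mm, size, align, 0, 0);

            /* list_add() prepends, so eviction_list ends up in reverse order
             * of addition, which the roster requires when removing blocks. */
            list_for_each_entry(buf, &gpu->lru, lru_link) {
                    list_add(&buf->evict_link, eviction_list);
                    if (drm_mm_scan_add_block(scan, &buf->vram_node))
                            return true;    /* suitable hole can be assembled */
            }
            return false;   /* no further evictable objects */
    }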
...@@ -705,14 +707,15 @@ EXPORT_SYMBOL(drm_mm_scan_add_block); ...@@ -705,14 +707,15 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
* @scan: the active drm_mm scanner * @scan: the active drm_mm scanner
* @node: drm_mm_node to remove * @node: drm_mm_node to remove
* *
* Nodes _must_ be removed in exactly the reverse order from the scan list as * Nodes **must** be removed in exactly the reverse order from the scan list as
* they have been added (e.g. using list_add as they are added and then * they have been added (e.g. using list_add() as they are added and then
* list_for_each over that eviction list to remove), otherwise the internal * list_for_each() over that eviction list to remove), otherwise the internal
* state of the memory manager will be corrupted. * state of the memory manager will be corrupted.
* *
* When the scan list is empty, the selected memory nodes can be freed. An * When the scan list is empty, the selected memory nodes can be freed. An
* immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then * immediately following drm_mm_insert_node_in_range_generic() or one of the
* return the just freed block (because its at the top of the free_stack list). * simpler versions of that function with !DRM_MM_SEARCH_BEST will then return
* the just freed block (because it's at the top of the free_stack list).
* *
* Returns: * Returns:
* True if this block should be evicted, false otherwise. Will always * True if this block should be evicted, false otherwise. Will always
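Continuing the hypothetical sketch started under the lru scan roster section above, the second half of the flow restores the allocator state in reverse order of addition and then evicts the selected blocks:

    /* Hypothetical helper: undo the scan in reverse order of addition and,
     * if a hole was found, evict the selected blocks. */
    static void my_gpu_finish_scan(struct drm_mm_scan *scan,
                                   struct list_head *eviction_list, bool found)
    {
            struct my_buffer *buf, *next;

            /* Phase 1: every block added to the roster must be removed again;
             * keep only the blocks actually selected for eviction. */
            list_for_each_entry_safe(buf, next, eviction_list, evict_link) {
                    if (!drm_mm_scan_remove_block(scan, &buf->vram_node) ||
                        !found)
                            list_del(&buf->evict_link);
            }

            /* Phase 2: evict the survivors and free their ranges. With a
             * color_adjust callback in use, drm_mm_scan_color_evict() would
             * also need to be drained here (omitted in this sketch). */
            list_for_each_entry_safe(buf, next, eviction_list, evict_link) {
                    list_del(&buf->evict_link);
                    drm_mm_remove_node(&buf->vram_node);
                    /* driver-specific unbind/teardown of @buf goes here */
            }
    }

Used roughly as: declare LIST_HEAD(eviction_list) and a struct drm_mm_scan on the stack, call my_gpu_scan_for_hole(), then always call my_gpu_finish_scan() with the result, whether or not a hole was found.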
......
...@@ -67,16 +67,29 @@ enum drm_mm_allocator_flags { ...@@ -67,16 +67,29 @@ enum drm_mm_allocator_flags {
#define DRM_MM_BOTTOMUP DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT #define DRM_MM_BOTTOMUP DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT
#define DRM_MM_TOPDOWN DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP #define DRM_MM_TOPDOWN DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP
/**
* struct drm_mm_node - allocated block in the DRM allocator
*
* This represents an allocated block in a &drm_mm allocator. Except for
* pre-reserved nodes inserted using drm_mm_reserve_node() the structure is
* entirely opaque and should only be accessed through the provided functions.
* Since allocation of these nodes is entirely handled by the driver, they can be
* embedded.
*/
struct drm_mm_node { struct drm_mm_node {
/** @color: Opaque driver-private tag. */
unsigned long color;
/** @start: Start address of the allocated block. */
u64 start;
/** @size: Size of the allocated block. */
u64 size;
/* private: */
struct list_head node_list; struct list_head node_list;
struct list_head hole_stack; struct list_head hole_stack;
struct rb_node rb; struct rb_node rb;
unsigned hole_follows : 1; unsigned hole_follows : 1;
unsigned allocated : 1; unsigned allocated : 1;
bool scanned_block : 1; bool scanned_block : 1;
unsigned long color;
u64 start;
u64 size;
u64 __subtree_last; u64 __subtree_last;
struct drm_mm *mm; struct drm_mm *mm;
#ifdef CONFIG_DRM_DEBUG_MM #ifdef CONFIG_DRM_DEBUG_MM
...@@ -84,7 +97,29 @@ struct drm_mm_node { ...@@ -84,7 +97,29 @@ struct drm_mm_node {
#endif #endif
}; };
/**
* struct drm_mm - DRM allocator
*
* DRM range allocator with a few special functions and features geared towards
* managing GPU memory. Except for the @color_adjust callback the structure is
* entirely opaque and should only be accessed through the provided functions
* and macros. This structure can be embedded into larger driver structures.
*/
struct drm_mm { struct drm_mm {
/**
* @color_adjust:
*
* Optional driver callback to further apply restrictions on a hole. The
* node argument points at the node containing the hole from which the
* block would be allocated (see drm_mm_hole_follows() and friends). The
* other arguments are the color of the prospective allocation and pointers
* to the start and end of the hole, which the driver can adjust as needed
* to e.g. insert guard pages.
*/
void (*color_adjust)(const struct drm_mm_node *node,
unsigned long color,
u64 *start, u64 *end);
/* private: */
/* List of all memory nodes that immediately precede a free hole. */ /* List of all memory nodes that immediately precede a free hole. */
struct list_head hole_stack; struct list_head hole_stack;
/* head_node.node_list is the list of all memory nodes, ordered /* head_node.node_list is the list of all memory nodes, ordered
...@@ -93,14 +128,20 @@ struct drm_mm { ...@@ -93,14 +128,20 @@ struct drm_mm {
/* Keep an interval_tree for fast lookup of drm_mm_nodes by address. */ /* Keep an interval_tree for fast lookup of drm_mm_nodes by address. */
struct rb_root interval_tree; struct rb_root interval_tree;
void (*color_adjust)(const struct drm_mm_node *node,
unsigned long color,
u64 *start, u64 *end);
unsigned long scan_active; unsigned long scan_active;
}; };
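As an example of the @color_adjust hook, a simplified sketch (my_color_adjust is hypothetical, and a one-page guard is assumed) of keeping a guard page in front of a hole whenever the preceding block carries a different color, similar in spirit to what the i915 driver does for incompatible caching domains. A complete implementation would also shrink *end based on the block following the hole:

    static void my_color_adjust(const struct drm_mm_node *node,
                                unsigned long color,
                                u64 *start, u64 *end)
    {
            /* @node is the block immediately preceding the hole. */
            if (node->color != color)
                    *start += PAGE_SIZE;
    }

The callback would be hooked up once after drm_mm_init(), e.g. gpu->vram_mm.color_adjust = my_color_adjust;.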
/**
* struct drm_mm_scan - DRM allocator eviction roster data
*
* This structure tracks data needed for the eviction roster set up using
* drm_mm_scan_init(), and used with drm_mm_scan_add_block() and
* drm_mm_scan_remove_block(). The structure is entirely opaque and should only
* be accessed through the provided functions and macros. It is meant to be
* allocated temporarily by the driver on the stack.
*/
struct drm_mm_scan { struct drm_mm_scan {
/* private: */
struct drm_mm *mm; struct drm_mm *mm;
u64 size; u64 size;
...@@ -159,7 +200,8 @@ static inline bool drm_mm_initialized(const struct drm_mm *mm) ...@@ -159,7 +200,8 @@ static inline bool drm_mm_initialized(const struct drm_mm *mm)
* *
* Holes are embedded into the drm_mm using the tail of a drm_mm_node. * Holes are embedded into the drm_mm using the tail of a drm_mm_node.
* If you wish to know whether a hole follows this particular node, * If you wish to know whether a hole follows this particular node,
* query this function. * query this function. See also drm_mm_hole_node_start() and
* drm_mm_hole_node_end().
* *
* Returns: * Returns:
* True if a hole follows the @node. * True if a hole follows the @node.
...@@ -228,23 +270,23 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node) ...@@ -228,23 +270,23 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
/** /**
* drm_mm_for_each_node - iterator to walk over all allocated nodes * drm_mm_for_each_node - iterator to walk over all allocated nodes
* @entry: drm_mm_node structure to assign to in each iteration step * @entry: &struct drm_mm_node to assign to in each iteration step
* @mm: drm_mm allocator to walk * @mm: &drm_mm allocator to walk
* *
* This iterator walks over all nodes in the range allocator. It is implemented * This iterator walks over all nodes in the range allocator. It is implemented
* with list_for_each, so not save against removal of elements. * with list_for_each(), so not safe against removal of elements.
*/ */
#define drm_mm_for_each_node(entry, mm) \ #define drm_mm_for_each_node(entry, mm) \
list_for_each_entry(entry, drm_mm_nodes(mm), node_list) list_for_each_entry(entry, drm_mm_nodes(mm), node_list)
/** /**
* drm_mm_for_each_node_safe - iterator to walk over all allocated nodes * drm_mm_for_each_node_safe - iterator to walk over all allocated nodes
* @entry: drm_mm_node structure to assign to in each iteration step * @entry: &struct drm_mm_node to assign to in each iteration step
* @next: drm_mm_node structure to store the next step * @next: &struct drm_mm_node to store the next step
* @mm: drm_mm allocator to walk * @mm: &drm_mm allocator to walk
* *
* This iterator walks over all nodes in the range allocator. It is implemented * This iterator walks over all nodes in the range allocator. It is implemented
* with list_for_each_safe, so save against removal of elements. * with list_for_each_safe(), so safe against removal of elements.
*/ */
#define drm_mm_for_each_node_safe(entry, next, mm) \ #define drm_mm_for_each_node_safe(entry, next, mm) \
list_for_each_entry_safe(entry, next, drm_mm_nodes(mm), node_list) list_for_each_entry_safe(entry, next, drm_mm_nodes(mm), node_list)
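A typical teardown sketch using the safe iterator; my_gpu and the cleanup step are hypothetical:

    static void my_gpu_vram_fini(struct my_gpu *gpu)
    {
            struct drm_mm_node *node, *next;

            /* The _safe variant tolerates removing the current node. */
            drm_mm_for_each_node_safe(node, next, &gpu->vram_mm) {
                    drm_mm_remove_node(node);
                    /* driver-specific cleanup of the containing object */
            }

            drm_mm_takedown(&gpu->vram_mm);
    }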
...@@ -259,13 +301,13 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node) ...@@ -259,13 +301,13 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
/** /**
* drm_mm_for_each_hole - iterator to walk over all holes * drm_mm_for_each_hole - iterator to walk over all holes
* @entry: drm_mm_node used internally to track progress * @entry: &drm_mm_node used internally to track progress
* @mm: drm_mm allocator to walk * @mm: &drm_mm allocator to walk
* @hole_start: ulong variable to assign the hole start to on each iteration * @hole_start: ulong variable to assign the hole start to on each iteration
* @hole_end: ulong variable to assign the hole end to on each iteration * @hole_end: ulong variable to assign the hole end to on each iteration
* *
* This iterator walks over all holes in the range allocator. It is implemented * This iterator walks over all holes in the range allocator. It is implemented
* with list_for_each, so not save against removal of elements. @entry is used * with list_for_each(), so not safe against removal of elements. @entry is used
* internally and will not reflect a real drm_mm_node for the very first hole. * internally and will not reflect a real drm_mm_node for the very first hole.
* Hence users of this iterator may not access it. * Hence users of this iterator may not access it.
* *
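A small debugging sketch of the hole iterator; the bounds are u64 values in the implementation, and my_gpu_dump_holes is a hypothetical helper:

    static void my_gpu_dump_holes(struct drm_mm *mm)
    {
            struct drm_mm_node *entry;      /* used internally by the macro */
            u64 hole_start, hole_end;

            drm_mm_for_each_hole(entry, mm, hole_start, hole_end)
                    pr_info("hole: 0x%llx-0x%llx (%llu bytes)\n",
                            hole_start, hole_end, hole_end - hole_start);
    }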
...@@ -334,6 +376,9 @@ static inline int drm_mm_insert_node_in_range(struct drm_mm *mm, ...@@ -334,6 +376,9 @@ static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
* @sflags: flags to fine-tune the allocation search * @sflags: flags to fine-tune the allocation search
* @aflags: flags to fine-tune the allocation behavior * @aflags: flags to fine-tune the allocation behavior
* *
* This is a simplified version of drm_mm_insert_node_in_range_generic() with no
* range restrictions applied.
*
* The preallocated node must be cleared to 0. * The preallocated node must be cleared to 0.
* *
* Returns: * Returns:
...@@ -434,6 +479,9 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan, ...@@ -434,6 +479,9 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
* @color: opaque tag value to use for the allocation * @color: opaque tag value to use for the allocation
* @flags: flags to specify how the allocation will be performed afterwards * @flags: flags to specify how the allocation will be performed afterwards
* *
* This is a simplified version of drm_mm_scan_init_with_range() with no range
* restrictions applied.
*
* This simply sets up the scanning routines with the parameters for the desired * This simply sets up the scanning routines with the parameters for the desired
* hole. * hole.
* *
......