Commit f67e3fb4 authored by Linus Torvalds

Merge tag 'devdax-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull device-dax updates from Dan Williams:
 "New device-dax infrastructure to allow persistent memory and other
  "reserved" / performance-differentiated memories to be assigned to
  the core-mm as "System RAM".

  Some users want to use persistent memory as additional volatile
  memory. They are willing to cope with potential performance
  differences, for example between DRAM and 3D XPoint, and want to use
  typical Linux memory management apis rather than a userspace memory
  allocator layered over an mmap() of a dax file. The administration
  model is to decide how much Persistent Memory (pmem) to use as System
  RAM, create a device-dax-mode namespace of that size, and then assign
  it to the core-mm. The rationale for device-dax is that it is a
  generic memory-mapping driver that can be layered over any "special
  purpose" memory, not just pmem. On subsequent boots udev rules can be
  used to restore the memory assignment.

  One implication of using pmem as RAM is that mlock() no longer keeps
  data off persistent media. For this reason it is recommended to enable
  NVDIMM Security (previously merged for 5.0) to encrypt pmem contents
  at rest. We considered making this recommendation an actively enforced
  requirement, but in the end decided to leave it as a distribution /
  administrator policy to allow for emulation and test environments that
  lack security-capable NVDIMMs.

  Summary:

   - Replace the /sys/class/dax device model with /sys/bus/dax, and
     include a compat driver so distributions can opt-in to the new ABI.

   - Allow for an alternative driver for the device-dax address-range.

   - Introduce the 'kmem' driver to hotplug / assign a device-dax
     address-range to the core-mm.

   - Arrange for the device-dax target-node to be onlined so that the
     newly added memory range can be uniquely referenced by numa apis"
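
The administration flow described above can be sketched as a command sequence. This is illustrative only: the device, region, and namespace names are examples, `ndctl` comes from the ndctl project, and the bind step assumes the new /sys/bus/dax ABI introduced by this pull.

```shell
# Carve out a device-dax mode namespace sized for use as System RAM
ndctl create-namespace --mode=devdax --region=region0 --size=16G

# Hand the resulting dax device from the device_dax driver to the
# kmem driver, which hotplugs the range into the core-mm
echo dax0.0 > /sys/bus/dax/drivers/device_dax/unbind
echo dax0.0 > /sys/bus/dax/drivers/kmem/new_id

# Online the newly added memory blocks (a udev rule can automate
# this on subsequent boots)
for m in /sys/devices/system/memory/memory*/state; do
        echo online > "$m" 2>/dev/null
done
```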

NOTE! I'm not entirely happy with the whole "PMEM as RAM" model because
we currently have special - and very annoying - rules in the kernel about
accessing PMEM only with the "MC safe" accessors, because machine checks
inside the regular repeat string copy functions can be fatal in some
(not described) circumstances.

And apparently the PMEM modules can cause that a lot more than regular
RAM.  The argument is that this happens because PMEM doesn't necessarily
get scrubbed at boot like RAM does, but scrubbing support is planned to
be added to the user-space tooling.

Quoting Dan from another email:
 "The exposure can be reduced in the volatile-RAM case by scanning for
  and clearing errors before it is onlined as RAM. The userspace tooling
  for that can be in place before v5.1-final. There's also runtime
  notifications of errors via acpi_nfit_uc_error_notify() from
  background scrubbers on the DIMM devices. With that mechanism the
  kernel could proactively clear newly discovered poison in the volatile
  case, but that would be additional development more suitable for v5.2.

  I understand the concern, and the need to highlight this issue by
  tapping the brakes on feature development, but I don't see PMEM as RAM
  making the situation worse when the exposure is also there via DAX in
  the PMEM case. Volatile-RAM is arguably a safer use case since it's
  possible to repair pages where the persistent case needs active
  application coordination"

* tag 'devdax-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  device-dax: "Hotplug" persistent memory for use like normal RAM
  mm/resource: Let walk_system_ram_range() search child resources
  mm/memory-hotplug: Allow memory resources to be children
  mm/resource: Move HMM pr_debug() deeper into resource code
  mm/resource: Return real error codes from walk failures
  device-dax: Add a 'modalias' attribute to DAX 'bus' devices
  device-dax: Add a 'target_node' attribute
  device-dax: Auto-bind device after successful new_id
  acpi/nfit, device-dax: Identify differentiated memory with a unique numa-node
  device-dax: Add /sys/class/dax backwards compatibility
  device-dax: Add support for a dax override driver
  device-dax: Move resource pinning+mapping into the common driver
  device-dax: Introduce bus + driver model
  device-dax: Start defining a dax bus model
  device-dax: Remove multi-resource infrastructure
  device-dax: Kill dax_region base
  device-dax: Kill dax_region ida
parents 477558d7 c221c0b0
What:		/sys/class/dax/
Date:		May, 2016
KernelVersion:	v4.7
Contact:	linux-nvdimm@lists.01.org
Description:	Device DAX is the device-centric analogue of Filesystem
		DAX (CONFIG_FS_DAX). It allows memory ranges to be
		allocated and mapped without need of an intervening file
		system. Device DAX is strict, precise and predictable.
		Specifically this interface:

		1/ Guarantees fault granularity with respect to a given
		page size (pte, pmd, or pud) set at configuration time.

		2/ Enforces deterministic behavior by being strict about
		what fault scenarios are supported.

		The /sys/class/dax/ interface enumerates all the
		device-dax instances in the system. The ABI is
		deprecated and will be removed after 2020. It is
		replaced with the DAX bus interface /sys/bus/dax/ where
		device-dax instances can be found under
		/sys/bus/dax/devices/
@@ -239,6 +239,7 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
 	memset(&ndr_desc, 0, sizeof(ndr_desc));
 	ndr_desc.attr_groups = region_attr_groups;
 	ndr_desc.numa_node = dev_to_node(&p->pdev->dev);
+	ndr_desc.target_node = ndr_desc.numa_node;
 	ndr_desc.res = &p->res;
 	ndr_desc.of_node = p->dn;
 	ndr_desc.provider_data = p;
@@ -2956,11 +2956,15 @@ static int acpi_nfit_register_region(struct acpi_nfit_desc *acpi_desc,
 	ndr_desc->res = &res;
 	ndr_desc->provider_data = nfit_spa;
 	ndr_desc->attr_groups = acpi_nfit_region_attribute_groups;
-	if (spa->flags & ACPI_NFIT_PROXIMITY_VALID)
+	if (spa->flags & ACPI_NFIT_PROXIMITY_VALID) {
 		ndr_desc->numa_node = acpi_map_pxm_to_online_node(
 				spa->proximity_domain);
-	else
+		ndr_desc->target_node = acpi_map_pxm_to_node(
+				spa->proximity_domain);
+	} else {
 		ndr_desc->numa_node = NUMA_NO_NODE;
+		ndr_desc->target_node = NUMA_NO_NODE;
+	}

 	/*
 	 * Persistence domain bits are hierarchical, if
@@ -84,6 +84,7 @@ int acpi_map_pxm_to_node(int pxm)

 	return node;
 }
+EXPORT_SYMBOL(acpi_map_pxm_to_node);

 /**
  * acpi_map_pxm_to_online_node - Map proximity ID to online node
@@ -88,6 +88,7 @@ unsigned long __weak memory_block_size_bytes(void)
 {
 	return MIN_MEMORY_BLOCK_SIZE;
 }
+EXPORT_SYMBOL_GPL(memory_block_size_bytes);

 static unsigned long get_memory_block_size(void)
 {
@@ -23,12 +23,38 @@ config DEV_DAX
 config DEV_DAX_PMEM
 	tristate "PMEM DAX: direct access to persistent memory"
 	depends on LIBNVDIMM && NVDIMM_DAX && DEV_DAX
+	depends on m # until we can kill DEV_DAX_PMEM_COMPAT
 	default DEV_DAX
 	help
 	  Support raw access to persistent memory.  Note that this
 	  driver consumes memory ranges allocated and exported by the
 	  libnvdimm sub-system.

-	  Say Y if unsure
+	  Say M if unsure

+config DEV_DAX_KMEM
+	tristate "KMEM DAX: volatile-use of persistent memory"
+	default DEV_DAX
+	depends on DEV_DAX
+	depends on MEMORY_HOTPLUG # for add_memory() and friends
+	help
+	  Support access to persistent memory as if it were RAM.  This
+	  allows easier use of persistent memory by unmodified
+	  applications.
+
+	  To use this feature, a DAX device must be unbound from the
+	  device_dax driver (PMEM DAX) and bound to this kmem driver
+	  on each boot.
+
+	  Say N if unsure.
+
+config DEV_DAX_PMEM_COMPAT
+	tristate "PMEM DAX: support the deprecated /sys/class/dax interface"
+	depends on DEV_DAX_PMEM
+	default DEV_DAX_PMEM
+	help
+	  Older versions of the libdaxctl library expect to find all
+	  device-dax instances under /sys/class/dax.  If libdaxctl in
+	  your distribution is older than v58 say M, otherwise say N.
+
 endif
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DAX) += dax.o
 obj-$(CONFIG_DEV_DAX) += device_dax.o
-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+obj-$(CONFIG_DEV_DAX_KMEM) += kmem.o

 dax-y := super.o
-dax_pmem-y := pmem.o
+dax-y += bus.o
 device_dax-y := device.o
+
+obj-y += pmem/
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#ifndef __DAX_BUS_H__
#define __DAX_BUS_H__
#include <linux/device.h>
struct dev_dax;
struct resource;
struct dax_device;
struct dax_region;
void dax_region_put(struct dax_region *dax_region);
struct dax_region *alloc_dax_region(struct device *parent, int region_id,
struct resource *res, int target_node, unsigned int align,
unsigned long flags);
enum dev_dax_subsys {
DEV_DAX_BUS,
DEV_DAX_CLASS,
};
struct dev_dax *__devm_create_dev_dax(struct dax_region *dax_region, int id,
struct dev_pagemap *pgmap, enum dev_dax_subsys subsys);
static inline struct dev_dax *devm_create_dev_dax(struct dax_region *dax_region,
int id, struct dev_pagemap *pgmap)
{
return __devm_create_dev_dax(dax_region, id, pgmap, DEV_DAX_BUS);
}
/* to be deleted when DEV_DAX_CLASS is removed */
struct dev_dax *__dax_pmem_probe(struct device *dev, enum dev_dax_subsys subsys);
struct dax_device_driver {
struct device_driver drv;
struct list_head ids;
int match_always;
};
int __dax_driver_register(struct dax_device_driver *dax_drv,
struct module *module, const char *mod_name);
#define dax_driver_register(driver) \
__dax_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
void dax_driver_unregister(struct dax_device_driver *dax_drv);
void kill_dev_dax(struct dev_dax *dev_dax);
#if IS_ENABLED(CONFIG_DEV_DAX_PMEM_COMPAT)
int dev_dax_probe(struct device *dev);
#endif
/*
* While run_dax() is potentially a generic operation that could be
* defined in include/linux/dax.h we don't want to grow any users
* outside of drivers/dax/
*/
void run_dax(struct dax_device *dax_dev);
#define MODULE_ALIAS_DAX_DEVICE(type) \
MODULE_ALIAS("dax:t" __stringify(type) "*")
#define DAX_DEVICE_MODALIAS_FMT "dax:t%d"
#endif /* __DAX_BUS_H__ */
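
The MODULE_ALIAS_DAX_DEVICE / DAX_DEVICE_MODALIAS_FMT pair above follows the usual kernel modalias pattern: the device reports `dax:t<type>` through its uevent/`modalias` attribute, and a driver module advertises a glob such as `dax:t0*` that modprobe matches against it. A minimal sketch of that matching (Python's `fnmatch` stands in for modprobe's glob matching; the helper names here are illustrative, not kernel APIs):

```python
from fnmatch import fnmatchcase

# Mirrors DAX_DEVICE_MODALIAS_FMT "dax:t%d" in bus.h
def modalias(dev_type: int) -> str:
    """What a dax device reports via its 'modalias' sysfs attribute."""
    return "dax:t%d" % dev_type

# Mirrors MODULE_ALIAS_DAX_DEVICE(type): "dax:t" __stringify(type) "*"
def module_alias(dev_type: int) -> str:
    """The glob a driver module advertises for auto-loading."""
    return "dax:t%d*" % dev_type

# A type-0 device matches a module built with MODULE_ALIAS_DAX_DEVICE(0) ...
assert fnmatchcase(modalias(0), module_alias(0))
# ... but not one advertising a different type.
assert not fnmatchcase(modalias(0), module_alias(1))
```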
@@ -16,10 +16,17 @@
 #include <linux/device.h>
 #include <linux/cdev.h>

+/* private routines between core files */
+struct dax_device;
+struct dax_device *inode_dax(struct inode *inode);
+struct inode *dax_inode(struct dax_device *dax_dev);
+int dax_bus_init(void);
+void dax_bus_exit(void);
+
 /**
  * struct dax_region - mapping infrastructure for dax devices
  * @id: kernel-wide unique region for a memory range
- * @base: linear address corresponding to @res
+ * @target_node: effective numa node if this memory range is onlined
  * @kref: to pin while other agents have a need to do lookups
  * @dev: parent device backing this region
  * @align: allocation and mapping alignment for child dax devices
@@ -28,8 +35,7 @@
  */
 struct dax_region {
 	int id;
-	struct ida ida;
-	void *base;
+	int target_node;
 	struct kref kref;
 	struct device *dev;
 	unsigned int align;
@@ -38,20 +44,28 @@ struct dax_region {
 };

 /**
- * struct dev_dax - instance data for a subdivision of a dax region
+ * struct dev_dax - instance data for a subdivision of a dax region, and
+ * data while the device is activated in the driver.
  * @region - parent region
  * @dax_dev - core dax functionality
+ * @target_node: effective numa node if dev_dax memory range is onlined
  * @dev - device core
- * @id - child id in the region
- * @num_resources - number of physical address extents in this device
- * @res - array of physical address ranges
+ * @pgmap - pgmap for memmap setup / lifetime (driver owned)
+ * @ref: pgmap reference count (driver owned)
+ * @cmp: @ref final put completion (driver owned)
  */
 struct dev_dax {
 	struct dax_region *region;
 	struct dax_device *dax_dev;
+	int target_node;
 	struct device dev;
-	int id;
-	int num_resources;
-	struct resource res[0];
+	struct dev_pagemap pgmap;
+	struct percpu_ref ref;
+	struct completion cmp;
 };
+
+static inline struct dev_dax *to_dev_dax(struct device *dev)
+{
+	return container_of(dev, struct dev_dax, dev);
+}

 #endif
/*
* Copyright(c) 2016 - 2017 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef __DAX_H__
#define __DAX_H__
struct dax_device;
struct dax_device *inode_dax(struct inode *inode);
struct inode *dax_inode(struct dax_device *dax_dev);
#endif /* __DAX_H__ */
/*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#ifndef __DEVICE_DAX_H__
#define __DEVICE_DAX_H__
struct device;
struct dev_dax;
struct resource;
struct dax_region;
void dax_region_put(struct dax_region *dax_region);
struct dax_region *alloc_dax_region(struct device *parent,
int region_id, struct resource *res, unsigned int align,
void *addr, unsigned long flags);
struct dev_dax *devm_create_dev_dax(struct dax_region *dax_region,
int id, struct resource *res, int count);
#endif /* __DEVICE_DAX_H__ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016-2019 Intel Corporation. All rights reserved. */
#include <linux/memremap.h>
#include <linux/pagemap.h>
#include <linux/memory.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/pfn_t.h>
#include <linux/slab.h>
#include <linux/dax.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include "dax-private.h"
#include "bus.h"
int dev_dax_kmem_probe(struct device *dev)
{
struct dev_dax *dev_dax = to_dev_dax(dev);
struct resource *res = &dev_dax->region->res;
resource_size_t kmem_start;
resource_size_t kmem_size;
resource_size_t kmem_end;
struct resource *new_res;
int numa_node;
int rc;
/*
* Ensure good NUMA information for the persistent memory.
* Without this check, there is a risk that slow memory
* could be mixed in a node with faster memory, causing
* unavoidable performance issues.
*/
numa_node = dev_dax->target_node;
if (numa_node < 0) {
dev_warn(dev, "rejecting DAX region %pR with invalid node: %d\n",
res, numa_node);
return -EINVAL;
}
/* Hotplug starting at the beginning of the next block: */
kmem_start = ALIGN(res->start, memory_block_size_bytes());
kmem_size = resource_size(res);
/* Adjust the size down to compensate for moving up kmem_start: */
kmem_size -= kmem_start - res->start;
/* Align the size down to cover only complete blocks: */
kmem_size &= ~(memory_block_size_bytes() - 1);
kmem_end = kmem_start + kmem_size;
/* Region is permanently reserved. Hot-remove not yet implemented. */
new_res = request_mem_region(kmem_start, kmem_size, dev_name(dev));
if (!new_res) {
dev_warn(dev, "could not reserve region [%pa-%pa]\n",
&kmem_start, &kmem_end);
return -EBUSY;
}
/*
* Set flags appropriate for System RAM. Leave ..._BUSY clear
* so that add_memory() can add a child resource. Do not
* inherit flags from the parent since it may set new flags
* unknown to us that will break add_memory() below.
*/
new_res->flags = IORESOURCE_SYSTEM_RAM;
new_res->name = dev_name(dev);
rc = add_memory(numa_node, new_res->start, resource_size(new_res));
if (rc)
return rc;
return 0;
}
static int dev_dax_kmem_remove(struct device *dev)
{
/*
* Purposely leak the request_mem_region() for the device-dax
* range and return '0' to ->remove() attempts. The removal of
* the device from the driver always succeeds, but the region
* is permanently pinned as reserved by the unreleased
* request_mem_region().
*/
return 0;
}
static struct dax_device_driver device_dax_kmem_driver = {
.drv = {
.probe = dev_dax_kmem_probe,
.remove = dev_dax_kmem_remove,
},
};
static int __init dax_kmem_init(void)
{
return dax_driver_register(&device_dax_kmem_driver);
}
static void __exit dax_kmem_exit(void)
{
dax_driver_unregister(&device_dax_kmem_driver);
}
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
module_init(dax_kmem_init);
module_exit(dax_kmem_exit);
MODULE_ALIAS_DAX_DEVICE(0);
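
The alignment arithmetic in dev_dax_kmem_probe() above — round the range start up to a memory-block boundary, shrink the size to compensate, then mask the size down to whole blocks — can be sketched in isolation. The 128MiB block size is an assumption for illustration (x86_64's common default); the kernel queries memory_block_size_bytes() at runtime, and the helper name here is hypothetical:

```python
def kmem_aligned_range(start: int, size: int, block: int) -> tuple:
    """Mimic the hotplug range fixup in dev_dax_kmem_probe().

    Memory hotplug operates on whole memory blocks, so partial
    blocks at either end of the device-dax range are dropped.
    """
    kmem_start = (start + block - 1) & ~(block - 1)  # ALIGN(start, block)
    kmem_size = size - (kmem_start - start)          # compensate for moving start up
    kmem_size &= ~(block - 1)                        # cover only complete blocks
    return kmem_start, kmem_size

BLOCK = 128 << 20  # assume 128MiB memory blocks for this sketch

# A 256MiB range starting 2MiB past a block boundary loses the partial
# blocks at both ends, leaving a single whole 128MiB block.
start, size = kmem_aligned_range(0x1_0020_0000, 256 << 20, BLOCK)
assert (start, size) == (0x1_0800_0000, 128 << 20)
```

An already block-aligned range passes through unchanged, which is why well-configured namespaces waste no capacity when handed to kmem.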
/*
* Copyright(c) 2016 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
#include <linux/percpu-refcount.h>
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include "../nvdimm/pfn.h"
#include "../nvdimm/nd.h"
#include "device-dax.h"
struct dax_pmem {
struct device *dev;
struct percpu_ref ref;
struct dev_pagemap pgmap;
struct completion cmp;
};
static struct dax_pmem *to_dax_pmem(struct percpu_ref *ref)
{
return container_of(ref, struct dax_pmem, ref);
}
static void dax_pmem_percpu_release(struct percpu_ref *ref)
{
struct dax_pmem *dax_pmem = to_dax_pmem(ref);
dev_dbg(dax_pmem->dev, "trace\n");
complete(&dax_pmem->cmp);
}
static void dax_pmem_percpu_exit(void *data)
{
struct percpu_ref *ref = data;
struct dax_pmem *dax_pmem = to_dax_pmem(ref);
dev_dbg(dax_pmem->dev, "trace\n");
wait_for_completion(&dax_pmem->cmp);
percpu_ref_exit(ref);
}
static void dax_pmem_percpu_kill(struct percpu_ref *ref)
{
struct dax_pmem *dax_pmem = to_dax_pmem(ref);
dev_dbg(dax_pmem->dev, "trace\n");
percpu_ref_kill(ref);
}
static int dax_pmem_probe(struct device *dev)
{
void *addr;
struct resource res;
int rc, id, region_id;
struct nd_pfn_sb *pfn_sb;
struct dev_dax *dev_dax;
struct dax_pmem *dax_pmem;
struct nd_namespace_io *nsio;
struct dax_region *dax_region;
struct nd_namespace_common *ndns;
struct nd_dax *nd_dax = to_nd_dax(dev);
struct nd_pfn *nd_pfn = &nd_dax->nd_pfn;
ndns = nvdimm_namespace_common_probe(dev);
if (IS_ERR(ndns))
return PTR_ERR(ndns);
nsio = to_nd_namespace_io(&ndns->dev);
dax_pmem = devm_kzalloc(dev, sizeof(*dax_pmem), GFP_KERNEL);
if (!dax_pmem)
return -ENOMEM;
/* parse the 'pfn' info block via ->rw_bytes */
rc = devm_nsio_enable(dev, nsio);
if (rc)
return rc;
rc = nvdimm_setup_pfn(nd_pfn, &dax_pmem->pgmap);
if (rc)
return rc;
devm_nsio_disable(dev, nsio);
pfn_sb = nd_pfn->pfn_sb;
if (!devm_request_mem_region(dev, nsio->res.start,
resource_size(&nsio->res),
dev_name(&ndns->dev))) {
dev_warn(dev, "could not reserve region %pR\n", &nsio->res);
return -EBUSY;
}
dax_pmem->dev = dev;
init_completion(&dax_pmem->cmp);
rc = percpu_ref_init(&dax_pmem->ref, dax_pmem_percpu_release, 0,
GFP_KERNEL);
if (rc)
return rc;
rc = devm_add_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
if (rc) {
percpu_ref_exit(&dax_pmem->ref);
return rc;
}
dax_pmem->pgmap.ref = &dax_pmem->ref;
dax_pmem->pgmap.kill = dax_pmem_percpu_kill;
addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
if (IS_ERR(addr))
return PTR_ERR(addr);
/* adjust the dax_region resource to the start of data */
memcpy(&res, &dax_pmem->pgmap.res, sizeof(res));
res.start += le64_to_cpu(pfn_sb->dataoff);
rc = sscanf(dev_name(&ndns->dev), "namespace%d.%d", &region_id, &id);
if (rc != 2)
return -EINVAL;
dax_region = alloc_dax_region(dev, region_id, &res,
le32_to_cpu(pfn_sb->align), addr, PFN_DEV|PFN_MAP);
if (!dax_region)
return -ENOMEM;
/* TODO: support for subdividing a dax region... */
dev_dax = devm_create_dev_dax(dax_region, id, &res, 1);
/* child dev_dax instances now own the lifetime of the dax_region */
dax_region_put(dax_region);
return PTR_ERR_OR_ZERO(dev_dax);
}
static struct nd_device_driver dax_pmem_driver = {
.probe = dax_pmem_probe,
.drv = {
.name = "dax_pmem",
},
.type = ND_DRIVER_DAX_PMEM,
};
module_nd_driver(dax_pmem_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DAX_PMEM);
obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
obj-$(CONFIG_DEV_DAX_PMEM_COMPAT) += dax_pmem_compat.o
dax_pmem-y := pmem.o
dax_pmem_core-y := core.o
dax_pmem_compat-y := compat.o
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#include <linux/percpu-refcount.h>
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include <linux/nd.h>
#include "../bus.h"
/* we need the private definitions to implement compat support */
#include "../dax-private.h"
static int dax_pmem_compat_probe(struct device *dev)
{
struct dev_dax *dev_dax = __dax_pmem_probe(dev, DEV_DAX_CLASS);
int rc;
if (IS_ERR(dev_dax))
return PTR_ERR(dev_dax);
if (!devres_open_group(&dev_dax->dev, dev_dax, GFP_KERNEL))
return -ENOMEM;
device_lock(&dev_dax->dev);
rc = dev_dax_probe(&dev_dax->dev);
device_unlock(&dev_dax->dev);
devres_close_group(&dev_dax->dev, dev_dax);
if (rc)
devres_release_group(&dev_dax->dev, dev_dax);
return rc;
}
static int dax_pmem_compat_release(struct device *dev, void *data)
{
device_lock(dev);
devres_release_group(dev, to_dev_dax(dev));
device_unlock(dev);
return 0;
}
static int dax_pmem_compat_remove(struct device *dev)
{
device_for_each_child(dev, NULL, dax_pmem_compat_release);
return 0;
}
static struct nd_device_driver dax_pmem_compat_driver = {
.probe = dax_pmem_compat_probe,
.remove = dax_pmem_compat_remove,
.drv = {
.name = "dax_pmem_compat",
},
.type = ND_DRIVER_DAX_PMEM,
};
static int __init dax_pmem_compat_init(void)
{
return nd_driver_register(&dax_pmem_compat_driver);
}
module_init(dax_pmem_compat_init);
static void __exit dax_pmem_compat_exit(void)
{
driver_unregister(&dax_pmem_compat_driver.drv);
}
module_exit(dax_pmem_compat_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DAX_PMEM);
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include "../../nvdimm/pfn.h"
#include "../../nvdimm/nd.h"
#include "../bus.h"
struct dev_dax *__dax_pmem_probe(struct device *dev, enum dev_dax_subsys subsys)
{
struct resource res;
int rc, id, region_id;
resource_size_t offset;
struct nd_pfn_sb *pfn_sb;
struct dev_dax *dev_dax;
struct nd_namespace_io *nsio;
struct dax_region *dax_region;
struct dev_pagemap pgmap = { 0 };
struct nd_namespace_common *ndns;
struct nd_dax *nd_dax = to_nd_dax(dev);
struct nd_pfn *nd_pfn = &nd_dax->nd_pfn;
struct nd_region *nd_region = to_nd_region(dev->parent);
ndns = nvdimm_namespace_common_probe(dev);
if (IS_ERR(ndns))
return ERR_CAST(ndns);
nsio = to_nd_namespace_io(&ndns->dev);
/* parse the 'pfn' info block via ->rw_bytes */
rc = devm_nsio_enable(dev, nsio);
if (rc)
return ERR_PTR(rc);
rc = nvdimm_setup_pfn(nd_pfn, &pgmap);
if (rc)
return ERR_PTR(rc);
devm_nsio_disable(dev, nsio);
/* reserve the metadata area, device-dax will reserve the data */
pfn_sb = nd_pfn->pfn_sb;
offset = le64_to_cpu(pfn_sb->dataoff);
if (!devm_request_mem_region(dev, nsio->res.start, offset,
dev_name(&ndns->dev))) {
dev_warn(dev, "could not reserve metadata\n");
return ERR_PTR(-EBUSY);
}
rc = sscanf(dev_name(&ndns->dev), "namespace%d.%d", &region_id, &id);
if (rc != 2)
return ERR_PTR(-EINVAL);
/* adjust the dax_region resource to the start of data */
memcpy(&res, &pgmap.res, sizeof(res));
res.start += offset;
dax_region = alloc_dax_region(dev, region_id, &res,
nd_region->target_node, le32_to_cpu(pfn_sb->align),
PFN_DEV|PFN_MAP);
if (!dax_region)
return ERR_PTR(-ENOMEM);
dev_dax = __devm_create_dev_dax(dax_region, id, &pgmap, subsys);
/* child dev_dax instances now own the lifetime of the dax_region */
dax_region_put(dax_region);
return dev_dax;
}
EXPORT_SYMBOL_GPL(__dax_pmem_probe);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#include <linux/percpu-refcount.h>
#include <linux/memremap.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include <linux/nd.h>
#include "../bus.h"
static int dax_pmem_probe(struct device *dev)
{
return PTR_ERR_OR_ZERO(__dax_pmem_probe(dev, DEV_DAX_BUS));
}
static struct nd_device_driver dax_pmem_driver = {
.probe = dax_pmem_probe,
.drv = {
.name = "dax_pmem",
},
.type = ND_DRIVER_DAX_PMEM,
};
static int __init dax_pmem_init(void)
{
return nd_driver_register(&dax_pmem_driver);
}
module_init(dax_pmem_init);
static void __exit dax_pmem_exit(void)
{
driver_unregister(&dax_pmem_driver.drv);
}
module_exit(dax_pmem_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
#if !IS_ENABLED(CONFIG_DEV_DAX_PMEM_COMPAT)
/* For compat builds, don't load this module by default */
MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DAX_PMEM);
#endif
@@ -22,6 +22,7 @@
 #include <linux/uio.h>
 #include <linux/dax.h>
 #include <linux/fs.h>
+#include "dax-private.h"

 static dev_t dax_devt;
 DEFINE_STATIC_SRCU(dax_srcu);
@@ -383,11 +384,15 @@ void kill_dax(struct dax_device *dax_dev)
 	spin_lock(&dax_host_lock);
 	hlist_del_init(&dax_dev->list);
 	spin_unlock(&dax_host_lock);
-
-	dax_dev->private = NULL;
 }
 EXPORT_SYMBOL_GPL(kill_dax);

+void run_dax(struct dax_device *dax_dev)
+{
+	set_bit(DAXDEV_ALIVE, &dax_dev->flags);
+}
+EXPORT_SYMBOL_GPL(run_dax);
+
 static struct inode *dax_alloc_inode(struct super_block *sb)
 {
 	struct dax_device *dax_dev;
@@ -602,6 +607,8 @@ EXPORT_SYMBOL_GPL(dax_inode);
 void *dax_get_private(struct dax_device *dax_dev)
 {
+	if (!test_bit(DAXDEV_ALIVE, &dax_dev->flags))
+		return NULL;
 	return dax_dev->private;
 }
 EXPORT_SYMBOL_GPL(dax_get_private);
@@ -615,7 +622,7 @@ static void init_once(void *_dax_dev)
 	inode_init_once(inode);
 }

-static int __dax_fs_init(void)
+static int dax_fs_init(void)
 {
 	int rc;
@@ -647,35 +654,45 @@ static int __dax_fs_init(void)
 	return rc;
 }

-static void __dax_fs_exit(void)
+static void dax_fs_exit(void)
 {
 	kern_unmount(dax_mnt);
 	unregister_filesystem(&dax_fs_type);
 	kmem_cache_destroy(dax_cache);
 }

-static int __init dax_fs_init(void)
+static int __init dax_core_init(void)
 {
 	int rc;

-	rc = __dax_fs_init();
+	rc = dax_fs_init();
 	if (rc)
 		return rc;

 	rc = alloc_chrdev_region(&dax_devt, 0, MINORMASK+1, "dax");
 	if (rc)
-		__dax_fs_exit();
-	return rc;
+		goto err_chrdev;
+
+	rc = dax_bus_init();
+	if (rc)
+		goto err_bus;
+	return 0;
+
+err_bus:
+	unregister_chrdev_region(dax_devt, MINORMASK+1);
+err_chrdev:
+	dax_fs_exit();
+	return 0;
 }

-static void __exit dax_fs_exit(void)
+static void __exit dax_core_exit(void)
 {
 	unregister_chrdev_region(dax_devt, MINORMASK+1);
 	ida_destroy(&dax_minor_ida);
-	__dax_fs_exit();
+	dax_fs_exit();
 }

 MODULE_AUTHOR("Intel Corporation");
 MODULE_LICENSE("GPL v2");
-subsys_initcall(dax_fs_init);
-module_exit(dax_fs_exit);
+subsys_initcall(dax_core_init);
+module_exit(dax_core_exit);
@@ -47,6 +47,7 @@ static int e820_register_one(struct resource *res, void *data)
 	ndr_desc.res = res;
 	ndr_desc.attr_groups = e820_pmem_region_attribute_groups;
 	ndr_desc.numa_node = e820_range_to_nid(res->start);
+	ndr_desc.target_node = ndr_desc.numa_node;
 	set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
 	if (!nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc))
 		return -ENXIO;
@@ -153,7 +153,7 @@ struct nd_region {
 	u16 ndr_mappings;
 	u64 ndr_size;
 	u64 ndr_start;
-	int id, num_lanes, ro, numa_node;
+	int id, num_lanes, ro, numa_node, target_node;
 	void *provider_data;
 	struct kernfs_node *bb_state;
 	struct badblocks bb;
@@ -68,6 +68,7 @@ static int of_pmem_region_probe(struct platform_device *pdev)
 		memset(&ndr_desc, 0, sizeof(ndr_desc));
 		ndr_desc.attr_groups = region_attr_groups;
 		ndr_desc.numa_node = dev_to_node(&pdev->dev);
+		ndr_desc.target_node = ndr_desc.numa_node;
 		ndr_desc.res = &pdev->resource[i];
 		ndr_desc.of_node = np;
 		set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
@@ -1072,6 +1072,7 @@ static struct nd_region *nd_region_create(struct nvdimm_bus *nvdimm_bus,
 	nd_region->flags = ndr_desc->flags;
 	nd_region->ro = ro;
 	nd_region->numa_node = ndr_desc->numa_node;
+	nd_region->target_node = ndr_desc->target_node;
 	ida_init(&nd_region->ns_ida);
 	ida_init(&nd_region->btt_ida);
 	ida_init(&nd_region->pfn_ida);
...
@@ -400,12 +400,17 @@ extern bool acpi_osi_is_win8(void);
 #ifdef CONFIG_ACPI_NUMA
 int acpi_map_pxm_to_online_node(int pxm);
+int acpi_map_pxm_to_node(int pxm);
 int acpi_get_node(acpi_handle handle);
 #else
 static inline int acpi_map_pxm_to_online_node(int pxm)
 {
 	return 0;
 }
+static inline int acpi_map_pxm_to_node(int pxm)
+{
+	return 0;
+}
 static inline int acpi_get_node(acpi_handle handle)
 {
 	return 0;
...
@@ -130,6 +130,7 @@ struct nd_region_desc {
 	void *provider_data;
 	int num_lanes;
 	int numa_node;
+	int target_node;
 	unsigned long flags;
 	struct device_node *of_node;
 };
...
@@ -382,7 +382,7 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
 		int (*func)(struct resource *, void *))
 {
 	struct resource res;
-	int ret = -1;
+	int ret = -EINVAL;
 
 	while (start < end &&
 	       !find_next_iomem_res(start, end, flags, desc, first_lvl, &res)) {
@@ -452,6 +452,9 @@ int walk_mem_res(u64 start, u64 end, void *arg,
  * This function calls the @func callback against all memory ranges of type
  * System RAM which are marked as IORESOURCE_SYSTEM_RAM and IORESOUCE_BUSY.
  * It is to be used only for System RAM.
+ *
+ * This will find System RAM ranges that are children of top-level resources
+ * in addition to top-level System RAM resources.
  */
 int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
 		void *arg, int (*func)(unsigned long, unsigned long, void *))
@@ -460,14 +463,14 @@ int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
 	unsigned long flags;
 	struct resource res;
 	unsigned long pfn, end_pfn;
-	int ret = -1;
+	int ret = -EINVAL;
 
 	start = (u64) start_pfn << PAGE_SHIFT;
 	end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
 	flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
 	while (start < end &&
 	       !find_next_iomem_res(start, end, flags, IORES_DESC_NONE,
-				    true, &res)) {
+				    false, &res)) {
 		pfn = (res.start + PAGE_SIZE - 1) >> PAGE_SHIFT;
 		end_pfn = (res.end + 1) >> PAGE_SHIFT;
 		if (end_pfn > pfn)
@@ -1128,6 +1131,15 @@ struct resource * __request_region(struct resource *parent,
 		conflict = __request_resource(parent, res);
 		if (!conflict)
 			break;
+		/*
+		 * mm/hmm.c reserves physical addresses which then
+		 * become unavailable to other users.  Conflicts are
+		 * not expected.  Warn to aid debugging if encountered.
+		 */
+		if (conflict->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY) {
+			pr_warn("Unaddressable device %s %pR conflicts with %pR",
+				conflict->name, conflict, res);
+		}
 		if (conflict != parent) {
 			if (!(conflict->flags & IORESOURCE_BUSY)) {
 				parent = conflict;
...
@@ -101,28 +101,24 @@ u64 max_mem_size = U64_MAX;
 /* add this memory to iomem resource */
 static struct resource *register_memory_resource(u64 start, u64 size)
 {
-	struct resource *res, *conflict;
+	struct resource *res;
+	unsigned long flags =  IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+	char *resource_name = "System RAM";
+
 	if (start + size > max_mem_size)
 		return ERR_PTR(-E2BIG);
 
-	res = kzalloc(sizeof(struct resource), GFP_KERNEL);
-	if (!res)
-		return ERR_PTR(-ENOMEM);
-
-	res->name = "System RAM";
-	res->start = start;
-	res->end = start + size - 1;
-	res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
-	conflict =  request_resource_conflict(&iomem_resource, res);
-	if (conflict) {
-		if (conflict->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY) {
-			pr_debug("Device unaddressable memory block "
-				 "memory hotplug at %#010llx !\n",
-				 (unsigned long long)start);
-		}
-		pr_debug("System RAM resource %pR cannot be added\n", res);
-		kfree(res);
+	/*
+	 * Request ownership of the new memory range. This might be
+	 * a child of an existing resource that was present but
+	 * not marked as busy.
+	 */
+	res = __request_region(&iomem_resource, start, size,
+			       resource_name, flags);
+
+	if (!res) {
+		pr_debug("Unable to reserve System RAM region: %016llx->%016llx\n",
+				start, start + size);
 		return ERR_PTR(-EEXIST);
 	}
 	return res;
...
@@ -35,6 +35,8 @@ obj-$(CONFIG_DAX) += dax.o
 endif
 obj-$(CONFIG_DEV_DAX) += device_dax.o
 obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
+obj-$(CONFIG_DEV_DAX_PMEM_COMPAT) += dax_pmem_compat.o
 
 nfit-y := $(ACPI_SRC)/core.o
 nfit-y += $(ACPI_SRC)/intel.o
@@ -57,6 +59,7 @@ nd_e820-y := $(NVDIMM_SRC)/e820.o
 nd_e820-y += config_check.o
 
 dax-y := $(DAX_SRC)/super.o
+dax-y += $(DAX_SRC)/bus.o
 dax-y += config_check.o
 
 device_dax-y := $(DAX_SRC)/device.o
@@ -64,7 +67,9 @@ device_dax-y += dax-dev.o
 device_dax-y += device_dax_test.o
 device_dax-y += config_check.o
 
-dax_pmem-y := $(DAX_SRC)/pmem.o
+dax_pmem-y := $(DAX_SRC)/pmem/pmem.o
+dax_pmem_core-y := $(DAX_SRC)/pmem/core.o
+dax_pmem_compat-y := $(DAX_SRC)/pmem/compat.o
 dax_pmem-y += config_check.o
 
 libnvdimm-y := $(NVDIMM_SRC)/core.o
...
@@ -17,20 +17,11 @@
 phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
 		unsigned long size)
 {
-	struct resource *res;
+	struct resource *res = &dev_dax->region->res;
 	phys_addr_t addr;
-	int i;
 
-	for (i = 0; i < dev_dax->num_resources; i++) {
-		res = &dev_dax->res[i];
-		addr = pgoff * PAGE_SIZE + res->start;
-		if (addr >= res->start && addr <= res->end)
-			break;
-		pgoff -= PHYS_PFN(resource_size(res));
-	}
-
-	if (i < dev_dax->num_resources) {
-		res = &dev_dax->res[i];
+	addr = pgoff * PAGE_SIZE + res->start;
+	if (addr >= res->start && addr <= res->end) {
 		if (addr + size - 1 <= res->end) {
 			if (get_nfit_res(addr)) {
 				struct page *page;
@@ -44,6 +35,5 @@ phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
 			return addr;
 		}
 	}
-
 	return -1;
 }