nexedi / cython · Commits · 93c51d6e

Commit 93c51d6e authored Mar 18, 2018 by gabrieldemarmiesse
    Added the examples of the numpy tutorial.

Parent: d902b7cf
Showing 8 changed files with 326 additions and 37 deletions
docs/examples/userguide/convolve_fused_types.pyx    +72  -0
docs/examples/userguide/convolve_infer_types.pyx    +59  -0
docs/examples/userguide/convolve_memview.pyx        +60  -0
docs/examples/userguide/convolve_py.py              +42  -0
docs/examples/userguide/convolve_typed.pyx          +60  -0
docs/src/quickstart/build.rst                       +1   -0
docs/src/reference/compilation.rst                  +1   -1
docs/src/userguide/numpy_tutorial.rst               +31  -36
docs/examples/userguide/convolve_fused_types.pyx  0 → 100644
# cython: infer_types=True
import numpy as np
cimport cython

# "def" can type its arguments but not have a return type. The type of the
# arguments for a "def" function is checked at run-time when entering the
# function.
# We now need to fix a datatype for our arrays. I've used the variable
# DTYPE for this, which is assigned to the usual NumPy runtime
# type info object.
# The arrays f, g and h is typed as "np.ndarray" instances. The only effect
# this has is to a) insert checks that the function arguments really are
# NumPy arrays, and b) make some attribute access like f.shape[0] much
# more efficient. (In this example this doesn't matter though.)

ctypedef fused my_type:
    int
    double
    long


@cython.boundscheck(False)
@cython.wraparound(False)
cpdef naive_convolve_fused_types(my_type[:,:] f, my_type[:,:] g):
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")

    # The "cdef" keyword is also used within functions to type variables. It
    # can only be used at the top indentation level (there are non-trivial
    # problems with allowing them in other places, though we'd love to see
    # good and thought out proposals for it).
    #
    # For the indices, the "int" type is used. This corresponds to a C int,
    # other C types (like "unsigned int") could have been used instead.
    # Purists could use "Py_ssize_t" which is the proper Python type for
    # array indices.
    vmax = f.shape[0]
    wmax = f.shape[1]
    smax = g.shape[0]
    tmax = g.shape[1]
    smid = smax // 2
    tmid = tmax // 2
    xmax = vmax + 2 * smid
    ymax = wmax + 2 * tmid

    if my_type is int:
        dtype = np.intc
    elif my_type is double:
        dtype = np.double
    else:
        dtype = np.long

    h_np = np.zeros([xmax, ymax], dtype=dtype)
    cdef my_type[:,:] h = h_np

    # For the value variable, we want to use the same data type as is
    # stored in the array, so we use "DTYPE_t" as defined above.
    # NB! An important side-effect of this is that if "value" overflows its
    # datatype size, it will simply wrap around like in C, rather than raise
    # an error like in Python.
    cdef my_type value
    for x in range(xmax):
        for y in range(ymax):
            s_from = max(smid - x, -smid)
            s_to = min((xmax - x) - smid, smid + 1)
            t_from = max(tmid - y, -tmid)
            t_to = min((ymax - y) - tmid, tmid + 1)
            value = 0
            for s in range(s_from, s_to):
                for t in range(t_from, t_to):
                    v = x - smid + s
                    w = y - tmid + t
                    value += g[smid - s, tmid - t] * f[v, w]
            h[x, y] = value
    return h_np
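The ``ctypedef fused my_type`` block above makes one source function compile into three specializations (``int``, ``double`` and ``long``); when called from Python, Cython selects the specialization from the dtype of the memoryviews passed in, and the ``if my_type is int`` branch picks a matching NumPy dtype for the output. A minimal usage sketch, not part of this commit, assuming the module has been built and is importable as ``convolve_fused_types``:

    import numpy as np
    from convolve_fused_types import naive_convolve_fused_types

    f_i = np.ones((5, 5), dtype=np.intc)     # C int    -> "int" specialization
    g_i = np.ones((3, 3), dtype=np.intc)
    f_d = np.ones((5, 5), dtype=np.double)   # C double -> "double" specialization
    g_d = np.ones((3, 3), dtype=np.double)

    print(naive_convolve_fused_types(f_i, g_i).dtype)   # int32 (np.intc) on most platforms
    print(naive_convolve_fused_types(f_d, g_d).dtype)   # float64

Both arguments share the same fused type, so they must resolve to the same specialization; mixing an ``intc`` image with a ``double`` kernel is rejected with a "no matching signature" error.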
docs/examples/userguide/convolve_infer_types.pyx  0 → 100644
# cython: infer_types=True
import numpy as np
cimport cython

# "def" can type its arguments but not have a return type. The type of the
# arguments for a "def" function is checked at run-time when entering the
# function.
# We now need to fix a datatype for our arrays. I've used the variable
# DTYPE for this, which is assigned to the usual NumPy runtime
# type info object.
DTYPE = np.intc

# The arrays f, g and h is typed as "np.ndarray" instances. The only effect
# this has is to a) insert checks that the function arguments really are
# NumPy arrays, and b) make some attribute access like f.shape[0] much
# more efficient. (In this example this doesn't matter though.)


@cython.boundscheck(False)
@cython.wraparound(False)
def naive_convolve_infer_types(int[:,::1] f, int[:,::1] g):
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")

    # The "cdef" keyword is also used within functions to type variables. It
    # can only be used at the top indentation level (there are non-trivial
    # problems with allowing them in other places, though we'd love to see
    # good and thought out proposals for it).
    #
    # For the indices, the "int" type is used. This corresponds to a C int,
    # other C types (like "unsigned int") could have been used instead.
    # Purists could use "Py_ssize_t" which is the proper Python type for
    # array indices.
    vmax = f.shape[0]
    wmax = f.shape[1]
    smax = g.shape[0]
    tmax = g.shape[1]
    smid = smax // 2
    tmid = tmax // 2
    xmax = vmax + 2 * smid
    ymax = wmax + 2 * tmid

    h_np = np.zeros([xmax, ymax], dtype=DTYPE)
    cdef int[:,::1] h = h_np

    # For the value variable, we want to use the same data type as is
    # stored in the array, so we use "DTYPE_t" as defined above.
    # NB! An important side-effect of this is that if "value" overflows its
    # datatype size, it will simply wrap around like in C, rather than raise
    # an error like in Python.
    cdef int value
    for x in range(xmax):
        for y in range(ymax):
            s_from = max(smid - x, -smid)
            s_to = min((xmax - x) - smid, smid + 1)
            t_from = max(tmid - y, -tmid)
            t_to = min((ymax - y) - tmid, tmid + 1)
            value = 0
            for s in range(s_from, s_to):
                for t in range(t_from, t_to):
                    v = x - smid + s
                    w = y - tmid + t
                    value += g[smid - s, tmid - t] * f[v, w]
            h[x, y] = value
    return h_np
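Two things distinguish this variant: the ``# cython: infer_types=True`` directive lets Cython infer C types for the untyped locals, and the ``::1`` in ``int[:,::1]`` declares the buffers C-contiguous, so indexing can assume unit stride along the last axis. The practical consequence of ``::1`` is that non-contiguous inputs are rejected at call time. A small sketch, not part of this commit, assuming the module is built and importable as ``convolve_infer_types``:

    import numpy as np
    from convolve_infer_types import naive_convolve_infer_types

    f = np.ones((10, 10), dtype=np.intc)
    g = np.ones((3, 3), dtype=np.intc)
    naive_convolve_infer_types(f, g)        # fine: freshly allocated arrays are C-contiguous

    f_strided = f[:, ::2]                   # a strided view is not C-contiguous
    try:
        naive_convolve_infer_types(f_strided, g)
    except ValueError as exc:
        print("rejected:", exc)             # e.g. "ndarray is not C-contiguous"

    # Copying restores contiguity:
    naive_convolve_infer_types(np.ascontiguousarray(f_strided), g)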
docs/examples/userguide/convolve_memview.pyx  0 → 100644
import numpy as np

# "def" can type its arguments but not have a return type. The type of the
# arguments for a "def" function is checked at run-time when entering the
# function.
# We now need to fix a datatype for our arrays. I've used the variable
# DTYPE for this, which is assigned to the usual NumPy runtime
# type info object.
DTYPE = np.intc

# The arrays f, g and h is typed as "np.ndarray" instances. The only effect
# this has is to a) insert checks that the function arguments really are
# NumPy arrays, and b) make some attribute access like f.shape[0] much
# more efficient. (In this example this doesn't matter though.)


def naive_convolve_memview(int[:,:] f, int[:,:] g):
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")

    # The "cdef" keyword is also used within functions to type variables. It
    # can only be used at the top indentation level (there are non-trivial
    # problems with allowing them in other places, though we'd love to see
    # good and thought out proposals for it).
    #
    # For the indices, the "int" type is used. This corresponds to a C int,
    # other C types (like "unsigned int") could have been used instead.
    # Purists could use "Py_ssize_t" which is the proper Python type for
    # array indices.
    cdef int vmax = f.shape[0]
    cdef int wmax = f.shape[1]
    cdef int smax = g.shape[0]
    cdef int tmax = g.shape[1]
    cdef int smid = smax // 2
    cdef int tmid = tmax // 2
    cdef int xmax = vmax + 2 * smid
    cdef int ymax = wmax + 2 * tmid

    h_np = np.zeros([xmax, ymax], dtype=DTYPE)
    cdef int[:,:] h = h_np

    cdef int x, y, s, t, v, w

    # It is very important to type ALL your variables. You do not get any
    # warnings if not, only much slower code (they are implicitly typed as
    # Python objects).
    cdef int s_from, s_to, t_from, t_to

    # For the value variable, we want to use the same data type as is
    # stored in the array, so we use "DTYPE_t" as defined above.
    # NB! An important side-effect of this is that if "value" overflows its
    # datatype size, it will simply wrap around like in C, rather than raise
    # an error like in Python.
    cdef int value
    for x in range(xmax):
        for y in range(ymax):
            s_from = max(smid - x, -smid)
            s_to = min((xmax - x) - smid, smid + 1)
            t_from = max(tmid - y, -tmid)
            t_to = min((ymax - y) - tmid, tmid + 1)
            value = 0
            for s in range(s_from, s_to):
                for t in range(t_from, t_to):
                    v = x - smid + s
                    w = y - tmid + t
                    value += g[smid - s, tmid - t] * f[v, w]
            h[x, y] = value
    return h_np
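One quick way to try the memoryview example above without writing a build script is pyximport, which the tutorial diff further down also mentions. A sketch, not part of this commit; it assumes convolve_memview.pyx sits on the Python import path:

    import numpy as np
    import pyximport
    pyximport.install()                 # compiles .pyx files the first time they are imported

    import convolve_memview

    image = np.zeros((30, 30), dtype=np.intc)
    image[10:20, 10:20] = 1
    kernel = np.ones((3, 3), dtype=np.intc)

    result = convolve_memview.naive_convolve_memview(image, kernel)
    print(result.shape)                 # (32, 32): padded by the kernel radius on each side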
docs/examples/userguide/convolve_py.py  0 → 100644
from __future__ import division
import numpy as np


def naive_convolve_py(f, g):
    # f is an image and is indexed by (v, w)
    # g is a filter kernel and is indexed by (s, t),
    #   it needs odd dimensions
    # h is the output image and is indexed by (x, y),
    #   it is not cropped
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")
    # smid and tmid are number of pixels between the center pixel
    # and the edge, ie for a 5x5 filter they will be 2.
    #
    # The output size is calculated by adding smid, tmid to each
    # side of the dimensions of the input image.
    vmax = f.shape[0]
    wmax = f.shape[1]
    smax = g.shape[0]
    tmax = g.shape[1]
    smid = smax // 2
    tmid = tmax // 2
    xmax = vmax + 2 * smid
    ymax = wmax + 2 * tmid
    # Allocate result image.
    h = np.zeros([xmax, ymax], dtype=f.dtype)
    # Do convolution
    for x in range(xmax):
        for y in range(ymax):
            # Calculate pixel value for h at (x,y). Sum one component
            # for each pixel (s, t) of the filter g.
            s_from = max(smid - x, -smid)
            s_to = min((xmax - x) - smid, smid + 1)
            t_from = max(tmid - y, -tmid)
            t_to = min((ymax - y) - tmid, tmid + 1)
            value = 0
            for s in range(s_from, s_to):
                for t in range(t_from, t_to):
                    v = x - smid + s
                    w = y - tmid + t
                    value += g[smid - s, tmid - t] * f[v, w]
            h[x, y] = value
    return h
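A tiny worked example, not part of this commit, of what ``naive_convolve_py`` computes: a full 2-D convolution whose output is enlarged by the kernel radius on every side, with image pixels outside the input contributing nothing (zero padding). It assumes the file above is importable as ``convolve_py``.

    import numpy as np
    from convolve_py import naive_convolve_py

    f = np.ones((3, 3), dtype=np.intc)        # "image"
    g = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=np.intc)  # plus-shaped 3x3 "filter"

    h = naive_convolve_py(f, g)
    print(h.shape)    # (5, 5): vmax + 2*smid = 3 + 2 in each dimension
    print(h[2, 2])    # 5: the centre output pixel sees all five non-zero filter taps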
docs/examples/userguide/convolve_typed.pyx  0 → 100644
import numpy as np

# "def" can type its arguments but not have a return type. The type of the
# arguments for a "def" function is checked at run-time when entering the
# function.
# We now need to fix a datatype for our arrays. I've used the variable
# DTYPE for this, which is assigned to the usual NumPy runtime
# type info object.
DTYPE = np.intc

# The arrays f, g and h is typed as "np.ndarray" instances. The only effect
# this has is to a) insert checks that the function arguments really are
# NumPy arrays, and b) make some attribute access like f.shape[0] much
# more efficient. (In this example this doesn't matter though.)


def naive_convolve_types(f, g):
    if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
        raise ValueError("Only odd dimensions on filter supported")
    assert f.dtype == DTYPE and g.dtype == DTYPE

    # The "cdef" keyword is also used within functions to type variables. It
    # can only be used at the top indentation level (there are non-trivial
    # problems with allowing them in other places, though we'd love to see
    # good and thought out proposals for it).
    #
    # For the indices, the "int" type is used. This corresponds to a C int,
    # other C types (like "unsigned int") could have been used instead.
    # Purists could use "Py_ssize_t" which is the proper Python type for
    # array indices.
    cdef int vmax = f.shape[0]
    cdef int wmax = f.shape[1]
    cdef int smax = g.shape[0]
    cdef int tmax = g.shape[1]
    cdef int smid = smax // 2
    cdef int tmid = tmax // 2
    cdef int xmax = vmax + 2 * smid
    cdef int ymax = wmax + 2 * tmid

    h = np.zeros([xmax, ymax], dtype=DTYPE)

    cdef int x, y, s, t, v, w

    # It is very important to type ALL your variables. You do not get any
    # warnings if not, only much slower code (they are implicitly typed as
    # Python objects).
    cdef int s_from, s_to, t_from, t_to

    # For the value variable, we want to use the same data type as is
    # stored in the array, so we use "DTYPE_t" as defined above.
    # NB! An important side-effect of this is that if "value" overflows its
    # datatype size, it will simply wrap around like in C, rather than raise
    # an error like in Python.
    cdef int value
    for x in range(xmax):
        for y in range(ymax):
            s_from = max(smid - x, -smid)
            s_to = min((xmax - x) - smid, smid + 1)
            t_from = max(tmid - y, -tmid)
            t_to = min((ymax - y) - tmid, tmid + 1)
            value = 0
            for s in range(s_from, s_to):
                for t in range(t_from, t_to):
                    v = x - smid + s
                    w = y - tmid + t
                    value += g[smid - s, tmid - t] * f[v, w]
            h[x, y] = value
    return h
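A quick consistency check, not part of this commit and assuming both modules above are built and importable: the typed Cython version is a drop-in replacement for the pure-Python baseline and must return identical results; the typing only changes speed.

    import numpy as np
    from convolve_py import naive_convolve_py
    from convolve_typed import naive_convolve_types

    rng = np.random.RandomState(0)
    f = rng.randint(0, 100, size=(30, 30)).astype(np.intc)   # DTYPE is np.intc
    g = rng.randint(0, 10, size=(5, 5)).astype(np.intc)

    assert np.array_equal(naive_convolve_py(f, g), naive_convolve_types(f, g))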
docs/src/quickstart/build.rst

@@ -56,6 +56,7 @@ To build, run ``python setup.py build_ext --inplace``. Then simply
 start a Python session and do ``from hello import say_hello_to`` and
 use the imported function as you see fit.

+.. _jupyter-notebook:

 Using the Jupyter notebook
 --------------------------
docs/src/reference/compilation.rst

@@ -59,7 +59,7 @@ that CPython generates for disambiguation, such as
 ``yourmod.cpython-35m-x86_64-linux-gnu.so`` on a regular 64bit Linux installation
 of CPython 3.5.

+.. _compiling-distutils:

 Compiling with ``distutils``
 ============================
docs/src/userguide/numpy_tutorial.rst

@@ -6,35 +6,29 @@
 Cython for NumPy users
 **************************

 .. NOTE:: Cython 0.16 introduced typed memoryviews as a successor to the NumPy
           integration described here. They are easier to use than the buffer syntax
           below, have less overhead, and can be passed around without requiring the GIL.
           They should be preferred to the syntax presented in this page.
           See :ref:`Typed Memoryviews <memoryviews>`.

 This tutorial is aimed at NumPy users who have no experience with Cython at
 all. If you have some knowledge of Cython you may want to skip to the
-''Efficient indexing'' section which explains the new improvements made in
-summer 2008.
+''Efficient indexing'' section.

 The main scenario considered is NumPy end-use rather than NumPy/SciPy
 development. The reason is that Cython is not (yet) able to support functions
-that are generic with respect to datatype and the number of dimensions in a
+that are generic with respect to the number of dimensions in a
 high-level fashion. This restriction is much more severe for SciPy development
 than more specific, "end-user" functions. See the last section for more
 information on this.

 The style of this tutorial will not fit everybody, so you can also consider:

-* Robert Bradshaw's `slides on cython for SciPy2008
-  <http://wiki.sagemath.org/scipy08?action=AttachFile&do=get&target=scipy-cython.tgz>`_
-  (a higher-level and quicker introduction)
-* Basic Cython documentation (see `Cython front page <http://cython.org>`_).
-* ``[:enhancements/buffer:Spec for the efficient indexing]``
+* Kurt Smith's `video tutorial of Cython at SciPy 2015
+  <https://www.youtube.com/watch?v=gMvkiQ-gOW8&t=4730s&ab_channel=Enthought>`_.
+  It's longuer but some readers like watching talks more than reading.
+  The slides and notebooks of this talk are `on github
+  <https://github.com/kwmsmith/scipy-2015-cython-tutorial>`_.
+* Basic Cython documentation (see `Cython front page
+  <https://cython.readthedocs.io/en/latest/index.html>`_).

 Cython at a glance
 ==================

 Cython is a compiler which compiles Python-like code files to C code. Still,
 ''Cython is not a Python to C translator''. That is, it doesn't take your full

@@ -52,9 +46,9 @@ This has two important consequences:
    of C libraries. When writing code in Cython you can call into C code as
    easily as into Python code.

-Some Python constructs are not yet supported, though making Cython compile all
-Python code is a stated goal (among the more important omissions are inner
-functions and generator functions).
+Very few Python constructs are not yet supported, though making Cython compile all
+Python code is a stated goal, you can see the differences with Python in
+:ref:`limitations <cython-limitations>`.

 Your Cython environment
 ========================

@@ -70,31 +64,36 @@ However there are several options to automate these steps:
 1. The `SAGE <http://sagemath.org>`_ mathematics software system provides
    excellent support for using Cython and NumPy from an interactive command
-   line (like IPython) or through a notebook interface (like
+   line or through a notebook interface (like
    Maple/Mathematica). See `this documentation
-   <http://www.sagemath.org/doc/prog/node40.html>`_.
-2. A version of `pyximport <http://www.prescod.net/pyximport/>`_ is shipped
-   with Cython, so that you can import pyx-files dynamically into Python and
+   <http://doc.sagemath.org/html/en/developer/coding_in_cython.html>`_.
+2. Cython can be used as an extension within a Jupyter notebook,
+   making it easy to compile and use Cython code with just a ``%%cython``
+   at the top of a cell. For more information see
+   :ref:`Using the Jupyter Notebook <jupyter-notebook>`.
+3. A version of pyximport is shipped with Cython,
+   so that you can import pyx-files dynamically into Python and
    have them compiled automatically (See :ref:`pyximport`).
-3. Cython supports distutils so that you can very easily create build scripts
+4. Cython supports distutils so that you can very easily create build scripts
    which automate the process, this is the preferred method for full programs.
-4. Manual compilation (see below)
+   See :ref:`Compiling with distutils <compiling-distutils>`.
+5. Manual compilation (see below)
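To illustrate the distutils option in item 4 of the new list above, a minimal build script for the example modules added by this commit might look like the following sketch (not part of the commit; the glob pattern and path are assumptions):

    # setup.py -- build every convolve_*.pyx example in place with
    #   python setup.py build_ext --inplace
    from distutils.core import setup
    from Cython.Build import cythonize

    setup(
        ext_modules=cythonize("docs/examples/userguide/convolve_*.pyx"),
    )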
 .. Note::
     If using another interactive command line environment than SAGE, like
-    IPython or Python itself, it is important that you restart the process
+    IPython, Jupyter Notebook or Python itself, it is important that you restart the process
     when you recompile the module. It is not enough to issue an "import"
     statement again.

 Installation
 =============

-Unless you are used to some other automatic method:
-`download Cython <http://cython.org/#download>`_ (0.9.8.1.1 or later), unpack it,
-and run the usual ```python setup.py install``. This will install a
-``cython`` executable on your system. It is also possible to use Cython from
-the source directory without installing (simply launch :file:`cython.py` in the
-root directory).
+If you already have a C compiler, just do::
+
+    pip install Cython
+
+otherwise, see :ref:`the installation page <install>`.

 As of this writing SAGE comes with an older release of Cython than required
 for this tutorial. So if using SAGE you should download the newest Cython and

@@ -125,10 +124,6 @@ like::

     $ gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I/usr/include/python2.7 -o yourmod.so yourmod.c

-``gcc`` should have access to the NumPy C header files so if they are not
-installed at :file:`/usr/include/numpy` or similar you may need to pass another
-option for those.
-
 This creates :file:`yourmod.so` in the same directory, which is importable by
 Python by using a normal ``import yourmod`` statement.
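The hunk above drops the note about locating the NumPy C headers for the manual ``gcc`` route. For extension modules that do need those headers (the examples in this commit only ``import numpy`` at the Python level), the include directory can be queried from NumPy itself rather than guessed; a small sketch, not part of the commit:

    import numpy as np

    # Prints something like .../site-packages/numpy/core/include; pass it to the
    # compiler as -I<path>, or as include_dirs=[np.get_include()] in a setup script.
    print(np.get_include())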