Commit bcb676e0 authored by scoder's avatar scoder Committed by GitHub

Merge branch 'master' into master

parents 7d6b784f de3436da
language: python
dist: trusty
sudo: false
cache:
pip: true
directories:
......@@ -33,8 +34,10 @@ env:
branches:
only:
- master
- release
install:
- python -c 'import sys; print("Python %s" % (sys.version,))'
- CFLAGS="-O2 -ggdb -Wall -Wextra $(python -c 'import sys; print("-fno-strict-aliasing" if sys.version_info[0] == 2 else "")')" python setup.py build
before_script: ccache -s
......
......@@ -2,8 +2,52 @@
Cython Changelog
================
0.27 (2017-??-??)
=================
Features added
--------------
* Extension module initialisation follows PEP 489 in CPython 3.5+, which resolves
several differences with regard to normal Python modules. This makes the global
names ``__file__`` and ``__path__`` correctly available to module level code and
improves the support for module-level relative imports.
https://www.python.org/dev/peps/pep-0489/
(Github issues #1715, #1753)
* Asynchronous generators (PEP 525) and asynchronous comprehensions (PEP 530)
have been implemented. Note that async generators require finalisation support
in order to allow for asynchronous operations during cleanup, which is only
available in CPython 3.6+. All other functionality has been backported as usual.
https://www.python.org/dev/peps/pep-0525/
https://www.python.org/dev/peps/pep-0530/
* Annotations are now included in the signature docstring generated by the
``embedsignature`` directive. Patch by Lisandro Dalcin (Github issue #1781).
* ``len(memoryview)`` can be used in nogil sections to get the size of the
first dimension of a memory view (``shape[0]``). (Github issue #1733)
* C++ classes can now contain (properly refcounted) Python objects.
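To make the asynchronous generator/comprehension entry above concrete, here is a minimal sketch of a ``.pyx`` module using both features; the module layout, function names and delay value are purely illustrative and not part of this release.

    import asyncio

    async def ticker(delay, to):
        # PEP 525: 'yield' inside an 'async def' makes an asynchronous generator
        for i in range(to):
            yield i
            await asyncio.sleep(delay)

    async def collect():
        # PEP 530: an asynchronous comprehension consuming the generator above
        return [value async for value in ticker(0.01, 5)]

    # e.g. asyncio.get_event_loop().run_until_complete(collect()) -> [0, 1, 2, 3, 4]

Full finalisation of pending async generators (the cleanup path mentioned above) relies on the CPython 3.6+ hooks; the rest works on the usual range of supported Pythons.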
Bugs fixed
----------
* Loops over ``range(enum)`` were not converted into C for-loops.
* Error positions of names (e.g. variables) were incorrectly reported after the
name instead of at its beginning.
* Compile time ``DEF`` assignments were evaluated even when they occur inside of
falsy ``IF`` blocks. (Github issue #1796)
* abs(signed int) now returns a signed rather than unsigned int.
(Github issue #1837)
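As a companion to the ``DEF``/``IF`` fix above (Github issue #1796), a small sketch of the pattern it concerns; the constant names are hypothetical. After the fix, only the branch selected at compile time has its ``DEF`` assignments evaluated.

    DEF USE_FAST_PATH = False

    IF USE_FAST_PATH:
        DEF BUFFER_SIZE = 1 << 20   # previously still evaluated despite the falsy IF
    ELSE:
        DEF BUFFER_SIZE = 1 << 10

    def buffer_size():
        # the compile-time constant is substituted as a literal here
        return BUFFER_SIZE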
0.26.1 (2017-??-??)
===================
Features added
......@@ -12,8 +56,36 @@ Features added
Bugs fixed
----------
* ``cython.view.array`` was missing ``.__len__()``.
* Extension types with a ``.pxd`` override for their ``__releasebuffer__`` slot
(e.g. as provided by Cython for the Python ``array.array`` type) could leak
a reference to the buffer owner on release, thus not freeing the memory.
(Github issue #1638)
* Invalid C code in generators (declaration after code).
(Github issue #1801)
* Arithmetic operations on ``const`` integer variables could generate invalid code.
(Github issue #1798).
0.26 (2017-07-xx)
* Local variables with names of special Python methods failed to compile inside of
closures (Github issue #1797).
* Problem with indirect Emacs buffers in cython-mode.
Patch by Martin Albrecht (Github issue #1743).
* Extension types named ``result`` or ``PickleError`` generated invalid unpickling code.
Patch by Jason Madden (Github issue #1786).
* Bazel integration failed to compile ``.py`` files.
Patch by Guro Bokum (Github issue #1784).
* Some include directories and dependencies were referenced with their absolute paths
in the generated files despite lying within the project directory.
0.26 (2017-07-19)
=================
Features added
......@@ -45,7 +117,7 @@ Features added
* The overhead of calling fused types generic functions was reduced.
* "cdef extern" include files are now also searched relative to the current file.
Patch by jdemeyer (Github issue #1654).
Patch by Jeroen Demeyer (Github issue #1654).
* Optional optimization for re-acquiring the GIL, controlled by the
``fast_gil`` directive.
......@@ -71,9 +143,12 @@ Bugs fixed
Original patch by Jelmer Vernooij (Github issue #1565).
* Decorators of cdef class methods could be executed twice.
Patch by jdemeyer (Github issue #1724).
Patch by Jeroen Demeyer (Github issue #1724).
* Dict iteration using the Py2 ``iter*`` methods failed in PyPy3.
Patch by Armin Rigo (Github issue #1631).
* Several warnings in the generated coder are now suppressed.
* Several warnings in the generated code are now suppressed.
Other changes
-------------
......@@ -87,6 +162,7 @@ Other changes
import is now a compile-time (rather than runtime) error.
* Do not use special dll linkage for "cdef public" functions.
Patch by Jeroen Demeyer (Github issue #1687).
* cdef/cpdef methods must match their declarations. See Github Issue #1732.
This is now a warning and will be an error in future releases.
......
......@@ -76,6 +76,15 @@ else:
basestring = str
def _make_relative(file_paths, base=None):
if not base:
base = os.getcwd()
if base[-1] != os.path.sep:
base += os.path.sep
return [_relpath(path, base) if path.startswith(base) else path
for path in file_paths]
def extended_iglob(pattern):
if '{' in pattern:
m = re.match('(.*){([^}]+)}(.*)', pattern)
......@@ -617,8 +626,10 @@ class DependencyTree(object):
info = self.parse_dependencies(filename)[3]
kwds = info.values
cimports, externs, incdirs = self.cimports_externs_incdirs(filename)
basedir = os.getcwd()
# Add dependencies on "cdef extern from ..." files
if externs:
externs = _make_relative(externs, basedir)
if 'depends' in kwds:
kwds['depends'] = list(set(kwds['depends']).union(externs))
else:
......@@ -627,7 +638,7 @@ class DependencyTree(object):
# "cdef extern from ..." files
if incdirs:
include_dirs = list(kwds.get('include_dirs', []))
for inc in incdirs:
for inc in _make_relative(incdirs, basedir):
if inc not in include_dirs:
include_dirs.append(inc)
kwds['include_dirs'] = include_dirs
......
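For context on the hunk above, a standalone sketch of what ``_make_relative`` does, assuming ``_relpath`` is ``os.path.relpath`` (as in ``Cython.Build.Dependencies``); paths outside the base directory are passed through unchanged.

    import os

    def make_relative(file_paths, base=None):
        # mirror of Dependencies._make_relative, for illustration only
        if not base:
            base = os.getcwd()
        if base[-1] != os.path.sep:
            base += os.path.sep
        return [os.path.relpath(path, base) if path.startswith(base) else path
                for path in file_paths]

    # with base '/home/user/project' (hypothetical layout):
    #   '/home/user/project/include/foo.h'  ->  'include/foo.h'
    #   '/usr/include/bar.h'                ->  '/usr/include/bar.h'   (left absolute)

This is what keeps project-internal include directories and dependencies from being written into the generated files as absolute paths, per the 0.26.1 changelog entry.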
......@@ -42,11 +42,17 @@ if sys.platform == 'win32':
from unittest import skip as skip_win32
except ImportError:
# poor dev's silent @unittest.skip()
def skip_win32(f):
return lambda self: None
def skip_win32(dummy):
def _skip_win32(func):
return None
return _skip_win32
else:
def skip_win32(f):
return f
def skip_win32(dummy):
def _skip_win32(func):
def wrapper(*args, **kwargs):
func(*args, **kwargs)
return wrapper
return _skip_win32
class TestIPythonMagic(CythonTest):
......@@ -60,7 +66,7 @@ class TestIPythonMagic(CythonTest):
result = ip.run_cell_magic('cython_inline', '', 'return a+b')
self.assertEqual(result, 30)
@skip_win32
@skip_win32('Skip on Windows')
def test_cython_pyximport(self):
module_name = '_test_cython_pyximport'
ip.run_cell_magic('cython_pyximport', module_name, code)
......@@ -111,7 +117,7 @@ class TestIPythonMagic(CythonTest):
self.assertEqual(ip.user_ns['g'], 2 // 10)
self.assertEqual(ip.user_ns['h'], 2 // 10)
@skip_win32
@skip_win32('Skip on Windows')
def test_extlibs(self):
code = py3compat.str_to_unicode("""
from libc.math cimport sin
......
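The test changes above turn ``skip_win32`` into a ``unittest.skip``-style factory that accepts a reason string. A self-contained sketch of the same pattern (decorator and test names are hypothetical, not taken from the patch):

    import sys
    import unittest

    def skip_on(platform_prefix, reason):
        # return a decorator that skips the test when sys.platform matches
        def decorator(func):
            if sys.platform.startswith(platform_prefix):
                return unittest.skip(reason)(func)
            return func
        return decorator

    class ExampleTests(unittest.TestCase):
        @skip_on('win32', 'Skip on Windows')
        def test_something(self):
            self.assertTrue(True)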
......@@ -519,3 +519,298 @@ class PxdWriter(DeclarationWriter):
def visit_StatNode(self, node):
pass
class ExpressionWriter(TreeVisitor):
def __init__(self, result=None):
super(ExpressionWriter, self).__init__()
if result is None:
result = u""
self.result = result
self.precedence = [0]
def write(self, tree):
self.visit(tree)
return self.result
def put(self, s):
self.result += s
def remove(self, s):
if self.result.endswith(s):
self.result = self.result[:-len(s)]
def comma_separated_list(self, items):
if len(items) > 0:
for item in items[:-1]:
self.visit(item)
self.put(u", ")
self.visit(items[-1])
def visit_Node(self, node):
raise AssertionError("Node not handled by serializer: %r" % node)
def visit_NameNode(self, node):
self.put(node.name)
def visit_NoneNode(self, node):
self.put(u"None")
def visit_EllipsisNode(self, node):
self.put(u"...")
def visit_BoolNode(self, node):
self.put(str(node.value))
def visit_ConstNode(self, node):
self.put(str(node.value))
def visit_ImagNode(self, node):
self.put(node.value)
self.put(u"j")
def emit_string(self, node, prefix=u""):
repr_val = repr(node.value)
if repr_val[0] in 'ub':
repr_val = repr_val[1:]
self.put(u"%s%s" % (prefix, repr_val))
def visit_BytesNode(self, node):
self.emit_string(node, u"b")
def visit_StringNode(self, node):
self.emit_string(node)
def visit_UnicodeNode(self, node):
self.emit_string(node, u"u")
def emit_sequence(self, node, parens=(u"", u"")):
open_paren, close_paren = parens
items = node.subexpr_nodes()
self.put(open_paren)
self.comma_separated_list(items)
self.put(close_paren)
def visit_ListNode(self, node):
self.emit_sequence(node, u"[]")
def visit_TupleNode(self, node):
self.emit_sequence(node, u"()")
def visit_SetNode(self, node):
if len(node.subexpr_nodes()) > 0:
self.emit_sequence(node, u"{}")
else:
self.put(u"set()")
def visit_DictNode(self, node):
self.emit_sequence(node, u"{}")
def visit_DictItemNode(self, node):
self.visit(node.key)
self.put(u": ")
self.visit(node.value)
unop_precedence = {
'not': 3, '!': 3,
'+': 11, '-': 11, '~': 11,
}
binop_precedence = {
'or': 1,
'and': 2,
# unary: 'not': 3, '!': 3,
'in': 4, 'not_in': 4, 'is': 4, 'is_not': 4, '<': 4, '<=': 4, '>': 4, '>=': 4, '!=': 4, '==': 4,
'|': 5,
'^': 6,
'&': 7,
'<<': 8, '>>': 8,
'+': 9, '-': 9,
'*': 10, '@': 10, '/': 10, '//': 10, '%': 10,
# unary: '+': 11, '-': 11, '~': 11
'**': 12,
}
def operator_enter(self, new_prec):
old_prec = self.precedence[-1]
if old_prec > new_prec:
self.put(u"(")
self.precedence.append(new_prec)
def operator_exit(self):
old_prec, new_prec = self.precedence[-2:]
if old_prec > new_prec:
self.put(u")")
self.precedence.pop()
def visit_NotNode(self, node):
op = 'not'
prec = self.unop_precedence[op]
self.operator_enter(prec)
self.put(u"not ")
self.visit(node.operand)
self.operator_exit()
def visit_UnopNode(self, node):
op = node.operator
prec = self.unop_precedence[op]
self.operator_enter(prec)
self.put(u"%s" % node.operator)
self.visit(node.operand)
self.operator_exit()
def visit_BinopNode(self, node):
op = node.operator
prec = self.binop_precedence.get(op, 0)
self.operator_enter(prec)
self.visit(node.operand1)
self.put(u" %s " % op.replace('_', ' '))
self.visit(node.operand2)
self.operator_exit()
def visit_BoolBinopNode(self, node):
self.visit_BinopNode(node)
def visit_PrimaryCmpNode(self, node):
self.visit_BinopNode(node)
def visit_IndexNode(self, node):
self.visit(node.base)
self.put(u"[")
if isinstance(node.index, TupleNode):
self.emit_sequence(node.index)
else:
self.visit(node.index)
self.put(u"]")
def visit_SliceIndexNode(self, node):
self.visit(node.base)
self.put(u"[")
if node.start:
self.visit(node.start)
self.put(u":")
if node.stop:
self.visit(node.stop)
if node.slice:
self.put(u":")
self.visit(node.slice)
self.put(u"]")
def visit_SliceNode(self, node):
if not node.start.is_none:
self.visit(node.start)
self.put(u":")
if not node.stop.is_none:
self.visit(node.stop)
if not node.step.is_none:
self.put(u":")
self.visit(node.step)
def visit_CondExprNode(self, node):
self.visit(node.true_val)
self.put(u" if ")
self.visit(node.test)
self.put(u" else ")
self.visit(node.false_val)
def visit_AttributeNode(self, node):
self.visit(node.obj)
self.put(u".%s" % node.attribute)
def visit_SimpleCallNode(self, node):
self.visit(node.function)
self.put(u"(")
self.comma_separated_list(node.args)
self.put(")")
def emit_pos_args(self, node):
if node is None:
return
if isinstance(node, AddNode):
self.emit_pos_args(node.operand1)
self.emit_pos_args(node.operand2)
elif isinstance(node, TupleNode):
for expr in node.subexpr_nodes():
self.visit(expr)
self.put(u", ")
elif isinstance(node, AsTupleNode):
self.put("*")
self.visit(node.arg)
self.put(u", ")
else:
self.visit(node)
self.put(u", ")
def emit_kwd_args(self, node):
if node is None:
return
if isinstance(node, MergedDictNode):
for expr in node.subexpr_nodes():
self.emit_kwd_args(expr)
elif isinstance(node, DictNode):
for expr in node.subexpr_nodes():
self.put(u"%s=" % expr.key.value)
self.visit(expr.value)
self.put(u", ")
else:
self.put(u"**")
self.visit(node)
self.put(u", ")
def visit_GeneralCallNode(self, node):
self.visit(node.function)
self.put(u"(")
self.emit_pos_args(node.positional_args)
self.emit_kwd_args(node.keyword_args)
self.remove(u", ")
self.put(")")
def emit_comprehension(self, body, target,
sequence, condition,
parens=(u"", u"")):
open_paren, close_paren = parens
self.put(open_paren)
self.visit(body)
self.put(u" for ")
self.visit(target)
self.put(u" in ")
self.visit(sequence)
if condition:
self.put(u" if ")
self.visit(condition)
self.put(close_paren)
def visit_ComprehensionAppendNode(self, node):
self.visit(node.expr)
def visit_DictComprehensionAppendNode(self, node):
self.visit(node.key_expr)
self.put(u": ")
self.visit(node.value_expr)
def visit_ComprehensionNode(self, node):
tpmap = {'list': u"[]", 'dict': u"{}", 'set': u"{}"}
parens = tpmap[node.type.py_type_name()]
body = node.loop.body
target = node.loop.target
sequence = node.loop.iterator.sequence
condition = None
if hasattr(body, 'if_clauses'):
# type(body) is Nodes.IfStatNode
condition = body.if_clauses[0].condition
body = body.if_clauses[0].body
self.emit_comprehension(body, target, sequence, condition, parens)
def visit_GeneratorExpressionNode(self, node):
body = node.loop.body
target = node.loop.target
sequence = node.loop.iterator.sequence
condition = None
if hasattr(body, 'if_clauses'):
# type(body) is Nodes.IfStatNode
condition = body.if_clauses[0].condition
body = body.if_clauses[0].body.expr.arg
elif hasattr(body, 'expr'):
# type(body) is Nodes.ExprStatNode
body = body.expr.arg
self.emit_comprehension(body, target, sequence, condition, u"()")
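The ``operator_enter``/``operator_exit`` pair above adds parentheses around a sub-expression only when the enclosing operator binds more tightly. A standalone sketch of the same precedence idea over plain ``(op, left, right)`` tuples (not the Cython node API):

    BINOP_PRECEDENCE = {'or': 1, 'and': 2, '+': 9, '-': 9, '*': 10, '/': 10, '**': 12}

    def format_expr(expr, enclosing=0):
        # format a nested ('op', left, right) tuple, parenthesising only when
        # the enclosing operator has higher precedence than 'op'
        if not isinstance(expr, tuple):
            return str(expr)
        op, left, right = expr
        prec = BINOP_PRECEDENCE[op]
        text = "%s %s %s" % (format_expr(left, prec), op, format_expr(right, prec))
        return "(%s)" % text if enclosing > prec else text

    # format_expr(('*', ('+', 'a', 'b'), 'c'))  ->  '(a + b) * c'
    # format_expr(('+', ('*', 'a', 'b'), 'c'))  ->  'a * b + c'

Like the serializer above (which compares strictly with ``old_prec > new_prec``), this keeps the output minimal at the cost of associativity-only parentheses such as ``a - (b - c)``.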
......@@ -294,7 +294,7 @@ _parse_code = re.compile((
br'(?P<py_macro_api>Py[A-Z][a-z]+_[A-Z][A-Z_]+)|'
br'(?P<py_c_api>Py[A-Z][a-z]+_[A-Z][a-z][A-Za-z_]*)'
br')(?=\()|' # look-ahead to exclude subsequent '(' from replacement
br'(?P<error_goto>(?:(?<=;) *if .* +)?\{__pyx_filename = .*goto __pyx_L\w+;\})'
br'(?P<error_goto>(?:(?<=;) *if [^;]* +)?__PYX_ERR\([^)]+\))'
).decode('ascii')).sub
......
from __future__ import absolute_import
from __future__ import absolute_import, print_function
from .Visitor import CythonTransform
from .StringEncoding import EncodedString
from . import Options
from . import PyrexTypes, ExprNodes
from ..CodeWriter import ExpressionWriter
class AnnotationWriter(ExpressionWriter):
def visit_Node(self, node):
self.put(u"<???>")
def visit_LambdaNode(self, node):
# XXX Should we do better?
self.put("<lambda>")
class EmbedSignature(CythonTransform):
def __init__(self, context):
super(EmbedSignature, self).__init__(context)
self.denv = None # XXX
self.class_name = None
self.class_node = None
unop_precedence = 11
binop_precedence = {
'or': 1,
'and': 2,
'not': 3,
'in': 4, 'not in': 4, 'is': 4, 'is not': 4, '<': 4, '<=': 4, '>': 4, '>=': 4, '!=': 4, '==': 4,
'|': 5,
'^': 6,
'&': 7,
'<<': 8, '>>': 8,
'+': 9, '-': 9,
'*': 10, '/': 10, '//': 10, '%': 10,
# unary: '+': 11, '-': 11, '~': 11
'**': 12}
def _fmt_expr_node(self, node, precedence=0):
if isinstance(node, ExprNodes.BinopNode) and not node.inplace:
new_prec = self.binop_precedence.get(node.operator, 0)
result = '%s %s %s' % (self._fmt_expr_node(node.operand1, new_prec),
node.operator,
self._fmt_expr_node(node.operand2, new_prec))
if precedence > new_prec:
result = '(%s)' % result
elif isinstance(node, ExprNodes.UnopNode):
result = '%s%s' % (node.operator,
self._fmt_expr_node(node.operand, self.unop_precedence))
if precedence > self.unop_precedence:
result = '(%s)' % result
elif isinstance(node, ExprNodes.AttributeNode):
result = '%s.%s' % (self._fmt_expr_node(node.obj), node.attribute)
else:
result = node.name
def _fmt_expr(self, node):
writer = AnnotationWriter()
result = writer.write(node)
# print(type(node).__name__, '-->', result)
return result
def _fmt_arg_defv(self, arg):
default_val = arg.default
if not default_val:
return None
if isinstance(default_val, ExprNodes.NullNode):
return 'NULL'
try:
denv = self.denv # XXX
ctval = default_val.compile_time_value(self.denv)
repr_val = repr(ctval)
if isinstance(default_val, ExprNodes.UnicodeNode):
if repr_val[:1] != 'u':
return u'u%s' % repr_val
elif isinstance(default_val, ExprNodes.BytesNode):
if repr_val[:1] != 'b':
return u'b%s' % repr_val
elif isinstance(default_val, ExprNodes.StringNode):
if repr_val[:1] in 'ub':
return repr_val[1:]
return repr_val
except Exception:
try:
return self._fmt_expr_node(default_val)
except AttributeError:
return '<???>'
def _fmt_arg(self, arg):
if arg.type is PyrexTypes.py_object_type or arg.is_self_arg:
doc = arg.name
else:
doc = arg.type.declaration_code(arg.name, for_display=1)
if arg.annotation:
annotation = self._fmt_expr(arg.annotation)
doc = doc + (': %s' % annotation)
if arg.default:
arg_defv = self._fmt_arg_defv(arg)
if arg_defv:
doc = doc + ('=%s' % arg_defv)
default = self._fmt_expr(arg.default)
doc = doc + (' = %s' % default)
elif arg.default:
default = self._fmt_expr(arg.default)
doc = doc + ('=%s' % default)
return doc
def _fmt_star_arg(self, arg):
arg_doc = arg.name
if arg.annotation:
annotation = self._fmt_expr(arg.annotation)
arg_doc = arg_doc + (': %s' % annotation)
return arg_doc
def _fmt_arglist(self, args,
npargs=0, pargs=None,
nkargs=0, kargs=None,
......@@ -94,11 +64,13 @@ class EmbedSignature(CythonTransform):
arg_doc = self._fmt_arg(arg)
arglist.append(arg_doc)
if pargs:
arglist.insert(npargs, '*%s' % pargs.name)
arg_doc = self._fmt_star_arg(pargs)
arglist.insert(npargs, '*%s' % arg_doc)
elif nkargs:
arglist.insert(npargs, '*')
if kargs:
arglist.append('**%s' % kargs.name)
arg_doc = self._fmt_star_arg(kargs)
arglist.append('**%s' % arg_doc)
return arglist
def _fmt_ret_type(self, ret):
......@@ -110,6 +82,7 @@ class EmbedSignature(CythonTransform):
def _fmt_signature(self, cls_name, func_name, args,
npargs=0, pargs=None,
nkargs=0, kargs=None,
return_expr=None,
return_type=None, hide_self=False):
arglist = self._fmt_arglist(args,
npargs, pargs,
......@@ -119,7 +92,10 @@ class EmbedSignature(CythonTransform):
func_doc = '%s(%s)' % (func_name, arglist_doc)
if cls_name:
func_doc = '%s.%s' % (cls_name, func_doc)
if return_type:
ret_doc = None
if return_expr:
ret_doc = self._fmt_expr(return_expr)
elif return_type:
ret_doc = self._fmt_ret_type(return_type)
if ret_doc:
func_doc = '%s -> %s' % (func_doc, ret_doc)
......@@ -177,6 +153,7 @@ class EmbedSignature(CythonTransform):
class_name, func_name, node.args,
npargs, node.star_arg,
nkargs, node.starstar_arg,
return_expr=node.return_type_annotation,
return_type=None, hide_self=hide_self)
if signature:
if is_constructor:
......
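Tying the ``EmbedSignature`` changes above back to the changelog: with the directive enabled, annotations and defaults of a ``def`` now show up in the docstring signature line. A hedged sketch (function name and types are illustrative; the exact rendering is determined by the transform above):

    # cython: embedsignature=True

    def scale(values: list, factor: float = 1.0) -> list:
        """Multiply every element of *values* by *factor*."""
        return [v * factor for v in values]

    # the generated __doc__ then starts with a line roughly of the form:
    #   scale(values: list, factor: float = 1.0) -> list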
......@@ -738,7 +738,8 @@ buffer_structs_code = load_buffer_utility(
"BufferFormatStructs", proto_block='utility_code_proto_before_types')
acquire_utility_code = load_buffer_utility("BufferFormatCheck",
context=context,
requires=[buffer_structs_code])
requires=[buffer_structs_code,
UtilityCode.load_cached("IsLittleEndian", "ModuleSetupCode.c")])
# See utility code BufferFormatFromTypeInfo
_typeinfo_to_format_code = load_buffer_utility("TypeInfoToFormat", context={},
......
......@@ -95,16 +95,25 @@ builtin_function_table = [
is_strict_signature = True),
BuiltinFunction('abs', "f", "f", "fabsf",
is_strict_signature = True),
BuiltinFunction('abs', "i", "i", "abs",
is_strict_signature = True),
BuiltinFunction('abs', "l", "l", "labs",
is_strict_signature = True),
BuiltinFunction('abs', None, None, "__Pyx_abs_longlong",
utility_code = UtilityCode.load("abs_longlong", "Builtins.c"),
func_type = PyrexTypes.CFuncType(
PyrexTypes.c_longlong_type, [
PyrexTypes.CFuncTypeArg("arg", PyrexTypes.c_longlong_type, None)
],
is_strict_signature = True, nogil=True)),
] + list(
# uses getattr to get PyrexTypes.c_uint_type etc to allow easy iteration over a list
BuiltinFunction('abs', None, None, "__Pyx_abs_{0}".format(t),
utility_code = UtilityCode.load("abs_{0}".format(t), "Builtins.c"),
BuiltinFunction('abs', None, None, "/*abs_{0}*/".format(t.specialization_name()),
func_type = PyrexTypes.CFuncType(
getattr(PyrexTypes,"c_u{0}_type".format(t)), [
PyrexTypes.CFuncTypeArg("arg", getattr(PyrexTypes,"c_{0}_type".format(t)), None)
],
t,
[PyrexTypes.CFuncTypeArg("arg", t, None)],
is_strict_signature = True, nogil=True))
for t in ("int", "long", "longlong")
for t in (PyrexTypes.c_uint_type, PyrexTypes.c_ulong_type, PyrexTypes.c_ulonglong_type)
) + list(
BuiltinFunction('abs', None, None, "__Pyx_c_abs{0}".format(t.funcsuffix),
func_type = PyrexTypes.CFuncType(
......
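The builtin table changes above route ``abs()`` on plain C integer types to C-level helpers that are declared ``nogil`` and, per the changelog, now return a signed result. A minimal usage sketch (function and variable names are hypothetical):

    def clamp_magnitude(long long x, long long limit):
        cdef long long m
        with nogil:
            m = abs(x)          # C-level abs, usable without the GIL
            if m > limit:
                m = limit
        return m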
......@@ -33,6 +33,7 @@ cdef class FunctionState:
cdef public object return_from_error_cleanup_label # not used in __init__ ?
cdef public object exc_vars
cdef public object current_except
cdef public bint in_try_finally
cdef public bint can_trace
cdef public bint gil_owned
......
......@@ -6,10 +6,11 @@
from __future__ import absolute_import
import cython
cython.declare(os=object, re=object, operator=object,
Naming=object, Options=object, StringEncoding=object,
cython.declare(os=object, re=object, operator=object, textwrap=object,
Template=object, Naming=object, Options=object, StringEncoding=object,
Utils=object, SourceDescriptor=object, StringIOTree=object,
DebugFlags=object, basestring=object)
DebugFlags=object, basestring=object, defaultdict=object,
closing=object, partial=object)
import os
import re
......@@ -274,7 +275,7 @@ class UtilityCodeBase(object):
elif not values:
values = None
elif len(values) == 1:
values = values[0]
values = list(values)[0]
kwargs[name] = values
if proto is not None:
......@@ -602,6 +603,7 @@ class FunctionState(object):
self.in_try_finally = 0
self.exc_vars = None
self.current_except = None
self.can_trace = False
self.gil_owned = True
......@@ -632,8 +634,8 @@ class FunctionState(object):
label += '_' + name
return label
def new_yield_label(self):
label = self.new_label('resume_from_yield')
def new_yield_label(self, expr_type='yield'):
label = self.new_label('resume_from_%s' % expr_type)
num_and_label = (len(self.yield_labels) + 1, label)
self.yield_labels.append(num_and_label)
return num_and_label
......@@ -1628,7 +1630,7 @@ class CCodeWriter(object):
# Functions delegated to function scope
def new_label(self, name=None): return self.funcstate.new_label(name)
def new_error_label(self): return self.funcstate.new_error_label()
def new_yield_label(self): return self.funcstate.new_yield_label()
def new_yield_label(self, *args): return self.funcstate.new_yield_label(*args)
def get_loop_labels(self): return self.funcstate.get_loop_labels()
def set_loop_labels(self, labels): return self.funcstate.set_loop_labels(labels)
def new_loop_labels(self): return self.funcstate.new_loop_labels()
......@@ -1916,9 +1918,12 @@ class CCodeWriter(object):
if entry.type.is_pyobject:
self.putln("__Pyx_XGIVEREF(%s);" % self.entry_as_pyobject(entry))
def put_var_incref(self, entry):
def put_var_incref(self, entry, nanny=True):
if entry.type.is_pyobject:
if nanny:
self.putln("__Pyx_INCREF(%s);" % self.entry_as_pyobject(entry))
else:
self.putln("Py_INCREF(%s);" % self.entry_as_pyobject(entry))
def put_var_xincref(self, entry):
if entry.type.is_pyobject:
......@@ -1967,9 +1972,12 @@ class CCodeWriter(object):
if entry.type.is_pyobject:
self.putln("__Pyx_XDECREF(%s);" % self.entry_as_pyobject(entry))
def put_var_xdecref(self, entry):
def put_var_xdecref(self, entry, nanny=True):
if entry.type.is_pyobject:
if nanny:
self.putln("__Pyx_XDECREF(%s);" % self.entry_as_pyobject(entry))
else:
self.putln("Py_XDECREF(%s);" % self.entry_as_pyobject(entry))
def put_var_decref_clear(self, entry):
self._put_var_decref_clear(entry, null_check=False)
......@@ -2214,7 +2222,7 @@ class CCodeWriter(object):
def put_finish_refcount_context(self):
self.putln("__Pyx_RefNannyFinishContext();")
def put_add_traceback(self, qualified_name):
def put_add_traceback(self, qualified_name, include_cline=True):
"""
Build a Python traceback for propagating exceptions.
......@@ -2222,7 +2230,7 @@ class CCodeWriter(object):
"""
format_tuple = (
qualified_name,
Naming.clineno_cname,
Naming.clineno_cname if include_cline else 0,
Naming.lineno_cname,
Naming.filename_cname,
)
......
......@@ -560,8 +560,6 @@ class FusedCFuncDefNode(StatListNode):
"""
from . import TreeFragment, Code, UtilityCode
env.use_utility_code(Code.UtilityCode.load_cached("IsLittleEndian","ModuleSetupCode.c"))
fused_types = self._get_fused_base_types([
arg.type for arg in self.node.args if arg.type.is_fused])
......@@ -638,6 +636,7 @@ class FusedCFuncDefNode(StatListNode):
if normal_types:
self._fused_instance_checks(normal_types, pyx_code, env)
if buffer_types or pythran_types:
env.use_utility_code(Code.UtilityCode.load_cached("IsLittleEndian", "ModuleSetupCode.c"))
self._buffer_checks(buffer_types, pythran_types, pyx_code, decl_code, env)
if has_object_fallback:
pyx_code.context.update(specialized_type_name='object')
......
......@@ -101,6 +101,10 @@ print_function = pyrex_prefix + "print"
print_function_kwargs = pyrex_prefix + "print_kwargs"
cleanup_cname = pyrex_prefix + "module_cleanup"
pymoduledef_cname = pyrex_prefix + "moduledef"
pymoduledef_slots_cname = pyrex_prefix + "moduledef_slots"
pymodinit_module_arg = pyrex_prefix + "pyinit_module"
pymodule_create_func_cname = pyrex_prefix + "pymod_create"
pymodule_exec_func_cname = pyrex_prefix + "pymod_exec"
optional_args_cname = pyrex_prefix + "optional_args"
import_star = pyrex_prefix + "import_star"
import_star_set = pyrex_prefix + "import_star_set"
......
......@@ -259,7 +259,7 @@ class IterationTransform(Visitor.EnvTransform):
return self._transform_reversed_iteration(node, iterator)
# range() iteration?
if Options.convert_range and node.target.type.is_int:
if Options.convert_range and (node.target.type.is_int or node.target.type.is_enum):
if iterator.self is None and function.is_name and \
function.entry and function.entry.is_builtin and \
function.name in ('range', 'xrange'):
......@@ -892,7 +892,7 @@ class IterationTransform(Visitor.EnvTransform):
method_node = ExprNodes.StringNode(
dict_obj.pos, is_identifier=True, value=method)
dict_obj = dict_obj.as_none_safe_node(
"'NoneType' object has no attribute '%s'",
"'NoneType' object has no attribute '%{0}s'".format('.30' if len(method) <= 30 else ''),
error = "PyExc_AttributeError",
format_args = [method])
else:
......@@ -2429,6 +2429,14 @@ class OptimizeBuiltinCalls(Visitor.NodeRefCleanupMixin,
node.pos, "__Pyx_Py_UNICODE_strlen", self.Pyx_Py_UNICODE_strlen_func_type,
args = [arg],
is_temp = node.is_temp)
elif arg.type.is_memoryviewslice:
func_type = PyrexTypes.CFuncType(
PyrexTypes.c_size_t_type, [
PyrexTypes.CFuncTypeArg("memoryviewslice", arg.type, None)
], nogil=True)
new_node = ExprNodes.PythonCapiCallNode(
node.pos, "__Pyx_MemoryView_Len", func_type,
args=[arg], is_temp=node.is_temp)
elif arg.type.is_pyobject:
cfunc_name = self._map_to_capi_len_function(arg.type)
if cfunc_name is None:
......@@ -2442,8 +2450,7 @@ class OptimizeBuiltinCalls(Visitor.NodeRefCleanupMixin,
"object of type 'NoneType' has no len()")
new_node = ExprNodes.PythonCapiCallNode(
node.pos, cfunc_name, self.PyObject_Size_func_type,
args = [arg],
is_temp = node.is_temp)
args=[arg], is_temp=node.is_temp)
elif arg.type.is_unicode_char:
return ExprNodes.IntNode(node.pos, value='1', constant_result=1,
type=node.type)
......@@ -2759,7 +2766,7 @@ class OptimizeBuiltinCalls(Visitor.NodeRefCleanupMixin,
if is_list:
type_name = 'List'
obj = obj.as_none_safe_node(
"'NoneType' object has no attribute '%s'",
"'NoneType' object has no attribute '%.30s'",
error="PyExc_AttributeError",
format_args=['pop'])
else:
......@@ -3449,7 +3456,7 @@ class OptimizeBuiltinCalls(Visitor.NodeRefCleanupMixin,
format_args=['decode', string_type.name])
else:
string_node = string_node.as_none_safe_node(
"'NoneType' object has no attribute '%s'",
"'NoneType' object has no attribute '%.30s'",
error="PyExc_AttributeError",
format_args=['decode'])
elif not string_type.is_string and not string_type.is_cpp_string:
......@@ -3646,7 +3653,7 @@ class OptimizeBuiltinCalls(Visitor.NodeRefCleanupMixin,
format_args=[attr_name, function.obj.name])
else:
self_arg = self_arg.as_none_safe_node(
"'NoneType' object has no attribute '%s'",
"'NoneType' object has no attribute '%{0}s'".format('.30' if len(attr_name) <= 30 else ''),
error = "PyExc_AttributeError",
format_args = [attr_name])
args[0] = self_arg
......
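The ``__Pyx_MemoryView_Len`` mapping added above is what backs the ``len(memoryview)`` changelog entry: ``len()`` on a memoryview now compiles to a ``shape[0]`` read and therefore works in ``nogil`` code. A hedged sketch (the boundscheck/wraparound directives are only there to keep the indexing GIL-free):

    cimport cython

    @cython.boundscheck(False)
    @cython.wraparound(False)
    def total(double[:] data):
        cdef double s = 0
        cdef Py_ssize_t i, n
        with nogil:
            n = len(data)       # equivalent to data.shape[0]
            for i in range(n):
                s += data[i]
        return s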
......@@ -46,22 +46,33 @@ cdef class ExpandInplaceOperators(EnvTransform):
cdef class AlignFunctionDefinitions(CythonTransform):
cdef dict directives
cdef scope
cdef set imported_names
cdef object scope
@cython.final
cdef class YieldNodeCollector(TreeVisitor):
cdef public list yields
cdef public list returns
cdef public list finallys
cdef public list excepts
cdef public bint has_return_value
cdef public bint has_yield
cdef public bint has_await
@cython.final
cdef class MarkClosureVisitor(CythonTransform):
cdef bint needs_closure
@cython.final
cdef class CreateClosureClasses(CythonTransform):
cdef list path
cdef bint in_lambda
cdef module_scope
cdef generator_class
cdef create_class_from_scope(self, node, target_module_scope, inner_node=*)
cdef find_entries_used_in_closures(self, node)
cdef class GilCheck(VisitorTransform):
cdef list env_stack
cdef bint nogil
......
......@@ -501,7 +501,7 @@ def p_call_parse_args(s, allow_genexp=True):
break
s.next()
if s.sy == 'for':
if s.sy in ('for', 'async'):
if not keyword_args and not last_was_tuple_unpack:
if len(positional_args) == 1 and len(positional_args[0]) == 1:
positional_args = [[p_genexp(s, positional_args[0][0])]]
......@@ -703,17 +703,18 @@ def p_atom(s):
s.error("invalid string kind '%s'" % kind)
elif sy == 'IDENT':
name = s.systring
s.next()
if name == "None":
return ExprNodes.NoneNode(pos)
result = ExprNodes.NoneNode(pos)
elif name == "True":
return ExprNodes.BoolNode(pos, value=True)
result = ExprNodes.BoolNode(pos, value=True)
elif name == "False":
return ExprNodes.BoolNode(pos, value=False)
result = ExprNodes.BoolNode(pos, value=False)
elif name == "NULL" and not s.in_python_file:
return ExprNodes.NullNode(pos)
result = ExprNodes.NullNode(pos)
else:
return p_name(s, name)
result = p_name(s, name)
s.next()
return result
else:
s.error("Expected an identifier or literal")
......@@ -771,6 +772,15 @@ def wrap_compile_time_constant(pos, value):
return ExprNodes.IntNode(pos, value=rep, constant_result=value)
elif isinstance(value, float):
return ExprNodes.FloatNode(pos, value=rep, constant_result=value)
elif isinstance(value, complex):
node = ExprNodes.ImagNode(pos, value=repr(value.imag), constant_result=complex(0.0, value.imag))
if value.real:
# FIXME: should we care about -0.0 ?
# probably not worth using the '-' operator for negative imag values
node = ExprNodes.binop_node(
pos, '+', ExprNodes.FloatNode(pos, value=repr(value.real), constant_result=value.real), node,
constant_result=value)
return node
elif isinstance(value, _unicode):
return ExprNodes.UnicodeNode(pos, value=EncodedString(value))
elif isinstance(value, _bytes):
......@@ -1187,7 +1197,7 @@ def p_f_string_expr(s, unicode_value, pos, starting_index, is_raw):
# list_display ::= "[" [listmaker] "]"
# listmaker ::= (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
# comp_iter ::= comp_for | comp_if
# comp_for ::= "for" expression_list "in" testlist [comp_iter]
# comp_for ::= ["async"] "for" expression_list "in" testlist [comp_iter]
# comp_if ::= "if" test [comp_iter]
def p_list_maker(s):
......@@ -1199,7 +1209,7 @@ def p_list_maker(s):
return ExprNodes.ListNode(pos, args=[])
expr = p_test_or_starred_expr(s)
if s.sy == 'for':
if s.sy in ('for', 'async'):
if expr.is_starred:
s.error("iterable unpacking cannot be used in comprehension")
append = ExprNodes.ComprehensionAppendNode(pos, expr=expr)
......@@ -1221,7 +1231,7 @@ def p_list_maker(s):
def p_comp_iter(s, body):
if s.sy == 'for':
if s.sy in ('for', 'async'):
return p_comp_for(s, body)
elif s.sy == 'if':
return p_comp_if(s, body)
......@@ -1230,11 +1240,17 @@ def p_comp_iter(s, body):
return body
def p_comp_for(s, body):
# s.sy == 'for'
pos = s.position()
# [async] for ...
is_async = False
if s.sy == 'async':
is_async = True
s.next()
kw = p_for_bounds(s, allow_testlist=False)
kw.update(else_clause = None, body = p_comp_iter(s, body))
# s.sy == 'for'
s.expect('for')
kw = p_for_bounds(s, allow_testlist=False, is_async=is_async)
kw.update(else_clause=None, body=p_comp_iter(s, body), is_async=is_async)
return Nodes.ForStatNode(pos, **kw)
def p_comp_if(s, body):
......@@ -1302,7 +1318,7 @@ def p_dict_or_set_maker(s):
else:
break
if s.sy == 'for':
if s.sy in ('for', 'async'):
# dict/set comprehension
if len(parts) == 1 and isinstance(parts[0], list) and len(parts[0]) == 1:
item = parts[0][0]
......@@ -1432,13 +1448,13 @@ def p_testlist_comp(s):
s.next()
exprs = p_test_or_starred_expr_list(s, expr)
return ExprNodes.TupleNode(pos, args = exprs)
elif s.sy == 'for':
elif s.sy in ('for', 'async'):
return p_genexp(s, expr)
else:
return expr
def p_genexp(s, expr):
# s.sy == 'for'
# s.sy == 'async' | 'for'
loop = p_comp_for(s, Nodes.ExprStatNode(
expr.pos, expr = ExprNodes.YieldExprNode(expr.pos, arg=expr)))
return ExprNodes.GeneratorExpressionNode(expr.pos, loop=loop)
......@@ -2134,7 +2150,14 @@ def p_simple_statement_list(s, ctx, first_statement = 0):
stat = stats[0]
else:
stat = Nodes.StatListNode(pos, stats = stats)
if s.sy not in ('NEWLINE', 'EOF'):
# provide a better error message for users who accidentally write Cython code in .py files
if isinstance(stat, Nodes.ExprStatNode):
if stat.expr.is_name and stat.expr.name == 'cdef':
s.error("The 'cdef' keyword is only allowed in Cython files (pyx/pxi/pxd)", pos)
s.expect_newline("Syntax error in simple statement list")
return stat
def p_compile_time_expr(s):
......@@ -2151,6 +2174,7 @@ def p_DEF_statement(s):
name = p_ident(s)
s.expect('=')
expr = p_compile_time_expr(s)
if s.compile_time_eval:
value = expr.compile_time_value(denv)
#print "p_DEF_statement: %s = %r" % (name, value) ###
denv.declare(name, value)
......
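Among the parser changes above, ``wrap_compile_time_constant`` now handles ``complex`` values, so a ``DEF`` constant that evaluates to a complex number can be substituted back into code as a float-plus-imaginary literal expression. A hedged sketch (constant name and value are made up):

    DEF OMEGA = -0.5 + 0.8660254j    # compile-time constant with a complex value

    def apply_omega(z):
        # OMEGA is re-expanded here as (-0.5) + 0.8660254j
        return z * OMEGA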
......@@ -6,7 +6,6 @@ from time import time
from . import Errors
from . import DebugFlags
from . import Options
from .Visitor import CythonTransform
from .Errors import CompileError, InternalError, AbortError
from . import Naming
......@@ -183,7 +182,7 @@ def create_pipeline(context, mode, exclude_classes=()):
NormalizeTree(context),
PostParse(context),
_specific_post_parse,
TrackNumpyAttributes(context),
TrackNumpyAttributes(),
InterpretCompilerDirectives(context, context.compiler_directives),
ParallelRangeTransform(context),
AdjustDefByDirectives(context),
......@@ -324,8 +323,15 @@ def insert_into_pipeline(pipeline, transform, before=None, after=None):
# Running a pipeline
#
_pipeline_entry_points = {}
def run_pipeline(pipeline, source, printtree=True):
from .Visitor import PrintTree
exec_ns = globals().copy() if DebugFlags.debug_verbose_pipeline else None
def run(phase, data):
return phase(data)
error = None
data = source
......@@ -333,12 +339,19 @@ def run_pipeline(pipeline, source, printtree=True):
try:
for phase in pipeline:
if phase is not None:
if not printtree and isinstance(phase, PrintTree):
continue
if DebugFlags.debug_verbose_pipeline:
t = time()
print("Entering pipeline phase %r" % phase)
if not printtree and isinstance(phase, PrintTree):
continue
data = phase(data)
# create a new wrapper for each step to show the name in profiles
phase_name = getattr(phase, '__name__', type(phase).__name__)
try:
run = _pipeline_entry_points[phase_name]
except KeyError:
exec("def %s(phase, data): return phase(data)" % phase_name, exec_ns)
run = _pipeline_entry_points[phase_name] = exec_ns[phase_name]
data = run(phase, data)
if DebugFlags.debug_verbose_pipeline:
print(" %.3f seconds" % (time() - t))
except CompileError as err:
......
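The ``run_pipeline`` change above creates one uniquely named wrapper per phase (via ``exec``) so that profiler output attributes time to phase names rather than to a single anonymous ``run``. A standalone sketch of the same trick (helper name and cache are hypothetical):

    _entry_points = {}

    def named_runner(name, cache=_entry_points):
        # build (and cache) a function literally called *name* that forwards to phase(data)
        try:
            return cache[name]
        except KeyError:
            ns = {}
            exec("def %s(phase, data): return phase(data)" % name, ns)
            runner = cache[name] = ns[name]
            return runner

    # usage sketch:
    #   run = named_runner(getattr(phase, '__name__', type(phase).__name__))
    #   data = run(phase, data)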
......@@ -3461,7 +3461,8 @@ class CppClassType(CType):
})
from .UtilityCode import CythonUtilityCode
env.use_utility_code(CythonUtilityCode.load(
cls.replace('unordered_', '') + ".from_py", "CppConvert.pyx", context=context))
cls.replace('unordered_', '') + ".from_py", "CppConvert.pyx",
context=context, compiler_directives=env.directives))
self.from_py_function = cname
return True
......@@ -3505,7 +3506,8 @@ class CppClassType(CType):
})
from .UtilityCode import CythonUtilityCode
env.use_utility_code(CythonUtilityCode.load(
cls.replace('unordered_', '') + ".to_py", "CppConvert.pyx", context=context))
cls.replace('unordered_', '') + ".to_py", "CppConvert.pyx",
context=context, compiler_directives=env.directives))
self.to_py_function = cname
return True
......@@ -4338,6 +4340,10 @@ def widest_numeric_type(type1, type2):
type1 = type1.ref_base_type
if type2.is_reference:
type2 = type2.ref_base_type
if type1.is_const:
type1 = type1.const_base_type
if type2.is_const:
type2 = type2.const_base_type
if type1 == type2:
widest_type = type1
elif type1.is_complex or type2.is_complex:
......
......@@ -223,7 +223,6 @@ class TestDebugTransform(DebuggerTestCase):
# the xpath of the standard ElementTree is primitive, don't use
# anything fancy
L = list(t.find('/Module/Globals'))
# assertTrue is retarded, use the normal assert statement
assert L
xml_globals = dict((e.attrib['name'], e.attrib['type']) for e in L)
self.assertEqual(len(L), len(xml_globals))
......
from __future__ import absolute_import
import unittest
import Cython.Compiler.PyrexTypes as PT
class TestMethodDispatcherTransform(unittest.TestCase):
def test_widest_numeric_type(self):
def assert_widest(type1, type2, widest):
self.assertEqual(widest, PT.widest_numeric_type(type1, type2))
assert_widest(PT.c_int_type, PT.c_long_type, PT.c_long_type)
assert_widest(PT.c_double_type, PT.c_long_type, PT.c_double_type)
assert_widest(PT.c_longdouble_type, PT.c_long_type, PT.c_longdouble_type)
cenum = PT.CEnumType("E", "cenum", typedef_flag=False)
assert_widest(PT.c_int_type, cenum, PT.c_int_type)
......@@ -250,8 +250,7 @@ class MarkParallelAssignments(EnvTransform):
def visit_YieldExprNode(self, node):
if self.parallel_block_stack:
error(node.pos, "Yield not allowed in parallel sections")
error(node.pos, "'%s' not allowed in parallel sections" % node.expr_keyword)
return node
def visit_ReturnStatNode(self, node):
......@@ -307,6 +306,13 @@ class MarkOverflowingArithmetic(CythonTransform):
else:
return self.visit_dangerous_node(node)
def visit_SimpleCallNode(self, node):
if node.function.is_name and node.function.name == 'abs':
# Overflows for minimum value of fixed size ints.
return self.visit_dangerous_node(node)
else:
return self.visit_neutral_node(node)
visit_UnopNode = visit_neutral_node
visit_UnaryMinusNode = visit_dangerous_node
......
......@@ -519,7 +519,7 @@ class DictOffsetSlot(SlotDescriptor):
# Slot descriptor for a class' dict offset, for dynamic attributes.
def slot_code(self, scope):
dict_entry = scope.lookup_here("__dict__")
dict_entry = scope.lookup_here("__dict__") if not scope.is_closure_class_scope else None
if dict_entry and dict_entry.is_variable:
if getattr(dict_entry.type, 'cname', None) != 'PyDict_Type':
error(dict_entry.pos, "__dict__ slot must be of type 'dict'")
......
......@@ -19,7 +19,7 @@ import tempfile
import functools
import traceback
import itertools
from test import test_support
#from test import test_support
import gdb
......
......@@ -102,6 +102,8 @@ cdef extern from "Python.h":
# or NULL on failure. This is the equivalent of the Python
# expression "o.attr_name".
object PyObject_GenericGetAttr(object o, object attr_name)
int PyObject_SetAttrString(object o, char *attr_name, object v) except -1
# Set the value of the attribute named attr_name, for object o, to
# the value v. Returns -1 on failure. This is the equivalent of
......@@ -112,6 +114,8 @@ cdef extern from "Python.h":
# the value v. Returns -1 on failure. This is the equivalent of
# the Python statement "o.attr_name = v".
int PyObject_GenericSetAttr(object o, object attr_name, object v) except -1
int PyObject_DelAttrString(object o, char *attr_name) except -1
# Delete attribute named attr_name, for object o. Returns -1 on
# failure. This is the equivalent of the Python statement: "del
......
cdef extern from "<functional>" namespace "std" nogil:
cdef cppclass function[T]:
function() except +
function(T*) except +
function(function&) except +
function(void*) except +
function operator=(T*)
function operator=(function&)
function operator=(void*)
function operator=[U](U)
bint operator bool()
......@@ -18,6 +18,11 @@ cdef extern from "<sys/stat.h>" nogil:
time_t st_mtime
time_t st_ctime
# st_birthtime exists on *BSD and OS X.
# Under Linux, defining it here does not hurt. Compilation under Linux
# will only (and rightfully) fail when attempting to use the field.
time_t st_birthtime
# POSIX prescribes including both <sys/stat.h> and <unistd.h> for these
cdef extern from "<unistd.h>" nogil:
int fchmod(int, mode_t)
......
# cython.* namespace for pure mode.
from __future__ import absolute_import
__version__ = "0.26rc1"
__version__ = "0.27a0"
try:
from __builtin__ import basestring
......@@ -144,6 +144,7 @@ def cdiv(a, b):
q = a / b
if q < 0:
q += 1
return q
def cmod(a, b):
r = a % b
......@@ -421,10 +422,13 @@ void = typedef(int, "void")
for t in int_types + float_types + complex_types + other_types:
for i in range(1, 4):
gs["%s_%s" % ('p'*i, t)] = globals()[t]._pointer(i)
gs["%s_%s" % ('p'*i, t)] = gs[t]._pointer(i)
void = typedef(None, "void")
NULL = p_void(0)
NULL = gs['p_void'](0)
# looks like 'gs' has some users out there by now...
#del gs
integral = floating = numeric = _FusedType()
......
......@@ -111,7 +111,7 @@ static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {
{{for type_ptr, getbuffer, releasebuffer in types}}
{{if getbuffer}}
if (PyObject_TypeCheck(obj, {{type_ptr}})) return {{getbuffer}}(obj, view, flags);
if (__Pyx_TypeCheck(obj, {{type_ptr}})) return {{getbuffer}}(obj, view, flags);
{{endif}}
{{endfor}}
......@@ -128,14 +128,15 @@ static void __Pyx_ReleaseBuffer(Py_buffer *view) {
return;
}
if ((0)) {}
{{for type_ptr, getbuffer, releasebuffer in types}}
{{if releasebuffer}}
if (PyObject_TypeCheck(obj, {{type_ptr}})) { {{releasebuffer}}(obj, view); return; }
else if (__Pyx_TypeCheck(obj, {{type_ptr}})) {{releasebuffer}}(obj, view);
{{endif}}
{{endfor}}
Py_DECREF(obj);
view->obj = NULL;
Py_DECREF(obj);
}
#endif /* PY_MAJOR_VERSION < 3 */
......@@ -166,11 +167,6 @@ static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
/////////////// BufferFormatCheck ///////////////
static CYTHON_INLINE int __Pyx_IsLittleEndian(void) {
unsigned int n = 1;
return *(unsigned char*)(&n) != 0;
}
static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
__Pyx_BufFmt_StackElem* stack,
......@@ -611,7 +607,7 @@ static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const cha
++ts;
break;
case '<':
if (!__Pyx_IsLittleEndian()) {
if (!__Pyx_Is_Little_Endian()) {
PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler");
return NULL;
}
......@@ -620,7 +616,7 @@ static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const cha
break;
case '>':
case '!':
if (__Pyx_IsLittleEndian()) {
if (__Pyx_Is_Little_Endian()) {
PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler");
return NULL;
}
......
......@@ -245,9 +245,7 @@ static CYTHON_INLINE unsigned long __Pyx_abs_long(long x) {
//////////////////// abs_longlong.proto ////////////////////
static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_abs_longlong(PY_LONG_LONG x) {
if (unlikely(x == -PY_LLONG_MAX-1))
return ((unsigned PY_LONG_LONG)PY_LLONG_MAX) + 1U;
static CYTHON_INLINE PY_LONG_LONG __Pyx_abs_longlong(PY_LONG_LONG x) {
#if defined (__cplusplus) && __cplusplus >= 201103L
return (unsigned PY_LONG_LONG) std::abs(x);
#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
......
......@@ -1218,7 +1218,7 @@ static PyObject* __Pyx_Method_ClassMethod(PyObject *method) {
methoddescr_type = Py_TYPE(meth);
Py_DECREF(meth);
}
if (PyObject_TypeCheck(method, methoddescr_type)) {
if (__Pyx_TypeCheck(method, methoddescr_type)) {
#endif
// cdef classes
PyMethodDescrObject *descr = (PyMethodDescrObject *)method;
......@@ -1238,7 +1238,7 @@ static PyObject* __Pyx_Method_ClassMethod(PyObject *method) {
return PyClassMethod_New(method);
}
#ifdef __Pyx_CyFunction_USED
else if (PyObject_TypeCheck(method, __pyx_CyFunctionType)) {
else if (__Pyx_TypeCheck(method, __pyx_CyFunctionType)) {
return PyClassMethod_New(method);
}
#endif
......
......@@ -32,6 +32,20 @@ static int __Pyx_main(int argc, wchar_t **argv) {
%(module_is_main)s = 1;
#if PY_MAJOR_VERSION < 3
init%(module_name)s();
#elif CYTHON_PEP489_MULTI_PHASE_INIT
m = PyInit_%(module_name)s();
if (!PyModule_Check(m)) {
PyModuleDef *mdef = (PyModuleDef *) m;
PyObject *modname = PyUnicode_FromString("__main__");
m = NULL;
if (modname) {
// FIXME: not currently calling PyModule_FromDefAndSpec() here because we do not have a module spec!
// FIXME: not currently setting __file__, __path__, __spec__, ...
m = PyModule_NewObject(modname);
Py_DECREF(modname);
if (m) PyModule_ExecDef(m, mdef);
}
}
#else
m = PyInit_%(module_name)s();
#endif
......
......@@ -11,10 +11,20 @@
#if CYTHON_FAST_THREAD_STATE
#define __Pyx_PyThreadState_declare PyThreadState *$local_tstate_cname;
#define __Pyx_PyThreadState_assign $local_tstate_cname = PyThreadState_GET();
#define __Pyx_PyErr_Occurred() $local_tstate_cname->curexc_type
#if PY_VERSION_HEX >= 0x03050000
#define __Pyx_PyThreadState_assign $local_tstate_cname = _PyThreadState_UncheckedGet();
#elif PY_VERSION_HEX >= 0x03000000
#define __Pyx_PyThreadState_assign $local_tstate_cname = PyThreadState_Get();
#elif PY_VERSION_HEX >= 0x02070000
#define __Pyx_PyThreadState_assign $local_tstate_cname = _PyThreadState_Current;
#else
#define __Pyx_PyThreadState_assign $local_tstate_cname = PyThreadState_Get();
#endif
#else
#define __Pyx_PyThreadState_declare
#define __Pyx_PyThreadState_assign
#define __Pyx_PyErr_Occurred() PyErr_Occurred()
#endif
......@@ -31,11 +41,28 @@ static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tsta
/////////////// PyErrExceptionMatches ///////////////
#if CYTHON_FAST_THREAD_STATE
static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
Py_ssize_t i, n;
n = PyTuple_GET_SIZE(tuple);
#if PY_MAJOR_VERSION >= 3
// the tighter subtype checking in Py3 allows faster out-of-order comparison
for (i=0; i<n; i++) {
if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
}
#endif
for (i=0; i<n; i++) {
if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1;
}
return 0;
}
static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {
PyObject *exc_type = tstate->curexc_type;
if (exc_type == err) return 1;
if (unlikely(!exc_type)) return 0;
return PyErr_GivenExceptionMatches(exc_type, err);
if (unlikely(PyTuple_Check(err)))
return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err);
return __Pyx_PyErr_GivenExceptionMatches(exc_type, err);
}
#endif
......@@ -265,7 +292,7 @@ static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject
PyErr_Restore(tmp_type, tmp_value, tb);
Py_XDECREF(tmp_tb);
#else
PyThreadState *tstate = PyThreadState_GET();
PyThreadState *tstate = __Pyx_PyThreadState_Current;
PyObject* tmp_tb = tstate->curexc_traceback;
if (tb != tmp_tb) {
Py_INCREF(tb);
......@@ -529,6 +556,50 @@ static void __Pyx_WriteUnraisable(const char *name, CYTHON_UNUSED int clineno,
#endif
}
/////////////// CLineInTraceback.proto ///////////////
static int __Pyx_CLineForTraceback(int c_line);
/////////////// CLineInTraceback ///////////////
//@requires: ObjectHandling.c::PyObjectGetAttrStr
//@substitute: naming
static int __Pyx_CLineForTraceback(int c_line) {
#ifdef CYTHON_CLINE_IN_TRACEBACK /* 0 or 1 to disable/enable C line display in tracebacks at C compile time */
return ((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0;
#else
PyObject *use_cline;
#if CYTHON_COMPILING_IN_CPYTHON
PyObject **cython_runtime_dict = _PyObject_GetDictPtr(${cython_runtime_cname});
if (likely(cython_runtime_dict)) {
use_cline = PyDict_GetItem(*cython_runtime_dict, PYIDENT("cline_in_traceback"));
} else
#endif
{
PyObject *ptype, *pvalue, *ptraceback;
PyObject *use_cline_obj;
PyErr_Fetch(&ptype, &pvalue, &ptraceback);
use_cline_obj = __Pyx_PyObject_GetAttrStr(${cython_runtime_cname}, PYIDENT("cline_in_traceback"));
if (use_cline_obj) {
use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True;
Py_DECREF(use_cline_obj);
} else {
use_cline = NULL;
}
PyErr_Restore(ptype, pvalue, ptraceback);
}
if (!use_cline) {
c_line = 0;
PyObject_SetAttr(${cython_runtime_cname}, PYIDENT("cline_in_traceback"), Py_False);
}
else if (PyObject_Not(use_cline) != 0) {
c_line = 0;
}
return c_line;
#endif
}
/////////////// AddTraceback.proto ///////////////
static void __Pyx_AddTraceback(const char *funcname, int c_line,
......@@ -536,6 +607,7 @@ static void __Pyx_AddTraceback(const char *funcname, int c_line,
/////////////// AddTraceback ///////////////
//@requires: ModuleSetupCode.c::CodeObjectCache
//@requires: CLineInTraceback
//@substitute: naming
#include "compile.h"
......@@ -600,29 +672,9 @@ static void __Pyx_AddTraceback(const char *funcname, int c_line,
int py_line, const char *filename) {
PyCodeObject *py_code = 0;
PyFrameObject *py_frame = 0;
PyObject *use_cline = 0;
PyObject *ptype, *pvalue, *ptraceback;
static PyObject* cline_in_traceback = NULL;
if (cline_in_traceback == NULL) {
#if PY_MAJOR_VERSION < 3
cline_in_traceback = PyString_FromString("cline_in_traceback");
#else
cline_in_traceback = PyUnicode_FromString("cline_in_traceback");
#endif
}
if (c_line) {
PyErr_Fetch(&ptype, &pvalue, &ptraceback);
use_cline = PyObject_GetAttr(${cython_runtime_cname}, cline_in_traceback);
if (use_cline == NULL) {
c_line = 0;
PyObject_SetAttr(${cython_runtime_cname}, cline_in_traceback, Py_False);
}
else if (PyObject_Not(use_cline) != 0) {
c_line = 0;
}
PyErr_Restore(ptype, pvalue, ptraceback);
c_line = __Pyx_CLineForTraceback(c_line);
}
// Negate to avoid collisions between py and c lines.
......@@ -634,7 +686,7 @@ static void __Pyx_AddTraceback(const char *funcname, int c_line,
$global_code_object_cache_insert(c_line ? -c_line : py_line, py_code);
}
py_frame = PyFrame_New(
PyThreadState_GET(), /*PyThreadState *tstate,*/
__Pyx_PyThreadState_Current, /*PyThreadState *tstate,*/
py_code, /*PyCodeObject *code,*/
$moddict_cname, /*PyObject *globals,*/
0 /*PyObject *locals*/
......@@ -645,5 +697,4 @@ static void __Pyx_AddTraceback(const char *funcname, int c_line,
bad:
Py_XDECREF(py_code);
Py_XDECREF(py_frame);
Py_XDECREF(use_cline);
}
......@@ -57,46 +57,72 @@ static void __Pyx_call_next_tp_clear(PyObject* obj, inquiry current_tp_clear) {
static int __Pyx_setup_reduce(PyObject* type_obj);
/////////////// SetupReduce ///////////////
//@requires: ObjectHandling.c::PyObjectGetAttrStr
//@substitute: naming
#define __Pyx_setup_reduce_GET_ATTR_OR_BAD(res, obj, name) res = PyObject_GetAttrString(obj, name); if (res == NULL) goto BAD;
static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) {
int ret;
PyObject *name_attr;
name_attr = __Pyx_PyObject_GetAttrStr(meth, PYIDENT("__name__"));
if (likely(name_attr)) {
ret = PyObject_RichCompareBool(name_attr, name, Py_EQ);
} else {
ret = -1;
}
if (unlikely(ret < 0)) {
PyErr_Clear();
ret = 0;
}
Py_XDECREF(name_attr);
return ret;
}
static int __Pyx_setup_reduce(PyObject* type_obj) {
int ret = 0;
PyObject* builtin_object = NULL;
static PyObject *object_reduce = NULL;
static PyObject *object_reduce_ex = NULL;
PyObject *object_reduce = NULL;
PyObject *object_reduce_ex = NULL;
PyObject *reduce = NULL;
PyObject *reduce_ex = NULL;
PyObject *reduce_cython = NULL;
PyObject *setstate = NULL;
PyObject *setstate_cython = NULL;
if (PyObject_HasAttrString(type_obj, "__getstate__")) goto GOOD;
#if CYTHON_USE_PYTYPE_LOOKUP
if (_PyType_Lookup((PyTypeObject*)type_obj, PYIDENT("__getstate__"))) goto GOOD;
#else
if (PyObject_HasAttr(type_obj, PYIDENT("__getstate__"))) goto GOOD;
#endif
if (object_reduce_ex == NULL) {
__Pyx_setup_reduce_GET_ATTR_OR_BAD(builtin_object, __pyx_b, "object");
__Pyx_setup_reduce_GET_ATTR_OR_BAD(object_reduce, builtin_object, "__reduce__");
__Pyx_setup_reduce_GET_ATTR_OR_BAD(object_reduce_ex, builtin_object, "__reduce_ex__");
}
#if CYTHON_USE_PYTYPE_LOOKUP
object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, PYIDENT("__reduce_ex__")); if (!object_reduce_ex) goto BAD;
#else
object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, PYIDENT("__reduce_ex__")); if (!object_reduce_ex) goto BAD;
#endif
__Pyx_setup_reduce_GET_ATTR_OR_BAD(reduce_ex, type_obj, "__reduce_ex__");
reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__reduce_ex__")); if (unlikely(!reduce_ex)) goto BAD;
if (reduce_ex == object_reduce_ex) {
__Pyx_setup_reduce_GET_ATTR_OR_BAD(reduce, type_obj, "__reduce__");
if (object_reduce == reduce
|| (strcmp(reduce->ob_type->tp_name, "method_descriptor") == 0
&& strcmp(((PyMethodDescrObject*)reduce)->d_method->ml_name, "__reduce_cython__") == 0)) {
__Pyx_setup_reduce_GET_ATTR_OR_BAD(reduce_cython, type_obj, "__reduce_cython__");
ret = PyDict_SetItemString(((PyTypeObject*)type_obj)->tp_dict, "__reduce__", reduce_cython); if (ret < 0) goto BAD;
ret = PyDict_DelItemString(((PyTypeObject*)type_obj)->tp_dict, "__reduce_cython__"); if (ret < 0) goto BAD;
setstate = PyObject_GetAttrString(type_obj, "__setstate__");
#if CYTHON_USE_PYTYPE_LOOKUP
object_reduce = _PyType_Lookup(&PyBaseObject_Type, PYIDENT("__reduce__")); if (!object_reduce) goto BAD;
#else
object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, PYIDENT("__reduce__")); if (!object_reduce) goto BAD;
#endif
reduce = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__reduce__")); if (unlikely(!reduce)) goto BAD;
if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, PYIDENT("__reduce_cython__"))) {
reduce_cython = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__reduce_cython__")); if (unlikely(!reduce_cython)) goto BAD;
ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__reduce__"), reduce_cython); if (unlikely(ret < 0)) goto BAD;
ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__reduce_cython__")); if (unlikely(ret < 0)) goto BAD;
setstate = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__setstate__"));
if (!setstate) PyErr_Clear();
if (!setstate
|| (strcmp(setstate->ob_type->tp_name, "method_descriptor") == 0
&& strcmp(((PyMethodDescrObject*)setstate)->d_method->ml_name, "__setstate_cython__") == 0)) {
__Pyx_setup_reduce_GET_ATTR_OR_BAD(setstate_cython, type_obj, "__setstate_cython__");
ret = PyDict_SetItemString(((PyTypeObject*)type_obj)->tp_dict, "__setstate__", setstate_cython); if (ret < 0) goto BAD;
ret = PyDict_DelItemString(((PyTypeObject*)type_obj)->tp_dict, "__setstate_cython__"); if (ret < 0) goto BAD;
if (!setstate || __Pyx_setup_reduce_is_named(setstate, PYIDENT("__setstate_cython__"))) {
setstate_cython = __Pyx_PyObject_GetAttrStr(type_obj, PYIDENT("__setstate_cython__")); if (unlikely(!setstate_cython)) goto BAD;
ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__setstate__"), setstate_cython); if (unlikely(ret < 0)) goto BAD;
ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, PYIDENT("__setstate_cython__")); if (unlikely(ret < 0)) goto BAD;
}
PyType_Modified((PyTypeObject*)type_obj);
}
......@@ -104,10 +130,14 @@ static int __Pyx_setup_reduce(PyObject* type_obj) {
goto GOOD;
BAD:
if (!PyErr_Occurred()) PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name);
if (!PyErr_Occurred())
PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name);
ret = -1;
GOOD:
Py_XDECREF(builtin_object);
#if !CYTHON_COMPILING_IN_CPYTHON
Py_XDECREF(object_reduce);
Py_XDECREF(object_reduce_ex);
#endif
Py_XDECREF(reduce);
Py_XDECREF(reduce_ex);
Py_XDECREF(reduce_cython);
......
......@@ -26,7 +26,7 @@ static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, in
#endif
}
else {
if (likely(PyObject_TypeCheck(obj, type))) return 1;
if (likely(__Pyx_TypeCheck(obj, type))) return 1;
}
__Pyx_RaiseArgumentTypeInvalid(name, obj, type);
return 0;
......
......@@ -225,6 +225,8 @@ cdef class array:
flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE
return memoryview(self, flags, self.dtype_is_object)
def __len__(self):
return self._shape[0]
def __getattr__(self, attr):
return getattr(self.memview, attr)
......
......@@ -11,6 +11,9 @@ typedef struct {
Py_ssize_t suboffsets[{{max_dims}}];
} {{memviewslice_name}};
// used for "len(memviewslice)"
#define __Pyx_MemoryView_Len(m) (m.shape[0])
/////////// Atomics.proto /////////////
......
......@@ -166,8 +166,7 @@ static CYTHON_INLINE PyObject *__Pyx_PyIter_Next2(PyObject* iterator, PyObject*
if (defval) {
PyObject* exc_type = PyErr_Occurred();
if (exc_type) {
if (unlikely(exc_type != PyExc_StopIteration) &&
!PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))
if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration)))
return NULL;
PyErr_Clear();
}
......@@ -191,10 +190,10 @@ static CYTHON_INLINE int __Pyx_IterFinish(void); /*proto*/
static CYTHON_INLINE int __Pyx_IterFinish(void) {
#if CYTHON_FAST_THREAD_STATE
PyThreadState *tstate = PyThreadState_GET();
PyThreadState *tstate = __Pyx_PyThreadState_Current;
PyObject* exc_type = tstate->curexc_type;
if (unlikely(exc_type)) {
if (likely(exc_type == PyExc_StopIteration) || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration)) {
if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) {
PyObject *exc_value, *exc_tb;
exc_value = tstate->curexc_value;
exc_tb = tstate->curexc_traceback;
......@@ -917,7 +916,7 @@ static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) {
PyErr_SetString(PyExc_SystemError, "Missing type object");
return 0;
}
if (likely(PyObject_TypeCheck(obj, type)))
if (likely(__Pyx_TypeCheck(obj, type)))
return 1;
PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s",
Py_TYPE(obj)->tp_name, type->tp_name);
......@@ -1036,8 +1035,7 @@ static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) {
/////////////// PyObjectLookupSpecial.proto ///////////////
//@requires: PyObjectGetAttrStr
#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x02070000
// looks like calling _PyType_Lookup() isn't safe in Py<=2.6/3.1
#if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS
static CYTHON_INLINE PyObject* __Pyx_PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name) {
PyObject *res;
PyTypeObject *tp = Py_TYPE(obj);
......@@ -1124,7 +1122,7 @@ static int __Pyx_TryUnpackUnboundCMethod(__Pyx_CachedCFunction* target) {
#if CYTHON_COMPILING_IN_CPYTHON
#if PY_MAJOR_VERSION >= 3
// the method descriptor type isn't exported in Py2.x, so we cannot easily check the type there
if (likely(PyObject_TypeCheck(method, &PyMethodDescr_Type)))
if (likely(__Pyx_TypeCheck(method, &PyMethodDescr_Type)))
#endif
{
PyMethodDescrObject *descr = (PyMethodDescrObject*) method;
......@@ -1270,6 +1268,7 @@ bad:
/////////////// PyObjectCallMethod1.proto ///////////////
static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); /*proto*/
static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg); /*proto*/
/////////////// PyObjectCallMethod1 ///////////////
//@requires: PyObjectGetAttrStr
......@@ -1277,10 +1276,8 @@ static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name
//@requires: PyFunctionFastCall
//@requires: PyCFunctionFastCall
static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) {
PyObject *method, *result = NULL;
method = __Pyx_PyObject_GetAttrStr(obj, method_name);
if (unlikely(!method)) goto done;
static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) {
PyObject *result = NULL;
#if CYTHON_UNPACK_METHODS
if (likely(PyMethod_Check(method))) {
PyObject *self = PyMethod_GET_SELF(method);
......@@ -1308,7 +1305,6 @@ static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name
Py_INCREF(arg);
PyTuple_SET_ITEM(args, 1, arg);
Py_INCREF(function);
Py_DECREF(method); method = NULL;
result = __Pyx_PyObject_Call(function, args, NULL);
Py_DECREF(args);
Py_DECREF(function);
......@@ -1317,6 +1313,17 @@ static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name
}
#endif
result = __Pyx_PyObject_CallOneArg(method, arg);
// avoid "unused label" warning
goto done;
done:
return result;
}
static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) {
PyObject *method, *result = NULL;
method = __Pyx_PyObject_GetAttrStr(obj, method_name);
if (unlikely(!method)) goto done;
result = __Pyx__PyObject_CallMethod1(method, arg);
done:
Py_XDECREF(method);
return result;
......@@ -1490,7 +1497,7 @@ static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args,
static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na,
PyObject *globals) {
PyFrameObject *f;
PyThreadState *tstate = PyThreadState_GET();
PyThreadState *tstate = __Pyx_PyThreadState_Current;
PyObject **fastlocals;
Py_ssize_t i;
PyObject *result;
......@@ -1748,7 +1755,7 @@ static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) {
}
#endif
#ifdef __Pyx_CyFunction_USED
if (likely(PyCFunction_Check(func) || PyObject_TypeCheck(func, __pyx_CyFunctionType))) {
if (likely(PyCFunction_Check(func) || __Pyx_TypeCheck(func, __pyx_CyFunctionType))) {
#else
if (likely(PyCFunction_Check(func))) {
#endif
......
......@@ -60,10 +60,10 @@
if (CYTHON_TRACE_NOGIL) { \
PyThreadState *tstate; \
PyGILState_STATE state = PyGILState_Ensure(); \
tstate = PyThreadState_GET(); \
tstate = __Pyx_PyThreadState_Current; \
if (unlikely(tstate->use_tracing) && !tstate->tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
__Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, funcname, srcfile, firstlineno); \
__Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, tstate, funcname, srcfile, firstlineno); \
} \
PyGILState_Release(state); \
if (unlikely(__Pyx_use_tracing < 0)) goto_error; \
......@@ -72,7 +72,7 @@
PyThreadState* tstate = PyThreadState_GET(); \
if (unlikely(tstate->use_tracing) && !tstate->tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
__Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, funcname, srcfile, firstlineno); \
__Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, tstate, funcname, srcfile, firstlineno); \
if (unlikely(__Pyx_use_tracing < 0)) goto_error; \
} \
}
......@@ -81,7 +81,7 @@
{ PyThreadState* tstate = PyThreadState_GET(); \
if (unlikely(tstate->use_tracing) && !tstate->tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
__Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, funcname, srcfile, firstlineno); \
__Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, tstate, funcname, srcfile, firstlineno); \
if (unlikely(__Pyx_use_tracing < 0)) goto_error; \
} \
}
......@@ -89,7 +89,7 @@
#define __Pyx_TraceException() \
if (likely(!__Pyx_use_tracing)); else { \
PyThreadState* tstate = PyThreadState_GET(); \
PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
tstate->tracing++; \
......@@ -130,14 +130,14 @@
if (CYTHON_TRACE_NOGIL) { \
PyThreadState *tstate; \
PyGILState_STATE state = PyGILState_Ensure(); \
tstate = PyThreadState_GET(); \
tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing) { \
__Pyx_call_return_trace_func(tstate, $frame_cname, (PyObject*)result); \
} \
PyGILState_Release(state); \
} \
} else { \
PyThreadState* tstate = PyThreadState_GET(); \
PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing) { \
__Pyx_call_return_trace_func(tstate, $frame_cname, (PyObject*)result); \
} \
......@@ -146,7 +146,7 @@
#else
#define __Pyx_TraceReturn(result, nogil) \
if (likely(!__Pyx_use_tracing)); else { \
PyThreadState* tstate = PyThreadState_GET(); \
PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing) { \
__Pyx_call_return_trace_func(tstate, $frame_cname, (PyObject*)result); \
} \
......@@ -154,7 +154,7 @@
#endif
static PyCodeObject *__Pyx_createFrameCodeObject(const char *funcname, const char *srcfile, int firstlineno); /*proto*/
static int __Pyx_TraceSetupAndCall(PyCodeObject** code, PyFrameObject** frame, const char *funcname, const char *srcfile, int firstlineno); /*proto*/
static int __Pyx_TraceSetupAndCall(PyCodeObject** code, PyFrameObject** frame, PyThreadState* tstate, const char *funcname, const char *srcfile, int firstlineno); /*proto*/
#else
......@@ -197,7 +197,7 @@
int ret = 0; \
PyThreadState *tstate; \
PyGILState_STATE state = PyGILState_Ensure(); \
tstate = PyThreadState_GET(); \
tstate = __Pyx_PyThreadState_Current; \
if (unlikely(tstate->use_tracing && tstate->c_tracefunc \
&& __pyx_frame->f_trace != Py_None)) { \
ret = __Pyx_call_line_trace_func(tstate, $frame_cname, lineno); \
......@@ -206,7 +206,7 @@
if (unlikely(ret)) goto_error; \
} \
} else { \
PyThreadState* tstate = PyThreadState_GET(); \
PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (unlikely(tstate->use_tracing && tstate->c_tracefunc \
&& __pyx_frame->f_trace != Py_None)) { \
int ret = __Pyx_call_line_trace_func(tstate, $frame_cname, lineno); \
......@@ -217,7 +217,7 @@
#else
#define __Pyx_TraceLine(lineno, nogil, goto_error) \
if (likely(!__Pyx_use_tracing)); else { \
PyThreadState* tstate = PyThreadState_GET(); \
PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (unlikely(tstate->use_tracing && tstate->c_tracefunc \
&& __pyx_frame->f_trace != Py_None)) { \
int ret = __Pyx_call_line_trace_func(tstate, $frame_cname, lineno); \
......@@ -237,12 +237,12 @@
static int __Pyx_TraceSetupAndCall(PyCodeObject** code,
PyFrameObject** frame,
PyThreadState* tstate,
const char *funcname,
const char *srcfile,
int firstlineno) {
PyObject *type, *value, *traceback;
int retval;
PyThreadState* tstate = PyThreadState_GET();
if (*frame == NULL || !CYTHON_PROFILE_REUSE_FRAME) {
if (*code == NULL) {
*code = __Pyx_createFrameCodeObject(funcname, srcfile, firstlineno);
......
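The tracing macros above now take the thread state as an explicit argument instead of calling PyThreadState_GET() themselves. They only do anything when profiling or line tracing is enabled for the module; a minimal sketch of how that is typically switched on (file name is illustrative):

# cython: profile=True
# profiled.pyx -- add "# cython: linetrace=True" and compile with
# CYTHON_TRACE defined to exercise the line-tracing branches as well.
def busy(int n):
    cdef int i, total = 0
    for i in range(n):
        total += i
    return total

# from Python:
#   import cProfile, profiled
#   cProfile.run("profiled.busy(1000000)")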
......@@ -577,9 +577,8 @@ static CYTHON_INLINE int __Pyx_Py_UNICODE_ISTITLE(Py_UCS4 uchar)
/////////////// unicode_tailmatch.proto ///////////////
static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/
static int __Pyx_PyUnicode_Tailmatch(
PyObject* s, PyObject* substr, Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/
/////////////// unicode_tailmatch ///////////////
......@@ -587,17 +586,16 @@ static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
// tuple of prefixes/suffixes, whereas it's much more common to
// test for a single unicode string.
static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
static int __Pyx_PyUnicode_TailmatchTuple(PyObject* s, PyObject* substrings,
Py_ssize_t start, Py_ssize_t end, int direction) {
if (unlikely(PyTuple_Check(substr))) {
Py_ssize_t i, count = PyTuple_GET_SIZE(substr);
Py_ssize_t i, count = PyTuple_GET_SIZE(substrings);
for (i = 0; i < count; i++) {
Py_ssize_t result;
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
result = PyUnicode_Tailmatch(s, PyTuple_GET_ITEM(substr, i),
result = PyUnicode_Tailmatch(s, PyTuple_GET_ITEM(substrings, i),
start, end, direction);
#else
PyObject* sub = PySequence_ITEM(substr, i);
PyObject* sub = PySequence_ITEM(substrings, i);
if (unlikely(!sub)) return -1;
result = PyUnicode_Tailmatch(s, sub, start, end, direction);
Py_DECREF(sub);
......@@ -607,6 +605,12 @@ static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
}
}
return 0;
}
static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
Py_ssize_t start, Py_ssize_t end, int direction) {
if (unlikely(PyTuple_Check(substr))) {
return __Pyx_PyUnicode_TailmatchTuple(s, substr, start, end, direction);
}
return (int) PyUnicode_Tailmatch(s, substr, start, end, direction);
}
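The refactoring above moves the tuple case into __Pyx_PyUnicode_TailmatchTuple so the common single-string case stays on a straight path. Both variants back the optimised str.startswith()/str.endswith() calls; an illustrative sketch:

# a plain string argument hits the single-match fast path, a tuple of
# suffixes goes through the *TailmatchTuple loop added above.
def is_source_file(unicode name):
    return name.endswith((u".pyx", u".pxd", u".py"))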
......@@ -677,17 +681,16 @@ static int __Pyx_PyBytes_SingleTailmatch(PyObject* self, PyObject* arg,
return retval;
}
static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr,
static int __Pyx_PyBytes_TailmatchTuple(PyObject* self, PyObject* substrings,
Py_ssize_t start, Py_ssize_t end, int direction) {
if (unlikely(PyTuple_Check(substr))) {
Py_ssize_t i, count = PyTuple_GET_SIZE(substr);
Py_ssize_t i, count = PyTuple_GET_SIZE(substrings);
for (i = 0; i < count; i++) {
int result;
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
result = __Pyx_PyBytes_SingleTailmatch(self, PyTuple_GET_ITEM(substr, i),
result = __Pyx_PyBytes_SingleTailmatch(self, PyTuple_GET_ITEM(substrings, i),
start, end, direction);
#else
PyObject* sub = PySequence_ITEM(substr, i);
PyObject* sub = PySequence_ITEM(substrings, i);
if (unlikely(!sub)) return -1;
result = __Pyx_PyBytes_SingleTailmatch(self, sub, start, end, direction);
Py_DECREF(sub);
......@@ -697,6 +700,12 @@ static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr,
}
}
return 0;
}
static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr,
Py_ssize_t start, Py_ssize_t end, int direction) {
if (unlikely(PyTuple_Check(substr))) {
return __Pyx_PyBytes_TailmatchTuple(self, substr, start, end, direction);
}
return __Pyx_PyBytes_SingleTailmatch(self, substr, start, end, direction);
......@@ -737,7 +746,7 @@ static CYTHON_INLINE char __Pyx_PyBytes_GetItemInt(PyObject* bytes, Py_ssize_t i
Py_ssize_t size = PyBytes_GET_SIZE(bytes);
if (unlikely(index >= size) | ((index < 0) & unlikely(index < -size))) {
PyErr_SetString(PyExc_IndexError, "string index out of range");
return -1;
return (char) -1;
}
}
if (index < 0)
......
......@@ -54,6 +54,12 @@ static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
#define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
#endif
#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
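The new __Pyx_PyBytes_As*String macros add const and signedness variants of the bytes-to-C-string casts. Roughly the kind of coercion they serve (illustrative only; the exact helper chosen depends on the declared pointer type):

def first_byte(bytes data):
    # 'const' pointers map onto the read-only macros, plain/unsigned
    # pointers onto the writable ones; 'data' keeps the buffer alive.
    cdef const char* ro = data
    cdef unsigned char* rw = data
    return ro[0], rw[0]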
......@@ -198,15 +204,9 @@ static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) {
return __Pyx_PyObject_AsStringAndSize(o, &ignore);
}
// Py3.7 returns a "const char*" for unicode strings
static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
if (
#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
__Pyx_sys_getdefaultencoding_not_ascii &&
#endif
PyUnicode_Check(o)) {
#if PY_VERSION_HEX < 0x03030000
static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
char* defenc_c;
// borrowed reference, cached internally in 'o' by CPython
PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);
......@@ -227,10 +227,14 @@ static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_
#endif /*__PYX_DEFAULT_STRING_ENCODING_IS_ASCII*/
*length = PyBytes_GET_SIZE(defenc);
return defenc_c;
}
#else /* PY_VERSION_HEX < 0x03030000 */
if (__Pyx_PyUnicode_READY(o) == -1) return NULL;
static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL;
#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
if (PyUnicode_IS_ASCII(o)) {
if (likely(PyUnicode_IS_ASCII(o))) {
// cached for the lifetime of the object
*length = PyUnicode_GET_LENGTH(o);
return PyUnicode_AsUTF8(o);
......@@ -242,7 +246,19 @@ static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_
#else /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII */
return PyUnicode_AsUTF8AndSize(o, length);
#endif /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII */
}
#endif /* PY_VERSION_HEX < 0x03030000 */
#endif
// Py3.7 returns a "const char*" for unicode strings
static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
if (
#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
__Pyx_sys_getdefaultencoding_not_ascii &&
#endif
PyUnicode_Check(o)) {
return __Pyx_PyUnicode_AsStringAndSize(o, length);
} else
#endif /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT */
......@@ -270,6 +286,28 @@ static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {
else return PyObject_IsTrue(x);
}
static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) {
#if PY_MAJOR_VERSION >= 3
if (PyLong_Check(result)) {
// CPython issue #17576: warn if 'result' not of exact type int.
if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
"__int__ returned non-int (type %.200s). "
"The ability to return an instance of a strict subclass of int "
"is deprecated, and may be removed in a future version of Python.",
Py_TYPE(result)->tp_name)) {
Py_DECREF(result);
return NULL;
}
return result;
}
#endif
PyErr_Format(PyExc_TypeError,
"__%.4s__ returned non-%.4s (type %.200s)",
type_name, type_name, Py_TYPE(result)->tp_name);
Py_DECREF(result);
return NULL;
}
static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {
#if CYTHON_USE_TYPE_SLOTS
PyNumberMethods *m;
......@@ -277,9 +315,9 @@ static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {
const char *name = NULL;
PyObject *res = NULL;
#if PY_MAJOR_VERSION < 3
if (PyInt_Check(x) || PyLong_Check(x))
if (likely(PyInt_Check(x) || PyLong_Check(x)))
#else
if (PyLong_Check(x))
if (likely(PyLong_Check(x)))
#endif
return __Pyx_NewRef(x);
#if CYTHON_USE_TYPE_SLOTS
......@@ -287,32 +325,30 @@ static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {
#if PY_MAJOR_VERSION < 3
if (m && m->nb_int) {
name = "int";
res = PyNumber_Int(x);
res = m->nb_int(x);
}
else if (m && m->nb_long) {
name = "long";
res = PyNumber_Long(x);
res = m->nb_long(x);
}
#else
if (m && m->nb_int) {
if (likely(m && m->nb_int)) {
name = "int";
res = PyNumber_Long(x);
res = m->nb_int(x);
}
#endif
#else
if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) {
res = PyNumber_Int(x);
}
#endif
if (res) {
if (likely(res)) {
#if PY_MAJOR_VERSION < 3
if (!PyInt_Check(res) && !PyLong_Check(res)) {
if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) {
#else
if (!PyLong_Check(res)) {
if (unlikely(!PyLong_CheckExact(res))) {
#endif
PyErr_Format(PyExc_TypeError,
"__%.4s__ returned non-%.4s (type %.200s)",
name, name, Py_TYPE(res)->tp_name);
Py_DECREF(res);
return NULL;
return __Pyx_PyNumber_IntOrLongWrongResultType(res, name);
}
}
else if (!PyErr_Occurred()) {
......
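__Pyx_PyNumber_IntOrLongWrongResultType mirrors CPython issue #17576: an __int__ that returns a strict subclass of int now only triggers a DeprecationWarning, while a non-int result still fails with TypeError. A hedged sketch of the behaviour (class names are made up; the coercion happens when Cython converts the object to a C integer or calls int() on it):

class MyInt(int):
    pass

class Subclassing(object):
    def __int__(self):
        return MyInt(7)   # int subclass: DeprecationWarning, value is kept

class Broken(object):
    def __int__(self):
        return "7"        # not an int: TypeError("__int__ returned non-int ...")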
......@@ -289,7 +289,7 @@ Finds end of innermost nested class or method definition."
(set (make-local-variable 'end-of-defun-function)
#'cython-end-of-defun)
(set (make-local-variable 'compile-command)
(format cython-default-compile-format (shell-quote-argument buffer-file-name)))
(format cython-default-compile-format (shell-quote-argument (or buffer-file-name ""))))
(set (make-local-variable 'add-log-current-defun-function)
#'cython-current-defun)
(add-hook 'which-func-functions #'cython-current-defun nil t)
......
......@@ -30,7 +30,7 @@ def pyx_library(
pyx_srcs = []
pxd_srcs = []
for src in srcs:
if src.endswith('.pyx') or (src.endwith('.py')
if src.endswith('.pyx') or (src.endswith('.py')
and src[:-3] + '.pxd' in srcs):
pyx_srcs.append(src)
elif src.endswith('.py'):
......
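The one-character fix above (endwith -> endswith) lets the Bazel rule recognise .py files that ship with a matching .pxd. A hypothetical BUILD snippet (target and file names are assumptions; only the srcs handling is visible in this diff):

pyx_library(
    name = "fast",
    # fast.py is cythonised because fast.pxd sits next to it
    srcs = ["fast.py", "fast.pxd"],
)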
......@@ -6,5 +6,4 @@ async def foo():
_ERRORS = """
5:4: 'yield from' not supported here
5:4: 'yield' not allowed in async coroutines (use 'await')
"""