Mirror of https://github.com/davidhalter/parso.git (synced 2025-12-06 21:04:29 +08:00)

Compare commits

59 commits
| Author | SHA1 | Date |
|---|---|---|
| | 2ca629a2f6 | |
| | 5c1e953c17 | |
| | a5f565ae10 | |
| | a29ec25598 | |
| | 04360cdfe7 | |
| | 4824534f8a | |
| | 647073b1b9 | |
| | 50445f424e | |
| | 5b5d855fab | |
| | 04e18ebb01 | |
| | d3cfcc24b8 | |
| | 89646e0970 | |
| | bc8566e964 | |
| | 89932c368d | |
| | dcae8cda92 | |
| | 1f6683b8ac | |
| | 0ec02e1d7f | |
| | 8db1498185 | |
| | 26e882d19c | |
| | 3a506b44ac | |
| | bae36f8ab0 | |
| | 94268815e8 | |
| | 5a57e8df06 | |
| | aa82a1d39a | |
| | e0f74dd8ad | |
| | 7df254440e | |
| | 05a8236d3f | |
| | 2067845cef | |
| | dcdd3bbc8e | |
| | 82868580a2 | |
| | a18baf0d2c | |
| | ed803f5749 | |
| | 032c7563c4 | |
| | b83c641057 | |
| | 97d9aeafb7 | |
| | 7a277c7302 | |
| | 5993765e0a | |
| | 435d310c2b | |
| | 8aa280342a | |
| | 73c61bca4a | |
| | 60ed141d80 | |
| | 091e72562c | |
| | fc6202ffb3 | |
| | 0e201810fa | |
| | 7a9739c5d6 | |
| | 1bf9ca94bb | |
| | a4d28d2eda | |
| | 36ddbaddf4 | |
| | b944fb9145 | |
| | 7a85409da7 | |
| | e79c0755eb | |
| | 9ab5937a3c | |
| | d3e58955a9 | |
| | a21ec2c0ad | |
| | 910a660c6f | |
| | 68fa70b959 | |
| | fa0bf4951c | |
| | ba2c0ad41a | |
| | 4b32408001 | |
@@ -7,6 +7,7 @@ python:
- 3.4
- 3.5
- 3.6
- 3.7
- pypy
matrix:
allow_failures:

@@ -5,6 +5,7 @@ David Halter (@davidhalter) <davidhalter88@gmail.com>

Code Contributors
=================
Alisdair Robertson (@robodair)


Code Contributors (to Jedi and therefore possibly to this library)

@@ -4,7 +4,13 @@ Changelog
---------

0.1.0 (2017-05-30)
0.1.1 (2017-11-05)
+++++++++++++++++++

- Fixed a few bugs in the caching layer
- Added support for Python 3.7

0.1.0 (2017-09-04)
+++++++++++++++++++

- Pulling the library out of Jedi. Some APIs will definitely change.
68 README.rst

@@ -1,5 +1,5 @@
###################################################################
parso - A Python Parser Written in Python
parso - A Python Parser
###################################################################

.. image:: https://secure.travis-ci.org/davidhalter/parso.png?branch=master

@@ -10,19 +10,52 @@ parso - A Python Parser Written in Python
    :target: https://coveralls.io/r/davidhalter/parso
    :alt: Coverage Status

.. image:: https://raw.githubusercontent.com/davidhalter/parso/master/docs/_static/logo_characters.png

Parso is a Python parser that supports error recovery and round-trip parsing.
Parso is a Python parser that supports error recovery and round-trip parsing
for different Python versions (in multiple Python versions). Parso is also able
to list multiple syntax errors in your python file.

Parso has been battle-tested by jedi_. It was pulled out of jedi to be useful
for other projects as well.

Parso is very simplistic. It consists of a small API to parse Python and
analyse the parsing tree.
Parso consists of a small API to parse Python and analyse the syntax tree.

A simple example:

Ressources
==========
.. code-block:: python

    >>> import parso
    >>> module = parso.parse('hello + 1', version="3.6")
    >>> expr = module.children[0]
    >>> expr
    PythonNode(arith_expr, [<Name: hello@1,0>, <Operator: +>, <Number: 1>])
    >>> print(expr.get_code())
    hello + 1
    >>> name = expr.children[0]
    >>> name
    <Name: hello@1,0>
    >>> name.end_pos
    (1, 5)
    >>> expr.end_pos
    (1, 9)

To list multiple issues:

.. code-block:: python

    >>> grammar = parso.load_grammar()
    >>> module = grammar.parse('foo +\nbar\ncontinue')
    >>> error1, error2 = grammar.iter_errors(module)
    >>> error1.message
    'SyntaxError: invalid syntax'
    >>> error2.message
    "SyntaxError: 'continue' not properly in loop"

Resources
=========

- `Testing <http://parso.readthedocs.io/en/latest/docs/development.html#testing>`_
- `PyPI <https://pypi.python.org/pypi/parso>`_
- `Docs <https://parso.readthedocs.org/en/latest/>`_
- Uses `semantic versioning <http://semver.org/>`_

@@ -41,34 +74,17 @@ Future
Known Issues
============

- Python3.6's f-strings are not parsed, yet. This means no errors are found in them.
- `async`/`await` are already used as keywords in Python3.6.
- `from __future__ import print_function` is not supported,
- `from __future__ import print_function` is not ignored.

Testing
=======

The test suite depends on ``tox`` and ``pytest``::

    pip install tox pytest

To run the tests for all supported Python versions::

    tox

If you want to test only a specific Python version (e.g. Python 2.7), it's as
easy as ::

    tox -e py27

Tests are also run automatically on `Travis CI
<https://travis-ci.org/davidhalter/parso/>`_.

Acknowledgements
================

- Guido van Rossum (@gvanrossum) for creating the parser generator pgen2
  (originally used in lib2to3).
- `Salome Schneider <https://www.crepes-schnaegg.ch/cr%C3%AApes-schn%C3%A4gg/kunst-f%C3%BCrs-cr%C3%AApes-mobil/>`_
  for the extremely awesome parso logo.

.. _jedi: https://github.com/davidhalter/jedi
30 conftest.py

@@ -11,9 +11,12 @@ import parso
from parso import cache
from parso.utils import parse_version_string

collect_ignore = ["setup.py"]

VERSIONS_2 = '2.6', '2.7'
VERSIONS_3 = '3.3', '3.4', '3.5', '3.6', '3.7'

@pytest.fixture(scope='session')
def clean_parso_cache():
    """

@@ -49,20 +52,11 @@ def pytest_generate_tests(metafunc):
            ids=[c.name for c in cases]
        )
    elif 'each_version' in metafunc.fixturenames:
        metafunc.parametrize(
            'each_version',
            ['2.6', '2.7', '3.3', '3.4', '3.5', '3.6'],
        )
        metafunc.parametrize('each_version', VERSIONS_2 + VERSIONS_3)
    elif 'each_py2_version' in metafunc.fixturenames:
        metafunc.parametrize(
            'each_py2_version',
            ['2.6', '2.7'],
        )
        metafunc.parametrize('each_py2_version', VERSIONS_2)
    elif 'each_py3_version' in metafunc.fixturenames:
        metafunc.parametrize(
            'each_py3_version',
            ['3.3', '3.4', '3.5', '3.6'],
        )
        metafunc.parametrize('each_py3_version', VERSIONS_3)


class NormalizerIssueCase(object):

@@ -97,16 +91,6 @@ def pytest_configure(config):
    root.addHandler(ch)


@pytest.fixture
def each_py3_version():
    return '3.3', '3.4', '3.5', '3.6'


@pytest.fixture
def each_py2_version():
    return '2.6', '2.7'


class Checker():
    def __init__(self, version, is_passing):
        self.version = version
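The refactor above replaces hard-coded version lists with the shared ``VERSIONS_2``/``VERSIONS_3`` tuples. The following is a minimal, hypothetical stand-alone sketch (not the project's actual conftest.py) of the same ``pytest_generate_tests`` parametrization pattern, assuming pytest is installed:

```python
import pytest  # assumed available in the test environment

VERSIONS_2 = ('2.6', '2.7')
VERSIONS_3 = ('3.3', '3.4', '3.5', '3.6', '3.7')


def pytest_generate_tests(metafunc):
    # Any test that asks for an `each_version` argument is generated once
    # per entry in the combined version tuple.
    if 'each_version' in metafunc.fixturenames:
        metafunc.parametrize('each_version', VERSIONS_2 + VERSIONS_3)


def test_version_string_has_a_dot(each_version):
    assert '.' in each_version
```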
@@ -5,45 +5,48 @@
set -eu -o pipefail

BASE_DIR=$(dirname $(readlink -f "$0"))
cd $(BASE_DIR)
cd $BASE_DIR

git fetch --tags

PROJECT_NAME=parso
BRANCH=master
BUILD_FOLDER=build
FOLDER=$BUILD_FOLDER/$PROJECT_NAME

# Test first.
#tox

[ -d $BUILD_FOLDER ] || mkdir $BUILD_FOLDER
# Remove the previous deployment first.
rm -rf $FOLDER
# Checkout the right branch
cd $BUILD_FOLDER
rm -rf $PROJECT_NAME
git clone .. $PROJECT_NAME
cd $PROJECT_NAME
git checkout $BRANCH

# Test first.
tox

# Create tag
tag=v$(python -c "import $PROJECT_NAME; print($PROJECT_NAME.__version__)")

master_ref=$(git show-ref -s $BRANCH)
tag_ref=$(git show-ref -s $tag | true)
if [ $tag_ref ]; then
    if [ $tag_ref != $master_ref ]; then
master_ref=$(git show-ref -s heads/$BRANCH)
tag_ref=$(git show-ref -s $tag || true)
if [[ $tag_ref ]]; then
    if [[ $tag_ref != $master_ref ]]; then
        echo 'Cannot tag something that has already been tagged with another commit.'
        exit 1
    fi
else
    git tag $BRANCH
    git tag $tag
    git push --tags
fi

# Package and upload to PyPI
rm -rf dist/
#rm -rf dist/ - Not needed anymore, because the folder is never reused.
echo `pwd`
python setup.py sdist bdist_wheel
# Maybe do a pip install twine before.
twine upload dist/*

cd $(BASE_DIR)
# Back in the development directory fetch tags.
git fetch --tags
cd $BASE_DIR
# The tags have been pushed to this repo. Push the tags to github, now.
git push --tags
BIN docs/_static/logo.png (vendored): binary file, before 28 KiB, after 200 KiB.
BIN docs/_static/logo_characters.png (vendored, new file): binary file, 55 KiB.
4 docs/_themes/flask/layout.html (vendored)

@@ -6,8 +6,8 @@
{% endif %}
<link media="only screen and (max-device-width: 480px)" href="{{
pathto('_static/small_flask.css', 1) }}" type= "text/css" rel="stylesheet" />
<a href="https://github.com/davidhalter/jedi">
<img style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_red_aa0000.png" alt="Fork me on GitHub">
<a href="https://github.com/davidhalter/parso">
<img style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_red_aa0000.png" alt="Fork me">
</a>
{% endblock %}
{%- block relbar2 %}{% endblock %}

@@ -274,7 +274,7 @@ autodoc_default_flags = []
# -- Options for intersphinx module --------------------------------------------

intersphinx_mapping = {
    'http://docs.python.org/': None,
    'http://docs.python.org/': ('https://docs.python.org/3.6', None),
}
@@ -4,7 +4,8 @@ Development
===========

If you want to contribute anything to |parso|, just open an issue or pull
request to discuss it. We welcome changes!
request to discuss it. We welcome changes! Please check the ``CONTRIBUTING.md``
file in the repository, first.


Deprecations Process

@@ -3,8 +3,8 @@
Installation and Configuration
==============================

The preferred way
-----------------
The preferred way (pip)
-----------------------

On any system you can install |parso| directly from the Python package index
using pip::
@@ -1,29 +1,42 @@
.. include:: ../global.rst

.. _parser-tree:

Parser Tree
===========

Usage
-----
The parser tree is returned by calling :py:meth:`parso.Grammar.parse`.

.. automodule:: parso.python
.. note:: Note that parso positions are always 1 based for lines and zero
   based for columns. This means the first position in a file is (1, 0).

Parser Tree Base Classes
------------------------

Generally there are two types of classes you will deal with:
:py:class:`parso.tree.Leaf` and :py:class:`parso.tree.BaseNode`.

.. autoclass:: parso.tree.BaseNode
    :show-inheritance:
    :members:
    :undoc-members:


Parser Tree Base Class
----------------------
.. autoclass:: parso.tree.Leaf
    :show-inheritance:
    :members:

All nodes and leaves have these methods/properties:

.. autoclass:: parso.tree.NodeOrLeaf
    :members:
    :undoc-members:
    :show-inheritance:


Python Parser Tree
------------------

.. currentmodule:: parso.python.tree

.. automodule:: parso.python.tree
    :members:
    :undoc-members:
@@ -4,12 +4,13 @@ Usage
=====

|parso| works around grammars. You can simply create Python grammars by calling
``load_grammar``. Grammars (with a custom tokenizer and custom parser trees)
can also be created by directly instantiating ``Grammar``. More information
:py:func:`parso.load_grammar`. Grammars (with a custom tokenizer and custom parser trees)
can also be created by directly instantiating :py:func:`parso.Grammar`. More information
about the resulting objects can be found in the :ref:`parser tree documentation
<parser-tree>`.

The simplest way of using parso is without even loading a grammar:
The simplest way of using parso is without even loading a grammar
(:py:func:`parso.parse`):

.. sourcecode:: python

@@ -17,7 +18,31 @@ The simplest way of using parso is without even loading a grammar:
    >>> parso.parse('foo + bar')
    <Module: @1-1>

.. automodule:: parso.grammar
Loading a Grammar
-----------------

Typically if you want to work with one specific Python version, use:

.. autofunction:: parso.load_grammar

Grammar methods
---------------

You will get back a grammar object that you can use to parse code and find
issues in it:

.. autoclass:: parso.Grammar
    :members:
    :undoc-members:


Error Retrieval
---------------

|parso| is able to find multiple errors in your source code. Iterating through
those errors yields the following instances:

.. autoclass:: parso.normalizer.Issue
    :members:
    :undoc-members:

@@ -25,17 +50,17 @@ The simplest way of using parso is without even loading a grammar:
Utility
-------

.. autofunction:: parso.parse
|parso| also offers some utility functions that can be really useful:

.. automodule:: parso.utils
    :members:
    :undoc-members:
.. autofunction:: parso.parse
.. autofunction:: parso.split_lines
.. autofunction:: parso.python_bytes_to_unicode


Used By
-------

- jedi_ (which is used by IPython and a lot of plugins).
- jedi_ (which is used by IPython and a lot of editor plugins).


.. _jedi: https://github.com/davidhalter/jedi
@@ -1,7 +1,7 @@
.. include global.rst

parso - A Python Parser Written in Python
=========================================
parso - A Python Parser
=======================

Release v\ |release|. (:doc:`Installation <docs/installation>`)
@@ -1,10 +1,19 @@
"""
parso is a Python parser. It's really easy to use and supports multiple Python
versions, file caching, round-trips and other stuff:
r"""
Parso is a Python parser that supports error recovery and round-trip parsing
for different Python versions (in multiple Python versions). Parso is also able
to list multiple syntax errors in your python file.

>>> from parso import load_grammar
>>> grammar = load_grammar(version='2.7')
>>> module = grammar.parse('hello + 1')
Parso has been battle-tested by jedi_. It was pulled out of jedi to be useful
for other projects as well.

Parso consists of a small API to parse Python and analyse the syntax tree.

.. _jedi: https://github.com/davidhalter/jedi

A simple example:

>>> import parso
>>> module = parso.parse('hello + 1', version="3.6")
>>> expr = module.children[0]
>>> expr
PythonNode(arith_expr, [<Name: hello@1,0>, <Operator: +>, <Number: 1>])

@@ -17,19 +26,32 @@ hello + 1
(1, 5)
>>> expr.end_pos
(1, 9)

To list multiple issues:

>>> grammar = parso.load_grammar()
>>> module = grammar.parse('foo +\nbar\ncontinue')
>>> error1, error2 = grammar.iter_errors(module)
>>> error1.message
'SyntaxError: invalid syntax'
>>> error2.message
"SyntaxError: 'continue' not properly in loop"
"""

from parso.parser import ParserSyntaxError
from parso.grammar import Grammar, load_grammar
from parso.utils import split_lines, python_bytes_to_unicode


__version__ = '0.0.3'
__version__ = '0.1.1'


def parse(code=None, **kwargs):
    """
    A utility function to parse Python with the current Python version. Params
    are documented in ``Grammar.parse``.
    A utility function to avoid loading grammars.
    Params are documented in :py:meth:`parso.Grammar.parse`.

    :param str version: The version used by :py:func:`parso.load_grammar`.
    """
    version = kwargs.pop('version', None)
    grammar = load_grammar(version=version)
@@ -4,13 +4,19 @@ import sys
import hashlib
import gc
import shutil
import pickle
import platform
import errno
import logging

try:
    import cPickle as pickle
except:
    import pickle

from parso._compatibility import FileNotFoundError

LOG = logging.getLogger(__name__)


_PICKLE_VERSION = 30
"""

@@ -111,7 +117,7 @@ def _load_from_file_system(hashed_grammar, path, p_time, cache_path=None):
        return None
    else:
        parser_cache.setdefault(hashed_grammar, {})[path] = module_cache_item
        logging.debug('pickle loaded: %s', path)
        LOG.debug('pickle loaded: %s', path)
        return module_cache_item.node


@@ -125,7 +131,7 @@ def save_module(hashed_grammar, path, module, lines, pickling=True, cache_path=N
    item = _NodeCacheItem(module, lines, p_time)
    parser_cache.setdefault(hashed_grammar, {})[path] = item
    if pickling and path is not None:
        _save_to_file_system(hashed_grammar, path, item)
        _save_to_file_system(hashed_grammar, path, item, cache_path=cache_path)


def _save_to_file_system(hashed_grammar, path, item, cache_path=None):
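Several hunks in this commit range swap calls on the root ``logging`` module for a module-level ``LOG = logging.getLogger(__name__)``. A small, generic sketch of that pattern (not parso-specific code) for reference:

```python
import logging

# One named logger per module instead of logging.debug(...) on the root
# logger; consumers can then enable or silence e.g. "parso.cache" alone.
LOG = logging.getLogger(__name__)


def load_something(path):
    # Lazy %-style formatting: the string is only built if DEBUG is enabled.
    LOG.debug('pickle loaded: %s', path)


if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    load_something('/tmp/example')
```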
@@ -19,10 +19,11 @@ _loaded_grammars = {}

class Grammar(object):
    """
    Create custom grammars by calling this. It's not really supported, yet.
    :py:func:`parso.load_grammar` returns instances of this class.

    :param text: A BNF representation of your grammar.
    Creating custom grammars by calling this is not supported, yet.
    """
    #:param text: A BNF representation of your grammar.
    _error_normalizer_config = None
    _token_namespace = None
    _default_normalizer_config = pep8.PEP8NormalizerConfig()

@@ -44,33 +45,38 @@ class Grammar(object):
        If you need finer grained control over the parsed instance, there will be
        other ways to access it.

        :param code str: A unicode string that contains Python code.
        :param path str: The path to the file you want to open. Only needed for caching.
        :param error_recovery bool: If enabled, any code will be returned. If
        :param str code: A unicode or bytes string. When it's not possible to
            decode bytes to a string, returns a
            :py:class:`UnicodeDecodeError`.
        :param bool error_recovery: If enabled, any code will be returned. If
            it is invalid, it will be returned as an error node. If disabled,
            you will get a ParseError when encountering syntax errors in your
            code.
        :param start_symbol str: The grammar symbol that you want to parse. Only
        :param str start_symbol: The grammar symbol that you want to parse. Only
            allowed to be used when error_recovery is False.
        :param cache bool: Keeps a copy of the parser tree in RAM and on disk
        :param str path: The path to the file you want to open. Only needed for caching.
        :param bool cache: Keeps a copy of the parser tree in RAM and on disk
            if a path is given. Returns the cached trees if the corresponding
            files on disk have not changed.
        :param diff_cache bool: Diffs the cached python module against the new
        :param bool diff_cache: Diffs the cached python module against the new
            code and tries to parse only the parts that have changed. Returns
            the same (changed) module that is found in cache. Using this option
            requires you to not do anything anymore with the old cached module,
            because the contents of it might have changed.
        :param cache_path bool: If given saves the parso cache in this
            requires you to not do anything anymore with the cached modules
            under that path, because the contents of it might change. This
            option is still somewhat experimental. If you want stability,
            please don't use it.
        :param bool cache_path: If given saves the parso cache in this
            directory. If not given, defaults to the default cache places on
            each platform.

        :return: A syntax tree node. Typically the module.
        :return: A subclass of :py:class:`parso.tree.NodeOrLeaf`. Typically a
            :py:class:`parso.python.tree.Module`.
        """
        if 'start_pos' in kwargs:
            raise TypeError("parse() got an unexpected keyworda argument.")
        return self._parse(code=code, **kwargs)

    def _parse(self, code=None, path=None, error_recovery=True,
    def _parse(self, code=None, error_recovery=True, path=None,
               start_symbol=None, cache=False, diff_cache=False,
               cache_path=None, start_pos=(1, 0)):
        """

@@ -88,10 +94,7 @@ class Grammar(object):
        if error_recovery and start_symbol != 'file_input':
            raise NotImplementedError("This is currently not implemented.")

        if cache and code is None and path is not None:
            # With the current architecture we cannot load from cache if the
            # code is given, because we just load from cache if it's not older than
            # the latest change (file last modified).
        if cache and path is not None:
            module_node = load_module(self._hashed, path, cache_path=cache_path)
            if module_node is not None:
                return module_node

@@ -152,6 +155,11 @@ class Grammar(object):
        return ns

    def iter_errors(self, node):
        """
        Given a :py:class:`parso.tree.NodeOrLeaf` returns a generator of
        :py:class:`parso.normalizer.Issue` objects. For Python this is
        a list of syntax/indentation errors.
        """
        if self._error_normalizer_config is None:
            raise ValueError("No error normalizer specified for this grammar.")

@@ -237,16 +245,19 @@ class PythonFStringGrammar(Grammar):

def load_grammar(**kwargs):
    """
    Loads a Python grammar. The default version is the current Python version.
    Loads a :py:class:`parso.Grammar`. The default version is the current Python
    version.

    If you need support for a specific version, please use e.g.
    `version='3.3'`.
    :param str version: A python version string, e.g. ``version='3.3'``.
    """
def load_grammar(language='python', version=None):
    if language == 'python':
        version_info = parse_version_string(version)

        file = 'python/grammar%s%s.txt' % (version_info.major, version_info.minor)
        file = os.path.join(
            'python',
            'grammar%s%s.txt' % (version_info.major, version_info.minor)
        )

        global _loaded_grammars
        path = os.path.join(os.path.dirname(__file__), file)
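The docstrings above describe ``load_grammar`` and ``Grammar.parse``. A hedged usage sketch of that API (output shapes are indicative only, not literal doctest output):

```python
import parso

# Load a grammar for a specific Python version (defaults to the running one).
grammar = parso.load_grammar(version='3.6')

# With error_recovery=True (the default) even broken code yields a tree that
# contains error nodes instead of raising.
module = grammar.parse('def broken(:\n    pass\n', error_recovery=True)
print(module.children)

# With error_recovery=False, invalid code raises instead.
try:
    grammar.parse('def broken(:\n', error_recovery=False)
except parso.ParserSyntaxError as exc:
    print('refused to parse:', exc)
```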
@@ -121,8 +121,18 @@ class Issue(object):
    def __init__(self, node, code, message):
        self._node = node
        self.code = code
        """
        An integer code that stands for the type of error.
        """
        self.message = message
        """
        A message (string) for the issue.
        """
        self.start_pos = node.start_pos
        """
        The start position position of the error as a tuple (line, column). As
        always in |parso| the first line is 1 and the first column 0.
        """

    def __eq__(self, other):
        return self.start_pos == other.start_pos and self.code == other.code
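Given the attribute docstrings added to ``Issue`` above, iterating over errors might look like this (a sketch; the concrete codes and messages are indicative):

```python
import parso

grammar = parso.load_grammar()
module = grammar.parse('foo +\nbar\ncontinue')

for issue in grammar.iter_errors(module):
    # Each Issue carries an integer code, a message string and a
    # (line, column) start position (lines 1-based, columns 0-based).
    print(issue.code, issue.start_pos, issue.message)
```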
@@ -16,7 +16,10 @@ fallback token code OP, but the parser needs the actual token code.

"""

import pickle
try:
    import cPickle as pickle
except:
    import pickle


class Grammar(object):
@@ -16,6 +16,8 @@ from parso.python.tree import EndMarker
from parso.python.tokenize import (NEWLINE, PythonToken, ERROR_DEDENT,
                                   ENDMARKER, INDENT, DEDENT)

LOG = logging.getLogger(__name__)


def _get_last_line(node_or_leaf):
    last_leaf = node_or_leaf.get_last_leaf()

@@ -116,7 +118,7 @@ class DiffParser(object):

        Returns the new module node.
        '''
        logging.debug('diff parser start')
        LOG.debug('diff parser start')
        # Reset the used names cache so they get regenerated.
        self._module._used_names = None

@@ -127,11 +129,11 @@ class DiffParser(object):
        line_length = len(new_lines)
        sm = difflib.SequenceMatcher(None, old_lines, self._parser_lines_new)
        opcodes = sm.get_opcodes()
        logging.debug('diff parser calculated')
        logging.debug('diff: line_lengths old: %s, new: %s' % (len(old_lines), line_length))
        LOG.debug('diff parser calculated')
        LOG.debug('diff: line_lengths old: %s, new: %s' % (len(old_lines), line_length))

        for operation, i1, i2, j1, j2 in opcodes:
            logging.debug('diff %s old[%s:%s] new[%s:%s]',
            LOG.debug('diff %s old[%s:%s] new[%s:%s]',
                          operation, i1 + 1, i2, j1 + 1, j2)

            if j2 == line_length and new_lines[-1] == '':

@@ -161,12 +163,12 @@ class DiffParser(object):
                % (last_pos, line_length, ''.join(diff))
            )

        logging.debug('diff parser end')
        LOG.debug('diff parser end')
        return self._module

    def _enabled_debugging(self, old_lines, lines_new):
        if self._module.get_code() != ''.join(lines_new):
            logging.warning('parser issue:\n%s\n%s', ''.join(old_lines),
            LOG.warning('parser issue:\n%s\n%s', ''.join(old_lines),
                            ''.join(lines_new))

    def _copy_from_old_parser(self, line_offset, until_line_old, until_line_new):

@@ -203,7 +205,7 @@ class DiffParser(object):
            from_ = copied_nodes[0].get_start_pos_of_prefix()[0] + line_offset
            to = self._nodes_stack.parsed_until_line

            logging.debug('diff actually copy %s to %s', from_, to)
            LOG.debug('diff actually copy %s to %s', from_, to)
            # Since there are potential bugs that might loop here endlessly, we
            # just stop here.
            assert last_until_line != self._nodes_stack.parsed_until_line \

@@ -248,7 +250,7 @@ class DiffParser(object):
            nodes = node.children

        self._nodes_stack.add_parsed_nodes(nodes)
        logging.debug(
        LOG.debug(
            'parse_part from %s to %s (to %s in part parser)',
            nodes[0].get_start_pos_of_prefix()[0],
            self._nodes_stack.parsed_until_line,
150 parso/python/grammar37.txt (new file)

@@ -0,0 +1,150 @@
# Grammar for Python

# NOTE WELL: You should also follow all the steps listed at
# https://docs.python.org/devguide/grammar.html

# Start symbols for the grammar:
#       single_input is a single interactive statement;
#       file_input is a module or sequence of commands read from an input file;
#       eval_input is the input for the eval() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
file_input: (NEWLINE | stmt)* ENDMARKER
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)

# NOTE: Francisco Souza/Reinoud Elhorst, using ASYNC/'await' keywords instead of
# skipping python3.5+ compatibility, in favour of 3.7 solution
async_funcdef: 'async' funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite

parameters: '(' [typedargslist] ')'
typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [
        '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
      | '**' tfpdef [',']]]
     |  '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
     |  '**' tfpdef [','])
tfpdef: NAME [':' test]
varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [
        '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
      | '**' vfpdef [',']]]
     |  '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
     |  '**' vfpdef [',']
)
vfpdef: NAME

stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
                     ('=' (yield_expr|testlist_star_expr))*)
annassign: ':' test ['=' test]
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=')
# For normal and annotated assignments, additional restrictions enforced by the interpreter
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
              'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]

compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: 'async' (funcdef | with_stmt | for_stmt)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
           ((except_clause ':' suite)+
            ['else' ':' suite]
            ['finally' ':' suite] |
           'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test ['as' NAME]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT

test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
# <> isn't actually a valid comparison operator in Python. It's here for the
# sake of a __future__ import described in PEP 401 (which really works :-)
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom_expr ['**' factor]
atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
       '[' [testlist_comp] ']' |
       '{' [dictorsetmaker] '}' |
       NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( ((test ':' test | '**' expr)
                   (comp_for | (',' (test ':' test | '**' expr))* [','])) |
                  ((test | star_expr)
                   (comp_for | (',' (test | star_expr))* [','])) )

classdef: 'class' NAME ['(' [arglist] ')'] ':' suite

arglist: argument (',' argument)* [',']

# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
# "test '=' test" is really "keyword '=' test", but we have no such token.
# These need to be in a single rule to avoid grammar that is ambiguous
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
            test '=' test |
            '**' test |
            '*' test )

comp_iter: comp_for | comp_if
comp_for: ['async'] 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]

# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME

yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
@@ -145,7 +145,7 @@ class BackslashNode(IndentationNode):


def _is_magic_name(name):
    return name.value.startswith('__') and name.value.startswith('__')
    return name.value.startswith('__') and name.value.endswith('__')


class PEP8Normalizer(ErrorFinder):
@@ -62,17 +62,18 @@ def maybe(*choices):

# Return the empty string, plus all of the valid string prefixes.
def _all_string_prefixes(version_info):
    def different_case_versions(prefix):
        for s in _itertools.product(*[(c, c.upper()) for c in prefix]):
            yield ''.join(s)
    # The valid string prefixes. Only contain the lower case versions,
    #  and don't contain any permuations (include 'fr', but not
    #  'rf'). The various permutations will be generated.
    _valid_string_prefixes = ['b', 'r', 'u', 'br']
    _valid_string_prefixes = ['b', 'r', 'u']
    if version_info >= (3, 0):
        _valid_string_prefixes.append('br')

    if version_info >= (3, 6):
        _valid_string_prefixes += ['f', 'fr']
    if version_info <= (2, 7):
        # TODO this is actually not 100% valid. ur is valid in Python 2.7,
        # while ru is not.
        # TODO rb is also not valid.
        _valid_string_prefixes.append('ur')

    # if we add binary f-strings, add: ['fb', 'fbr']
    result = set([''])

@@ -80,8 +81,11 @@ def _all_string_prefixes(version_info):
        for t in _itertools.permutations(prefix):
            # create a list with upper and lower versions of each
            #  character
            for s in _itertools.product(*[(c, c.upper()) for c in t]):
                result.add(''.join(s))
            result.update(different_case_versions(t))
    if version_info <= (2, 7):
        # In Python 2 the order cannot just be random.
        result.update(different_case_versions('ur'))
        result.update(different_case_versions('br'))
    return result
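The helper above expands every valid prefix into all upper/lower-case spellings with ``itertools.product``. A stand-alone sketch of that case-permutation trick (illustrative, not the module's exact code):

```python
import itertools


def different_case_versions(prefix):
    # 'br' -> 'br', 'bR', 'Br', 'BR'
    for chars in itertools.product(*[(c, c.upper()) for c in prefix]):
        yield ''.join(chars)


print(sorted(different_case_versions('br')))
# ['BR', 'Br', 'bR', 'br']
```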
@@ -1,16 +1,18 @@
"""
If you know what an syntax tree is, you'll see that this module is pretty much
that. The classes represent syntax elements like functions and imports.
This is the syntax tree for Python syntaxes (2 & 3). The classes represent
syntax elements like functions and imports.

This is the "business logic" part of the parser. There's a lot of logic here
that makes it easier for Jedi (and other libraries) to deal with a Python syntax
tree.
All of the nodes can be traced back to the `Python grammar file
<https://docs.python.org/3/reference/grammar.html>`_. If you want to know how
a tree is structured, just analyse that file (for each Python version it's a
bit different).

By using `get_code` on a module, you can get back the 1-to-1 representation of
the input given to the parser. This is important if you are using refactoring.
There's a lot of logic here that makes it easier for Jedi (and other libraries)
to deal with a Python syntax tree.

The easiest way to play with this module is to use :class:`parsing.Parser`.
:attr:`parsing.Parser.module` holds an instance of :class:`Module`:
By using :py:meth:`parso.tree.NodeOrLeaf.get_code` on a module, you can get
back the 1-to-1 representation of the input given to the parser. This is
important if you want to refactor a parser tree.

>>> from parso import parse
>>> parser = parse('import os')

@@ -23,6 +25,21 @@ Any subclasses of :class:`Scope`, including :class:`Module` has an attribute

>>> list(module.iter_imports())
[<ImportName: import os@1,0>]

Changes to the Python Grammar
-----------------------------

A few things have changed when looking at Python grammar files:

- :class:`Param` does not exist in Python grammar files. It is essentially a
  part of a ``parameters`` node.  |parso| splits it up to make it easier to
  analyse parameters. However this just makes it easier to deal with the syntax
  tree, it doesn't actually change the valid syntax.
- A few nodes like `lambdef` and `lambdef_nocond` have been merged in the
  syntax tree to make it easier to do deal with them.

Parser Tree Classes
-------------------
"""

import re

@@ -32,6 +49,17 @@ from parso.tree import Node, BaseNode, Leaf, ErrorNode, ErrorLeaf, \
    search_ancestor
from parso.python.prefix import split_prefix

_FLOW_CONTAINERS = set(['if_stmt', 'while_stmt', 'for_stmt', 'try_stmt',
                        'with_stmt', 'async_stmt', 'suite'])
_RETURN_STMT_CONTAINERS = set(['suite', 'simple_stmt']) | _FLOW_CONTAINERS
_FUNC_CONTAINERS = set(['suite', 'simple_stmt', 'decorated']) | _FLOW_CONTAINERS
_GET_DEFINITION_TYPES = set([
    'expr_stmt', 'comp_for', 'with_stmt', 'for_stmt', 'import_name',
    'import_from', 'param'
])
_IMPORTS = set(['import_name', 'import_from'])


class DocstringMixin(object):
    __slots__ = ()

@@ -42,7 +70,7 @@ class DocstringMixin(object):
        """
        if self.type == 'file_input':
            node = self.children[0]
        elif isinstance(self, ClassOrFunc):
        elif self.type in ('funcdef', 'classdef'):
            node = self.children[self.children.index(':') + 1]
            if node.type == 'suite':  # Normally a suite
                node = node.children[1]  # -> NEWLINE stmt
@@ -67,25 +95,11 @@ class PythonMixin(object):
    """
    __slots__ = ()

    def get_definition(self):
        if self.type in ('newline', 'endmarker'):
            raise ValueError('Cannot get the indentation of whitespace or indentation.')
        scope = self
        while scope.parent is not None:
            parent = scope.parent
            if isinstance(scope, (PythonNode, PythonLeaf)) and parent.type not in ('simple_stmt', 'file_input'):
                if scope.type == 'testlist_comp':
                    try:
                        if scope.children[1].type == 'comp_for':
                            return scope.children[1]
                    except IndexError:
                        pass
                scope = parent
            else:
                break
        return scope

    def get_name_of_position(self, position):
        """
        Given a (line, column) tuple, returns a :py:class:`Name` or ``None`` if
        there is no name at that position.
        """
        for c in self.children:
            if isinstance(c, Leaf):
                if c.type == 'name' and c.start_pos <= position <= c.end_pos:

@@ -104,6 +118,9 @@ class PythonLeaf(PythonMixin, Leaf):
        return split_prefix(self, self.get_start_pos_of_prefix())

    def get_start_pos_of_prefix(self):
        """
        Basically calls :py:meth:`parso.tree.NodeOrLeaf.get_start_pos_of_prefix`.
        """
        # TODO it is really ugly that we have to override it. Maybe change
        # indent error leafs somehow? No idea how, though.
        previous_leaf = self.get_previous_leaf()

@@ -173,21 +190,50 @@ class Name(_LeafWithoutNewlines):
                                           self.line, self.column)

    def is_definition(self):
        if self.parent.type in ('power', 'atom_expr'):
            # In `self.x = 3` self is not a definition, but x is.
            return False
        """
        Returns True if the name is being defined.
        """
        return self.get_definition() is not None

    def get_definition(self, import_name_always=False):
        """
        Returns None if there's on definition for a name.

        :param import_name_alway: Specifies if an import name is always a
            definition. Normally foo in `from foo import bar` is not a
            definition.
        """
        node = self.parent
        type_ = node.type
        if type_ in ('power', 'atom_expr'):
            # In `self.x = 3` self is not a definition, but x is.
            return None

        if type_ in ('funcdef', 'classdef'):
            if self == node.name:
                return node
            return None

        if type_ == 'except_clause':
            # TODO in Python 2 this doesn't work correctly. See grammar file.
            #      I think we'll just let it be. Python 2 will be gone in a few
            #      years.
            if self.get_previous_sibling() == 'as':
                return node.parent  # The try_stmt.
            return None

        while node is not None:
            if node.type == 'suite':
                return None
            if node.type in _GET_DEFINITION_TYPES:
                if self in node.get_defined_names():
                    return node
                if import_name_always and node.type in _IMPORTS:
                    return node
                return None
            node = node.parent
        return None

        stmt = self.get_definition()
        if stmt.type in ('funcdef', 'classdef', 'param'):
            return self == stmt.name
        elif stmt.type == 'for_stmt':
            return self.start_pos < stmt.children[2].start_pos
        elif stmt.type == 'try_stmt':
            return self.get_previous_sibling() == 'as'
        else:
            return stmt.type in ('expr_stmt', 'import_name', 'import_from',
                                 'comp_for', 'with_stmt') \
                and self in stmt.get_defined_names()


class Literal(PythonLeaf):
@@ -279,8 +325,7 @@ class Scope(PythonBaseNode, DocstringMixin):
            for element in children:
                if element.type in names:
                    yield element
                if element.type in ('suite', 'simple_stmt', 'decorated') \
                        or isinstance(element, Flow):
                if element.type in _FUNC_CONTAINERS:
                    for e in scan(element.children):
                        yield e

@@ -315,14 +360,15 @@ class Module(Scope):
        super(Module, self).__init__(children)
        self._used_names = None

    def iter_future_import_names(self):
    def _iter_future_import_names(self):
        """
        :return list of str: A list of future import names.
        :return: A list of future import names.
        :rtype: list of str
        """
        # TODO this is a strange scan and not fully correct. I think Python's
        # parser does it in a different way and scans for the first
        # statement/import with a tokenizer (to check for syntax changes like
        # the future print statement).
        # In Python it's not allowed to use future imports after the first
        # actual (non-future) statement. However this is not a linter here,
        # just return all future imports. If people want to scan for issues
        # they should use the API.
        for imp in self.iter_imports():
            if imp.type == 'import_from' and imp.level == 0:
                for path in imp.get_paths():

@@ -330,21 +376,22 @@ class Module(Scope):
                    if len(names) == 2 and names[0] == '__future__':
                        yield names[1]

    def has_explicit_absolute_import(self):
    def _has_explicit_absolute_import(self):
        """
        Checks if imports in this module are explicitly absolute, i.e. there
        is a ``__future__`` import.
        Currently not public, might be in the future.
        :return bool:
        """
        for name in self.iter_future_import_names():
        for name in self._iter_future_import_names():
            if name == 'absolute_import':
                return True
        return False

    def get_used_names(self):
        """
        Returns all the `Name` leafs that exist in this module. Tihs includes
        both definitions and references of names.
        Returns all the :class:`Name` leafs that exist in this module. This
        includes both definitions and references of names.
        """
        if self._used_names is None:
            # Don't directly use self._used_names to eliminate a lookup.

@@ -383,7 +430,7 @@ class ClassOrFunc(Scope):

    def get_decorators(self):
        """
        :return list of Decorator:
        :rtype: list of :class:`Decorator`
        """
        decorated = self.parent
        if decorated.type == 'decorated':

@@ -398,13 +445,6 @@ class ClassOrFunc(Scope):
class Class(ClassOrFunc):
    """
    Used to store the parsed contents of a python class.

    :param name: The Class name.
    :type name: str
    :param supers: The super classes of a Class.
    :type supers: list
    :param start_pos: The start position (line, column) of the class.
    :type start_pos: tuple(int, int)
    """
    type = 'classdef'
    __slots__ = ()
@@ -518,14 +558,54 @@ class Function(ClassOrFunc):
        """
        Returns a generator of `yield_expr`.
        """
        # TODO This is incorrect, yields are also possible in a statement.
        return self._search_in_scope('yield_expr')
        def scan(children):
            for element in children:
                if element.type in ('classdef', 'funcdef', 'lambdef'):
                    continue

                try:
                    nested_children = element.children
                except AttributeError:
                    if element.value == 'yield':
                        if element.parent.type == 'yield_expr':
                            yield element.parent
                        else:
                            yield element
                else:
                    for result in scan(nested_children):
                        yield result

        return scan(self.children)

    def iter_return_stmts(self):
        """
        Returns a generator of `return_stmt`.
        """
        return self._search_in_scope('return_stmt')
        def scan(children):
            for element in children:
                if element.type == 'return_stmt' \
                        or element.type == 'keyword' and element.value == 'return':
                    yield element
                if element.type in _RETURN_STMT_CONTAINERS:
                    for e in scan(element.children):
                        yield e

        return scan(self.children)

    def iter_raise_stmts(self):
        """
        Returns a generator of `raise_stmt`. Includes raise statements inside try-except blocks
        """
        def scan(children):
            for element in children:
                if element.type == 'raise_stmt' \
                        or element.type == 'keyword' and element.value == 'raise':
                    yield element
                if element.type in _RETURN_STMT_CONTAINERS:
                    for e in scan(element.children):
                        yield e

        return scan(self.children)

    def is_generator(self):
        """

@@ -651,6 +731,9 @@ class ForStmt(Flow):
        """
        return self.children[3]

    def get_defined_names(self):
        return _defined_names(self.children[1])


class TryStmt(Flow):
    type = 'try_stmt'

@@ -662,7 +745,6 @@ class TryStmt(Flow):
        Returns ``[None]`` for except clauses without an exception given.
        """
        for node in self.children:
            # TODO this is not correct. We're not returning an except clause.
            if node.type == 'except_clause':
                yield node.children[1]
            elif node == 'except':

@@ -685,8 +767,7 @@ class WithStmt(Flow):
                names += _defined_names(with_item.children[2])
        return names

    def get_context_manager_from_name(self, name):
        # TODO Replace context_manager with test?
    def get_test_node_from_name(self, name):
        node = name.parent
        if node.type != 'with_item':
            raise ValueError('The name is not actually part of a with statement.')

@@ -996,7 +1077,7 @@ class Param(PythonBaseNode):
    @property
    def annotation(self):
        """
        The default is the test node that appears after `->`. Is `None` in case
        The default is the test node that appears after `:`. Is `None` in case
        no annotation is present.
        """
        tfpdef = self._tfpdef()

@@ -1025,6 +1106,9 @@ class Param(PythonBaseNode):
        else:
            return self._tfpdef()

    def get_defined_names(self):
        return [self.name]

    @property
    def position_index(self):
        """

@@ -1077,4 +1161,5 @@ class CompFor(PythonBaseNode):
        """
        Returns the a list of `Name` that the comprehension defines.
        """
        return _defined_names(self.children[1])
        # allow async for
        return _defined_names(self.children[self.children.index('for') + 1])
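A sketch of how the reworked ``Name.is_definition`` / ``Name.get_definition`` API above could be used; the exact node reprs are indicative:

```python
import parso

module = parso.parse('x = 1\nprint(x)\n')

# get_used_names() maps a name string to all Name leafs with that value.
for name in module.get_used_names()['x']:
    # The first `x` is a definition (its expr_stmt is returned),
    # the second is only a reference (None).
    print(name.start_pos, name.is_definition(), name.get_definition())
```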
@@ -4,12 +4,12 @@ from parso._compatibility import utf8_repr, encoding, py_version
|
||||
|
||||
def search_ancestor(node, *node_types):
|
||||
"""
|
||||
Recursively looks at the parents of a node and checks if the type names
|
||||
match.
|
||||
Recursively looks at the parents of a node and returns the first found node
|
||||
that matches node_types. Returns ``None`` if no matching node is found.
|
||||
|
||||
:param node: The node that is looked at.
|
||||
:param node_types: A tuple or a string of type names that are
|
||||
searched for.
|
||||
:param node: The ancestors of this node will be checked.
|
||||
:param node_types: type names that are searched for.
|
||||
:type node_types: tuple of str
|
||||
"""
|
||||
while True:
|
||||
node = node.parent
|
||||
@@ -22,6 +22,10 @@ class NodeOrLeaf(object):
|
||||
The base class for nodes and leaves.
|
||||
"""
|
||||
__slots__ = ()
|
||||
type = None
|
||||
'''
|
||||
The type is a string that typically matches the types of the grammar file.
|
||||
'''
|
||||
|
||||
def get_root_node(self):
|
||||
"""
|
||||
@@ -35,8 +39,8 @@ class NodeOrLeaf(object):
|
||||
|
||||
def get_next_sibling(self):
|
||||
"""
|
||||
The node immediately following the invocant in their parent's children
|
||||
list. If the invocant does not have a next sibling, it is None
|
||||
Returns the node immediately following this node in this parent's
|
||||
children list. If this node does not have a next sibling, it is None
|
||||
"""
|
||||
# Can't use index(); we need to test by identity
|
||||
for i, child in enumerate(self.parent.children):
|
||||
@@ -48,8 +52,9 @@ class NodeOrLeaf(object):
|
||||
|
||||
def get_previous_sibling(self):
|
||||
"""
|
||||
The node/leaf immediately preceding the invocant in their parent's
|
||||
children list. If the invocant does not have a previous sibling, it is
|
||||
Returns the node immediately preceding this node in this parent's
|
||||
children list. If this node does not have a previous sibling, it is
|
||||
None.
|
||||
None.
|
||||
"""
|
||||
# Can't use index(); we need to test by identity
|
||||
@@ -62,7 +67,7 @@ class NodeOrLeaf(object):
|
||||
def get_previous_leaf(self):
|
||||
"""
|
||||
Returns the previous leaf in the parser tree.
|
||||
Raises an IndexError if it's the first element in the parser tree.
|
||||
Returns `None` if this is the first element in the parser tree.
|
||||
"""
|
||||
node = self
|
||||
while True:
|
||||
@@ -85,7 +90,7 @@ class NodeOrLeaf(object):
|
||||
def get_next_leaf(self):
|
||||
"""
|
||||
Returns the next leaf in the parser tree.
|
||||
Returns `None` if it's the last element in the parser tree.
|
||||
Returns None if this is the last element in the parser tree.
|
||||
"""
|
||||
node = self
|
||||
while True:
|
||||
@@ -135,19 +140,19 @@ class NodeOrLeaf(object):
|
||||
@abstractmethod
|
||||
def get_first_leaf(self):
|
||||
"""
|
||||
Returns the first leaf of a node or itself it's a leaf.
|
||||
Returns the first leaf of a node or itself if this is a leaf.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def get_last_leaf(self):
|
||||
"""
|
||||
Returns the last leaf of a node or itself it's a leaf.
|
||||
Returns the last leaf of a node or itself if this is a leaf.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def get_code(self, include_prefix=True):
|
||||
"""
|
||||
Returns the code that was the input of the parser.
|
||||
Returns the code that was input the input for the parser for this node.
|
||||
|
||||
:param include_prefix: Removes the prefix (whitespace and comments) of
|
||||
e.g. a statement.
|
||||
@@ -155,13 +160,27 @@ class NodeOrLeaf(object):
|
||||
|
||||
|
||||
class Leaf(NodeOrLeaf):
|
||||
'''
|
||||
Leafs are basically tokens with a better API. Leafs exactly know where they
|
||||
were defined and what text preceeds them.
|
||||
'''
|
||||
__slots__ = ('value', 'parent', 'line', 'column', 'prefix')
|
||||
|
||||
def __init__(self, value, start_pos, prefix=''):
|
||||
self.value = value
|
||||
'''
|
||||
:py:func:`str` The value of the current token.
|
||||
'''
|
||||
self.start_pos = start_pos
|
||||
self.prefix = prefix
|
||||
'''
|
||||
:py:func:`str` Typically a mixture of whitespace and comments. Stuff
|
||||
that is syntactically irrelevant for the syntax tree.
|
||||
'''
|
||||
self.parent = None
|
||||
'''
|
||||
The parent :class:`BaseNode` of this leaf.
|
||||
'''
|
||||
|
||||
@property
|
||||
def start_pos(self):
|
||||
@@ -219,9 +238,7 @@ class TypedLeaf(Leaf):

 class BaseNode(NodeOrLeaf):
     """
     The super class for all nodes.
-
-    If you create custom nodes, you will probably want to inherit from this
-    ``BaseNode``.
+    A node has children, a type and possibly a parent node.
     """
     __slots__ = ('children', 'parent')
     type = None
@@ -230,7 +247,14 @@ class BaseNode(NodeOrLeaf):

         for c in children:
             c.parent = self
         self.children = children
+        """
+        A list of :class:`NodeOrLeaf` child nodes.
+        """
         self.parent = None
+        '''
+        The parent :class:`BaseNode` of this leaf.
+        None if this is the root node.
+        '''

     @property
     def start_pos(self):
@@ -254,6 +278,14 @@ class BaseNode(NodeOrLeaf):
         return self._get_code_for_children(self.children, include_prefix)

     def get_leaf_for_position(self, position, include_prefixes=False):
+        """
+        Get the :py:class:`parso.tree.Leaf` at ``position``
+
+        :param tuple position: A position tuple, row, column. Rows start from 1
+        :param bool include_prefixes: If ``False``, ``None`` will be returned if ``position`` falls
+            on whitespace or comments before a leaf
+        :return: :py:class:`parso.tree.Leaf` at ``position``, or ``None``
+        """
         def binary_search(lower, upper):
             if lower == upper:
                 element = self.children[lower]
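For illustration only (not part of the diff): a usage sketch of the method documented
above, following the behaviour the new docstring describes (positions are
(line, column) tuples with 1-based lines):

    import parso

    module = parso.parse("# header\nx = 1\n")

    # A position inside the prefix (here the comment) is not part of any leaf
    # by default, so None is returned.
    assert module.get_leaf_for_position((1, 3)) is None

    # With include_prefixes=True the leaf owning that prefix is returned.
    leaf = module.get_leaf_for_position((1, 3), include_prefixes=True)
    assert leaf.value == 'x'
    assert leaf.prefix == '# header\n'

    # A position on actual code hits the leaf directly.
    assert module.get_leaf_for_position((2, 4)).value == '1'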
@@ -11,10 +11,10 @@ Version = namedtuple('Version', 'major, minor, micro')

 def split_lines(string, keepends=False):
     r"""
-    A str.splitlines for Python code. In contrast to Python's ``str.splitlines``,
+    Intended for Python code. In contrast to Python's :py:meth:`str.splitlines`,
     looks at form feeds and other special characters as normal text. Just
     splits ``\n`` and ``\r\n``.
-    Also different: Returns ``['']`` for an empty string input.
+    Also different: Returns ``[""]`` for an empty string input.

     In Python 2.7 form feeds are used as normal characters when using
     str.splitlines. However in Python 3 somewhere there was a decision to split
@@ -48,9 +48,14 @@ def split_lines(string, keepends=False):
         return re.split('\n|\r\n', string)


-def python_bytes_to_unicode(source, default_encoding='utf-8', errors='strict'):
+def python_bytes_to_unicode(source, encoding='utf-8', errors='strict'):
     """
-    `errors` can be 'strict', 'replace' or 'ignore'.
+    Checks for unicode BOMs and PEP 263 encoding declarations. Then returns a
+    unicode object like in :py:meth:`bytes.decode`.
+
+    :param encoding: See :py:meth:`bytes.decode` documentation.
+    :param errors: See :py:meth:`bytes.decode` documentation. ``errors`` can be
+        ``'strict'``, ``'replace'`` or ``'ignore'``.
     """
     def detect_encoding():
         """
@@ -70,7 +75,7 @@ def python_bytes_to_unicode(source, default_encoding='utf-8', errors='strict'):
             return possible_encoding.group(1)
         else:
             # the default if nothing else has been set -> PEP 263
-            return default_encoding
+            return encoding

     if isinstance(source, unicode):
         # only cast str/bytes
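For illustration only (not part of the diff): a short sketch exercising the two
helpers documented above, assuming they live in ``parso.utils`` with the
post-change signatures:

    from parso.utils import python_bytes_to_unicode, split_lines

    # Only '\n' and '\r\n' split lines; form feeds stay inside the line.
    assert split_lines('a\nb') == ['a', 'b']
    assert split_lines('a\fb\n') == ['a\fb', '']
    assert split_lines('') == ['']

    # A PEP 263 coding declaration is detected before decoding.
    source = b"# -*- coding: latin-1 -*-\nx = 1\n"
    text = python_bytes_to_unicode(source)
    assert text.startswith('# -*- coding')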
@@ -2,7 +2,7 @@
 addopts = --doctest-modules

 # Ignore broken files inblackbox test directories
-norecursedirs = .* docs scripts normalizer_issue_files
+norecursedirs = .* docs scripts normalizer_issue_files build

 # Activate `clean_jedi_cache` fixture for all tests. This should be
 # fine as long as we are using `clean_jedi_cache` as a session scoped
setup.py
@@ -14,7 +14,7 @@ readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()

 setup(name='parso',
       version=parso.__version__,
-      description='A Python parser written in Python.',
+      description='A Python Parser',
       author=__AUTHOR__,
       author_email=__AUTHOR_EMAIL__,
       include_package_data=True,
@@ -10,14 +10,14 @@ def test_explicit_absolute_imports():
     Detect modules with ``from __future__ import absolute_import``.
     """
     module = parse("from __future__ import absolute_import")
-    assert module.has_explicit_absolute_import()
+    assert module._has_explicit_absolute_import()


 def test_no_explicit_absolute_imports():
     """
     Detect modules without ``from __future__ import absolute_import``.
     """
-    assert not parse("1").has_explicit_absolute_import()
+    assert not parse("1")._has_explicit_absolute_import()


 def test_dont_break_imports_without_namespaces():
@@ -26,4 +26,4 @@ def test_dont_break_imports_without_namespaces():
     assume that all imports have non-``None`` namespaces.
     """
     src = "from __future__ import absolute_import\nimport xyzzy"
-    assert parse(src).has_explicit_absolute_import()
+    assert parse(src)._has_explicit_absolute_import()
@@ -115,3 +115,68 @@ def test_ellipsis_py2(each_py2_version):
     subscript = trailer.children[1]
     assert subscript.type == 'subscript'
     assert [leaf.value for leaf in subscript.children] == ['.', '.', '.']
+
+
+def get_yield_exprs(code, version):
+    return list(parse(code, version=version).children[0].iter_yield_exprs())
+
+
+def get_return_stmts(code):
+    return list(parse(code).children[0].iter_return_stmts())
+
+
+def get_raise_stmts(code, child):
+    return list(parse(code).children[child].iter_raise_stmts())
+
+
+def test_yields(each_version):
+    y, = get_yield_exprs('def x(): yield', each_version)
+    assert y.value == 'yield'
+    assert y.type == 'keyword'
+
+    y, = get_yield_exprs('def x(): (yield 1)', each_version)
+    assert y.type == 'yield_expr'
+
+    y, = get_yield_exprs('def x(): [1, (yield)]', each_version)
+    assert y.type == 'keyword'
+
+
+def test_yield_from():
+    y, = get_yield_exprs('def x(): (yield from 1)', '3.3')
+    assert y.type == 'yield_expr'
+
+
+def test_returns():
+    r, = get_return_stmts('def x(): return')
+    assert r.value == 'return'
+    assert r.type == 'keyword'
+
+    r, = get_return_stmts('def x(): return 1')
+    assert r.type == 'return_stmt'
+
+
+def test_raises():
+    code = """
+def single_function():
+    raise Exception
+def top_function():
+    def inner_function():
+        raise NotImplementedError()
+    inner_function()
+    raise Exception
+def top_function_three():
+    try:
+        raise NotImplementedError()
+    except NotImplementedError:
+        pass
+    raise Exception
+"""
+
+    r = get_raise_stmts(code, 0)  # Lists in a simple Function
+    assert len(list(r)) == 1
+
+    r = get_raise_stmts(code, 1)  # Doesn't Exceptions list in closures
+    assert len(list(r)) == 1
+
+    r = get_raise_stmts(code, 2)  # Lists inside try-catch
+    assert len(list(r)) == 2
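For illustration only (not part of the diff): a minimal sketch of the Function-node
iterators these new tests exercise (``iter_yield_exprs``, ``iter_return_stmts``,
``iter_raise_stmts``), assuming the API added alongside them:

    import parso

    code = "def f(x):\n    if x:\n        return x\n    raise ValueError\n"
    funcdef = parso.parse(code).children[0]

    returns = list(funcdef.iter_return_stmts())
    raises = list(funcdef.iter_raise_stmts())
    yields = list(funcdef.iter_yield_exprs())

    assert [r.type for r in returns] == ['return_stmt']
    assert [r.type for r in raises] == ['raise_stmt']
    assert yields == []  # not a generator, so no yield expressions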
@@ -254,3 +254,19 @@ def test_multiline_str_literals(each_version):

 def test_py2_backticks(works_in_py2):
     works_in_py2.parse("`1`")
+
+
+def test_py2_string_prefixes(works_in_py2):
+    works_in_py2.parse("ur'1'")
+    works_in_py2.parse("Ur'1'")
+    works_in_py2.parse("UR'1'")
+    _invalid_syntax("ru'1'", works_in_py2.version)
+
+
+def py_br(each_version):
+    _parse('br""', each_version)
+
+
+def test_py3_rb(works_ge_py3):
+    works_ge_py3.parse("rb'1'")
+    works_ge_py3.parse("RB'1'")
@@ -172,8 +172,8 @@ def test_ur_literals():
     check('Ur""', is_literal=not py_version >= 30)
     check('UR""', is_literal=not py_version >= 30)
     check('bR""')
-    # Starting with Python 3.3 this ordering is also possible, but we just
-    # enable it for all versions. It doesn't hurt.
-    check('Rb""')
+    # Starting with Python 3.3 this ordering is also possible.
+    if py_version >= 33:
+        check('Rb""')
     # Starting with Python 3.6 format strings where introduced.
     check('fr""', is_literal=py_version >= 36)
tox.ini
@@ -10,11 +10,11 @@ setenv =
     # tox corrupts __pycache__, solution from here:
     PYTHONDONTWRITEBYTECODE=1
 commands =
-    py.test {posargs:parso test}
+    pytest {posargs:parso test}

 [testenv:cov]
 deps =
     coverage
     {[testenv]deps}
 commands =
-    coverage run --source parso -m py.test
+    coverage run --source parso -m pytest
     coverage report