311 Commits

Author SHA1 Message Date
Dave Halter
59df3fab43 Some small changes to the changelog 2019-06-20 21:15:53 +02:00
Dave Halter
803cb5f25f Make parso work at least somewhat with an older Jedi version 2019-06-20 20:33:14 +02:00
Dave Halter
3fa8630ba9 Use an immutable map for used names, so that it can be used for hashing 2019-06-18 09:12:33 +02:00
Dave Halter
1ca5ae4008 Bump the version number to the next release: 0.5.0 2019-06-13 17:26:08 +02:00
Dave Halter
c3c16169b5 Ignore positional only arguments slash when listing params 2019-06-09 22:55:37 +02:00
Dave Halter
ecbe2b9926 Add positional only arguments to grammar 2019-06-09 21:15:03 +02:00
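For illustration, a minimal sketch of the two changes above (my own example; it assumes parso >= 0.5.0 with the 3.8 grammar):

    import parso

    module = parso.parse("def f(a, b, /, c):\n    pass\n", version="3.8")
    func = next(module.iter_funcdefs())
    # Expectation: the bare '/' separator is not reported as a parameter.
    print([param.name.value for param in func.get_params()])  # ['a', 'b', 'c']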
Dave Halter
1929c144dc Increase the _PICKLE_VERSION to avoid issues with the latest breaking change 2019-06-09 18:11:21 +02:00
Dave Halter
b5d50392a4 comp_for is now called sync_comp_for for all Python versions to be compatible with the Python 3.8 Grammar 2019-06-09 18:00:32 +02:00
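A hedged sketch of the rename (my own example; on parso < 0.5.0 the node type was still 'comp_for'):

    import parso

    def iter_types(node):
        # Walk the tree; leaves have no 'children' attribute.
        yield node.type
        for child in getattr(node, "children", []):
            for type_ in iter_types(child):
                yield type_

    module = parso.parse("[x for x in y]")
    print("sync_comp_for" in set(iter_types(module)))  # True on 0.5.0 and later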
Dave Halter
a7aa23a7f0 Parse named expressions 2019-06-02 23:34:37 +02:00
Dave Halter
5430415d44 Change a test, because it doesn't really matter
The test changed behavior under Python 3.8, producing a syntax error of:

SyntaxError: unexpected EOF while parsing

instead of

SyntaxError: invalid syntax
2019-06-02 22:54:45 +02:00
Dave Halter
6cdd47fe2b f-string syntax in Python 3.8 was enhanced
See e.g. https://twitter.com/raymondh/status/1135253771846471680
2019-06-02 22:48:47 +02:00
Dave Halter
917b4421f3 Fix fstring format spec parsing, fixes #74 2019-06-02 15:18:42 +02:00
Dave Halter
4f5fdd5a70 Add release notes for the next release 0.4.1 2019-06-02 11:28:00 +02:00
prim
93ddf5322a parse long number notation (#72)
* parse long number notation

* parse long number notation
2019-06-02 11:14:15 +02:00
Dave Halter
a9b61149eb Fix get_decorators for async functions 2019-05-27 01:08:42 +02:00
Dave Halter
de416b082e Make it clear that get_last_modified should not raise an exception, but return None, if it cannot look up a file 2019-05-22 00:16:26 +02:00
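A minimal sketch of that contract (my own example; ZipEntryFileIO and its constructor arguments are hypothetical):

    from parso.file_io import FileIO

    class ZipEntryFileIO(FileIO):
        # Hypothetical FileIO for a file stored inside a zip archive.
        def __init__(self, path, zip_file, entry_name):
            super(ZipEntryFileIO, self).__init__(path)
            self._zip_file = zip_file      # an open zipfile.ZipFile
            self._entry_name = entry_name

        def read(self):
            return self._zip_file.read(self._entry_name)

        def get_last_modified(self):
            # Per the commit above: never raise, return None if unknown.
            return None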
Carl Meyer
4b440159b1 Fix __init__.pyi re-exports. 2019-05-10 09:12:32 +02:00
Carl Meyer
6f2d2362c9 Add type stubs. 2019-05-10 09:12:32 +02:00
Dave Halter
8a06f0da05 0.4.0 release notes 2019-04-05 18:57:21 +02:00
Dave Halter
bd95989c2e Change the default tox environments to test
These version will be tested before deploying
2019-04-05 18:55:23 +02:00
Miro Hrončok
57e91262cd Add Python 3.8 to tox.ini
Otherwise we get:

    Matching undeclared envs is deprecated.
    Be sure all the envs that Tox should run are declared in the tox config.
2019-04-05 18:43:43 +02:00
Miro Hrončok
476383cca9 Test on Python 3.8 2019-04-05 18:43:43 +02:00
Dave Halter
b2ab64d8f9 Fix Python 3.8 error issues 2019-04-05 18:30:48 +02:00
Dave Halter
18cbeb1a3d Fix an issue, because sync_comp_for exists now 2019-04-05 16:27:17 +02:00
Dave Halter
a5686d6cda PEP 8 2019-04-05 16:25:45 +02:00
Dave Halter
dfe7fba08e continue in finally is no longer an error 2019-04-05 16:17:30 +02:00
Dave Halter
6db7f40942 Python 2 compatibility 2019-04-03 01:24:06 +02:00
Dave Halter
d5eb96309c Increase the pickle version. With all the changes lately, it's better this way 2019-04-03 01:07:25 +02:00
Dave Halter
4c65368056 Some minor changes to file_io 2019-03-27 01:02:27 +01:00
Dave Halter
3e2956264c Add FileIO to make it possible to cache e.g. files from zip files 2019-03-25 00:48:59 +01:00
Dave Halter
e77a67cd36 PEP 8 2019-03-22 20:17:59 +01:00
Daniel Hahler
c4d6de2aab tests: add coverage tox factor, use it on Travis 2019-03-22 11:01:22 +01:00
Daniel Hahler
7770e73609 ci: Travis: use dist=xenial 2019-03-22 11:01:22 +01:00
Dave Halter
acccb4f28d 0.3.4 release 2019-02-13 00:19:07 +01:00
Dave Halter
3f6fc8a5ad Fix an f-string tokenizer issue 2019-02-13 00:17:37 +01:00
Dave Halter
f1ee7614c9 Release of 0.3.3 2019-02-06 09:55:18 +01:00
Dave Halter
58850f8bfa Rename a test 2019-02-06 09:51:46 +01:00
Dave Halter
d38a60278e Remove some unused code 2019-02-06 09:50:27 +01:00
Dave Halter
6c65aea47d Fix working with async functions in the diff parser, fixes #56 2019-02-06 09:31:46 +01:00
Dave Halter
0d37ff865c Fix bytes/fstring mixing when using iter_errors, fixes #57. 2019-02-06 01:28:47 +01:00
Dave Halter
076e296497 Improve a docstring, fixes #55. 2019-01-26 21:34:56 +01:00
Dave Halter
a2b153e3c1 Upgrade the Changelog 2019-01-24 00:42:53 +01:00
Dave Halter
bb2855897b Escape a backslash properly 2019-01-24 00:31:26 +01:00
Dave Halter
9c9e6ffede Bump the parso version to 0.3.2 2019-01-24 00:22:30 +01:00
Dave Halter
b5d8175eaa f-string parts are also PythonLeaf instances 2019-01-24 00:15:53 +01:00
Dave Halter
32a83b932a Fix get_start_pos_of_prefix 2019-01-24 00:00:06 +01:00
Dave Halter
01ae01a382 Remove dead code 2019-01-23 23:28:18 +01:00
Dave Halter
5fbc207892 Refactor f-string support 2019-01-23 10:58:36 +01:00
Dave Halter
60e4591837 Fix: End detection for strings was mostly wrong, fixes #51 2019-01-23 10:13:25 +01:00
Dave Halter
ef56debb78 Fix f-string escapes, fixes #48
The tokenizer was not detecting backslash escapes for f-string endings properly
2019-01-22 22:20:32 +01:00
Dave Halter
dc2582f488 Tokenizer: Simplify end of string regexes 2019-01-22 09:33:57 +01:00
Dave Halter
fe69989fbc Add a comment from the Python3.7 code base 2019-01-21 00:33:02 +01:00
Dave Halter
ce8b531175 Fix diff parser: The previous fix was a bit off 2019-01-20 19:03:45 +01:00
Dave Halter
069c08883a Change fuzzer: Add ways to not always use correct parse input 2019-01-20 18:18:13 +01:00
Dave Halter
0da0a8655a Fix diff parser: issue with opening brackets 2019-01-20 00:41:11 +01:00
Dave Halter
3d890c3a00 Async doesn't work in 3.4 2019-01-19 12:59:35 +01:00
Dave Halter
956ea55048 Skip some tests for Python2.6 and Python3.3 2019-01-19 12:08:39 +01:00
Dave Halter
0bd17bee2c Fix diff parser: DEDENT as error leaves should also be ignored and reparsed 2019-01-18 18:41:08 +01:00
Dave Halter
f3015efb2d Fix diff parser: error dedents in between nodes should be ignored for now when copying 2019-01-18 02:43:12 +01:00
Dave Halter
197391dc53 Fix diff parser: Don't copy error nodes/leaves in the beginning, leads to strange issues in some cases 2019-01-17 23:48:00 +01:00
Dave Halter
32321a74b1 Diff fuzzer: Create a check to see if the errors make sense. 2019-01-17 22:07:59 +01:00
Dave Halter
52d01685ba Fix diff parser: Don't copy DEDENT tokens at the beginning 2019-01-17 21:31:13 +01:00
Dave Halter
e591b929eb Fix diff parser: Skip last leaves for last line offset leaves 2019-01-17 00:15:38 +01:00
Dave Halter
dac4c445a7 Fix indentation error tokens 2019-01-16 23:21:31 +01:00
Dave Halter
20fd32b45d Fix diff parser: Avoid side effects for prefix 2019-01-14 21:37:19 +01:00
Dave Halter
9cc8178998 Fix tokenizer: backslashes sometimes led to newline token generation 2019-01-14 09:59:16 +01:00
Dave Halter
1e25445176 Make lines easier copyable in the fuzzer 2019-01-14 01:50:39 +01:00
Dave Halter
d7171ae927 Fix tokenizer: Carriage returns after backslashes were not properly handled 2019-01-14 01:49:09 +01:00
Dave Halter
d3d28480ed Fix in diff parser: prefix calculation was wrong when copying nodes 2019-01-14 01:00:17 +01:00
Dave Halter
564be7882e Replace --print-diff with --print-code 2019-01-14 00:20:49 +01:00
Dave Halter
76c5754b76 Fix diff parser generation for empty files 2019-01-13 23:38:35 +01:00
Dave Halter
55247a5a2c Docopt should not be needed for tests 2019-01-13 23:24:17 +01:00
Dave Halter
7ae1efe5c7 Fix tokenizer: Form feeds and multiline docstrings didn't work together 2019-01-13 23:16:09 +01:00
Dave Halter
01dba7f8ce Fix diff parser: Need to calculate the prefix for the diff tokenizer better 2019-01-13 22:38:53 +01:00
Dave Halter
ea8a758051 Remove copied nodes stuff, to simplify some things 2019-01-13 19:57:23 +01:00
Dave Halter
a7e24a37e7 Fix newline endings and a few parser/copy counts 2019-01-13 19:55:18 +01:00
Dave Halter
f80d9de7a0 Feature: The diff parser fuzzer is now able to use random Python fragments
This hopefully leads to the fuzzer finding more and faster issues in the diff
parser.
2019-01-13 16:00:36 +01:00
Dave Halter
eaee2b9ca0 Fix: The Python 3.8 grammar did not include f-string support 2019-01-13 15:51:46 +01:00
Dave Halter
dd1761da96 Fix tokenizer: Closing parentheses in the wrong place should not lead to strange behavior 2019-01-13 14:52:33 +01:00
Dave Halter
e10802ab09 Fix end positions with error dedents 2019-01-13 14:14:16 +01:00
Dave Halter
3d402d0a77 Fix diff parser tests for Python 2 2019-01-10 09:26:42 +01:00
Dave Halter
f6a8b997f2 Randomize the fuzzer a bit more with inserting characters 2019-01-10 01:22:24 +01:00
Dave Halter
94c2681c8e Simplify the regexes 2019-01-10 01:21:56 +01:00
Dave Halter
610a820799 Fix a regex clause that was totally wrong 2019-01-10 01:00:08 +01:00
Dave Halter
57320af6eb Fix another tokenizer issue 2019-01-09 00:55:54 +01:00
Dave Halter
574e1c63e8 Apply \r changes in syntax trees 2019-01-09 00:34:19 +01:00
Dave Halter
fbaad7883f Actually make \r usable 2019-01-08 20:03:08 +01:00
Dave Halter
b1f613fe16 Fix split lines for Python code
Some characters like Vertical Tab or File Separator were used as line separators.
This is not legal. Line Separators in Python are only Carriage Return \r and Line Feed \n.
2019-01-08 08:42:30 +01:00
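A hedged sketch of the behavior described above (my own example, using parso.utils.split_lines):

    from parso.utils import split_lines

    # Vertical tab (\x0b) is not a line separator here, unlike str.splitlines().
    print(split_lines("a\x0bb\nc"))                   # expected: ['a\x0bb', 'c']
    print(split_lines("a\r\nb\rc\n", keepends=True))  # expected: ['a\r\n', 'b\r', 'c\n', '']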
Dave Halter
f4696a6245 Add \r as a valid linebreak for splitlines 2019-01-07 18:46:16 +01:00
Dave Halter
48c1a0e590 Move split_lines tests around 2019-01-07 18:40:34 +01:00
Dave Halter
6f63147f69 Start generating really random strings with the fuzzer 2019-01-06 20:51:49 +01:00
Dave Halter
94bd48bae1 Fix tokenizer: Dedents before error tokens are properly done, now. 2019-01-06 19:26:49 +01:00
Dave Halter
edbceba4f8 Fix diff parser: Also check async with 2019-01-06 16:25:28 +01:00
Dave Halter
b33c2b3ae1 Make the diff parser use a lot of different files by default 2019-01-06 15:43:37 +01:00
Dave Halter
65a0748f4f Fix diff parser: Forgot that with statements are also flows 2019-01-06 15:41:16 +01:00
Dave Halter
c442cf98be Fix valid graph asserting for some dedents that are errors 2019-01-06 12:39:04 +01:00
Dave Halter
65b15b05e3 Fix diff parser: If funcs are not copied, errors shouldn't either 2019-01-06 11:39:51 +01:00
Dave Halter
26aee1c6a9 Better documentation for the fuzz diff parser script 2019-01-06 01:10:15 +01:00
Dave Halter
c88a862bae Rename a test 2019-01-06 01:08:15 +01:00
Dave Halter
d6b0585933 More verbose output for the diff fuzzer 2019-01-06 01:05:07 +01:00
Dave Halter
6eba40b4c5 Fix diff parser: error dedent issues 2019-01-06 01:00:34 +01:00
Dave Halter
428bde0573 Fix diff parser: Avoid indentation issues 2019-01-05 22:40:31 +01:00
Dave Halter
d1d866f6c6 Use the right diff order in debug output 2019-01-05 18:36:48 +01:00
Dave Halter
a8ec75fedd Fix diff parser: The prefix was wrong in some copy cases 2019-01-05 18:33:38 +01:00
Dave Halter
deaf1f310b Make fuzz parser compatible with Python 2 2019-01-05 14:57:58 +01:00
Dave Halter
2a881bf875 Make it possible to print all diffs in fuzzer 2019-01-05 14:50:59 +01:00
Dave Halter
4d713f56e9 Introduce a redo flag 'only_last' to narrow down issues 2019-01-05 14:20:30 +01:00
Dave Halter
d202fdea49 Add docopt to testing dependencies 2019-01-05 14:09:14 +01:00
Dave Halter
5e6d5dec59 Rewrite the fuzz diff parser to cache errors (so we can re-run those) 2019-01-05 14:05:19 +01:00
Dave Halter
c1846dd082 Fix diff parser: Decorators were sometimes parsed without their functions 2019-01-05 09:29:00 +01:00
Dave Halter
5da51720cd Fix tokenizer: Dedents should only happen after newlines 2019-01-03 11:44:17 +01:00
Dave Halter
fde64d0eae Usability for diff parser fuzzing 2019-01-02 17:31:07 +01:00
Dave Halter
430f13af5e Fix for diff parser: Rewrite prefix logic and don't mutate prematurely 2019-01-02 17:28:01 +01:00
Dave Halter
96ae6a078b Fix diff parser: positioning of functions if decorators were removed 2019-01-02 13:16:22 +01:00
Dave Halter
a9f58b7c45 Ignore ERROR_DEDENT in graph validation 2019-01-02 12:15:05 +01:00
Dave Halter
e0d0e57bd0 Add a small diff parser fuzzer
It should help us find the rest of the issues that the diff parser has
2019-01-02 11:26:31 +01:00
Dave Halter
d2542983e9 Fix diff parser: get_last_line was sometimes wrong
Now the calculation is way simpler. Still annoying that it even happened.
2019-01-02 01:39:53 +01:00
Dave Halter
64cf24d9da Fix error reporting order for diff issues 2019-01-02 00:33:43 +01:00
Dave Halter
02f48a68f2 Clean up the test diff parser file 2019-01-01 23:37:31 +01:00
Dave Halter
c7c464e5e9 Avoid nasty side effects in creation of Node
This issue led to bugs in Jedi, because Jedi used the nodes in a wrong way.
2019-01-01 23:35:20 +01:00
Dave Halter
29325d3052 Make parso errors even more informative 2018-12-31 11:47:02 +01:00
Dave Halter
750b8af37b Fix diff parser get_last_line calculation 2018-12-31 01:25:11 +01:00
Dave Halter
0126a38bd1 Fix graph asserting for error indents 2018-12-30 18:20:55 +01:00
Dave Halter
c2985c111e Better checks for checking valid graphs 2018-12-30 16:34:11 +01:00
Dave Halter
45f9d4b204 Create better ways for debugging the diff parser 2018-12-30 16:03:54 +01:00
Dave Halter
f99fe6ad21 Fix diff-parser: Copying parts of if else should not lead to the whole thing being copied 2018-12-30 15:25:17 +01:00
Dave Halter
a64c32bb2a Reenable diff parser parser counting in all tests 2018-12-30 02:46:44 +01:00
Dave Halter
e5fb1927bb Fix: Turn the NodesStack into a NodesTree
This fixes an issue with positions that were doubled if the stack was closed too early.
2018-12-30 01:27:37 +01:00
Dave Halter
0ef4809377 Fix for diff parser: Make sure that start_pos is always growing
The problem was that functions/classes were sometimes not well positioned. Now
all diff tests are ensuring that leaves always grow.
2018-12-28 21:49:49 +01:00
Dave Halter
29456a6c0a Add a check to see if leaves have the right start positions 2018-12-28 02:24:22 +01:00
Dave Halter
ada84ed063 Add parso version to an exception 2018-12-27 13:33:10 +01:00
Thomas A Caswell
1c7b078db0 MNT: add grammar for python 3.8
Copied from cpython commit 1dd035954bb03c41b954ebbd63969b4bcb0e106e
2018-12-22 12:39:08 +01:00
Hugo
930ec08ab0 Use SVG badges
And update some links to HTTPS.
2018-09-28 18:51:36 +02:00
Daniel Hahler
a90622040d tox.ini: simplify deps 2018-09-22 10:02:38 +02:00
Daniel Hahler
98c02f7d79 tox: add pypy to envlist for tox-travis
Fixes deprecation warning
(https://travis-ci.org/davidhalter/parso/jobs/431468986).
2018-09-22 10:02:38 +02:00
Daniel Hahler
d6d6c5038f setup.py: add "testing" extras_require
Ref: https://github.com/davidhalter/parso/issues/15#issuecomment-339964845
2018-09-22 10:02:38 +02:00
Michael Käufl
3be8ac7786 Add Python 3.7 stable to test matrix and update classifiers 2018-09-13 00:28:27 +02:00
Anders Hovmöller
96f1582b6e Update usage.rst 2018-08-02 22:17:14 +02:00
Dave Halter
7064ecf3fb Don't use invalid escape sequences in regex, see https://github.com/davidhalter/jedi-vim/issues/843 2018-07-12 08:53:48 +02:00
Dave Halter
e6bc924fba Use a setuptools that still supports py33 2018-07-09 20:53:06 +02:00
Dave Halter
59605438e9 3.1 release notes 2018-07-09 20:27:10 +02:00
Dave Halter
e7f71a3eba Use one simple functions to check for funcdefs in diff parser 2018-07-08 20:30:31 +02:00
Dave Halter
3f7aad84f9 Make sure to treat async funcdefs the same way as normal funcdefs 2018-07-08 20:18:15 +02:00
Dave Halter
52e3db4834 Fix an issue in the diff parser
Forgot to check for functions/classes that were part of a decorator/async func.

Fixes https://github.com/davidhalter/jedi/issues/1132
2018-07-06 01:25:06 +02:00
Dave Halter
0daf4d9068 Asterisks in function definitions may be at the end of a func without a comma, fixes #44 2018-07-04 09:51:34 +02:00
Dave Halter
29b6232541 Remove some TODOs that were fixed 2018-07-03 19:28:05 +02:00
Dave Halter
e05d7fd59f Error recovery should not match the whole line in case of an invalid token, fixes #40 2018-07-03 01:31:32 +02:00
Daniel Hahler
7f964c26f2 docs: enable searchbox 2018-07-02 00:51:36 +02:00
Dave Halter
ff67de248f Merge branch 'pgen' 2018-06-29 18:14:03 +02:00
Dave Halter
1af5d9d46b Add a changelog for 0.3.0 2018-06-29 18:13:53 +02:00
Dave Halter
fce3ead829 Bump version to 0.3.0 2018-06-29 18:04:55 +02:00
Dave Halter
55d5d39c53 Add a private API for jedi to work with the parser stack 2018-06-29 10:04:54 +02:00
Dave Halter
c8bf23b787 Remove get_tos_nodes and get_tos_first_tokens, because they are not used (not even in Jedi) 2018-06-29 00:00:09 +02:00
Dave Halter
98c9a1ec7f Better documentation for _add_token 2018-06-28 10:11:44 +02:00
Dave Halter
ecdb90d9bc Way better documentation for the DFA generator 2018-06-28 10:08:09 +02:00
Dave Halter
375ebf2181 Better documentation of the parser generator 2018-06-28 09:49:35 +02:00
Dave Halter
badb2fe010 Move transition_to_generator to transitions 2018-06-28 09:42:37 +02:00
Dave Halter
8e118c913c Remove note about print as absolute import. This is probably not going to happen anymore, Python 2 is pretty much end-of-life 2018-06-28 01:01:46 +02:00
Dave Halter
52fc8fc569 Finish the stack in a way we want to. 2018-06-28 00:59:55 +02:00
Dave Halter
97cdb448d4 Pass tokens around and not all the different token values 2018-06-28 00:33:22 +02:00
Dave Halter
603b67ee6d Just always pass token objects to the tokenizer 2018-06-28 00:18:44 +02:00
Dave Halter
7686273287 Use the stack from the parser itself 2018-06-28 00:12:18 +02:00
Dave Halter
692436ba12 Don't use grammar as an argument anymore, because it's already there 2018-06-28 00:01:47 +02:00
Dave Halter
f7d3d4e82f Merge the PgenParser and our own parser 2018-06-27 23:45:04 +02:00
Dave Halter
edce279dee Remove a function that was no longer used 2018-06-27 23:19:57 +02:00
Dave Halter
a9e40eb578 Simplify error recovery for suites 2018-06-27 22:21:17 +02:00
Dave Halter
b14f518306 Rename the last usage of ilabel to transition 2018-06-27 00:18:27 +02:00
Dave Halter
8407894b25 Fix python 2 tests 2018-06-27 00:15:00 +02:00
Dave Halter
e4efebc9f3 s/ilabel/transition/g 2018-06-26 23:05:04 +02:00
Dave Halter
f66e47c540 Check better for more transitions 2018-06-26 22:53:02 +02:00
Dave Halter
706a92ee0d Merge branch 'master' of github.com:davidhalter/parso 2018-06-26 10:23:17 +02:00
Dave Halter
91d864b23d Make it clearer which things are public in pgen 2018-06-26 10:22:38 +02:00
Dave Halter
e20f2069ba Move the grammar to a fitting file. 2018-06-26 10:20:05 +02:00
Dave Halter
4cf198285a Move things out of the grammar class 2018-06-26 10:15:31 +02:00
Dave Halter
30cf491b4f Move the Grammar to the pgen module 2018-06-26 10:08:44 +02:00
Dave Halter
c1675da0cb Make nonterminal_to_dfas public 2018-06-26 09:56:49 +02:00
Dave Halter
7b7b66eb3c Get rid of the first_terminal variable in the grammar generator 2018-06-26 09:48:13 +02:00
Dave Halter
5d46c3e18b Trying to reduce the amount of variables used in first sets 2018-06-26 01:04:22 +02:00
Dave Halter
e9fde82512 Remove the overlapcheck, it's probably not needed anymore 2018-06-26 01:00:06 +02:00
Dave Halter
a46ecbb499 Fix an ambiguity issue
Unfortunately had to refactor most of the transition generation
2018-06-26 00:58:19 +02:00
Dave Halter
da5aa8a2ab Better detection of ambiguities 2018-06-25 01:56:02 +02:00
Dave Halter
43d4a8a834 Don't use a function that doesn't work 2018-06-24 23:45:18 +02:00
Dave Halter
309033ae2d Work with token types whenever possible 2018-06-24 17:59:32 +02:00
Dave Halter
2a9d8632fe Remove label caching 2018-06-24 17:56:13 +02:00
Dave Halter
530a324643 Remove labels 2018-06-24 17:55:22 +02:00
Dave Halter
71003bc20e Just use caching instead of strange transitions 2018-06-24 17:29:03 +02:00
Dave Halter
c5d141bf60 Make some more things faster 2018-06-24 16:43:28 +02:00
Dave Halter
e958b241c7 Use some tokenize names directly 2018-06-24 16:39:48 +02:00
Dave Halter
34ab35558f Remove a lot of the old token code 2018-06-24 16:31:58 +02:00
Dave Halter
03de9cebb8 Introduce TokenTypes 2018-06-24 16:24:09 +02:00
Dave Halter
6098d89150 Add PythonTokens to get rid of a lot of the token module eventually 2018-06-24 13:03:07 +02:00
Dave Halter
ff4358cd97 Merge remote-tracking branch 'origin/master' into pgen 2018-06-24 11:49:34 +02:00
Dave Halter
b5378e4602 Use token.OP and use reserved words
This change breaks the tokenizer backwards compatibility a bit. Details of
operators is now part of the parser and not the tokenizer anymore. The parser
does this anyway, so we don't need the complexity in the tokenizer.
2018-06-24 11:28:23 +02:00
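A hedged sketch of what this looks like from the tokenizer of this era (my own example; this is an internal API, but the 4-tuple shape matches what the parser unpacks in the parser.py diff further down):

    from parso.python.tokenize import tokenize
    from parso.utils import parse_version_string

    # '+' and ':' now come out as plain OP tokens; 'if' is a NAME whose keyword
    # nature is decided by the parser via reserved strings.
    for type_, string, start_pos, prefix in tokenize("if x + 1:\n    pass\n",
                                                     parse_version_string("3.7")):
        print(type_, repr(string), start_pos)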
Dave Halter
33e321a539 Don't set the root node before it's actually defined 2018-06-22 13:04:00 +02:00
Dave Halter
a890ddd6cc Remove make_first 2018-06-22 12:54:27 +02:00
Dave Halter
1362d4f05d Remove more unused grammar stuff 2018-06-22 12:53:38 +02:00
Dave Halter
532aef2342 Remove nonterminal2number and number2nonterminal, they are no longer used 2018-06-22 12:52:44 +02:00
Dave Halter
878b4b2d3b Use nonterminals instead of numbers if possible 2018-06-22 12:47:02 +02:00
Dave Halter
87299335c4 Remove more unused code 2018-06-22 12:36:48 +02:00
Dave Halter
4f0e9c0fd7 Remove old dfas and states from the parser generator 2018-06-22 11:47:18 +02:00
Dave Halter
67ca091631 delete a lot of the old parser code 2018-06-22 11:44:42 +02:00
Dave Halter
4e5ba02dbb Fix the final issues of the new parser 2018-06-22 11:38:34 +02:00
Dave Halter
a85f544901 Fix all tests except diff tests. Mostly error recovery fixes 2018-06-22 11:12:10 +02:00
Dave Halter
9e8066c6fd Fix a lot of the old error recovery 2018-06-22 09:56:58 +02:00
Dave Halter
68eab72229 Some slight changes to error recovery 2018-06-22 01:59:39 +02:00
Dave Halter
d9264609f2 Get quite a bit of the error recovery working 2018-06-22 01:56:29 +02:00
Dave Halter
79c7e0b59d Use the new parser. Error recovery is not yet working 2018-06-22 00:38:18 +02:00
Dave Halter
f03a87b876 Actually parse some first things with the new approach 2018-06-21 23:56:34 +02:00
Dave Halter
2a082d69df Add better reprs 2018-06-21 22:13:27 +02:00
Dave Halter
e6fc739670 Get most things ready for plans 2018-06-21 21:32:05 +02:00
Dave Halter
12e11b3d16 Remove a while loop that is not necessary 2018-06-21 21:14:14 +02:00
Dave Halter
cc8038966b isfinal -> is_final 2018-06-21 21:11:46 +02:00
Dave Halter
31aecf2d35 Calculate the first plans in a very messy way 2018-06-21 21:10:28 +02:00
Dave Halter
d8554d86d1 A lot of new code to hopefully transition to a better parsing mechanism in the future 2018-06-21 18:17:32 +02:00
Dave Halter
d691bf0fd1 Some minor changes 2018-06-20 09:42:21 +02:00
Dave Halter
5712ffb5ca Introduce a label cache that is currently not used 2018-06-18 20:14:29 +02:00
Dave Halter
55d6a69aad Some more renames 2018-06-18 01:14:09 +02:00
Dave Halter
453471eeb6 Move some ParserGenerator stuff into the Grammar class 2018-06-18 00:15:31 +02:00
Dave Halter
a06c3a3129 name -> nonterminal 2018-06-17 23:10:27 +02:00
Dave Halter
73ce57428b Try to completely remove the word symbol and use nonterminal
The ones that we could not remove are in grammar.py, because that's the public documented API.
2018-06-17 18:30:20 +02:00
Dave Halter
640f544af9 One instance of symbol -> terminal 2018-06-17 18:12:13 +02:00
Dave Halter
b6cbf306d7 Use the name first_terminals instead of first 2018-06-17 18:09:12 +02:00
Dave Halter
95e4ecf592 next -> next_ 2018-06-17 17:49:43 +02:00
Dave Halter
fbed1ecfe0 More dict to set 2018-06-17 17:45:40 +02:00
Dave Halter
1f27fa9320 Use more nonterminal/terminal terminology 2018-06-17 17:37:15 +02:00
Dave Halter
23362ec2d3 Start using the term nonterminal 2018-06-17 16:40:11 +02:00
Dave Halter
6b391af071 Use sets instead of dicts if possible 2018-06-17 16:36:27 +02:00
Dave Halter
c43cb21a0e Add docstring 2018-06-15 17:57:59 +02:00
Dave Halter
24346a0d32 Add some comments 2018-06-15 10:21:00 +02:00
Dave Halter
9d452ec66a Use a set instead of dict if it's not necessary 2018-06-15 00:35:39 +02:00
Dave Halter
567e0d7aed Simplify some code 2018-06-15 00:06:56 +02:00
Dave Halter
1f02327cff Some refactorings for simplicity 2018-06-14 18:18:57 +02:00
Dave Halter
8c348aee6f Use NFAArc as a class 2018-06-14 10:29:19 +02:00
Dave Halter
a277ccf288 Some more renames 2018-06-14 01:10:23 +02:00
Dave Halter
a5ce2caab6 Another rename 2018-06-13 23:19:52 +02:00
Dave Halter
da4df9c0f1 Rename 2018-06-13 21:12:48 +02:00
Dave Halter
bd444df417 Move the grammar parsing to a separate module 2018-06-13 21:01:06 +02:00
Dave Halter
275dbca1b9 More renames 2018-06-13 20:54:18 +02:00
Dave Halter
9a0b6f4928 Rename 2018-06-13 20:47:16 +02:00
Dave Halter
fc5560874b Separate generating dfas from parsing 2018-06-13 20:46:36 +02:00
Dave Halter
6e5a520e7b Cleanup some names 2018-06-13 10:18:05 +02:00
Dave Halter
dcabf3d415 Remove some code that is not used anymore 2018-06-13 10:15:21 +02:00
Dave Halter
3bc82d112d Some cleanups and documentation 2018-06-13 10:10:17 +02:00
Dave Halter
ec186a78f8 Move out some more functions out of classes 2018-06-13 01:50:25 +02:00
Dave Halter
3818fb2b22 Some more refactorings for clarification 2018-06-13 01:44:44 +02:00
Dave Halter
95ddeb4012 Factor out start_symbol into a better position 2018-06-13 00:54:18 +02:00
Dave Halter
f638abb08e Refactor out dfas 2018-06-13 00:31:45 +02:00
Dave Halter
f8558df27a Document pgen grammars a bit better 2018-06-13 00:27:51 +02:00
Dave Halter
bae56e72e1 addarc -> add_arc 2018-06-12 22:13:08 +02:00
Dave Halter
41c38311f7 Refactor some things in pgen 2018-06-12 22:08:19 +02:00
Dave Halter
eeb456a6d4 Move some initializations 2018-06-12 22:00:49 +02:00
Dave Halter
1c0956d9e0 Change license again. The year shouldn't matter 2018-06-12 21:58:27 +02:00
Dave Halter
c17156bd36 Make first private 2018-06-12 21:56:26 +02:00
Dave Halter
8865aa452c Change copyright years 2018-06-12 21:55:37 +02:00
Dave Halter
e0c79a9fcc Remove some code from the grammar 2018-06-12 21:49:18 +02:00
Dave Halter
3c08b1b058 Separate the grammar generation from the grammar parsing 2018-06-12 21:30:09 +02:00
Dave Halter
0f32673092 In pgen now everything is named grammar and not c 2018-06-12 18:17:02 +02:00
Dave Halter
1e18163402 Better recovery for one-line classes and functions 2018-06-12 13:23:49 +02:00
Dave Halter
cef9f1bdbd Fix one-line error recovery for all things that are using a suite
Fixes https://github.com/davidhalter/jedi/issues/1138.
2018-06-12 12:56:27 +02:00
Dave Halter
23db71a5f7 Add a debug function for first tokens 2018-06-12 01:50:37 +02:00
Aaron Meurer
34154d05a0 Don't mutate the standard library token.tok_name dictionary (#42)
* Don't mutate the standard library token.tok_name dictionary

Fixes #41.

* More robust test that tok_name isn't mutated

This test now works in Python 2.7, and actually tests something in Python 3.7,
and it's better anyway because it tests the whole dictionary instead of just
one token.

* Fix test_tok_name_copied in Python 3.7 and PyPy

Apparently Python 3.7 adds N_TOKENS to the tok_name dictionary, and PyPy
doesn't have NT_OFFSET in it.
2018-06-08 18:46:16 +02:00
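A quick hedged check along the lines of that test (my own example):

    import token

    before = dict(token.tok_name)
    import parso  # importing parso must leave the stdlib table untouched
    assert token.tok_name == before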
Dave Halter
6f385bdba1 Not testing Python 3.3 anymore on travis. It seems to be broken 2018-05-21 12:55:49 +02:00
Dave Halter
4fc31c58b3 Add a changelog for 0.2.1 2018-05-21 12:48:31 +02:00
Dave Halter
689decc66c Push the version 2018-05-21 12:44:10 +02:00
Dave Halter
c2eacdb81c The diff parser was slightly off with prefixes, fixes #1121 2018-05-20 19:13:50 +02:00
Dave Halter
ac0bf4fcdd A better repr for the endmarker 2018-05-17 09:56:16 +02:00
Dave Halter
948f9ccecc Merge branch 'master' of github.com:davidhalter/parso 2018-04-23 23:42:11 +02:00
Dave Halter
f20106d88e Fix a prefix issue with error leafs. 2018-04-22 19:28:30 +02:00
Aaron Meurer
f4912f6c17 Use the correct field name in the PythonToken repr 2018-04-19 22:20:23 +02:00
Jonas Tranberg
bf5a4b7c2c Added path param to load_grammar for loading custom grammar files 2018-04-19 10:16:33 +02:00
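A hedged sketch of the extended call (my own example; the custom grammar path is hypothetical, and the path keyword is also visible in the grammar.py diff further down):

    import parso

    # Bundled grammar, selected by version string:
    grammar = parso.load_grammar(version="3.6")
    module = grammar.parse("x = 1\n")
    print(module.children[-1].type)  # 'endmarker'

    # With this change, a custom grammar file can be loaded explicitly:
    # custom = parso.load_grammar(path="/path/to/custom_grammar.txt")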
Dave Halter
579146b501 Don't test python 2.6 in tox by default, because the newer pip versions don't support it anymore 2018-04-15 14:46:35 +02:00
Dave Halter
deb4dbce1c Set a release date 2018-04-15 13:54:10 +02:00
Dave Halter
8eda8decea Fix whitespace issues with prefixes 2018-04-07 15:34:58 +02:00
Dave Halter
f6935935c0 Use proper leafs for fstring start/end 2018-04-07 12:32:33 +02:00
Dave Halter
d3fa7e1cad Fix a Python 2 related issue. 2018-04-07 11:14:41 +02:00
Dave Halter
83d9abd036 Forgot to delete another print. WTF I'm tired 2018-04-07 02:22:45 +02:00
Dave Halter
222e9117b4 Unfortunately forgot to delete a print 2018-04-07 02:21:59 +02:00
Dave Halter
eda2207e6c Start to write a changelog for 0.2.0 2018-04-07 02:20:00 +02:00
Dave Halter
a91e5f2775 Merge branch 'fstrings'
f-strings are now parsed as part of the Python grammar and not in separate
steps.

Note that this is not the way that CPython does it. CPython still uses multiple
parse steps in ast.c.
2018-04-07 02:17:02 +02:00
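A hedged sketch of the effect (my own example; exact error reporting depends on the parso version in use):

    import parso

    grammar = parso.load_grammar(version="3.6")
    module = grammar.parse('x = f"{value!r}"\n')
    # After this merge the f-string is part of the regular tree, so parsing it
    # should not produce error nodes or issues.
    print(list(grammar.iter_errors(module)))  # expected: []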
Dave Halter
cba4f2ccc1 Fix the syntax errors from f-strings 2018-04-07 02:14:35 +02:00
Dave Halter
8f1a436ba1 Remove the old f-string grammar and fix the tests with the new syntax 2018-04-07 02:11:26 +02:00
Dave Halter
9941348ec6 Add python 3.7 to tox 2018-04-06 20:30:07 +02:00
Dave Halter
afb71dc762 Remove the f-string rules and replace them with new ones 2018-04-06 20:29:22 +02:00
Dave Halter
0d96b12566 Fix the fstring syntax if there's a conversion with exclamation mark 2018-04-06 09:59:15 +02:00
Dave Halter
9d2ce4bcd4 Fix a few fstring error gatherings 2018-04-06 09:50:07 +02:00
Dave Halter
a3e280c2b9 Use strings as a non-terminal symbol in all grammars
This makes it easier to write the same logic for all Python versions
2018-04-05 09:55:19 +02:00
Dave Halter
7c7f4f4e54 Fix a test 2018-04-05 00:45:23 +02:00
Dave Halter
56b3e2cdc8 Also use the fstring modifications for the 3.7 grammar 2018-04-05 00:45:03 +02:00
Dave Halter
97f042c6ba Remove clutter from the grammar 2018-03-31 14:26:12 +02:00
Dave Halter
b1aa7c6a79 Cleanup a lot of details in the tokenizer for fstrings 2018-03-31 14:25:29 +02:00
Dave Halter
235fda3fbb Fix a few things so that the tokenizer can at least parse the grammar. 2018-03-30 22:13:18 +02:00
Dave Halter
d8d2e596a5 A first implementation of the fstring tokenizer 2018-03-30 20:50:49 +02:00
Dave Halter
e05ce5ae31 Revert "A small improvement in checks"
The problem with this commit is that it probably makes some checks slower. It's
still slightly more beautiful, but we leave it for now.

This reverts commit 25e4ea9c24.
2018-03-28 09:51:37 +02:00
Dave Halter
25e4ea9c24 A small improvement in checks 2018-03-28 02:16:37 +02:00
Dave Halter
9f88fe16a3 Added the fstring grammar without the tokenization part
This means that fstrings are not yet parsed, because there are no f-string tokens.
2018-03-28 02:03:18 +02:00
Dave Halter
ba0e7a2e9d A comparison was slightly off 2018-03-24 22:40:12 +01:00
Dave Halter
dc80152ff8 Ignore the pytest cache 2018-03-23 20:24:08 +01:00
Dave Halter
9e3154d167 Fix an error message change in Python 3.7 2018-03-23 20:23:15 +01:00
Dave Halter
065da34272 Fix an issue in the diff parser about endmarker newlines
This was discovered in https://github.com/davidhalter/jedi/issues/1000.
2018-03-11 23:41:18 +01:00
Dave Halter
f89809de9a Remove the copyright for good 2018-01-09 23:27:46 +01:00
Chris Lamb
332c57ebcb Remove copyright years from documentation. (Closes: #25) 2018-01-09 23:24:39 +01:00
lcolaholicl
acb173b703 Fix typo: containes→contains 2017-12-31 13:59:41 +01:00
Daniel Hahler
47e78b37fe tox: use older pytest only for py26/py33
Follow-up to https://github.com/davidhalter/parso/commit/73439d5863daea.
2017-12-30 22:11:07 +01:00
Dave Halter
fc44af6165 Merge branch 'master' of github.com:davidhalter/parso 2017-12-30 18:08:17 +01:00
Dave Halter
73439d5863 Don't use a newer pytest version
Otherwise python 3.3 and 2.6 are not working anymore
2017-12-30 18:07:58 +01:00
Dave Halter
085aad3038 The tags should be annotated if possible 2017-12-30 14:05:50 +01:00
lcolaholicl
7db500bfbc Fix another typo.
Delete accidentally repeated `None.`
2017-12-30 13:52:33 +01:00
lcolaholicl
e689f3dce6 Fix typo
keyworda → keyword
2017-12-30 13:52:33 +01:00
Max Nordlund
b076cdc12a Fix typo 2017-12-26 03:09:40 +01:00
Dave Halter
0dea94c801 Bump version for the next release 2017-11-05 15:11:51 +01:00
Dave Halter
6cf487aee2 Use 3.7-dev not 3.7 for travis 2017-11-05 14:39:13 +01:00
66 changed files with 3787 additions and 1879 deletions


@@ -1,4 +1,5 @@
[run]
source = parso
[report]
# Regexes for lines to exclude from consideration

.gitignore vendored

@@ -9,3 +9,5 @@
/dist/
parso.egg-info/
/.cache/
/.pytest_cache
test/fuzz-redo.pickle


@@ -1,26 +1,25 @@
dist: xenial
language: python
sudo: false
python:
- 2.6
- 2.7
- 3.3
- 3.4
- 3.5
- 3.6
- 3.7
- pypy
- 3.8-dev
- pypy2.7-6.0
- pypy3.5-6.0
matrix:
allow_failures:
- env: TOXENV=cov
include:
- python: 3.5
env: TOXENV=cov
env: TOXENV=py35-coverage
install:
- pip install --quiet tox-travis
script:
- tox
after_script:
- if [ $TOXENV == "cov" ]; then
pip install --quiet coveralls;
coveralls;
- |
if [ "${TOXENV%-coverage}" == "$TOXENV" ]; then
pip install --quiet coveralls;
coveralls;
fi


@@ -3,6 +3,61 @@
Changelog
---------
0.5.0 (2019-06-20)
++++++++++++++++++
- **Breaking Change** comp_for is now called sync_comp_for for all Python
versions to be compatible with the Python 3.8 Grammar
- Added .pyi stubs for a lot of the parso API
- Small FileIO changes
0.4.0 (2019-04-05)
++++++++++++++++++
- Python 3.8 support
- FileIO support, it's now possible to use abstract file IO, support is alpha
0.3.4 (2019-02-13)
+++++++++++++++++++
- Fix an f-string tokenizer error
0.3.3 (2019-02-06)
+++++++++++++++++++
- Fix async errors in the diff parser
- A fix in iter_errors
- This is a very small bugfix release
0.3.2 (2019-01-24)
+++++++++++++++++++
- 20+ bugfixes in the diff parser and 3 in the tokenizer
- A fuzzer for the diff parser, to give confidence that the diff parser is in a
good shape.
- Some bugfixes for f-string
0.3.1 (2018-07-09)
+++++++++++++++++++
- Bugfixes in the diff parser and keyword-only arguments
0.3.0 (2018-06-30)
+++++++++++++++++++
- Rewrote the pgen2 parser generator.
0.2.1 (2018-05-21)
+++++++++++++++++++
- A bugfix for the diff parser.
- Grammar files can now be loaded from a specific path.
0.2.0 (2018-04-15)
+++++++++++++++++++
- f-strings are now parsed as a part of the normal Python grammar. This makes
it way easier to deal with them.
0.1.1 (2017-11-05)
+++++++++++++++++++


@@ -2,12 +2,13 @@
parso - A Python Parser
###################################################################
.. image:: https://secure.travis-ci.org/davidhalter/parso.png?branch=master
:target: http://travis-ci.org/davidhalter/parso
:alt: Travis-CI build status
.. image:: https://coveralls.io/repos/davidhalter/parso/badge.png?branch=master
:target: https://coveralls.io/r/davidhalter/parso
.. image:: https://travis-ci.org/davidhalter/parso.svg?branch=master
:target: https://travis-ci.org/davidhalter/parso
:alt: Travis CI build status
.. image:: https://coveralls.io/repos/github/davidhalter/parso/badge.svg?branch=master
:target: https://coveralls.io/github/davidhalter/parso?branch=master
:alt: Coverage Status
.. image:: https://raw.githubusercontent.com/davidhalter/parso/master/docs/_static/logo_characters.png
@@ -55,10 +56,10 @@ To list multiple issues:
Resources
=========
- `Testing <http://parso.readthedocs.io/en/latest/docs/development.html#testing>`_
- `Testing <https://parso.readthedocs.io/en/latest/docs/development.html#testing>`_
- `PyPI <https://pypi.python.org/pypi/parso>`_
- `Docs <https://parso.readthedocs.org/en/latest/>`_
- Uses `semantic versioning <http://semver.org/>`_
- Uses `semantic versioning <https://semver.org/>`_
Installation
============


@@ -14,7 +14,7 @@ from parso.utils import parse_version_string
collect_ignore = ["setup.py"]
VERSIONS_2 = '2.6', '2.7'
VERSIONS_3 = '3.3', '3.4', '3.5', '3.6', '3.7'
VERSIONS_3 = '3.3', '3.4', '3.5', '3.6', '3.7', '3.8'
@pytest.fixture(scope='session')
@@ -57,6 +57,8 @@ def pytest_generate_tests(metafunc):
metafunc.parametrize('each_py2_version', VERSIONS_2)
elif 'each_py3_version' in metafunc.fixturenames:
metafunc.parametrize('each_py3_version', VERSIONS_3)
elif 'version_ge_py36' in metafunc.fixturenames:
metafunc.parametrize('version_ge_py36', ['3.6', '3.7'])
class NormalizerIssueCase(object):
@@ -151,8 +153,11 @@ def works_ge_py3(each_version):
@pytest.fixture
def works_ge_py35(each_version):
"""
Works only greater equal Python 3.3.
"""
version_info = parse_version_string(each_version)
return Checker(each_version, version_info >= (3, 5))
@pytest.fixture
def works_ge_py38(each_version):
version_info = parse_version_string(each_version)
return Checker(each_version, version_info >= (3, 8))


@@ -36,7 +36,7 @@ if [[ $tag_ref ]]; then
exit 1
fi
else
git tag $tag
git tag -a $tag
git push --tags
fi


@@ -19,7 +19,6 @@
{% endblock %}
{%- block footer %}
<div class="footer">
&copy; Copyright {{ copyright }}.
Created using <a href="http://sphinx.pocoo.org/">Sphinx</a>.
</div>
{% if pagename == 'index' %}


@@ -13,7 +13,6 @@
import sys
import os
import datetime
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
@@ -45,7 +44,7 @@ master_doc = 'index'
# General information about the project.
project = u'parso'
copyright = u'2012 - {today.year}, parso contributors'.format(today=datetime.date.today())
copyright = u'parso contributors'
import parso
from parso.utils import version_info
@@ -145,7 +144,7 @@ html_sidebars = {
#'relations.html',
'ghbuttons.html',
#'sourcelink.html',
#'searchbox.html'
'searchbox.html'
]
}


@@ -61,6 +61,8 @@ Used By
-------
- jedi_ (which is used by IPython and a lot of editor plugins).
- mutmut_ (mutation tester)
.. _jedi: https://github.com/davidhalter/jedi
.. _mutmut: https://github.com/boxed/mutmut


@@ -43,7 +43,7 @@ from parso.grammar import Grammar, load_grammar
from parso.utils import split_lines, python_bytes_to_unicode
__version__ = '0.1.1'
__version__ = '0.5.0'
def parse(code=None, **kwargs):

parso/__init__.pyi Normal file

@@ -0,0 +1,19 @@
from typing import Any, Optional, Union
from parso.grammar import Grammar as Grammar, load_grammar as load_grammar
from parso.parser import ParserSyntaxError as ParserSyntaxError
from parso.utils import python_bytes_to_unicode as python_bytes_to_unicode, split_lines as split_lines
__version__: str = ...
def parse(
code: Optional[Union[str, bytes]],
*,
version: Optional[str] = None,
error_recovery: bool = True,
path: Optional[str] = None,
start_symbol: Optional[str] = None,
cache: bool = False,
diff_cache: bool = False,
cache_path: Optional[str] = None,
) -> Any: ...


@@ -36,7 +36,7 @@ except AttributeError:
def u(string):
"""Cast to unicode DAMMIT!
Written because Python2 repr always implicitly casts to a string, so we
have to cast back to a unicode (and we now that we always deal with valid
have to cast back to a unicode (and we know that we always deal with valid
unicode, because we check that in the beginning).
"""
if py_version >= 30:


@@ -18,7 +18,7 @@ from parso._compatibility import FileNotFoundError
LOG = logging.getLogger(__name__)
_PICKLE_VERSION = 30
_PICKLE_VERSION = 32
"""
Version number (integer) for file system cache.
@@ -45,6 +45,7 @@ we generate something similar. See:
http://docs.python.org/3/library/sys.html#sys.implementation
"""
def _get_default_cache_path():
if platform.system().lower() == 'windows':
dir_ = os.path.join(os.getenv('LOCALAPPDATA') or '~', 'Parso', 'Parso')
@@ -54,6 +55,7 @@ def _get_default_cache_path():
dir_ = os.path.join(os.getenv('XDG_CACHE_HOME') or '~/.cache', 'parso')
return os.path.expanduser(dir_)
_default_cache_path = _get_default_cache_path()
"""
The path where the cache is stored.
@@ -76,21 +78,25 @@ class _NodeCacheItem(object):
self.change_time = change_time
def load_module(hashed_grammar, path, cache_path=None):
def load_module(hashed_grammar, file_io, cache_path=None):
"""
Returns a module or None, if it fails.
"""
try:
p_time = os.path.getmtime(path)
except FileNotFoundError:
p_time = file_io.get_last_modified()
if p_time is None:
return None
try:
module_cache_item = parser_cache[hashed_grammar][path]
module_cache_item = parser_cache[hashed_grammar][file_io.path]
if p_time <= module_cache_item.change_time:
return module_cache_item.node
except KeyError:
return _load_from_file_system(hashed_grammar, path, p_time, cache_path=cache_path)
return _load_from_file_system(
hashed_grammar,
file_io.path,
p_time,
cache_path=cache_path
)
def _load_from_file_system(hashed_grammar, path, p_time, cache_path=None):
@@ -121,9 +127,10 @@ def _load_from_file_system(hashed_grammar, path, p_time, cache_path=None):
return module_cache_item.node
def save_module(hashed_grammar, path, module, lines, pickling=True, cache_path=None):
def save_module(hashed_grammar, file_io, module, lines, pickling=True, cache_path=None):
path = file_io.path
try:
p_time = None if path is None else os.path.getmtime(path)
p_time = None if path is None else file_io.get_last_modified()
except OSError:
p_time = None
pickling = False

parso/file_io.py Normal file

@@ -0,0 +1,35 @@
import os
class FileIO(object):
def __init__(self, path):
self.path = path
def read(self): # Returns bytes/str
# We would like to read unicode here, but we cannot, because we are not
# sure if it is a valid unicode file. Therefore just read whatever is
# here.
with open(self.path, 'rb') as f:
return f.read()
def get_last_modified(self):
"""
Returns float - timestamp or None, if path doesn't exist.
"""
try:
return os.path.getmtime(self.path)
except OSError:
# Might raise FileNotFoundError, OSError for Python 2
return None
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self.path)
class KnownContentFileIO(FileIO):
def __init__(self, path, content):
super(KnownContentFileIO, self).__init__(path)
self._content = content
def read(self):
return self._content
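A brief hedged usage sketch of the classes added above (my own example; 'example.py' is a hypothetical path):

    from parso.file_io import KnownContentFileIO

    fio = KnownContentFileIO("example.py", b"x = 1\n")
    print(fio.read())               # b'x = 1\n' -- content served from memory
    print(fio.get_last_modified())  # None unless 'example.py' exists on disk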


@@ -2,17 +2,17 @@ import hashlib
import os
from parso._compatibility import FileNotFoundError, is_pypy
from parso.pgen2.pgen import generate_grammar
from parso.pgen2 import generate_grammar
from parso.utils import split_lines, python_bytes_to_unicode, parse_version_string
from parso.python.diff import DiffParser
from parso.python.tokenize import tokenize_lines, tokenize
from parso.python import token
from parso.python.token import PythonTokenTypes
from parso.cache import parser_cache, load_module, save_module
from parso.parser import BaseParser
from parso.python.parser import Parser as PythonParser
from parso.python.errors import ErrorFinderConfig
from parso.python import pep8
from parso.python import fstring
from parso.file_io import FileIO, KnownContentFileIO
_loaded_grammars = {}
@@ -21,7 +21,7 @@ class Grammar(object):
"""
:py:func:`parso.load_grammar` returns instances of this class.
Creating custom grammars by calling this is not supported, yet.
Creating custom none-python grammars by calling this is not supported, yet.
"""
#:param text: A BNF representation of your grammar.
_error_normalizer_config = None
@@ -52,8 +52,8 @@ class Grammar(object):
it is invalid, it will be returned as an error node. If disabled,
you will get a ParseError when encountering syntax errors in your
code.
:param str start_symbol: The grammar symbol that you want to parse. Only
allowed to be used when error_recovery is False.
:param str start_symbol: The grammar rule (nonterminal) that you want
to parse. Only allowed to be used when error_recovery is False.
:param str path: The path to the file you want to open. Only needed for caching.
:param bool cache: Keeps a copy of the parser tree in RAM and on disk
if a path is given. Returns the cached trees if the corresponding
@@ -73,36 +73,40 @@ class Grammar(object):
:py:class:`parso.python.tree.Module`.
"""
if 'start_pos' in kwargs:
raise TypeError("parse() got an unexpected keyworda argument.")
raise TypeError("parse() got an unexpected keyword argument.")
return self._parse(code=code, **kwargs)
def _parse(self, code=None, error_recovery=True, path=None,
start_symbol=None, cache=False, diff_cache=False,
cache_path=None, start_pos=(1, 0)):
cache_path=None, file_io=None, start_pos=(1, 0)):
"""
Wanted python3.5 * operator and keyword only arguments. Therefore just
wrap it all.
start_pos here is just a parameter internally used. Might be public
sometime in the future.
"""
if code is None and path is None:
if code is None and path is None and file_io is None:
raise TypeError("Please provide either code or a path.")
if start_symbol is None:
start_symbol = self._start_symbol
start_symbol = self._start_nonterminal
if error_recovery and start_symbol != 'file_input':
raise NotImplementedError("This is currently not implemented.")
if cache and path is not None:
module_node = load_module(self._hashed, path, cache_path=cache_path)
if file_io is None:
if code is None:
file_io = FileIO(path)
else:
file_io = KnownContentFileIO(path, code)
if cache and file_io.path is not None:
module_node = load_module(self._hashed, file_io, cache_path=cache_path)
if module_node is not None:
return module_node
if code is None:
with open(path, 'rb') as f:
code = f.read()
code = file_io.read()
code = python_bytes_to_unicode(code)
lines = split_lines(code, keepends=True)
@@ -111,7 +115,7 @@ class Grammar(object):
raise TypeError("You have to define a diff parser to be able "
"to use this option.")
try:
module_cache_item = parser_cache[self._hashed][path]
module_cache_item = parser_cache[self._hashed][file_io.path]
except KeyError:
pass
else:
@@ -126,7 +130,7 @@ class Grammar(object):
old_lines=old_lines,
new_lines=lines
)
save_module(self._hashed, path, new_node, lines,
save_module(self._hashed, file_io, new_node, lines,
# Never pickle in pypy, it's slow as hell.
pickling=cache and not is_pypy,
cache_path=cache_path)
@@ -137,12 +141,12 @@ class Grammar(object):
p = self._parser(
self._pgen_grammar,
error_recovery=error_recovery,
start_symbol=start_symbol
start_nonterminal=start_symbol
)
root_node = p.parse(tokens=tokens)
if cache or diff_cache:
save_module(self._hashed, path, root_node, lines,
save_module(self._hashed, file_io, root_node, lines,
# Never pickle in pypy, it's slow as hell.
pickling=cache and not is_pypy,
cache_path=cache_path)
@@ -186,17 +190,16 @@ class Grammar(object):
normalizer.walk(node)
return normalizer.issues
def __repr__(self):
labels = self._pgen_grammar.number2symbol.values()
txt = ' '.join(list(labels)[:3]) + ' ...'
nonterminals = self._pgen_grammar.nonterminal_to_dfas.keys()
txt = ' '.join(list(nonterminals)[:3]) + ' ...'
return '<%s:%s>' % (self.__class__.__name__, txt)
class PythonGrammar(Grammar):
_error_normalizer_config = ErrorFinderConfig()
_token_namespace = token
_start_symbol = 'file_input'
_token_namespace = PythonTokenTypes
_start_nonterminal = 'file_input'
def __init__(self, version_info, bnf_text):
super(PythonGrammar, self).__init__(
@@ -215,46 +218,19 @@ class PythonGrammar(Grammar):
return tokenize(code, self.version_info)
class PythonFStringGrammar(Grammar):
_token_namespace = fstring.TokenNamespace
_start_symbol = 'fstring'
def __init__(self):
super(PythonFStringGrammar, self).__init__(
text=fstring.GRAMMAR,
tokenizer=fstring.tokenize,
parser=fstring.Parser
)
def parse(self, code, **kwargs):
return self._parse(code, **kwargs)
def _parse(self, code, error_recovery=True, start_pos=(1, 0)):
tokens = self._tokenizer(code, start_pos=start_pos)
p = self._parser(
self._pgen_grammar,
error_recovery=error_recovery,
start_symbol=self._start_symbol,
)
return p.parse(tokens=tokens)
def parse_leaf(self, leaf, error_recovery=True):
code = leaf._get_payload()
return self.parse(code, error_recovery=True, start_pos=leaf.start_pos)
def load_grammar(**kwargs):
"""
Loads a :py:class:`parso.Grammar`. The default version is the current Python
version.
:param str version: A python version string, e.g. ``version='3.3'``.
:param str path: A path to a grammar file
"""
def load_grammar(language='python', version=None):
def load_grammar(language='python', version=None, path=None):
if language == 'python':
version_info = parse_version_string(version)
file = os.path.join(
file = path or os.path.join(
'python',
'grammar%s%s.txt' % (version_info.major, version_info.minor)
)
@@ -273,10 +249,6 @@ def load_grammar(**kwargs):
except FileNotFoundError:
message = "Python version %s is currently not supported." % version
raise NotImplementedError(message)
elif language == 'python-f-string':
if version is not None:
raise NotImplementedError("Currently different versions are not supported.")
return PythonFStringGrammar()
else:
raise NotImplementedError("No support for language %s." % language)

parso/grammar.pyi Normal file

@@ -0,0 +1,38 @@
from typing import Any, Callable, Generic, Optional, Sequence, TypeVar, Union
from typing_extensions import Literal
from parso.utils import PythonVersionInfo
_Token = Any
_NodeT = TypeVar("_NodeT")
class Grammar(Generic[_NodeT]):
_default_normalizer_config: Optional[Any] = ...
_error_normalizer_config: Optional[Any] = None
_start_nonterminal: str = ...
_token_namespace: Optional[str] = None
def __init__(
self,
text: str,
tokenizer: Callable[[Sequence[str], int], Sequence[_Token]],
parser: Any = ...,
diff_parser: Any = ...,
) -> None: ...
def parse(
self,
code: Union[str, bytes] = ...,
error_recovery: bool = ...,
path: Optional[str] = ...,
start_symbol: Optional[str] = ...,
cache: bool = ...,
diff_cache: bool = ...,
cache_path: Optional[str] = ...,
) -> _NodeT: ...
class PythonGrammar(Grammar):
version_info: PythonVersionInfo
def __init__(self, version_info: PythonVersionInfo, bnf_text: str) -> None: ...
def load_grammar(
language: Literal["python"] = "python", version: Optional[str] = ..., path: str = ...
) -> Grammar: ...


@@ -41,8 +41,8 @@ class Normalizer(use_metaclass(_NormalizerMeta)):
except AttributeError:
return self.visit_leaf(node)
else:
with self.visit_node(node):
return ''.join(self.visit(child) for child in children)
with self.visit_node(node):
return ''.join(self.visit(child) for child in children)
@contextmanager
def visit_node(self, node):
@@ -147,7 +147,6 @@ class Issue(object):
return '<%s: %s>' % (self.__class__.__name__, self.code)
class Rule(object):
code = None
message = None


@@ -1,3 +1,11 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright David Halter and Contributors
# Modifications are dual-licensed: MIT and PSF.
# 99% of the code is different from pgen2, now.
"""
The ``Parser`` tries to convert the available Python code in an easy to read
format, something like an abstract syntax tree. The classes who represent this
@@ -16,7 +24,7 @@ complexity of the ``Parser`` (there's another parser sitting inside
``Statement``, which produces ``Array`` and ``Call``).
"""
from parso import tree
from parso.pgen2.parse import PgenParser
from parso.pgen2.generator import ReservedString
class ParserSyntaxError(Exception):
@@ -30,7 +38,76 @@ class ParserSyntaxError(Exception):
self.error_leaf = error_leaf
class InternalParseError(Exception):
"""
Exception to signal the parser is stuck and error recovery didn't help.
Basically this shouldn't happen. It's a sign that something is really
wrong.
"""
def __init__(self, msg, type_, value, start_pos):
Exception.__init__(self, "%s: type=%r, value=%r, start_pos=%r" %
(msg, type_.name, value, start_pos))
self.msg = msg
self.type = type
self.value = value
self.start_pos = start_pos
class Stack(list):
def _allowed_transition_names_and_token_types(self):
def iterate():
# An API just for Jedi.
for stack_node in reversed(self):
for transition in stack_node.dfa.transitions:
if isinstance(transition, ReservedString):
yield transition.value
else:
yield transition # A token type
if not stack_node.dfa.is_final:
break
return list(iterate())
class StackNode(object):
def __init__(self, dfa):
self.dfa = dfa
self.nodes = []
@property
def nonterminal(self):
return self.dfa.from_rule
def __repr__(self):
return '%s(%s, %s)' % (self.__class__.__name__, self.dfa, self.nodes)
def _token_to_transition(grammar, type_, value):
# Map from token to label
if type_.contains_syntax:
# Check for reserved words (keywords)
try:
return grammar.reserved_syntax_strings[value]
except KeyError:
pass
return type_
class BaseParser(object):
"""Parser engine.
A Parser instance contains state pertaining to the current token
sequence, and should not be used concurrently by different threads
to parse separate token sequences.
See python/tokenize.py for how to get input tokens by a string.
When a syntax error occurs, error_recovery() is called.
"""
node_map = {}
default_node = tree.Node
@@ -38,41 +115,97 @@ class BaseParser(object):
}
default_leaf = tree.Leaf
def __init__(self, pgen_grammar, start_symbol='file_input', error_recovery=False):
def __init__(self, pgen_grammar, start_nonterminal='file_input', error_recovery=False):
self._pgen_grammar = pgen_grammar
self._start_symbol = start_symbol
self._start_nonterminal = start_nonterminal
self._error_recovery = error_recovery
def parse(self, tokens):
start_number = self._pgen_grammar.symbol2number[self._start_symbol]
self.pgen_parser = PgenParser(
self._pgen_grammar, self.convert_node, self.convert_leaf,
self.error_recovery, start_number
)
first_dfa = self._pgen_grammar.nonterminal_to_dfas[self._start_nonterminal][0]
self.stack = Stack([StackNode(first_dfa)])
node = self.pgen_parser.parse(tokens)
# The stack is empty now, we don't need it anymore.
del self.pgen_parser
return node
for token in tokens:
self._add_token(token)
def error_recovery(self, pgen_grammar, stack, arcs, typ, value, start_pos, prefix,
add_token_callback):
while True:
tos = self.stack[-1]
if not tos.dfa.is_final:
# We never broke out -- EOF is too soon -- Unfinished statement.
# However, the error recovery might have added the token again, if
# the stack is empty, we're fine.
raise InternalParseError(
"incomplete input", token.type, token.value, token.start_pos
)
if len(self.stack) > 1:
self._pop()
else:
return self.convert_node(tos.nonterminal, tos.nodes)
def error_recovery(self, token):
if self._error_recovery:
raise NotImplementedError("Error Recovery is not implemented")
else:
error_leaf = tree.ErrorLeaf('TODO %s' % typ, value, start_pos, prefix)
type_, value, start_pos, prefix = token
error_leaf = tree.ErrorLeaf(type_, value, start_pos, prefix)
raise ParserSyntaxError('SyntaxError: invalid syntax', error_leaf)
def convert_node(self, pgen_grammar, type_, children):
# TODO REMOVE symbol, we don't want type here.
symbol = pgen_grammar.number2symbol[type_]
def convert_node(self, nonterminal, children):
try:
return self.node_map[symbol](children)
node = self.node_map[nonterminal](children)
except KeyError:
return self.default_node(symbol, children)
node = self.default_node(nonterminal, children)
for c in children:
c.parent = node
return node
def convert_leaf(self, pgen_grammar, type_, value, prefix, start_pos):
def convert_leaf(self, type_, value, prefix, start_pos):
try:
return self.leaf_map[type_](value, start_pos, prefix)
except KeyError:
return self.default_leaf(value, start_pos, prefix)
def _add_token(self, token):
"""
This is the only core function for parsing. Here happens basically
everything. Everything is well prepared by the parser generator and we
only apply the necessary steps here.
"""
grammar = self._pgen_grammar
stack = self.stack
type_, value, start_pos, prefix = token
transition = _token_to_transition(grammar, type_, value)
while True:
try:
plan = stack[-1].dfa.transitions[transition]
break
except KeyError:
if stack[-1].dfa.is_final:
self._pop()
else:
self.error_recovery(token)
return
except IndexError:
raise InternalParseError("too much input", type_, value, start_pos)
stack[-1].dfa = plan.next_dfa
for push in plan.dfa_pushes:
stack.append(StackNode(push))
leaf = self.convert_leaf(type_, value, prefix, start_pos)
stack[-1].nodes.append(leaf)
def _pop(self):
tos = self.stack.pop()
# If there's exactly one child, return that child instead of
# creating a new node. We still create expr_stmt and
# file_input though, because a lot of Jedi depends on its
# logic.
if len(tos.nodes) == 1:
new_node = tos.nodes[0]
else:
new_node = self.convert_node(tos.dfa.from_rule, tos.nodes)
self.stack[-1].nodes.append(new_node)


@@ -4,5 +4,7 @@
# Modifications:
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Copyright 2014 David Halter. Integration into Jedi.
# Copyright 2014 David Halter and Contributors
# Modifications are dual-licensed: MIT and PSF.
from parso.pgen2.generator import generate_grammar

parso/pgen2/__init__.pyi Normal file

@@ -0,0 +1 @@
from parso.pgen2.generator import generate_grammar as generate_grammar

parso/pgen2/generator.py Normal file

@@ -0,0 +1,358 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright David Halter and Contributors
# Modifications are dual-licensed: MIT and PSF.
"""
This module defines the data structures used to represent a grammar.
Specifying grammars in pgen is possible with this grammar::
grammar: (NEWLINE | rule)* ENDMARKER
rule: NAME ':' rhs NEWLINE
rhs: items ('|' items)*
items: item+
item: '[' rhs ']' | atom ['+' | '*']
atom: '(' rhs ')' | NAME | STRING
This grammar is self-referencing.
This parser generator (pgen2) was created by Guido van Rossum and used for lib2to3.
Most of the code has been refactored to make it more Pythonic. Since this was a
"copy" of the CPython parser generator "pgen", some work was needed to make
it more readable. It should also be slightly faster than the original pgen2,
because we made some optimizations.
"""
from ast import literal_eval
from parso.pgen2.grammar_parser import GrammarParser, NFAState
class Grammar(object):
"""
Once initialized, this class supplies the grammar tables for the
parsing engine implemented by parse.py. The parsing engine
accesses the instance variables directly.
The only important parts of this parser are dfas and the transitions between
them.
"""
def __init__(self, start_nonterminal, rule_to_dfas, reserved_syntax_strings):
self.nonterminal_to_dfas = rule_to_dfas # Dict[str, List[DFAState]]
self.reserved_syntax_strings = reserved_syntax_strings
self.start_nonterminal = start_nonterminal
class DFAPlan(object):
"""
Plans are used for the parser to create stack nodes and do the proper
DFA state transitions.
"""
def __init__(self, next_dfa, dfa_pushes=[]):
self.next_dfa = next_dfa
self.dfa_pushes = dfa_pushes
def __repr__(self):
return '%s(%s, %s)' % (self.__class__.__name__, self.next_dfa, self.dfa_pushes)
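To see what a plan looks like in practice, one can generate a grammar for a single invented rule and inspect the transitions of its first DFA state; a hedged sketch (the printed line follows the __repr__ above, exact formatting aside):
from parso.pgen2.generator import generate_grammar
from parso.python.token import PythonTokenTypes

toy = generate_grammar("call: NAME '(' NAME ')' NEWLINE\n", PythonTokenTypes)
first_dfa = toy.nonterminal_to_dfas['call'][0]
for transition, plan in first_dfa.transitions.items():
    # e.g. PythonTokenTypes.NAME DFAPlan(<DFAState: call is_final=False>, [])
    print(transition, plan)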
class DFAState(object):
"""
The DFAState object is the core class for pretty much everything. DFAState
objects are the vertices of an ordered graph, while arcs and transitions are
the edges.
Arcs are the initial edges, where most DFAStates are not connected yet;
transitions are then calculated to connect the DFA state machines that have
different nonterminals.
"""
def __init__(self, from_rule, nfa_set, final):
assert isinstance(nfa_set, set)
assert isinstance(next(iter(nfa_set)), NFAState)
assert isinstance(final, NFAState)
self.from_rule = from_rule
self.nfa_set = nfa_set
self.arcs = {} # map from terminals/nonterminals to DFAState
# In an intermediary step we set these nonterminal arcs (which have the
# same structure as arcs). They don't contain terminals anymore.
self.nonterminal_arcs = {}
# Transitions and is_final are basically the only things the parser uses.
# Everything else is purely here to create a parser.
self.transitions = {} #: Dict[Union[TokenType, ReservedString], DFAPlan]
self.is_final = final in nfa_set
def add_arc(self, next_, label):
assert isinstance(label, str)
assert label not in self.arcs
assert isinstance(next_, DFAState)
self.arcs[label] = next_
def unifystate(self, old, new):
for label, next_ in self.arcs.items():
if next_ is old:
self.arcs[label] = new
def __eq__(self, other):
# Equality test -- ignore the nfa_set instance variable
assert isinstance(other, DFAState)
if self.is_final != other.is_final:
return False
# Can't just return self.arcs == other.arcs, because that
# would invoke this method recursively, with cycles...
if len(self.arcs) != len(other.arcs):
return False
for label, next_ in self.arcs.items():
if next_ is not other.arcs.get(label):
return False
return True
__hash__ = None # For Py3 compatibility.
def __repr__(self):
return '<%s: %s is_final=%s>' % (
self.__class__.__name__, self.from_rule, self.is_final
)
class ReservedString(object):
"""
Most grammars will have certain keywords and operators that are mentioned
in the grammar as strings (e.g. "if") and not token types (e.g. NUMBER).
This class represents such a reserved string.
"""
def __init__(self, value):
self.value = value
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self.value)
def _simplify_dfas(dfas):
"""
This is not theoretically optimal, but works well enough.
Algorithm: repeatedly look for two states that have the same
set of arcs (same labels pointing to the same nodes) and
unify them, until things stop changing.
dfas is a list of DFAState instances
"""
changes = True
while changes:
changes = False
for i, state_i in enumerate(dfas):
for j in range(i + 1, len(dfas)):
state_j = dfas[j]
if state_i == state_j:
#print " unify", i, j
del dfas[j]
for state in dfas:
state.unifystate(state_j, state_i)
changes = True
break
def _make_dfas(start, finish):
"""
Uses the powerset construction algorithm to create DFA states from sets of
NFA states.
Also does state reduction if some states are not needed.
"""
# To turn an NFA into a DFA, we define the states of the DFA
# to correspond to *sets* of states of the NFA. Then do some
# state reduction.
assert isinstance(start, NFAState)
assert isinstance(finish, NFAState)
def addclosure(nfa_state, base_nfa_set):
assert isinstance(nfa_state, NFAState)
if nfa_state in base_nfa_set:
return
base_nfa_set.add(nfa_state)
for nfa_arc in nfa_state.arcs:
if nfa_arc.nonterminal_or_string is None:
addclosure(nfa_arc.next, base_nfa_set)
base_nfa_set = set()
addclosure(start, base_nfa_set)
states = [DFAState(start.from_rule, base_nfa_set, finish)]
for state in states: # NB states grows while we're iterating
arcs = {}
# Find state transitions and store them in arcs.
for nfa_state in state.nfa_set:
for nfa_arc in nfa_state.arcs:
if nfa_arc.nonterminal_or_string is not None:
nfa_set = arcs.setdefault(nfa_arc.nonterminal_or_string, set())
addclosure(nfa_arc.next, nfa_set)
# Now create the dfas with no Nones in the arcs anymore. All epsilon
# (None) arcs have been eliminated and the state transitions (arcs) are
# properly defined, so we just need to create the dfas.
for nonterminal_or_string, nfa_set in arcs.items():
for nested_state in states:
if nested_state.nfa_set == nfa_set:
# The DFA state already exists for this rule.
break
else:
nested_state = DFAState(start.from_rule, nfa_set, finish)
states.append(nested_state)
state.add_arc(nested_state, nonterminal_or_string)
return states # List of DFAState instances; first one is start
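A short sketch of the construction in isolation, feeding one invented rule through GrammarParser and turning its NFA pair into DFA states (the expected counts are what the algorithm above should yield for this particular rule):
from parso.pgen2.grammar_parser import GrammarParser
from parso.pgen2.generator import _make_dfas, _simplify_dfas

rules = list(GrammarParser("pair: NAME NAME NEWLINE\n").parse())
start, finish = rules[0]
dfas = _make_dfas(start, finish)
_simplify_dfas(dfas)
print(len(dfas))          # 4: start, after the first NAME, after the second NAME, final
print(dfas[0].is_final)   # False -- the start state is not an accepting state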
def _dump_nfa(start, finish):
print("Dump of NFA for", start.from_rule)
todo = [start]
for i, state in enumerate(todo):
print(" State", i, state is finish and "(final)" or "")
for label, next_ in state.arcs:
if next_ in todo:
j = todo.index(next_)
else:
j = len(todo)
todo.append(next_)
if label is None:
print(" -> %d" % j)
else:
print(" %s -> %d" % (label, j))
def _dump_dfas(dfas):
print("Dump of DFA for", dfas[0].from_rule)
for i, state in enumerate(dfas):
print(" State", i, state.is_final and "(final)" or "")
for nonterminal, next_ in state.arcs.items():
print(" %s -> %d" % (nonterminal, dfas.index(next_)))
def generate_grammar(bnf_grammar, token_namespace):
"""
``bnf_grammar`` is a grammar in extended BNF (using * for repetition, + for
at-least-once repetition, [] for optional parts, | for alternatives and ()
for grouping).
It's not EBNF according to ISO/IEC 14977. It's a dialect Python uses in its
own parser.
"""
rule_to_dfas = {}
start_nonterminal = None
for nfa_a, nfa_z in GrammarParser(bnf_grammar).parse():
#_dump_nfa(a, z)
dfas = _make_dfas(nfa_a, nfa_z)
#_dump_dfas(dfas)
# oldlen = len(dfas)
_simplify_dfas(dfas)
# newlen = len(dfas)
rule_to_dfas[nfa_a.from_rule] = dfas
#print(nfa_a.from_rule, oldlen, newlen)
if start_nonterminal is None:
start_nonterminal = nfa_a.from_rule
reserved_strings = {}
for nonterminal, dfas in rule_to_dfas.items():
for dfa_state in dfas:
for terminal_or_nonterminal, next_dfa in dfa_state.arcs.items():
if terminal_or_nonterminal in rule_to_dfas:
dfa_state.nonterminal_arcs[terminal_or_nonterminal] = next_dfa
else:
transition = _make_transition(
token_namespace,
reserved_strings,
terminal_or_nonterminal
)
dfa_state.transitions[transition] = DFAPlan(next_dfa)
_calculate_tree_traversal(rule_to_dfas)
return Grammar(start_nonterminal, rule_to_dfas, reserved_strings)
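As a quick sanity check of the whole pipeline, a minimal sketch (the two-rule grammar is invented for illustration and is not one of parso's shipped grammar files):
from parso.pgen2.generator import generate_grammar
from parso.python.token import PythonTokenTypes

bnf = (
    "file: stmt+ ENDMARKER\n"
    "stmt: NAME '=' NUMBER NEWLINE\n"
)
g = generate_grammar(bnf, token_namespace=PythonTokenTypes)
print(g.start_nonterminal)                # 'file' -- the first rule becomes the start rule
print(sorted(g.nonterminal_to_dfas))      # ['file', 'stmt']
print(sorted(g.reserved_syntax_strings))  # ['='] -- quoted terminals become ReservedString entries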
def _make_transition(token_namespace, reserved_syntax_strings, label):
"""
Creates a reserved string ("if", "for", "*", ...) or returns the token type
(NUMBER, STRING, ...) for a given grammar terminal.
"""
if label[0].isalpha():
# A named token (e.g. NAME, NUMBER, STRING)
return getattr(token_namespace, label)
else:
# Either a keyword or an operator
assert label[0] in ('"', "'"), label
assert not label.startswith('"""') and not label.startswith("'''")
value = literal_eval(label)
try:
return reserved_syntax_strings[value]
except KeyError:
r = reserved_syntax_strings[value] = ReservedString(value)
return r
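Concretely, a small sketch that calls the helper exactly as defined above (a private function, so treat this as illustration only):
from parso.pgen2.generator import _make_transition
from parso.python.token import PythonTokenTypes

reserved = {}
print(_make_transition(PythonTokenTypes, reserved, 'NUMBER'))  # PythonTokenTypes.NUMBER
print(_make_transition(PythonTokenTypes, reserved, "'if'"))    # ReservedString(if)
print(list(reserved))                                          # ['if'] -- cached for later lookups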
def _calculate_tree_traversal(nonterminal_to_dfas):
"""
By this point we know how dfas can move around within a stack node, but we
don't know how we can add a new stack node (nonterminal transitions).
"""
# Map from grammar rule (nonterminal) name to a set of tokens.
first_plans = {}
nonterminals = list(nonterminal_to_dfas.keys())
nonterminals.sort()
for nonterminal in nonterminals:
if nonterminal not in first_plans:
_calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal)
# Now that we have calculated the first terminals, we are sure that
# there is no left recursion or ambiguity.
for dfas in nonterminal_to_dfas.values():
for dfa_state in dfas:
for nonterminal, next_dfa in dfa_state.nonterminal_arcs.items():
for transition, pushes in first_plans[nonterminal].items():
dfa_state.transitions[transition] = DFAPlan(next_dfa, pushes)
def _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal):
"""
Calculates the first plan in the first_plans dictionary for every given
nonterminal. This is going to be used to know when to create stack nodes.
"""
dfas = nonterminal_to_dfas[nonterminal]
new_first_plans = {}
first_plans[nonterminal] = None # dummy to detect left recursion
# We only need to check the first dfa. All the following ones are not
# interesting to find first terminals.
state = dfas[0]
for transition, next_ in state.transitions.items():
# It's a string. We have finally found a possible first token.
new_first_plans[transition] = [next_.next_dfa]
for nonterminal2, next_ in state.nonterminal_arcs.items():
# It's a nonterminal and we have either a left recursion issue
# in the grammar or we have to recurse.
try:
first_plans2 = first_plans[nonterminal2]
except KeyError:
first_plans2 = _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal2)
else:
if first_plans2 is None:
raise ValueError("left recursion for rule %r" % nonterminal)
for t, pushes in first_plans2.items():
check = new_first_plans.get(t)
if check is not None:
raise ValueError(
"Rule %s is ambiguous; %s is the"
" start of the rule %s as well as %s."
% (nonterminal, t, nonterminal2, check[-1].from_rule)
)
new_first_plans[t] = [next_] + pushes
first_plans[nonterminal] = new_first_plans
return new_first_plans
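The dummy None entry is what turns left recursion into a loud error instead of an endless recursion. A hedged sketch with an intentionally left-recursive invented rule:
from parso.pgen2.generator import generate_grammar
from parso.python.token import PythonTokenTypes

try:
    generate_grammar("expr: expr '+' NAME | NAME\n", PythonTokenTypes)
except ValueError as exc:
    print(exc)   # left recursion for rule 'expr'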

parso/pgen2/generator.pyi Normal file

@@ -0,0 +1,38 @@
from typing import Any, Generic, Mapping, Sequence, Set, TypeVar, Union
from parso.pgen2.grammar_parser import NFAState
_TokenTypeT = TypeVar("_TokenTypeT")
class Grammar(Generic[_TokenTypeT]):
nonterminal_to_dfas: Mapping[str, Sequence[DFAState[_TokenTypeT]]]
reserved_syntax_strings: Mapping[str, ReservedString]
start_nonterminal: str
def __init__(
self,
start_nonterminal: str,
rule_to_dfas: Mapping[str, Sequence[DFAState]],
reserved_syntax_strings: Mapping[str, ReservedString],
) -> None: ...
class DFAPlan:
next_dfa: DFAState
dfa_pushes: Sequence[DFAState]
class DFAState(Generic[_TokenTypeT]):
from_rule: str
nfa_set: Set[NFAState]
is_final: bool
arcs: Mapping[str, DFAState] # map from all terminals/nonterminals to DFAState
nonterminal_arcs: Mapping[str, DFAState]
transitions: Mapping[Union[_TokenTypeT, ReservedString], DFAPlan]
def __init__(
self, from_rule: str, nfa_set: Set[NFAState], final: NFAState
) -> None: ...
class ReservedString:
value: str
def __init__(self, value: str) -> None: ...
def __repr__(self) -> str: ...
def generate_grammar(bnf_grammar: str, token_namespace: Any) -> Grammar[Any]: ...
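With these stubs, code that consumes a generated grammar can be annotated generically over the token type. A minimal, hypothetical sketch (the describe() helper is not part of parso):
from parso.pgen2.generator import Grammar, generate_grammar
from parso.python.token import PythonTokenTypes

def describe(grammar: "Grammar[PythonTokenTypes]") -> str:
    return "start rule: %s" % grammar.start_nonterminal

print(describe(generate_grammar("start: NAME NEWLINE\n", PythonTokenTypes)))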


@@ -1,128 +0,0 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.
"""This module defines the data structures used to represent a grammar.
These are a bit arcane because they are derived from the data
structures used by Python's 'pgen' parser generator.
There's also a table here mapping operators to their names in the
token module; the Python tokenize module reports all operators as the
fallback token code OP, but the parser needs the actual token code.
"""
try:
import cPickle as pickle
except:
import pickle
class Grammar(object):
"""Pgen parsing tables conversion class.
Once initialized, this class supplies the grammar tables for the
parsing engine implemented by parse.py. The parsing engine
accesses the instance variables directly. The class here does not
provide initialization of the tables; several subclasses exist to
do this (see the conv and pgen modules).
The load() method reads the tables from a pickle file, which is
much faster than the other ways offered by subclasses. The pickle
file is written by calling dump() (after loading the grammar
tables using a subclass). The report() method prints a readable
representation of the tables to stdout, for debugging.
The instance variables are as follows:
symbol2number -- a dict mapping symbol names to numbers. Symbol
numbers are always 256 or higher, to distinguish
them from token numbers, which are between 0 and
255 (inclusive).
number2symbol -- a dict mapping numbers to symbol names;
these two are each other's inverse.
states -- a list of DFAs, where each DFA is a list of
states, each state is a list of arcs, and each
arc is a (i, j) pair where i is a label and j is
a state number. The DFA number is the index into
this list. (This name is slightly confusing.)
Final states are represented by a special arc of
the form (0, j) where j is its own state number.
dfas -- a dict mapping symbol numbers to (DFA, first)
pairs, where DFA is an item from the states list
above, and first is a set of tokens that can
begin this grammar rule (represented by a dict
whose values are always 1).
labels -- a list of (x, y) pairs where x is either a token
number or a symbol number, and y is either None
or a string; the strings are keywords. The label
number is the index in this list; label numbers
are used to mark state transitions (arcs) in the
DFAs.
start -- the number of the grammar's start symbol.
keywords -- a dict mapping keyword strings to arc labels.
tokens -- a dict mapping token numbers to arc labels.
"""
def __init__(self, bnf_text):
self.symbol2number = {}
self.number2symbol = {}
self.states = []
self.dfas = {}
self.labels = [(0, "EMPTY")]
self.keywords = {}
self.tokens = {}
self.symbol2label = {}
self.label2symbol = {}
self.start = 256
def dump(self, filename):
"""Dump the grammar tables to a pickle file."""
with open(filename, "wb") as f:
pickle.dump(self.__dict__, f, 2)
def load(self, filename):
"""Load the grammar tables from a pickle file."""
with open(filename, "rb") as f:
d = pickle.load(f)
self.__dict__.update(d)
def copy(self):
"""
Copy the grammar.
"""
new = self.__class__()
for dict_attr in ("symbol2number", "number2symbol", "dfas", "keywords",
"tokens", "symbol2label"):
setattr(new, dict_attr, getattr(self, dict_attr).copy())
new.labels = self.labels[:]
new.states = self.states[:]
new.start = self.start
return new
def report(self):
"""Dump the grammar tables to standard output, for debugging."""
from pprint import pprint
print("s2n")
pprint(self.symbol2number)
print("n2s")
pprint(self.number2symbol)
print("states")
pprint(self.states)
print("dfas")
pprint(self.dfas)
print("labels")
pprint(self.labels)
print("start", self.start)


@@ -0,0 +1,156 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright David Halter and Contributors
# Modifications are dual-licensed: MIT and PSF.
from parso.python.tokenize import tokenize
from parso.utils import parse_version_string
from parso.python.token import PythonTokenTypes
class GrammarParser():
"""
The parser for Python grammar files.
"""
def __init__(self, bnf_grammar):
self._bnf_grammar = bnf_grammar
self.generator = tokenize(
bnf_grammar,
version_info=parse_version_string('3.6')
)
self._gettoken() # Initialize lookahead
def parse(self):
# grammar: (NEWLINE | rule)* ENDMARKER
while self.type != PythonTokenTypes.ENDMARKER:
while self.type == PythonTokenTypes.NEWLINE:
self._gettoken()
# rule: NAME ':' rhs NEWLINE
self._current_rule_name = self._expect(PythonTokenTypes.NAME)
self._expect(PythonTokenTypes.OP, ':')
a, z = self._parse_rhs()
self._expect(PythonTokenTypes.NEWLINE)
yield a, z
def _parse_rhs(self):
# rhs: items ('|' items)*
a, z = self._parse_items()
if self.value != "|":
return a, z
else:
aa = NFAState(self._current_rule_name)
zz = NFAState(self._current_rule_name)
while True:
# Add the possibility to go into the state of a and come back
# to finish.
aa.add_arc(a)
z.add_arc(zz)
if self.value != "|":
break
self._gettoken()
a, z = self._parse_items()
return aa, zz
def _parse_items(self):
# items: item+
a, b = self._parse_item()
while self.type in (PythonTokenTypes.NAME, PythonTokenTypes.STRING) \
or self.value in ('(', '['):
c, d = self._parse_item()
# Need to end on the next item.
b.add_arc(c)
b = d
return a, b
def _parse_item(self):
# item: '[' rhs ']' | atom ['+' | '*']
if self.value == "[":
self._gettoken()
a, z = self._parse_rhs()
self._expect(PythonTokenTypes.OP, ']')
# Also add an epsilon arc so the optional part can be skipped
# entirely.
a.add_arc(z)
return a, z
else:
a, z = self._parse_atom()
value = self.value
if value not in ("+", "*"):
return a, z
self._gettoken()
# Make it clear that we can go back to the old state and repeat.
z.add_arc(a)
if value == "+":
return a, z
else:
# The end state is the same as the beginning, nothing must
# change.
return a, a
def _parse_atom(self):
# atom: '(' rhs ')' | NAME | STRING
if self.value == "(":
self._gettoken()
a, z = self._parse_rhs()
self._expect(PythonTokenTypes.OP, ')')
return a, z
elif self.type in (PythonTokenTypes.NAME, PythonTokenTypes.STRING):
a = NFAState(self._current_rule_name)
z = NFAState(self._current_rule_name)
# Make it clear that the state transition requires that value.
a.add_arc(z, self.value)
self._gettoken()
return a, z
else:
self._raise_error("expected (...) or NAME or STRING, got %s/%s",
self.type, self.value)
def _expect(self, type_, value=None):
if self.type != type_:
self._raise_error("expected %s, got %s [%s]",
type_, self.type, self.value)
if value is not None and self.value != value:
self._raise_error("expected %s, got %s", value, self.value)
value = self.value
self._gettoken()
return value
def _gettoken(self):
tup = next(self.generator)
self.type, self.value, self.begin, prefix = tup
def _raise_error(self, msg, *args):
if args:
try:
msg = msg % args
except:
msg = " ".join([msg] + list(map(str, args)))
line = self._bnf_grammar.splitlines()[self.begin[0] - 1]
raise SyntaxError(msg, ('<grammar>', self.begin[0],
self.begin[1], line))
class NFAArc(object):
def __init__(self, next_, nonterminal_or_string):
self.next = next_
self.nonterminal_or_string = nonterminal_or_string
class NFAState(object):
def __init__(self, from_rule):
self.from_rule = from_rule
self.arcs = [] # List[nonterminal (str), NFAState]
def add_arc(self, next_, nonterminal_or_string=None):
assert nonterminal_or_string is None or isinstance(nonterminal_or_string, str)
assert isinstance(next_, NFAState)
self.arcs.append(NFAArc(next_, nonterminal_or_string))
def __repr__(self):
return '<%s: from %s>' % (self.__class__.__name__, self.from_rule)
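A short sketch of the parser in action on one invented rule, showing the NFA endpoints it yields (attribute values follow the classes defined above):
from parso.pgen2.grammar_parser import GrammarParser

rules = list(GrammarParser("assignment: NAME '=' NAME NEWLINE\n").parse())
start, finish = rules[0]
print(start.from_rule)   # 'assignment'
print(start)             # <NFAState: from assignment>
print([arc.nonterminal_or_string for arc in start.arcs])   # ['NAME']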


@@ -0,0 +1,20 @@
from typing import Generator, List, Optional, Tuple
from parso.python.token import TokenType
class GrammarParser:
generator: Generator[TokenType, None, None]
def __init__(self, bnf_grammar: str) -> None: ...
def parse(self) -> Generator[Tuple[NFAState, NFAState], None, None]: ...
class NFAArc:
next: NFAState
nonterminal_or_string: Optional[str]
def __init__(
self, next_: NFAState, nonterminal_or_string: Optional[str]
) -> None: ...
class NFAState:
from_rule: str
arcs: List[NFAArc]
def __init__(self, from_rule: str) -> None: ...


@@ -1,223 +0,0 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.
"""
Parser engine for the grammar tables generated by pgen.
The grammar table must be loaded first.
See Parser/parser.c in the Python distribution for additional info on
how this parsing engine works.
"""
from parso.python import tokenize
class InternalParseError(Exception):
"""
Exception to signal the parser is stuck and error recovery didn't help.
Basically this shouldn't happen. It's a sign that something is really
wrong.
"""
def __init__(self, msg, type, value, start_pos):
Exception.__init__(self, "%s: type=%r, value=%r, start_pos=%r" %
(msg, tokenize.tok_name[type], value, start_pos))
self.msg = msg
self.type = type
self.value = value
self.start_pos = start_pos
class Stack(list):
def get_tos_nodes(self):
tos = self[-1]
return tos[2][1]
def token_to_ilabel(grammar, type_, value):
# Map from token to label
if type_ == tokenize.NAME:
# Check for reserved words (keywords)
try:
return grammar.keywords[value]
except KeyError:
pass
try:
return grammar.tokens[type_]
except KeyError:
return None
class PgenParser(object):
"""Parser engine.
The proper usage sequence is:
p = Parser(grammar, [converter]) # create instance
p.setup([start]) # prepare for parsing
<for each input token>:
if p.add_token(...): # parse a token
break
root = p.rootnode # root of abstract syntax tree
A Parser instance may be reused by calling setup() repeatedly.
A Parser instance contains state pertaining to the current token
sequence, and should not be used concurrently by different threads
to parse separate token sequences.
See driver.py for how to get input tokens by tokenizing a file or
string.
Parsing is complete when add_token() returns True; the root of the
abstract syntax tree can then be retrieved from the rootnode
instance variable. When a syntax error occurs, error_recovery()
is called. There is no error recovery; the parser cannot be used
after a syntax error was reported (but it can be reinitialized by
calling setup()).
"""
def __init__(self, grammar, convert_node, convert_leaf, error_recovery, start):
"""Constructor.
The grammar argument is a grammar.Grammar instance; see the
grammar module for more information.
The parser is not ready yet for parsing; you must call the
setup() method to get it started.
The optional convert argument is a function mapping concrete
syntax tree nodes to abstract syntax tree nodes. If not
given, no conversion is done and the syntax tree produced is
the concrete syntax tree. If given, it must be a function of
two arguments, the first being the grammar (a grammar.Grammar
instance), and the second being the concrete syntax tree node
to be converted. The syntax tree is converted from the bottom
up.
A concrete syntax tree node is a (type, nodes) tuple, where
type is the node type (a token or symbol number) and nodes
is a list of children for symbols, and None for tokens.
An abstract syntax tree node may be anything; this is entirely
up to the converter function.
"""
self.grammar = grammar
self.convert_node = convert_node
self.convert_leaf = convert_leaf
# Each stack entry is a tuple: (dfa, state, node).
# A node is a tuple: (type, children),
# where children is a list of nodes or None
newnode = (start, [])
stackentry = (self.grammar.dfas[start], 0, newnode)
self.stack = Stack([stackentry])
self.rootnode = None
self.error_recovery = error_recovery
def parse(self, tokens):
for type_, value, start_pos, prefix in tokens:
if self.add_token(type_, value, start_pos, prefix):
break
else:
# We never broke out -- EOF is too soon -- Unfinished statement.
# However, the error recovery might have added the token again, if
# the stack is empty, we're fine.
if self.stack:
raise InternalParseError("incomplete input", type_, value, start_pos)
return self.rootnode
def add_token(self, type_, value, start_pos, prefix):
"""Add a token; return True if this is the end of the program."""
ilabel = token_to_ilabel(self.grammar, type_, value)
# Loop until the token is shifted; may raise exceptions
_gram = self.grammar
_labels = _gram.labels
_push = self._push
_pop = self._pop
_shift = self._shift
while True:
dfa, state, node = self.stack[-1]
states, first = dfa
arcs = states[state]
# Look for a state with this label
for i, newstate in arcs:
t, v = _labels[i]
if ilabel == i:
# Look it up in the list of labels
assert t < 256
# Shift a token; we're done with it
_shift(type_, value, newstate, prefix, start_pos)
# Pop while we are in an accept-only state
state = newstate
while states[state] == [(0, state)]:
_pop()
if not self.stack:
# Done parsing!
return True
dfa, state, node = self.stack[-1]
states, first = dfa
# Done with this token
return False
elif t >= 256:
# See if it's a symbol and if we're in its first set
itsdfa = _gram.dfas[t]
itsstates, itsfirst = itsdfa
if ilabel in itsfirst:
# Push a symbol
_push(t, itsdfa, newstate)
break # To continue the outer while loop
else:
if (0, state) in arcs:
# An accepting state, pop it and try something else
_pop()
if not self.stack:
# Done parsing, but another token is input
raise InternalParseError("too much input", type_, value, start_pos)
else:
self.error_recovery(self.grammar, self.stack, arcs, type_,
value, start_pos, prefix, self.add_token)
break
def _shift(self, type_, value, newstate, prefix, start_pos):
"""Shift a token. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = self.convert_leaf(self.grammar, type_, value, prefix, start_pos)
node[-1].append(newnode)
self.stack[-1] = (dfa, newstate, node)
def _push(self, type_, newdfa, newstate):
"""Push a nonterminal. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = (type_, [])
self.stack[-1] = (dfa, newstate, node)
self.stack.append((newdfa, 0, newnode))
def _pop(self):
"""Pop a nonterminal. (Internal)"""
popdfa, popstate, (type_, children) = self.stack.pop()
# If there's exactly one child, return that child instead of creating a
# new node. We still create expr_stmt and file_input though, because a
# lot of Jedi depends on its logic.
if len(children) == 1:
newnode = children[0]
else:
newnode = self.convert_node(self.grammar, type_, children)
try:
# Equal to:
# dfa, state, node = self.stack[-1]
# symbol, children = node
self.stack[-1][2][1].append(newnode)
except IndexError:
# Stack is empty, set the rootnode.
self.rootnode = newnode


@@ -1,399 +0,0 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.
from parso.pgen2 import grammar
from parso.python import token
from parso.python import tokenize
from parso.utils import parse_version_string
class ParserGenerator(object):
def __init__(self, bnf_text, token_namespace):
self._bnf_text = bnf_text
self.generator = tokenize.tokenize(
bnf_text,
version_info=parse_version_string('3.6')
)
self._gettoken() # Initialize lookahead
self.dfas, self.startsymbol = self._parse()
self.first = {} # map from symbol name to set of tokens
self._addfirstsets()
self._token_namespace = token_namespace
def make_grammar(self):
c = grammar.Grammar(self._bnf_text)
names = list(self.dfas.keys())
names.sort()
names.remove(self.startsymbol)
names.insert(0, self.startsymbol)
for name in names:
i = 256 + len(c.symbol2number)
c.symbol2number[name] = i
c.number2symbol[i] = name
for name in names:
dfa = self.dfas[name]
states = []
for state in dfa:
arcs = []
for label, next in state.arcs.items():
arcs.append((self._make_label(c, label), dfa.index(next)))
if state.isfinal:
arcs.append((0, dfa.index(state)))
states.append(arcs)
c.states.append(states)
c.dfas[c.symbol2number[name]] = (states, self._make_first(c, name))
c.start = c.symbol2number[self.startsymbol]
return c
def _make_first(self, c, name):
rawfirst = self.first[name]
first = {}
for label in rawfirst:
ilabel = self._make_label(c, label)
##assert ilabel not in first # XXX failed on <> ... !=
first[ilabel] = 1
return first
def _make_label(self, c, label):
# XXX Maybe this should be a method on a subclass of converter?
ilabel = len(c.labels)
if label[0].isalpha():
# Either a symbol name or a named token
if label in c.symbol2number:
# A symbol name (a non-terminal)
if label in c.symbol2label:
return c.symbol2label[label]
else:
c.labels.append((c.symbol2number[label], None))
c.symbol2label[label] = ilabel
c.label2symbol[ilabel] = label
return ilabel
else:
# A named token (NAME, NUMBER, STRING)
itoken = getattr(self._token_namespace, label, None)
assert isinstance(itoken, int), label
if itoken in c.tokens:
return c.tokens[itoken]
else:
c.labels.append((itoken, None))
c.tokens[itoken] = ilabel
return ilabel
else:
# Either a keyword or an operator
assert label[0] in ('"', "'"), label
value = eval(label)
if value[0].isalpha():
# A keyword
if value in c.keywords:
return c.keywords[value]
else:
# TODO this might be an issue?! Using token.NAME here?
c.labels.append((token.NAME, value))
c.keywords[value] = ilabel
return ilabel
else:
# An operator (any non-numeric token)
itoken = self._token_namespace.generate_token_id(value)
if itoken in c.tokens:
return c.tokens[itoken]
else:
c.labels.append((itoken, None))
c.tokens[itoken] = ilabel
return ilabel
def _addfirstsets(self):
names = list(self.dfas.keys())
names.sort()
for name in names:
if name not in self.first:
self._calcfirst(name)
#print name, self.first[name].keys()
def _calcfirst(self, name):
dfa = self.dfas[name]
self.first[name] = None # dummy to detect left recursion
state = dfa[0]
totalset = {}
overlapcheck = {}
for label, next in state.arcs.items():
if label in self.dfas:
if label in self.first:
fset = self.first[label]
if fset is None:
raise ValueError("recursion for rule %r" % name)
else:
self._calcfirst(label)
fset = self.first[label]
totalset.update(fset)
overlapcheck[label] = fset
else:
totalset[label] = 1
overlapcheck[label] = {label: 1}
inverse = {}
for label, itsfirst in overlapcheck.items():
for symbol in itsfirst:
if symbol in inverse:
raise ValueError("rule %s is ambiguous; %s is in the"
" first sets of %s as well as %s" %
(name, symbol, label, inverse[symbol]))
inverse[symbol] = label
self.first[name] = totalset
def _parse(self):
dfas = {}
startsymbol = None
# MSTART: (NEWLINE | RULE)* ENDMARKER
while self.type != token.ENDMARKER:
while self.type == token.NEWLINE:
self._gettoken()
# RULE: NAME ':' RHS NEWLINE
name = self._expect(token.NAME)
self._expect(token.COLON)
a, z = self._parse_rhs()
self._expect(token.NEWLINE)
#self._dump_nfa(name, a, z)
dfa = self._make_dfa(a, z)
#self._dump_dfa(name, dfa)
# oldlen = len(dfa)
self._simplify_dfa(dfa)
# newlen = len(dfa)
dfas[name] = dfa
#print name, oldlen, newlen
if startsymbol is None:
startsymbol = name
return dfas, startsymbol
def _make_dfa(self, start, finish):
# To turn an NFA into a DFA, we define the states of the DFA
# to correspond to *sets* of states of the NFA. Then do some
# state reduction. Let's represent sets as dicts with 1 for
# values.
assert isinstance(start, NFAState)
assert isinstance(finish, NFAState)
def closure(state):
base = {}
addclosure(state, base)
return base
def addclosure(state, base):
assert isinstance(state, NFAState)
if state in base:
return
base[state] = 1
for label, next in state.arcs:
if label is None:
addclosure(next, base)
states = [DFAState(closure(start), finish)]
for state in states: # NB states grows while we're iterating
arcs = {}
for nfastate in state.nfaset:
for label, next in nfastate.arcs:
if label is not None:
addclosure(next, arcs.setdefault(label, {}))
for label, nfaset in arcs.items():
for st in states:
if st.nfaset == nfaset:
break
else:
st = DFAState(nfaset, finish)
states.append(st)
state.addarc(st, label)
return states # List of DFAState instances; first one is start
def _dump_nfa(self, name, start, finish):
print("Dump of NFA for", name)
todo = [start]
for i, state in enumerate(todo):
print(" State", i, state is finish and "(final)" or "")
for label, next in state.arcs:
if next in todo:
j = todo.index(next)
else:
j = len(todo)
todo.append(next)
if label is None:
print(" -> %d" % j)
else:
print(" %s -> %d" % (label, j))
def _dump_dfa(self, name, dfa):
print("Dump of DFA for", name)
for i, state in enumerate(dfa):
print(" State", i, state.isfinal and "(final)" or "")
for label, next in state.arcs.items():
print(" %s -> %d" % (label, dfa.index(next)))
def _simplify_dfa(self, dfa):
# This is not theoretically optimal, but works well enough.
# Algorithm: repeatedly look for two states that have the same
# set of arcs (same labels pointing to the same nodes) and
# unify them, until things stop changing.
# dfa is a list of DFAState instances
changes = True
while changes:
changes = False
for i, state_i in enumerate(dfa):
for j in range(i + 1, len(dfa)):
state_j = dfa[j]
if state_i == state_j:
#print " unify", i, j
del dfa[j]
for state in dfa:
state.unifystate(state_j, state_i)
changes = True
break
def _parse_rhs(self):
# RHS: ALT ('|' ALT)*
a, z = self._parse_alt()
if self.value != "|":
return a, z
else:
aa = NFAState()
zz = NFAState()
aa.addarc(a)
z.addarc(zz)
while self.value == "|":
self._gettoken()
a, z = self._parse_alt()
aa.addarc(a)
z.addarc(zz)
return aa, zz
def _parse_alt(self):
# ALT: ITEM+
a, b = self._parse_item()
while (self.value in ("(", "[") or
self.type in (token.NAME, token.STRING)):
c, d = self._parse_item()
b.addarc(c)
b = d
return a, b
def _parse_item(self):
# ITEM: '[' RHS ']' | ATOM ['+' | '*']
if self.value == "[":
self._gettoken()
a, z = self._parse_rhs()
self._expect(token.RSQB)
a.addarc(z)
return a, z
else:
a, z = self._parse_atom()
value = self.value
if value not in ("+", "*"):
return a, z
self._gettoken()
z.addarc(a)
if value == "+":
return a, z
else:
return a, a
def _parse_atom(self):
# ATOM: '(' RHS ')' | NAME | STRING
if self.value == "(":
self._gettoken()
a, z = self._parse_rhs()
self._expect(token.RPAR)
return a, z
elif self.type in (token.NAME, token.STRING):
a = NFAState()
z = NFAState()
a.addarc(z, self.value)
self._gettoken()
return a, z
else:
self._raise_error("expected (...) or NAME or STRING, got %s/%s",
self.type, self.value)
def _expect(self, type):
if self.type != type:
self._raise_error("expected %s, got %s(%s)",
type, self.type, self.value)
value = self.value
self._gettoken()
return value
def _gettoken(self):
tup = next(self.generator)
while tup[0] in (token.COMMENT, token.NL):
tup = next(self.generator)
self.type, self.value, self.begin, prefix = tup
def _raise_error(self, msg, *args):
if args:
try:
msg = msg % args
except:
msg = " ".join([msg] + list(map(str, args)))
line = self._bnf_text.splitlines()[self.begin[0] - 1]
raise SyntaxError(msg, ('<grammar>', self.begin[0],
self.begin[1], line))
class NFAState(object):
def __init__(self):
self.arcs = [] # list of (label, NFAState) pairs
def addarc(self, next, label=None):
assert label is None or isinstance(label, str)
assert isinstance(next, NFAState)
self.arcs.append((label, next))
class DFAState(object):
def __init__(self, nfaset, final):
assert isinstance(nfaset, dict)
assert isinstance(next(iter(nfaset)), NFAState)
assert isinstance(final, NFAState)
self.nfaset = nfaset
self.isfinal = final in nfaset
self.arcs = {} # map from label to DFAState
def addarc(self, next, label):
assert isinstance(label, str)
assert label not in self.arcs
assert isinstance(next, DFAState)
self.arcs[label] = next
def unifystate(self, old, new):
for label, next in self.arcs.items():
if next is old:
self.arcs[label] = new
def __eq__(self, other):
# Equality test -- ignore the nfaset instance variable
assert isinstance(other, DFAState)
if self.isfinal != other.isfinal:
return False
# Can't just return self.arcs == other.arcs, because that
# would invoke this method recursively, with cycles...
if len(self.arcs) != len(other.arcs):
return False
for label, next in self.arcs.items():
if next is not other.arcs.get(label):
return False
return True
__hash__ = None # For Py3 compatibility.
def generate_grammar(bnf_text, token_namespace):
"""
``bnf_text`` is a grammar in extended BNF (using * for repetition, + for
at-least-once repetition, [] for optional parts, | for alternatives and ()
for grouping).
It's not EBNF according to ISO/IEC 14977. It's a dialect Python uses in its
own parser.
"""
p = ParserGenerator(bnf_text, token_namespace)
return p.make_grammar()


@@ -13,10 +13,81 @@ import logging
from parso.utils import split_lines
from parso.python.parser import Parser
from parso.python.tree import EndMarker
from parso.python.tokenize import PythonToken
from parso.python.token import PythonTokenTypes
LOG = logging.getLogger(__name__)
DEBUG_DIFF_PARSER = False
_INDENTATION_TOKENS = 'INDENT', 'ERROR_DEDENT', 'DEDENT'
def _get_previous_leaf_if_indentation(leaf):
while leaf and leaf.type == 'error_leaf' \
and leaf.token_type in _INDENTATION_TOKENS:
leaf = leaf.get_previous_leaf()
return leaf
def _get_next_leaf_if_indentation(leaf):
while leaf and leaf.type == 'error_leaf' \
and leaf.token_type in _INDENTATION_TOKENS:
leaf = leaf.get_next_leaf()
return leaf
def _assert_valid_graph(node):
"""
Checks if the parent/children relationship is correct.
This is a check that only runs during debugging/testing.
"""
try:
children = node.children
except AttributeError:
# Ignoring INDENT is necessary, because indent/dedent tokens don't
# contain a value/prefix; they are only around because of the tokenizer.
if node.type == 'error_leaf' and node.token_type in _INDENTATION_TOKENS:
assert not node.value
assert not node.prefix
return
# Calculate the content between two start positions.
previous_leaf = _get_previous_leaf_if_indentation(node.get_previous_leaf())
if previous_leaf is None:
content = node.prefix
previous_start_pos = 1, 0
else:
assert previous_leaf.end_pos <= node.start_pos, \
(previous_leaf, node)
content = previous_leaf.value + node.prefix
previous_start_pos = previous_leaf.start_pos
if '\n' in content or '\r' in content:
splitted = split_lines(content)
line = previous_start_pos[0] + len(splitted) - 1
actual = line, len(splitted[-1])
else:
actual = previous_start_pos[0], previous_start_pos[1] + len(content)
assert node.start_pos == actual, (node.start_pos, actual)
else:
for child in children:
assert child.parent == node, (node, child)
_assert_valid_graph(child)
def _get_debug_error_message(module, old_lines, new_lines):
current_lines = split_lines(module.get_code(), keepends=True)
current_diff = difflib.unified_diff(new_lines, current_lines)
old_new_diff = difflib.unified_diff(old_lines, new_lines)
import parso
return (
"There's an issue with the diff parser. Please "
"report (parso v%s) - Old/New:\n%s\nActual Diff (May be empty):\n%s"
% (parso.__version__, ''.join(old_new_diff), ''.join(current_diff))
)
def _get_last_line(node_or_leaf):
@@ -27,13 +98,21 @@ def _get_last_line(node_or_leaf):
return last_leaf.end_pos[0]
def _skip_dedent_error_leaves(leaf):
while leaf is not None and leaf.type == 'error_leaf' and leaf.token_type == 'DEDENT':
leaf = leaf.get_previous_leaf()
return leaf
def _ends_with_newline(leaf, suffix=''):
leaf = _skip_dedent_error_leaves(leaf)
if leaf.type == 'error_leaf':
typ = leaf.token_type.lower()
else:
typ = leaf.type
return typ == 'newline' or suffix.endswith('\n') or suffix.endswith('\r')
def _flows_finished(pgen_grammar, stack):
@@ -41,32 +120,45 @@ def _flows_finished(pgen_grammar, stack):
if, while, for and try might not be finished, because another part might
still be parsed.
"""
for stack_node in stack:
if stack_node.nonterminal in ('if_stmt', 'while_stmt', 'for_stmt', 'try_stmt'):
return False
return True
def _func_or_class_has_suite(node):
if node.type == 'decorated':
node = node.children[-1]
if node.type in ('async_funcdef', 'async_stmt'):
node = node.children[-1]
return node.type in ('classdef', 'funcdef') and node.children[-1].type == 'suite'
def _suite_or_file_input_is_valid(pgen_grammar, stack):
if not _flows_finished(pgen_grammar, stack):
return False
for stack_node in reversed(stack):
if stack_node.nonterminal == 'decorator':
# A decorator is only valid with the upcoming function.
return False
if stack_node.nonterminal == 'suite':
# If only newline is in the suite, the suite is not valid, yet.
return len(stack_node.nodes) > 1
# Not reaching a suite means that we're dealing with file_input levels
# where there's no need for a valid statement in it. It can also be empty.
return True
def _is_flow_node(node):
if node.type == 'async_stmt':
node = node.children[1]
try:
value = node.children[0].value
except AttributeError:
return False
return value in ('if', 'for', 'while', 'try', 'with')
class _PositionUpdatingFinished(Exception):
@@ -100,7 +192,7 @@ class DiffParser(object):
self._copy_count = 0
self._parser_count = 0
self._nodes_tree = _NodesTree(self._module)
def update(self, old_lines, new_lines):
'''
@@ -129,11 +221,10 @@ class DiffParser(object):
line_length = len(new_lines)
sm = difflib.SequenceMatcher(None, old_lines, self._parser_lines_new)
opcodes = sm.get_opcodes()
LOG.debug('diff parser calculated')
LOG.debug('line_lengths old: %s; new: %s' % (len(old_lines), line_length))
for operation, i1, i2, j1, j2 in opcodes:
LOG.debug('-> code[%s] old[%s:%s] new[%s:%s]',
operation, i1 + 1, i2, j1 + 1, j2)
if j2 == line_length and new_lines[-1] == '':
@@ -152,48 +243,47 @@ class DiffParser(object):
# With this action all change will finally be applied and we have a
# changed module.
self._nodes_tree.close()
if DEBUG_DIFF_PARSER:
# If there is reasonable suspicion that the diff parser is not
# behaving well, this should be enabled.
try:
assert self._module.get_code() == ''.join(new_lines)
_assert_valid_graph(self._module)
except AssertionError:
print(_get_debug_error_message(self._module, old_lines, new_lines))
raise
last_pos = self._module.end_pos[0]
if last_pos != line_length:
raise Exception(
('(%s != %s) ' % (last_pos, line_length))
+ _get_debug_error_message(self._module, old_lines, new_lines)
)
LOG.debug('diff parser end')
return self._module
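From the outside, the diff parser is exercised through parso's caching layer; a hedged sketch mirroring how parso's own diff-parser tests drive it (diff_cache and cache are keyword arguments of Grammar.parse):
import parso

# The first call parses fully and stores the module in the in-memory cache;
# the second call hands the old and new lines to DiffParser.update().
parso.parse("def f():\n    return 1\n", diff_cache=True, cache=True)
module = parso.parse("def f():\n    return 2\n", diff_cache=True, cache=True)
print(module.children[0].type)   # 'funcdef'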
def _enabled_debugging(self, old_lines, lines_new):
if self._module.get_code() != ''.join(lines_new):
LOG.warning('parser issue:\n%s\n%s', ''.join(old_lines), ''.join(lines_new))
def _copy_from_old_parser(self, line_offset, until_line_old, until_line_new):
copied_nodes = [None]
last_until_line = -1
while until_line_new > self._nodes_tree.parsed_until_line:
parsed_until_line_old = self._nodes_tree.parsed_until_line - line_offset
line_stmt = self._get_old_line_stmt(parsed_until_line_old + 1)
if line_stmt is None:
# Parse 1 line at least. We don't need more, because we just
# want to get into a state where the old parser has statements
# again that can be copied (e.g. not lines within parentheses).
self._parse(self._nodes_tree.parsed_until_line + 1)
else:
p_children = line_stmt.parent.children
index = p_children.index(line_stmt)
from_ = self._nodes_tree.parsed_until_line + 1
copied_nodes = self._nodes_tree.copy_nodes(
p_children[index:],
until_line_old,
line_offset
@@ -202,15 +292,19 @@ class DiffParser(object):
if copied_nodes:
self._copy_count += 1
to = self._nodes_tree.parsed_until_line
LOG.debug('copy old[%s:%s] new[%s:%s]',
copied_nodes[0].start_pos[0],
copied_nodes[-1].end_pos[0] - 1, from_, to)
else:
# We have copied as much as possible (but definitely not too
# much). Therefore we just parse a bit more.
self._parse(self._nodes_tree.parsed_until_line + 1)
# Since there are potential bugs that might loop here endlessly, we
# just stop here.
assert last_until_line != self._nodes_tree.parsed_until_line, last_until_line
last_until_line = self._nodes_tree.parsed_until_line
def _get_old_line_stmt(self, old_line):
leaf = self._module.get_leaf_for_position((old_line, 0), include_prefixes=True)
@@ -221,46 +315,36 @@ class DiffParser(object):
node = leaf
while node.parent.type not in ('file_input', 'suite'):
node = node.parent
# Make sure that if only the `else:` line of an if statement is
# copied that not the whole thing is going to be copied.
if node.start_pos[0] >= old_line:
return node
# Must be on the same line. Otherwise we need to parse that bit.
return None
def _get_before_insertion_node(self):
if self._nodes_stack.is_empty():
return None
line = self._nodes_stack.parsed_until_line + 1
node = self._new_module.get_last_leaf()
while True:
parent = node.parent
if parent.type in ('suite', 'file_input'):
assert node.end_pos[0] <= line
assert node.end_pos[1] == 0 or '\n' in self._prefix
return node
node = parent
def _parse(self, until_line):
"""
Parses at least until the given line, but might just parse more until a
valid state is reached.
"""
last_until_line = 0
while until_line > self._nodes_tree.parsed_until_line:
node = self._try_parse_part(until_line)
nodes = node.children
self._nodes_tree.add_parsed_nodes(nodes)
LOG.debug(
'parse_part from %s to %s (to %s in part parser)',
nodes[0].get_start_pos_of_prefix()[0],
self._nodes_tree.parsed_until_line,
node.end_pos[0] - 1
)
# Since the tokenizer sometimes has bugs, we cannot be sure that
# this loop terminates. Therefore assert that there's always a
# change.
assert last_until_line != self._nodes_tree.parsed_until_line, last_until_line
last_until_line = self._nodes_tree.parsed_until_line
def _try_parse_part(self, until_line):
"""
@@ -271,9 +355,8 @@ class DiffParser(object):
self._parser_count += 1
# TODO speed up, shouldn't copy the whole list all the time.
# memoryview?
parsed_until_line = self._nodes_tree.parsed_until_line
lines_after = self._parser_lines_new[parsed_until_line:]
#print('parse_content', parsed_until_line, lines_after, until_line)
tokens = self._diff_tokenize(
lines_after,
until_line,
@@ -290,10 +373,10 @@ class DiffParser(object):
omitted_first_indent = False
indents = []
tokens = self._tokenizer(lines, (1, 0))
stack = self._active_parser.stack
for typ, string, start_pos, prefix in tokens:
start_pos = start_pos[0] + line_offset, start_pos[1]
if typ == PythonTokenTypes.INDENT:
indents.append(start_pos[1])
if is_first_token:
omitted_first_indent = True
@@ -306,29 +389,36 @@ class DiffParser(object):
# In case of omitted_first_indent, it might not be dedented fully.
# However this is a sign for us that a dedent happened.
if typ == PythonTokenTypes.DEDENT \
or typ == PythonTokenTypes.ERROR_DEDENT \
and omitted_first_indent and len(indents) == 1:
indents.pop()
if omitted_first_indent and not indents:
# We are done here, only thing that can come now is an
# endmarker or another dedented code block.
typ, string, start_pos, prefix = next(tokens)
if '\n' in prefix or '\r' in prefix:
prefix = re.sub(r'[^\n\r]+\Z', '', prefix)
else:
assert start_pos[1] >= len(prefix), repr(prefix)
if start_pos[1] - len(prefix) == 0:
prefix = ''
yield PythonToken(
PythonTokenTypes.ENDMARKER, '',
(start_pos[0] + line_offset, 0),
prefix
)
break
elif typ == PythonTokenTypes.NEWLINE and start_pos[0] >= until_line:
yield PythonToken(typ, string, start_pos, prefix)
# Check if the parser is actually in a valid suite state.
if _suite_or_file_input_is_valid(self._pgen_grammar, stack):
start_pos = start_pos[0] + 1, 0
while len(indents) > int(omitted_first_indent):
indents.pop()
yield PythonToken(PythonTokenTypes.DEDENT, '', start_pos, '')
yield PythonToken(PythonTokenTypes.ENDMARKER, '', start_pos, '')
break
else:
continue
@@ -336,17 +426,23 @@ class DiffParser(object):
yield PythonToken(typ, string, start_pos, prefix)
class _NodesTreeNode(object):
_ChildrenGroup = namedtuple('_ChildrenGroup', 'prefix children line_offset last_line_offset_leaf')
def __init__(self, tree_node, parent=None):
self.tree_node = tree_node
self._children_groups = []
self.parent = parent
self._node_children = []
def finish(self):
children = []
for prefix, children_part, line_offset, last_line_offset_leaf in self._children_groups:
first_leaf = _get_next_leaf_if_indentation(
children_part[0].get_first_leaf()
)
first_leaf.prefix = prefix + first_leaf.prefix
if line_offset != 0:
try:
_update_positions(
@@ -359,59 +455,61 @@ class _NodesStackNode(object):
for node in children:
node.parent = self.tree_node
for node_child in self._node_children:
node_child.finish()
def add_child_node(self, child_node):
self._node_children.append(child_node)
def add_tree_nodes(self, prefix, children, line_offset=0, last_line_offset_leaf=None):
if last_line_offset_leaf is None:
last_line_offset_leaf = children[-1].get_last_leaf()
group = self._ChildrenGroup(prefix, children, line_offset, last_line_offset_leaf)
self._children_groups.append(group)
def get_last_line(self, suffix):
line = 0
if self._children_groups:
children_group = self._children_groups[-1]
last_leaf = _get_previous_leaf_if_indentation(
children_group.last_line_offset_leaf
)
line = last_leaf.end_pos[0] + children_group.line_offset
# Newlines end on the next line, which means that they would cover
# the next line. That line is not fully parsed at this point.
if _ends_with_newline(last_leaf, suffix):
line -= 1
line += len(split_lines(suffix)) - 1
if suffix and not suffix.endswith('\n') and not suffix.endswith('\r'):
# This is the end of a file (that doesn't end with a newline).
line += 1
if self._node_children:
return max(line, self._node_children[-1].get_last_line(suffix))
return line
class _NodesTree(object):
def __init__(self, module):
self._base_node = _NodesTreeNode(module)
self._working_stack = [self._base_node]
self._module = module
self._prefix_remainder = ''
self.prefix = ''
def is_empty(self):
return not self._base_node.children
@property
def parsed_until_line(self):
return self._working_stack[-1].get_last_line(self.prefix)
def _get_insertion_node(self, indentation_node):
indentation = indentation_node.start_pos[1]
# find insertion node
while True:
node = self._working_stack[-1]
tree_node = node.tree_node
if tree_node.type == 'suite':
# A suite starts with NEWLINE, ...
@@ -426,53 +524,57 @@ class _NodesStack(object):
elif tree_node.type == 'file_input':
return node
self._working_stack.pop()
def add_parsed_nodes(self, tree_nodes):
old_prefix = self.prefix
tree_nodes = self._remove_endmarker(tree_nodes)
if not tree_nodes:
self.prefix = old_prefix + self.prefix
return
assert tree_nodes[0].type != 'newline'
node = self._get_insertion_node(tree_nodes[0])
assert node.tree_node.type in ('suite', 'file_input')
node.add_tree_nodes(old_prefix, tree_nodes)
# tos = Top of stack
self._update_tos(tree_nodes[-1])
def _update_tos(self, tree_node):
if tree_node.type in ('suite', 'file_input'):
new_tos = _NodesTreeNode(tree_node)
new_tos.add_tree_nodes('', list(tree_node.children))
self._working_stack[-1].add_child_node(new_tos)
self._working_stack.append(new_tos)
self._update_tos(tree_node.children[-1])
elif _func_or_class_has_suite(tree_node):
self._update_tos(tree_node.children[-1])
def _remove_endmarker(self, tree_nodes):
"""
Helps cleaning up the tree nodes that get inserted.
"""
last_leaf = tree_nodes[-1].get_last_leaf()
is_endmarker = last_leaf.type == 'endmarker'
self._prefix_remainder = ''
if is_endmarker:
separation = max(last_leaf.prefix.rfind('\n'), last_leaf.prefix.rfind('\r'))
if separation > -1:
# Remove the whitespace part of the prefix after a newline.
# That is not relevant if parentheses were opened. Always parse
# until the end of a line.
last_leaf.prefix, self._prefix_remainder = \
last_leaf.prefix[:separation + 1], last_leaf.prefix[separation + 1:]
first_leaf = tree_nodes[0].get_first_leaf()
first_leaf.prefix = self.prefix + first_leaf.prefix
self.prefix = ''
if is_endmarker:
self.prefix = last_leaf.prefix
tree_nodes = tree_nodes[:-1]
return tree_nodes
def copy_nodes(self, tree_nodes, until_line, line_offset):
@@ -481,99 +583,128 @@ class _NodesStack(object):
Returns the number of tree nodes that were copied.
"""
if tree_nodes[0].type in ('error_leaf', 'error_node'):
# Avoid copying errors in the beginning. Can lead to a lot of
# issues.
return []
self._get_insertion_node(tree_nodes[0])
new_nodes, self._working_stack, self.prefix = self._copy_nodes(
list(self._working_stack),
tree_nodes,
until_line,
line_offset,
self.prefix,
)
return new_nodes
def _copy_nodes(self, tos, nodes, until_line, line_offset):
def _copy_nodes(self, working_stack, nodes, until_line, line_offset, prefix=''):
new_nodes = []
new_tos = tos
new_prefix = ''
for node in nodes:
if node.type == 'endmarker':
# Endmarkers just distort all the checks below. Remove them.
if node.start_pos[0] > until_line:
break
if node.start_pos[0] > until_line:
if node.type == 'endmarker':
break
if node.type == 'error_leaf' and node.token_type in ('DEDENT', 'ERROR_DEDENT'):
break
# TODO this check might take a bit of time for large files. We
# might want to change this to do more intelligent guessing or
# binary search.
if _get_last_line(node) > until_line:
# We can split up functions and classes later.
if node.type in ('classdef', 'funcdef') and node.children[-1].type == 'suite':
if _func_or_class_has_suite(node):
new_nodes.append(node)
break
new_nodes.append(node)
if not new_nodes:
return [], tos
return [], working_stack, prefix
tos = working_stack[-1]
last_node = new_nodes[-1]
line_offset_index = -1
if last_node.type in ('classdef', 'funcdef'):
suite = last_node.children[-1]
if suite.type == 'suite':
suite_tos = _NodesStackNode(suite)
# Don't need to pass line_offset here, it's already done by the
# parent.
suite_nodes, recursive_tos = self._copy_nodes(
suite_tos, suite.children, until_line, line_offset)
if len(suite_nodes) < 2:
# A suite with only a newline is not valid.
new_nodes.pop()
else:
suite_tos.parent = tos
new_tos = recursive_tos
line_offset_index = -2
had_valid_suite_last = False
if _func_or_class_has_suite(last_node):
suite = last_node
while suite.type != 'suite':
suite = suite.children[-1]
elif (new_nodes[-1].type in ('error_leaf', 'error_node') or
_is_flow_node(new_nodes[-1])):
# Error leafs/nodes don't have a defined start/end. Error
# nodes might not end with a newline (e.g. if there's an
# open `(`). Therefore ignore all of them unless they are
# followed by a valid parser state.
# If we copy flows at the end, they might be continued
# after the copy limit (in the new parser).
# In this while loop we try to remove until we find a newline.
new_nodes.pop()
while new_nodes:
last_node = new_nodes[-1]
if last_node.get_last_leaf().type == 'newline':
break
suite_tos = _NodesTreeNode(suite)
# Don't need to pass line_offset here, it's already done by the
# parent.
suite_nodes, new_working_stack, new_prefix = self._copy_nodes(
working_stack + [suite_tos], suite.children, until_line, line_offset
)
if len(suite_nodes) < 2:
# A suite with only a newline is not valid.
new_nodes.pop()
new_prefix = ''
else:
assert new_nodes
tos.add_child_node(suite_tos)
working_stack = new_working_stack
had_valid_suite_last = True
if new_nodes:
try:
last_line_offset_leaf = new_nodes[line_offset_index].get_last_leaf()
except IndexError:
line_offset = 0
# In this case we don't have to calculate an offset, because
# there are no children to be managed.
last_line_offset_leaf = None
tos.add(new_nodes, line_offset, last_line_offset_leaf)
return new_nodes, new_tos
last_node = new_nodes[-1]
if (last_node.type in ('error_leaf', 'error_node') or
_is_flow_node(new_nodes[-1])):
# Error leafs/nodes don't have a defined start/end. Error
# nodes might not end with a newline (e.g. if there's an
# open `(`). Therefore ignore all of them unless they are
# followed by a valid parser state.
# If we copy flows at the end, they might be continued
# after the copy limit (in the new parser).
# In this while loop we try to remove until we find a newline.
new_prefix = ''
new_nodes.pop()
while new_nodes:
last_node = new_nodes[-1]
if last_node.get_last_leaf().type == 'newline':
break
new_nodes.pop()
def _update_tos(self, tree_node):
if tree_node.type in ('suite', 'file_input'):
self._tos = _NodesStackNode(tree_node, self._tos)
self._tos.add(list(tree_node.children))
self._update_tos(tree_node.children[-1])
elif tree_node.type in ('classdef', 'funcdef'):
self._update_tos(tree_node.children[-1])
if new_nodes:
if not _ends_with_newline(new_nodes[-1].get_last_leaf()) and not had_valid_suite_last:
p = new_nodes[-1].get_next_leaf().prefix
# We are not allowed to remove the newline at the end of the
# line, otherwise it's going to be missing. This happens e.g.
# if an opening bracket earlier in the code moves newlines into
# prefixes.
new_prefix = split_lines(p, keepends=True)[0]
if had_valid_suite_last:
last = new_nodes[-1]
if last.type == 'decorated':
last = last.children[-1]
if last.type in ('async_funcdef', 'async_stmt'):
last = last.children[-1]
last_line_offset_leaf = last.children[-2].get_last_leaf()
assert last_line_offset_leaf == ':'
else:
last_line_offset_leaf = new_nodes[-1].get_last_leaf()
tos.add_tree_nodes(prefix, new_nodes, line_offset, last_line_offset_leaf)
prefix = new_prefix
self._prefix_remainder = ''
return new_nodes, working_stack, prefix
def close(self):
while self._tos is not None:
self._close_tos()
self._base_node.finish()
# Add an endmarker.
try:
last_leaf = self._module.get_last_leaf()
end_pos = list(last_leaf.end_pos)
except IndexError:
end_pos = [1, 0]
else:
last_leaf = _skip_dedent_error_leaves(last_leaf)
end_pos = list(last_leaf.end_pos)
lines = split_lines(self.prefix)
assert len(lines) > 0
if len(lines) == 1:
@@ -582,6 +713,6 @@ class _NodesStack(object):
end_pos[0] += len(lines) - 1
end_pos[1] = len(lines[-1])
endmarker = EndMarker('', tuple(end_pos), self.prefix + self._last_prefix)
endmarker = EndMarker('', tuple(end_pos), self.prefix + self._prefix_remainder)
endmarker.parent = self._module
self._module.children.append(endmarker)
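For orientation, this is the machinery behind parso's incremental parsing. A minimal sketch of how it is exercised through the public API (assuming parso 0.5; in real use Jedi also passes a file path so the cache entry has a stable key):

    import parso

    grammar = parso.load_grammar()
    old = "def f():\n    return 1\n"
    new = "def f():\n    return 2\n"

    # The first parse fills the parser cache; the second one goes through the
    # diff parser and the _NodesTree above, copying unchanged nodes instead of
    # re-parsing them.
    grammar.parse(old, diff_cache=True)
    module = grammar.parse(new, diff_cache=True)
    print(module.children[0].type)  # 'funcdef'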


@@ -6,7 +6,6 @@ from contextlib import contextmanager
from parso.normalizer import Normalizer, NormalizerConfig, Issue, Rule
from parso.python.tree import search_ancestor
from parso.parser import ParserSyntaxError
_BLOCK_STMTS = ('if_stmt', 'while_stmt', 'for_stmt', 'try_stmt', 'with_stmt')
_STAR_EXPR_PARENTS = ('testlist_star_expr', 'testlist_comp', 'exprlist')
@@ -17,6 +16,7 @@ ALLOWED_FUTURES = (
'all_feature_names', 'nested_scopes', 'generators', 'division',
'absolute_import', 'with_statement', 'print_function', 'unicode_literals',
)
_COMP_FOR_TYPES = ('comp_for', 'sync_comp_for')
def _iter_stmts(scope):
@@ -35,12 +35,12 @@ def _iter_stmts(scope):
def _get_comprehension_type(atom):
first, second = atom.children[:2]
if second.type == 'testlist_comp' and second.children[1].type == 'comp_for':
if second.type == 'testlist_comp' and second.children[1].type in _COMP_FOR_TYPES:
if first == '[':
return 'list comprehension'
else:
return 'generator expression'
elif second.type == 'dictorsetmaker' and second.children[-1].type == 'comp_for':
elif second.type == 'dictorsetmaker' and second.children[-1].type in _COMP_FOR_TYPES:
if second.children[1] == ':':
return 'dict comprehension'
else:
@@ -107,6 +107,7 @@ def _iter_definition_exprs_from_lists(exprlist):
yield child
def _get_expr_stmt_definition_exprs(expr_stmt):
exprs = []
for list_ in expr_stmt.children[:-2:2]:
@@ -273,13 +274,12 @@ class ErrorFinder(Normalizer):
def visit(self, node):
if node.type == 'error_node':
with self.visit_node(node):
# Don't need to investigate the inners of an error node. We
# might find errors in there that should be ignored, because
# the error node itself already shows that there's an issue.
return ''
return super(ErrorFinder, self).visit(node)
@contextmanager
def visit_node(self, node):
self._check_type_rules(node)
@@ -306,12 +306,12 @@ class ErrorFinder(Normalizer):
def visit_leaf(self, leaf):
if leaf.type == 'error_leaf':
if leaf.original_type in ('indent', 'error_dedent'):
if leaf.token_type in ('INDENT', 'ERROR_DEDENT'):
# Indents/Dedents itself never have a prefix. They are just
# "pseudo" tokens that get removed by the syntax tree later.
# Therefore in case of an error we also have to check for this.
spacing = list(leaf.get_next_leaf()._split_prefix())[-1]
if leaf.original_type == 'indent':
if leaf.token_type == 'INDENT':
message = 'unexpected indent'
else:
message = 'unindent does not match any outer indentation level'
@@ -455,23 +455,19 @@ class _YieldFromCheck(SyntaxRule):
def is_issue(self, leaf):
return leaf.parent.type == 'yield_arg' \
and self._normalizer.context.is_async_funcdef()
@ErrorFinder.register_rule(type='name')
class _NameChecks(SyntaxRule):
message = 'cannot assign to __debug__'
message_keyword = 'assignment to keyword'
message_none = 'cannot assign to None'
def is_issue(self, leaf):
self._normalizer.context.add_name(leaf)
if leaf.value == '__debug__' and leaf.is_definition():
if self._normalizer.version < (3, 0):
return True
else:
self.add_issue(leaf, message=self.message_keyword)
return True
if leaf.value == 'None' and self._normalizer.version < (3, 0) \
and leaf.is_definition():
self.add_issue(leaf, message=self.message_none)
@@ -539,7 +535,7 @@ class _StarStarCheck(SyntaxRule):
def is_issue(self, leaf):
if leaf.parent.type == 'dictorsetmaker':
comp_for = leaf.get_next_sibling().get_next_sibling()
return comp_for is not None and comp_for.type == 'comp_for'
return comp_for is not None and comp_for.type in _COMP_FOR_TYPES
@ErrorFinder.register_rule(value='yield')
@@ -563,17 +559,21 @@ class _ReturnAndYieldChecks(SyntaxRule):
and self._normalizer.version == (3, 5):
self.add_issue(self.get_node(leaf), message=self.message_async_yield)
@ErrorFinder.register_rule(type='atom')
@ErrorFinder.register_rule(type='strings')
class _BytesAndStringMix(SyntaxRule):
# e.g. 's' b''
message = "cannot mix bytes and nonbytes literals"
def _is_bytes_literal(self, string):
if string.type == 'fstring':
return False
return 'b' in string.string_prefix.lower()
def is_issue(self, node):
first = node.children[0]
if first.type == 'string' and self._normalizer.version >= (3, 0):
# In Python 2 it's allowed to mix bytes and unicode.
if self._normalizer.version >= (3, 0):
first_is_bytes = self._is_bytes_literal(first)
for string in node.children[1:]:
if first_is_bytes != self._is_bytes_literal(string):
@@ -614,7 +614,7 @@ class _FutureImportRule(SyntaxRule):
allowed_futures.append('generator_stop')
if name == 'braces':
self.add_issue(node, message = "not a chance")
self.add_issue(node, message="not a chance")
elif name == 'barry_as_FLUFL':
m = "Seriously I'm not implementing this :) ~ Dave"
self.add_issue(node, message=m)
@@ -634,7 +634,7 @@ class _StarExprRule(SyntaxRule):
return True
if node.parent.type == 'testlist_comp':
# [*[] for a in [1]]
if node.parent.children[1].type == 'comp_for':
if node.parent.children[1].type in _COMP_FOR_TYPES:
self.add_issue(node, message=self.message_iterable_unpacking)
if self._normalizer.version <= (3, 4):
n = search_ancestor(node, 'for_stmt', 'expr_stmt')
@@ -711,8 +711,8 @@ class _AnnotatorRule(SyntaxRule):
if not (lhs.type == 'name'
# subscript/attributes are allowed
or lhs.type in ('atom_expr', 'power')
and trailer.type == 'trailer'
and trailer.children[0] != '('):
return True
else:
# x, y: str
@@ -727,10 +727,16 @@ class _ArgumentRule(SyntaxRule):
if node.children[1] == '=' and first.type != 'name':
if first.type == 'lambdef':
# f(lambda: 1=1)
message = "lambda cannot contain assignment"
if self._normalizer.version < (3, 8):
message = "lambda cannot contain assignment"
else:
message = 'expression cannot contain assignment, perhaps you meant "=="?'
else:
# f(+x=1)
message = "keyword can't be an expression"
if self._normalizer.version < (3, 8):
message = "keyword can't be an expression"
else:
message = 'expression cannot contain assignment, perhaps you meant "=="?'
self.add_issue(first, message=message)
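A rough illustration of the version switch above, using the public error API (a sketch; the expected wording follows from the branches shown here):

    import parso

    code = "f(lambda: 1=1)\n"
    for version in ('3.6', '3.8'):
        grammar = parso.load_grammar(version=version)
        issues = grammar.iter_errors(grammar.parse(code))
        print(version, [i.message for i in issues])
    # 3.6 should report "lambda cannot contain assignment", 3.8 the newer
    # 'expression cannot contain assignment, perhaps you meant "=="?' wording.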
@@ -744,12 +750,17 @@ class _NonlocalModuleLevelRule(SyntaxRule):
@ErrorFinder.register_rule(type='arglist')
class _ArglistRule(SyntaxRule):
message = "Generator expression must be parenthesized if not sole argument"
@property
def message(self):
if self._normalizer.version < (3, 7):
return "Generator expression must be parenthesized if not sole argument"
else:
return "Generator expression must be parenthesized"
def is_issue(self, node):
first_arg = node.children[0]
if first_arg.type == 'argument' \
and first_arg.children[1].type == 'comp_for':
and first_arg.children[1].type in _COMP_FOR_TYPES:
# e.g. foo(x for x in [], b)
return len(node.children) >= 2
else:
@@ -778,7 +789,8 @@ class _ArglistRule(SyntaxRule):
if first == '*':
if kw_unpacking_only:
# foo(**kwargs, *args)
message = "iterable argument unpacking follows keyword argument unpacking"
message = "iterable argument unpacking " \
"follows keyword argument unpacking"
self.add_issue(argument, message=message)
else:
kw_unpacking_only = True
@@ -800,6 +812,7 @@ class _ArglistRule(SyntaxRule):
message = "positional argument follows keyword argument"
self.add_issue(argument, message=message)
@ErrorFinder.register_rule(type='parameters')
@ErrorFinder.register_rule(type='lambdef')
class _ParameterRule(SyntaxRule):
@@ -837,101 +850,36 @@ class _TryStmtRule(SyntaxRule):
self.add_issue(default_except, message=self.message)
@ErrorFinder.register_rule(type='string')
@ErrorFinder.register_rule(type='fstring')
class _FStringRule(SyntaxRule):
_fstring_grammar = None
message_empty = "f-string: empty expression not allowed" # f'{}'
message_single_closing = "f-string: single '}' is not allowed" # f'}'
message_nested = "f-string: expressions nested too deeply"
message_backslash = "f-string expression part cannot include a backslash" # f'{"\"}' or f'{"\\"}'
message_comment = "f-string expression part cannot include '#'" # f'{#}'
message_unterminated_string = "f-string: unterminated string" # f'{"}'
message_conversion = "f-string: invalid conversion character: expected 's', 'r', or 'a'"
message_incomplete = "f-string: expecting '}'" # f'{'
message_syntax = "invalid syntax"
@classmethod
def _load_grammar(cls):
import parso
def _check_format_spec(self, format_spec, depth):
self._check_fstring_contents(format_spec.children[1:], depth)
if cls._fstring_grammar is None:
cls._fstring_grammar = parso.load_grammar(language='python-f-string')
return cls._fstring_grammar
def _check_fstring_expr(self, fstring_expr, depth):
if depth >= 2:
self.add_issue(fstring_expr, message=self.message_nested)
conversion = fstring_expr.children[2]
if conversion.type == 'fstring_conversion':
name = conversion.children[1]
if name.value not in ('s', 'r', 'a'):
self.add_issue(name, message=self.message_conversion)
format_spec = fstring_expr.children[-2]
if format_spec.type == 'fstring_format_spec':
self._check_format_spec(format_spec, depth + 1)
def is_issue(self, fstring):
if 'f' not in fstring.string_prefix.lower():
return
self._check_fstring_contents(fstring.children[1:-1])
parsed = self._load_grammar().parse_leaf(fstring)
for child in parsed.children:
if child.type == 'expression':
self._check_expression(child)
elif child.type == 'error_node':
next_ = child.get_next_leaf()
if next_.type == 'error_leaf' and next_.original_type == 'unterminated_string':
self.add_issue(next_, message=self.message_unterminated_string)
# At this point nothing more is coming except the error
# leaf that we've already checked here.
break
self.add_issue(child, message=self.message_incomplete)
elif child.type == 'error_leaf':
self.add_issue(child, message=self.message_single_closing)
def _check_python_expr(self, python_expr):
value = python_expr.value
if '\\' in value:
self.add_issue(python_expr, message=self.message_backslash)
return
if '#' in value:
self.add_issue(python_expr, message=self.message_comment)
return
if re.match(r'\s*$', value) is not None:
self.add_issue(python_expr, message=self.message_empty)
return
# This is now nested parsing. We parsed the fstring and now
# we're parsing Python again.
try:
# CPython has a bit of a special way to parse Python code within
# f-strings. It wraps the code in brackets to make sure that
# whitespace doesn't make problems (indentation/newlines).
# Just use that algorithm as well here and adapt start positions.
start_pos = python_expr.start_pos
start_pos = start_pos[0], start_pos[1] - 1
eval_input = self._normalizer.grammar._parse(
'(%s)' % value,
start_symbol='eval_input',
start_pos=start_pos,
error_recovery=False
)
except ParserSyntaxError as e:
self.add_issue(e.error_leaf, message=self.message_syntax)
return
issues = self._normalizer.grammar.iter_errors(eval_input)
self._normalizer.issues += issues
def _check_format_spec(self, format_spec):
for expression in format_spec.children[1:]:
nested_format_spec = expression.children[-2]
if nested_format_spec.type == 'format_spec':
if len(nested_format_spec.children) > 1:
self.add_issue(
nested_format_spec.children[1],
message=self.message_nested
)
self._check_expression(expression)
def _check_expression(self, expression):
for c in expression.children:
if c.type == 'python_expr':
self._check_python_expr(c)
elif c.type == 'conversion':
if c.value not in ('s', 'r', 'a'):
self.add_issue(c, message=self.message_conversion)
elif c.type == 'format_spec':
self._check_format_spec(c)
def _check_fstring_contents(self, children, depth=0):
for fstring_content in children:
if fstring_content.type == 'fstring_expr':
self._check_fstring_expr(fstring_content, depth)
class _CheckAssignmentRule(SyntaxRule):
@@ -944,8 +892,14 @@ class _CheckAssignmentRule(SyntaxRule):
first, second = node.children[:2]
error = _get_comprehension_type(node)
if error is None:
if second.type in ('dictorsetmaker', 'string'):
error = 'literal'
if second.type == 'dictorsetmaker':
if self._normalizer.version < (3, 8):
error = 'literal'
else:
if second.children[1] == ':':
error = 'dict display'
else:
error = 'set display'
elif first in ('(', '['):
if second.type == 'yield_expr':
error = 'yield expression'
@@ -957,13 +911,16 @@ class _CheckAssignmentRule(SyntaxRule):
else: # Everything handled, must be useless brackets.
self._check_assignment(second, is_deletion)
elif type_ == 'keyword':
error = 'keyword'
if self._normalizer.version < (3, 8):
error = 'keyword'
else:
error = str(node.value)
elif type_ == 'operator':
if node.value == '...':
error = 'Ellipsis'
elif type_ == 'comparison':
error = 'comparison'
elif type_ in ('string', 'number'):
elif type_ in ('string', 'number', 'strings'):
error = 'literal'
elif type_ == 'yield_expr':
# This one seems to be a slightly different warning in Python.
@@ -985,27 +942,28 @@ class _CheckAssignmentRule(SyntaxRule):
elif type_ in ('testlist_star_expr', 'exprlist', 'testlist'):
for child in node.children[::2]:
self._check_assignment(child, is_deletion)
elif ('expr' in type_ and type_ != 'star_expr' # is a substring
or '_test' in type_
or type_ in ('term', 'factor')):
error = 'operator'
if error is not None:
message = "can't %s %s" % ("delete" if is_deletion else "assign to", error)
cannot = "can't" if self._normalizer.version < (3, 8) else "cannot"
message = ' '.join([cannot, "delete" if is_deletion else "assign to", error])
self.add_issue(node, message=message)
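To see the 3.8-specific wording from _check_assignment in action (illustrative; the expected message follows directly from the branches above):

    import parso

    grammar = parso.load_grammar(version='3.8')
    module = grammar.parse("{1: 2} = x\n")
    print([issue.message for issue in grammar.iter_errors(module)])
    # expected per the rule above: ['cannot assign to dict display']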
@ErrorFinder.register_rule(type='comp_for')
@ErrorFinder.register_rule(type='sync_comp_for')
class _CompForRule(_CheckAssignmentRule):
message = "asynchronous comprehension outside of an asynchronous function"
def is_issue(self, node):
# Some of the nodes here are already used, so no else if
expr_list = node.children[1 + int(node.children[0] == 'async')]
expr_list = node.children[1]
if expr_list.type != 'expr_list': # Already handled.
self._check_assignment(expr_list)
return node.children[0] == 'async' \
return node.parent.children[0] == 'async' \
and not self._normalizer.context.is_async_funcdef()
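The async check above can be triggered from the public API; a small sketch (assuming parso 0.5+, where comprehensions parse into comp_for/sync_comp_for as shown):

    import parso

    grammar = parso.load_grammar(version='3.6')
    module = grammar.parse("def f():\n    return [x async for x in y]\n")
    print([issue.message for issue in grammar.iter_errors(module)])
    # expected: ['asynchronous comprehension outside of an asynchronous function']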


@@ -1,211 +0,0 @@
import re
from itertools import count
from parso.utils import PythonVersionInfo
from parso.utils import split_lines
from parso.python.tokenize import Token
from parso import parser
from parso.tree import TypedLeaf, ErrorNode, ErrorLeaf
version36 = PythonVersionInfo(3, 6)
class TokenNamespace:
_c = count()
LBRACE = next(_c)
RBRACE = next(_c)
ENDMARKER = next(_c)
COLON = next(_c)
CONVERSION = next(_c)
PYTHON_EXPR = next(_c)
EXCLAMATION_MARK = next(_c)
UNTERMINATED_STRING = next(_c)
token_map = dict((v, k) for k, v in locals().items() if not k.startswith('_'))
@classmethod
def generate_token_id(cls, string):
if string == '{':
return cls.LBRACE
elif string == '}':
return cls.RBRACE
elif string == '!':
return cls.EXCLAMATION_MARK
elif string == ':':
return cls.COLON
return getattr(cls, string)
GRAMMAR = """
fstring: expression* ENDMARKER
format_spec: ':' expression*
expression: '{' PYTHON_EXPR [ '!' CONVERSION ] [ format_spec ] '}'
"""
_prefix = r'((?:[^{}]+)*)'
_expr = _prefix + r'(\{|\}|$)'
_in_expr = r'([^{}\[\]:"\'!]*)(.?)'
# There's only one conversion character allowed. But the rules have to be
# checked later anyway, so allow more here. This makes error recovery nicer.
_conversion = r'([^={}:]*)(.?)'
_compiled_expr = re.compile(_expr)
_compiled_in_expr = re.compile(_in_expr)
_compiled_conversion = re.compile(_conversion)
def tokenize(code, start_pos=(1, 0)):
def add_to_pos(string):
lines = split_lines(string)
l = len(lines[-1])
if len(lines) > 1:
start_pos[0] += len(lines) - 1
start_pos[1] = l
else:
start_pos[1] += l
def tok(value, type=None, prefix=''):
if type is None:
type = TokenNamespace.generate_token_id(value)
add_to_pos(prefix)
token = Token(type, value, tuple(start_pos), prefix)
add_to_pos(value)
return token
start = 0
recursion_level = 0
added_prefix = ''
start_pos = list(start_pos)
while True:
match = _compiled_expr.match(code, start)
prefix = added_prefix + match.group(1)
found = match.group(2)
start = match.end()
if not found:
# We're at the end.
break
if found == '}':
if recursion_level == 0 and len(code) > start and code[start] == '}':
# This is a }} escape.
added_prefix = prefix + '}}'
start += 1
continue
recursion_level = max(0, recursion_level - 1)
yield tok(found, prefix=prefix)
added_prefix = ''
else:
assert found == '{'
if recursion_level == 0 and len(code) > start and code[start] == '{':
# This is a {{ escape.
added_prefix = prefix + '{{'
start += 1
continue
recursion_level += 1
yield tok(found, prefix=prefix)
added_prefix = ''
expression = ''
squared_count = 0
curly_count = 0
while True:
expr_match = _compiled_in_expr.match(code, start)
expression += expr_match.group(1)
found = expr_match.group(2)
start = expr_match.end()
if found == '{':
curly_count += 1
expression += found
elif found == '}' and curly_count > 0:
curly_count -= 1
expression += found
elif found == '[':
squared_count += 1
expression += found
elif found == ']':
# Use a max function here, because the Python code might
# just have syntax errors.
squared_count = max(0, squared_count - 1)
expression += found
elif found == ':' and (squared_count or curly_count):
expression += found
elif found in ('"', "'"):
search = found
if len(code) > start + 1 and \
code[start] == found == code[start+1]:
search *= 3
start += 2
index = code.find(search, start)
if index == -1:
yield tok(expression, type=TokenNamespace.PYTHON_EXPR)
yield tok(
found + code[start:],
type=TokenNamespace.UNTERMINATED_STRING,
)
start = len(code)
break
expression += found + code[start:index+1]
start = index + 1
elif found == '!' and len(code) > start and code[start] == '=':
# This is a python `!=` and not a conversion.
expression += found
else:
yield tok(expression, type=TokenNamespace.PYTHON_EXPR)
if found:
yield tok(found)
break
if found == '!':
conversion_match = _compiled_conversion.match(code, start)
found = conversion_match.group(2)
start = conversion_match.end()
yield tok(conversion_match.group(1), type=TokenNamespace.CONVERSION)
if found:
yield tok(found)
if found == '}':
recursion_level -= 1
# We don't need to handle everything after ':', because that is
# basically new tokens.
yield tok('', type=TokenNamespace.ENDMARKER, prefix=prefix)
class Parser(parser.BaseParser):
def parse(self, tokens):
node = super(Parser, self).parse(tokens)
if isinstance(node, self.default_leaf): # Is an endmarker.
# If there are no curly braces we get back a non-module. We always
# want an fstring.
node = self.default_node('fstring', [node])
return node
def convert_leaf(self, pgen_grammar, type, value, prefix, start_pos):
# TODO this is so ugly.
leaf_type = TokenNamespace.token_map[type].lower()
return TypedLeaf(leaf_type, value, start_pos, prefix)
def error_recovery(self, pgen_grammar, stack, arcs, typ, value, start_pos, prefix,
add_token_callback):
if not self._error_recovery:
return super(Parser, self).error_recovery(
pgen_grammar, stack, arcs, typ, value, start_pos, prefix,
add_token_callback
)
token_type = TokenNamespace.token_map[typ].lower()
if len(stack) == 1:
error_leaf = ErrorLeaf(token_type, value, start_pos, prefix)
stack[0][2][1].append(error_leaf)
else:
dfa, state, (type_, nodes) = stack[1]
stack[0][2][1].append(ErrorNode(nodes))
stack[1:] = []
add_token_callback(typ, value, start_pos, prefix)
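The module deleted above was parso's standalone f-string grammar and tokenizer; from 0.5 on, the main tokenizer emits dedicated f-string tokens instead. A sketch using the internal tokenizer (internal API, so names and signatures may shift between releases):

    from parso.python.tokenize import tokenize
    from parso.utils import parse_version_string

    for token in tokenize("f'{x:>10}'", parse_version_string('3.7')):
        print(token.type.name, repr(token.string))
    # Prints FSTRING_START, OP, NAME, FSTRING_STRING, FSTRING_END and
    # ENDMARKER tokens; the exact sequence depends on the parso version.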


@@ -119,7 +119,8 @@ atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [listmaker] ']' |
'{' [dictorsetmaker] '}' |
'`' testlist1 '`' |
NAME | NUMBER | STRING+)
NAME | NUMBER | strings)
strings: STRING+
listmaker: test ( list_for | (',' test)* [','] )
# Dave: Renamed testlist_gexpr to testlist_comp, because in 2.7+ this is the
# default. It's more consistent like this.


@@ -104,9 +104,10 @@ atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [listmaker] ']' |
'{' [dictorsetmaker] '}' |
'`' testlist1 '`' |
NAME | NUMBER | STRING+)
NAME | NUMBER | strings)
strings: STRING+
listmaker: test ( list_for | (',' test)* [','] )
testlist_comp: test ( comp_for | (',' test)* [','] )
testlist_comp: test ( sync_comp_for | (',' test)* [','] )
lambdef: 'lambda' [varargslist] ':' test
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
@@ -114,8 +115,8 @@ subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: expr (',' expr)* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
dictorsetmaker: ( (test ':' test (sync_comp_for | (',' test ':' test)* [','])) |
(test (sync_comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [testlist] ')'] ':' suite
@@ -124,14 +125,14 @@ arglist: (argument ',')* (argument [',']
|'**' test)
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
argument: test [comp_for] | test '=' test
argument: test [sync_comp_for] | test '=' test
list_iter: list_for | list_if
list_for: 'for' exprlist 'in' testlist_safe [list_iter]
list_if: 'if' old_test [list_iter]
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_iter: sync_comp_for | comp_if
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' old_test [comp_iter]
testlist1: test (',' test)*


@@ -103,16 +103,17 @@ power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')
strings: STRING+
testlist_comp: (test|star_expr) ( sync_comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
dictorsetmaker: ( (test ':' test (sync_comp_for | (',' test ':' test)* [','])) |
(test (sync_comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
@@ -121,9 +122,9 @@ arglist: (argument ',')* (argument [',']
|'**' test)
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
argument: test [comp_for] | test '=' test # Really [keyword '='] test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
argument: test [sync_comp_for] | test '=' test # Really [keyword '='] test
comp_iter: sync_comp_for | comp_if
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler


@@ -103,16 +103,17 @@ power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')
strings: STRING+
testlist_comp: (test|star_expr) ( sync_comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
dictorsetmaker: ( (test ':' test (sync_comp_for | (',' test ':' test)* [','])) |
(test (sync_comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
@@ -121,9 +122,9 @@ arglist: (argument ',')* (argument [',']
|'**' test)
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
argument: test [comp_for] | test '=' test # Really [keyword '='] test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
argument: test [sync_comp_for] | test '=' test # Really [keyword '='] test
comp_iter: sync_comp_for | comp_if
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler


@@ -110,8 +110,9 @@ atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')
strings: STRING+
testlist_comp: (test|star_expr) ( sync_comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
@@ -119,9 +120,9 @@ sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( ((test ':' test | '**' expr)
(comp_for | (',' (test ':' test | '**' expr))* [','])) |
(sync_comp_for | (',' (test ':' test | '**' expr))* [','])) |
((test | star_expr)
(comp_for | (',' (test | star_expr))* [','])) )
(sync_comp_for | (',' (test | star_expr))* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
@@ -136,13 +137,13 @@ arglist: argument (',' argument)* [',']
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
argument: ( test [sync_comp_for] |
test '=' test |
'**' test |
'*' test )
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_iter: sync_comp_for | comp_if
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
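The rename is visible in the resulting parse tree. A tiny recursive helper (not part of parso, purely for illustration) makes that easy to check:

    import parso

    def collect(node, wanted, acc=None):
        # Walk the tree and collect the node types we are interested in.
        acc = [] if acc is None else acc
        if node.type in wanted:
            acc.append(node.type)
        for child in getattr(node, 'children', []):
            collect(child, wanted, acc)
        return acc

    module = parso.parse("[x for x in y]\n", version='3.7')
    print(collect(module, {'comp_for', 'sync_comp_for'}))
    # parso 0.5+: ['sync_comp_for']; an async comprehension would additionally
    # be wrapped in a 'comp_for' node.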


@@ -108,7 +108,7 @@ atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
@@ -140,7 +140,8 @@ argument: ( test [comp_for] |
'*' test )
comp_iter: comp_for | comp_if
comp_for: ['async'] 'for' exprlist 'in' or_test [comp_iter]
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_for: ['async'] sync_comp_for
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
@@ -148,3 +149,10 @@ encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
strings: (STRING | fstring)+
fstring: FSTRING_START fstring_content* FSTRING_END
fstring_content: FSTRING_STRING | fstring_expr
fstring_conversion: '!' NAME
fstring_expr: '{' testlist_comp [ fstring_conversion ] [ fstring_format_spec ] '}'
fstring_format_spec: ':' fstring_content*


@@ -15,8 +15,6 @@ decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
# NOTE: Francisco Souza/Reinoud Elhorst, using ASYNC/'await' keywords instead of
# skipping python3.5+ compatibility, in favour of 3.7 solution
async_funcdef: 'async' funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite
@@ -108,7 +106,7 @@ atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
@@ -140,7 +138,8 @@ argument: ( test [comp_for] |
'*' test )
comp_iter: comp_for | comp_if
comp_for: ['async'] 'for' exprlist 'in' or_test [comp_iter]
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_for: ['async'] sync_comp_for
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
@@ -148,3 +147,10 @@ encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
strings: (STRING | fstring)+
fstring: FSTRING_START fstring_content* FSTRING_END
fstring_content: FSTRING_STRING | fstring_expr
fstring_conversion: '!' NAME
fstring_expr: '{' testlist [ fstring_conversion ] [ fstring_format_spec ] '}'
fstring_format_spec: ':' fstring_content*
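The new `strings` nonterminal groups adjacent string and f-string literals into a single node, which is what rules such as _BytesAndStringMix iterate over. A rough check (the tree navigation is an assumption and may differ slightly between versions):

    import parso

    module = parso.parse('x = "a" f"{y}" "b"\n', version='3.7')
    expr_stmt = module.children[0].children[0]
    print(expr_stmt.children[-1].type)  # expected: 'strings'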

parso/python/grammar38.txt (new file, 171 lines)

@@ -0,0 +1,171 @@
# Grammar for Python
# NOTE WELL: You should also follow all the steps listed at
# https://devguide.python.org/grammar/
# Start symbols for the grammar:
# single_input is a single interactive statement;
# file_input is a module or sequence of commands read from an input file;
# eval_input is the input for the eval() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
file_input: (NEWLINE | stmt)* ENDMARKER
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
async_funcdef: 'async' funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: (
(tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (
',' tfpdef ['=' test])* ([',' [
'*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [',']]])
| '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])
| '**' tfpdef [',']]] )
| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [
'*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [',']]]
| '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [','])
)
tfpdef: NAME [':' test]
varargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [
'*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']]]
| '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [
'*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']]]
| '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']
)
vfpdef: NAME
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
annassign: ':' test ['=' test]
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal and annotated assignments, additional restrictions enforced by the interpreter
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist_star_expr]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: 'async' (funcdef | with_stmt | for_stmt)
if_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]
while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test ['as' NAME]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
namedexpr_test: test [':=' test]
test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
# <> isn't actually a valid comparison operator in Python. It's here for the
# sake of a __future__ import described in PEP 401 (which really works :-)
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom_expr ['**' factor]
atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')
testlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( ((test ':' test | '**' expr)
(comp_for | (',' (test ':' test | '**' expr))* [','])) |
((test | star_expr)
(comp_for | (',' (test | star_expr))* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: argument (',' argument)* [',']
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
# "test '=' test" is really "keyword '=' test", but we have no such token.
# These need to be in a single rule to avoid grammar that is ambiguous
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
test ':=' test |
test '=' test |
'**' test |
'*' test )
comp_iter: comp_for | comp_if
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_for: ['async'] sync_comp_for
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist_star_expr
strings: (STRING | fstring)+
fstring: FSTRING_START fstring_content* FSTRING_END
fstring_content: FSTRING_STRING | fstring_expr
fstring_conversion: '!' NAME
fstring_expr: '{' testlist ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'
fstring_format_spec: ':' fstring_content*
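A minimal check that this grammar file accepts the 3.8-only syntax it introduces, positional-only parameters and assignment expressions (a sketch, assuming parso 0.5+):

    import parso

    grammar = parso.load_grammar(version='3.8')
    module = grammar.parse(
        "def f(a, b, /, c):\n"
        "    if (n := a + b) > c:\n"
        "        return n\n"
    )
    print(len(list(grammar.iter_errors(module))))  # expected: 0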


@@ -1,8 +1,11 @@
from parso.python import tree
from parso.python.token import (DEDENT, INDENT, ENDMARKER, NEWLINE, NUMBER,
STRING, tok_name, NAME)
from parso.python.token import PythonTokenTypes
from parso.parser import BaseParser
from parso.pgen2.parse import token_to_ilabel
NAME = PythonTokenTypes.NAME
INDENT = PythonTokenTypes.INDENT
DEDENT = PythonTokenTypes.DEDENT
class Parser(BaseParser):
@@ -36,13 +39,13 @@ class Parser(BaseParser):
'for_stmt': tree.ForStmt,
'while_stmt': tree.WhileStmt,
'try_stmt': tree.TryStmt,
'comp_for': tree.CompFor,
'sync_comp_for': tree.SyncCompFor,
# Not sure if this is the best idea, but IMO it's the easiest way to
# avoid extreme amounts of work around the subtle difference of 2/3
# grammar in list comprehensions.
'list_for': tree.CompFor,
'list_for': tree.SyncCompFor,
# Same here. This just exists in Python 2.6.
'gen_for': tree.CompFor,
'gen_for': tree.SyncCompFor,
'decorator': tree.Decorator,
'lambdef': tree.Lambda,
'old_lambdef': tree.Lambda,
@@ -50,44 +53,35 @@ class Parser(BaseParser):
}
default_node = tree.PythonNode
def __init__(self, pgen_grammar, error_recovery=True, start_symbol='file_input'):
super(Parser, self).__init__(pgen_grammar, start_symbol, error_recovery=error_recovery)
# Names/Keywords are handled separately
_leaf_map = {
PythonTokenTypes.STRING: tree.String,
PythonTokenTypes.NUMBER: tree.Number,
PythonTokenTypes.NEWLINE: tree.Newline,
PythonTokenTypes.ENDMARKER: tree.EndMarker,
PythonTokenTypes.FSTRING_STRING: tree.FStringString,
PythonTokenTypes.FSTRING_START: tree.FStringStart,
PythonTokenTypes.FSTRING_END: tree.FStringEnd,
}
def __init__(self, pgen_grammar, error_recovery=True, start_nonterminal='file_input'):
super(Parser, self).__init__(pgen_grammar, start_nonterminal,
error_recovery=error_recovery)
self.syntax_errors = []
self._omit_dedent_list = []
self._indent_counter = 0
# TODO do print absolute import detection here.
# try:
# del python_grammar_no_print_statement.keywords["print"]
# except KeyError:
# pass # Doesn't exist in the Python 3 grammar.
# if self.options["print_function"]:
# python_grammar = pygram.python_grammar_no_print_statement
# else:
def parse(self, tokens):
if self._error_recovery:
if self._start_symbol != 'file_input':
if self._start_nonterminal != 'file_input':
raise NotImplementedError
tokens = self._recovery_tokenize(tokens)
node = super(Parser, self).parse(tokens)
return super(Parser, self).parse(tokens)
if self._start_symbol == 'file_input' != node.type:
# If there's only one statement, we get back a non-module. That's
# not what we want, we want a module, so we add it here:
node = self.convert_node(
self._pgen_grammar,
self._pgen_grammar.symbol2number['file_input'],
[node]
)
return node
def convert_node(self, pgen_grammar, type, children):
def convert_node(self, nonterminal, children):
"""
Convert raw node information to a PythonBaseNode instance.
@@ -95,158 +89,121 @@ class Parser(BaseParser):
grammar rule produces a new complete node, so that the tree is build
strictly bottom-up.
"""
# TODO REMOVE symbol, we don't want type here.
symbol = pgen_grammar.number2symbol[type]
try:
return self.node_map[symbol](children)
node = self.node_map[nonterminal](children)
except KeyError:
if symbol == 'suite':
if nonterminal == 'suite':
# We don't want the INDENT/DEDENT in our parser tree. Those
# leaves are just cancer. They are virtual leaves and not real
# ones and therefore have pseudo start/end positions and no
# prefixes. Just ignore them.
children = [children[0]] + children[2:-1]
elif symbol == 'list_if':
elif nonterminal == 'list_if':
# Make transitioning from 2 to 3 easier.
symbol = 'comp_if'
elif symbol == 'listmaker':
nonterminal = 'comp_if'
elif nonterminal == 'listmaker':
# Same as list_if above.
symbol = 'testlist_comp'
return self.default_node(symbol, children)
nonterminal = 'testlist_comp'
node = self.default_node(nonterminal, children)
for c in children:
c.parent = node
return node
def convert_leaf(self, pgen_grammar, type, value, prefix, start_pos):
def convert_leaf(self, type, value, prefix, start_pos):
# print('leaf', repr(value), token.tok_name[type])
if type == NAME:
if value in pgen_grammar.keywords:
if value in self._pgen_grammar.reserved_syntax_strings:
return tree.Keyword(value, start_pos, prefix)
else:
return tree.Name(value, start_pos, prefix)
elif type == STRING:
return tree.String(value, start_pos, prefix)
elif type == NUMBER:
return tree.Number(value, start_pos, prefix)
elif type == NEWLINE:
return tree.Newline(value, start_pos, prefix)
elif type == ENDMARKER:
return tree.EndMarker(value, start_pos, prefix)
else:
return tree.Operator(value, start_pos, prefix)
def error_recovery(self, pgen_grammar, stack, arcs, typ, value, start_pos, prefix,
add_token_callback):
def get_symbol_and_nodes(stack):
for dfa, state, (type_, nodes) in stack:
symbol = pgen_grammar.number2symbol[type_]
yield symbol, nodes
return self._leaf_map.get(type, tree.Operator)(value, start_pos, prefix)
tos_nodes = stack.get_tos_nodes()
def error_recovery(self, token):
tos_nodes = self.stack[-1].nodes
if tos_nodes:
last_leaf = tos_nodes[-1].get_last_leaf()
else:
last_leaf = None
if self._start_symbol == 'file_input' and \
(typ == ENDMARKER or typ == DEDENT and '\n' not in last_leaf.value):
def reduce_stack(states, newstate):
# reduce
state = newstate
while states[state] == [(0, state)]:
self.pgen_parser._pop()
dfa, state, (type_, nodes) = stack[-1]
states, first = dfa
if self._start_nonterminal == 'file_input' and \
(token.type == PythonTokenTypes.ENDMARKER
or token.type == DEDENT and '\n' not in last_leaf.value
and '\r' not in last_leaf.value):
# In Python, statements need to end with a newline. But since it's
# possible (and valid in Python) that there's no newline at the
# end of a file, we have to recover even if the user doesn't want
# error recovery.
#print('x', pprint.pprint(stack))
ilabel = token_to_ilabel(pgen_grammar, NEWLINE, value)
dfa, state, (type_, nodes) = stack[-1]
symbol = pgen_grammar.number2symbol[type_]
states, first = dfa
arcs = states[state]
# Look for a state with this label
for i, newstate in arcs:
if ilabel == i:
if symbol == 'simple_stmt':
# This is basically shifting
stack[-1] = (dfa, newstate, (type_, nodes))
reduce_stack(states, newstate)
add_token_callback(typ, value, start_pos, prefix)
if self.stack[-1].dfa.from_rule == 'simple_stmt':
try:
plan = self.stack[-1].dfa.transitions[PythonTokenTypes.NEWLINE]
except KeyError:
pass
else:
if plan.next_dfa.is_final and not plan.dfa_pushes:
# We are ignoring here that the newline would be
# required for a simple_stmt.
self.stack[-1].dfa = plan.next_dfa
self._add_token(token)
return
# Check if we're at the right point
#for symbol, nodes in get_symbol_and_nodes(stack):
# self.pgen_parser._pop()
#break
break
#symbol = pgen_grammar.number2symbol[type_]
if not self._error_recovery:
return super(Parser, self).error_recovery(
pgen_grammar, stack, arcs, typ, value, start_pos, prefix,
add_token_callback)
return super(Parser, self).error_recovery(token)
def current_suite(stack):
# For now just discard everything that is not a suite or
# file_input, if we detect an error.
for index, (symbol, nodes) in reversed(list(enumerate(get_symbol_and_nodes(stack)))):
for until_index, stack_node in reversed(list(enumerate(stack))):
# `suite` can sometimes be only simple_stmt, not stmt.
if symbol == 'file_input':
if stack_node.nonterminal == 'file_input':
break
elif symbol == 'suite' and len(nodes) > 1:
# suites without an indent in them get discarded.
break
return index, symbol, nodes
elif stack_node.nonterminal == 'suite':
# In the case where we just have a newline we don't want to
# do error recovery here. In all other cases, we want to do
# error recovery.
if len(stack_node.nodes) != 1:
break
return until_index
index, symbol, nodes = current_suite(stack)
until_index = current_suite(self.stack)
# print('err', token.tok_name[typ], repr(value), start_pos, len(stack), index)
if self._stack_removal(pgen_grammar, stack, arcs, index + 1, value, start_pos):
add_token_callback(typ, value, start_pos, prefix)
if self._stack_removal(until_index + 1):
self._add_token(token)
else:
typ, value, start_pos, prefix = token
if typ == INDENT:
# For every deleted INDENT we have to delete a DEDENT as well.
# Otherwise the parser will get into trouble and DEDENT too early.
self._omit_dedent_list.append(self._indent_counter)
error_leaf = tree.PythonErrorLeaf(tok_name[typ].lower(), value, start_pos, prefix)
stack[-1][2][1].append(error_leaf)
error_leaf = tree.PythonErrorLeaf(typ.name, value, start_pos, prefix)
self.stack[-1].nodes.append(error_leaf)
if symbol == 'suite':
dfa, state, node = stack[-1]
states, first = dfa
arcs = states[state]
intended_label = pgen_grammar.symbol2label['stmt']
# Introduce a proper state transition. We're basically allowing
# there to be no valid statements inside a suite.
if [x[0] for x in arcs] == [intended_label]:
new_state = arcs[0][1]
stack[-1] = dfa, new_state, node
tos = self.stack[-1]
if tos.nonterminal == 'suite':
# Need at least one statement in the suite. This happend with the
# error recovery above.
try:
tos.dfa = tos.dfa.arcs['stmt']
except KeyError:
# We're already in a final state.
pass
def _stack_removal(self, pgen_grammar, stack, arcs, start_index, value, start_pos):
failed_stack = False
found = False
all_nodes = []
for dfa, state, (type_, nodes) in stack[start_index:]:
if nodes:
found = True
if found:
failed_stack = True
all_nodes += nodes
if failed_stack:
stack[start_index - 1][2][1].append(tree.PythonErrorNode(all_nodes))
def _stack_removal(self, start_index):
all_nodes = [node for stack_node in self.stack[start_index:] for node in stack_node.nodes]
stack[start_index:] = []
return failed_stack
if all_nodes:
node = tree.PythonErrorNode(all_nodes)
for n in all_nodes:
n.parent = node
self.stack[start_index - 1].nodes.append(node)
self.stack[start_index:] = []
return bool(all_nodes)
def _recovery_tokenize(self, tokens):
for typ, value, start_pos, prefix in tokens:
# print(tok_name[typ], repr(value), start_pos, repr(prefix))
for token in tokens:
typ = token[0]
if typ == DEDENT:
# We need to count indents, because if we just omit any DEDENT,
# we might omit them in the wrong place.
@@ -258,4 +215,4 @@ class Parser(BaseParser):
self._indent_counter -= 1
elif typ == INDENT:
self._indent_counter += 1
yield typ, value, start_pos, prefix
yield token
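The error-recovery path above is what keeps a usable tree around for broken input; a short sketch:

    import parso

    # error_recovery is on by default, so this does not raise.
    module = parso.parse("def broken(:\n    pass\n")
    print([child.type for child in module.children])
    # The recovered tree contains error_leaf/error_node children instead of
    # a ParserSyntaxError being raised.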


@@ -391,11 +391,11 @@ class PEP8Normalizer(ErrorFinder):
if value.lstrip('#'):
self.add_issue(part, 266, "Too many leading '#' for block comment.")
elif self._on_newline:
if not re.match('#:? ', value) and not value == '#' \
if not re.match(r'#:? ', value) and not value == '#' \
and not (value.startswith('#!') and part.start_pos == (1, 0)):
self.add_issue(part, 265, "Block comment should start with '# '")
else:
if not re.match('#:? [^ ]', value):
if not re.match(r'#:? [^ ]', value):
self.add_issue(part, 262, "Inline comment should start with '# '")
self._reset_newlines(spacing, leaf, is_comment=True)
@@ -677,7 +677,7 @@ class PEP8Normalizer(ErrorFinder):
elif typ == 'string':
# Checking multiline strings
for i, line in enumerate(leaf.value.splitlines()[1:]):
indentation = re.match('[ \t]*', line).group(0)
indentation = re.match(r'[ \t]*', line).group(0)
start_pos = leaf.line + i, len(indentation)
# TODO check multiline indentation.
elif typ == 'endmarker':


@@ -1,104 +1,27 @@
from __future__ import absolute_import
from itertools import count
from token import *
from parso._compatibility import py_version
_counter = count(N_TOKENS)
# Never want to see this thing again.
del N_TOKENS
class TokenType(object):
def __init__(self, name, contains_syntax=False):
self.name = name
self.contains_syntax = contains_syntax
COMMENT = next(_counter)
tok_name[COMMENT] = 'COMMENT'
NL = next(_counter)
tok_name[NL] = 'NL'
# Sets the attributes that don't exist in these tok_name versions.
if py_version >= 30:
BACKQUOTE = next(_counter)
tok_name[BACKQUOTE] = 'BACKQUOTE'
else:
RARROW = next(_counter)
tok_name[RARROW] = 'RARROW'
ELLIPSIS = next(_counter)
tok_name[ELLIPSIS] = 'ELLIPSIS'
if py_version < 35:
ATEQUAL = next(_counter)
tok_name[ATEQUAL] = 'ATEQUAL'
ERROR_DEDENT = next(_counter)
tok_name[ERROR_DEDENT] = 'ERROR_DEDENT'
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self.name)
# Map from operator to number (since tokenize doesn't do this)
opmap_raw = """\
( LPAR
) RPAR
[ LSQB
] RSQB
: COLON
, COMMA
; SEMI
+ PLUS
- MINUS
* STAR
/ SLASH
| VBAR
& AMPER
< LESS
> GREATER
= EQUAL
. DOT
% PERCENT
` BACKQUOTE
{ LBRACE
} RBRACE
@ AT
== EQEQUAL
!= NOTEQUAL
<> NOTEQUAL
<= LESSEQUAL
>= GREATEREQUAL
~ TILDE
^ CIRCUMFLEX
<< LEFTSHIFT
>> RIGHTSHIFT
** DOUBLESTAR
+= PLUSEQUAL
-= MINEQUAL
*= STAREQUAL
/= SLASHEQUAL
%= PERCENTEQUAL
&= AMPEREQUAL
|= VBAREQUAL
@= ATEQUAL
^= CIRCUMFLEXEQUAL
<<= LEFTSHIFTEQUAL
>>= RIGHTSHIFTEQUAL
**= DOUBLESTAREQUAL
// DOUBLESLASH
//= DOUBLESLASHEQUAL
-> RARROW
... ELLIPSIS
"""
opmap = {}
for line in opmap_raw.splitlines():
op, name = line.split()
opmap[op] = globals()[name]
def generate_token_id(string):
class TokenTypes(object):
"""
Uses a token in the grammar (e.g. `'+'` or `'and'`) and returns the corresponding
ID for it. The strings are part of the grammar file.
Basically an enum, but Python 2 doesn't have enums in the standard library.
"""
try:
return opmap[string]
except KeyError:
pass
return globals()[string]
def __init__(self, names, contains_syntax):
for name in names:
setattr(self, name, TokenType(name, contains_syntax=name in contains_syntax))
PythonTokenTypes = TokenTypes((
'STRING', 'NUMBER', 'NAME', 'ERRORTOKEN', 'NEWLINE', 'INDENT', 'DEDENT',
'ERROR_DEDENT', 'FSTRING_STRING', 'FSTRING_START', 'FSTRING_END', 'OP',
'ENDMARKER'),
contains_syntax=('NAME', 'OP'),
)
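The token types are now plain objects rather than integer constants; `contains_syntax` marks the two types whose string value can influence the grammar path (names can be keywords, operators are matched literally):

    from parso.python.token import PythonTokenTypes

    print(PythonTokenTypes.NAME.name)               # 'NAME'
    print(PythonTokenTypes.NAME.contains_syntax)    # True
    print(PythonTokenTypes.NUMBER.contains_syntax)  # False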

parso/python/token.pyi (new file, 30 lines)

@@ -0,0 +1,30 @@
from typing import Container, Iterable
class TokenType:
name: str
contains_syntax: bool
def __init__(self, name: str, contains_syntax: bool) -> None: ...
class TokenTypes:
def __init__(
self, names: Iterable[str], contains_syntax: Container[str]
) -> None: ...
# not an actual class in the source code, but we need this class to type the fields of
# PythonTokenTypes
class _FakePythonTokenTypesClass(TokenTypes):
STRING: TokenType
NUMBER: TokenType
NAME: TokenType
ERRORTOKEN: TokenType
NEWLINE: TokenType
INDENT: TokenType
DEDENT: TokenType
ERROR_DEDENT: TokenType
FSTRING_STRING: TokenType
FSTRING_START: TokenType
FSTRING_END: TokenType
OP: TokenType
ENDMARKER: TokenType
PythonTokenTypes: _FakePythonTokenTypesClass = ...


@@ -18,16 +18,29 @@ from collections import namedtuple
import itertools as _itertools
from codecs import BOM_UTF8
from parso.python.token import (tok_name, ENDMARKER, STRING, NUMBER, opmap,
NAME, ERRORTOKEN, NEWLINE, INDENT, DEDENT,
ERROR_DEDENT)
from parso.python.token import PythonTokenTypes
from parso._compatibility import py_version
from parso.utils import split_lines
STRING = PythonTokenTypes.STRING
NAME = PythonTokenTypes.NAME
NUMBER = PythonTokenTypes.NUMBER
OP = PythonTokenTypes.OP
NEWLINE = PythonTokenTypes.NEWLINE
INDENT = PythonTokenTypes.INDENT
DEDENT = PythonTokenTypes.DEDENT
ENDMARKER = PythonTokenTypes.ENDMARKER
ERRORTOKEN = PythonTokenTypes.ERRORTOKEN
ERROR_DEDENT = PythonTokenTypes.ERROR_DEDENT
FSTRING_START = PythonTokenTypes.FSTRING_START
FSTRING_STRING = PythonTokenTypes.FSTRING_STRING
FSTRING_END = PythonTokenTypes.FSTRING_END
TokenCollection = namedtuple(
'TokenCollection',
'pseudo_token single_quoted triple_quoted endpats always_break_tokens',
'pseudo_token single_quoted triple_quoted endpats whitespace '
'fstring_pattern_map always_break_tokens',
)
BOM_UTF8_STRING = BOM_UTF8.decode('utf-8')
@@ -52,32 +65,35 @@ def group(*choices, **kwargs):
return start + '|'.join(choices) + ')'
def any(*choices):
return group(*choices) + '*'
def maybe(*choices):
return group(*choices) + '?'
# Return the empty string, plus all of the valid string prefixes.
def _all_string_prefixes(version_info):
def _all_string_prefixes(version_info, include_fstring=False, only_fstring=False):
def different_case_versions(prefix):
for s in _itertools.product(*[(c, c.upper()) for c in prefix]):
yield ''.join(s)
# The valid string prefixes. Only contain the lower case versions,
# and don't contain any permutations (include 'fr', but not
# 'rf'). The various permutations will be generated.
_valid_string_prefixes = ['b', 'r', 'u']
valid_string_prefixes = ['b', 'r', 'u']
if version_info >= (3, 0):
_valid_string_prefixes.append('br')
valid_string_prefixes.append('br')
if version_info >= (3, 6):
_valid_string_prefixes += ['f', 'fr']
result = set([''])
if version_info >= (3, 6) and include_fstring:
f = ['f', 'fr']
if only_fstring:
valid_string_prefixes = f
result = set()
else:
valid_string_prefixes += f
elif only_fstring:
return set()
# if we add binary f-strings, add: ['fb', 'fbr']
result = set([''])
for prefix in _valid_string_prefixes:
for prefix in valid_string_prefixes:
for t in _itertools.permutations(prefix):
# create a list with upper and lower versions of each
# character
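A small sketch of what the new keyword arguments change, assuming the private helper _all_string_prefixes stays importable from parso.python.tokenize:
from parso.python.tokenize import _all_string_prefixes
v36 = (3, 6)
normal = _all_string_prefixes(v36)
fstring_only = _all_string_prefixes(v36, include_fstring=True, only_fstring=True)
print('f' in normal)        # False - f-prefixes are no longer in the default set
print('f' in fstring_only)  # True
print('' in fstring_only)   # False - only_fstring drops the empty prefix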
@@ -102,10 +118,17 @@ def _get_token_collection(version_info):
return result
fstring_string_single_line = _compile(r'(?:[^{}\r\n]+|\{\{|\}\})+')
fstring_string_multi_line = _compile(r'(?:[^{}]+|\{\{|\}\})+')
fstring_format_spec_single_line = _compile(r'[^{}\r\n]+')
fstring_format_spec_multi_line = _compile(r'[^{}]+')
def _create_token_collection(version_info):
# Note: we use unicode matching for names ("\w") but ascii matching for
# number literals.
Whitespace = r'[ \f\t]*'
whitespace = _compile(Whitespace)
Comment = r'#[^\r\n]*'
Name = r'\w+'
@@ -130,6 +153,8 @@ def _create_token_collection(version_info):
Octnumber = '0[oO]?[0-7]+'
Decnumber = r'(?:0+|[1-9][0-9]*)'
Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber)
if version_info[0] < 3:
Intnumber += '[lL]?'
Exponent = r'[eE][-+]?[0-9]+'
Pointfloat = group(r'[0-9]+\.[0-9]*', r'\.[0-9]+') + maybe(Exponent)
Expfloat = r'[0-9]+' + Exponent
@@ -141,40 +166,52 @@ def _create_token_collection(version_info):
# StringPrefix can be the empty string (making it optional).
possible_prefixes = _all_string_prefixes(version_info)
StringPrefix = group(*possible_prefixes)
StringPrefixWithF = group(*_all_string_prefixes(version_info, include_fstring=True))
fstring_prefixes = _all_string_prefixes(version_info, include_fstring=True, only_fstring=True)
FStringStart = group(*fstring_prefixes)
# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
Single = r"(?:\\.|[^'\\])*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
Double = r'(?:\\.|[^"\\])*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
Single3 = r"(?:\\.|'(?!'')|[^'\\])*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Triple = group(StringPrefix + "'''", StringPrefix + '"""')
Double3 = r'(?:\\.|"(?!"")|[^"\\])*"""'
Triple = group(StringPrefixWithF + "'''", StringPrefixWithF + '"""')
# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"!=",
Operator = group(r"\*\*=?", r">>=?", r"<<=?",
r"//=?", r"->",
r"[+\-*/%&@`|^=<>]=?",
r"[+\-*/%&@`|^!=<>]=?",
r"~")
Bracket = '[][(){}]'
special_args = [r'\r?\n', r'[:;.,@]']
special_args = [r'\r\n?', r'\n', r'[;.,@]']
if version_info >= (3, 0):
special_args.insert(0, r'\.\.\.')
if version_info >= (3, 8):
special_args.insert(0, ":=?")
else:
special_args.insert(0, ":")
Special = group(*special_args)
Funny = group(Operator, Bracket, Special)
# First (or only) line of ' or " string.
ContStr = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
group("'", r'\\\r?\n'),
StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n|\Z', Comment, Triple)
ContStr = group(StringPrefix + r"'[^\r\n'\\]*(?:\\.[^\r\n'\\]*)*" +
group("'", r'\\(?:\r\n?|\n)'),
StringPrefix + r'"[^\r\n"\\]*(?:\\.[^\r\n"\\]*)*' +
group('"', r'\\(?:\r\n?|\n)'))
pseudo_extra_pool = [Comment, Triple]
all_quotes = '"', "'", '"""', "'''"
if fstring_prefixes:
pseudo_extra_pool.append(FStringStart + group(*all_quotes))
PseudoExtras = group(r'\\(?:\r\n?|\n)|\Z', *pseudo_extra_pool)
PseudoToken = group(Whitespace, capture=True) + \
group(PseudoExtras, Number, Funny, ContStr, Name, capture=True)
@@ -192,18 +229,24 @@ def _create_token_collection(version_info):
# including the opening quotes.
single_quoted = set()
triple_quoted = set()
fstring_pattern_map = {}
for t in possible_prefixes:
for p in (t + '"', t + "'"):
single_quoted.add(p)
for p in (t + '"""', t + "'''"):
triple_quoted.add(p)
for quote in '"', "'":
single_quoted.add(t + quote)
for quote in '"""', "'''":
triple_quoted.add(t + quote)
for t in fstring_prefixes:
for quote in all_quotes:
fstring_pattern_map[t + quote] = quote
ALWAYS_BREAK_TOKENS = (';', 'import', 'class', 'def', 'try', 'except',
'finally', 'while', 'with', 'return')
pseudo_token_compiled = _compile(PseudoToken)
return TokenCollection(
pseudo_token_compiled, single_quoted, triple_quoted, endpats,
ALWAYS_BREAK_TOKENS
whitespace, fstring_pattern_map, ALWAYS_BREAK_TOKENS
)
@@ -218,12 +261,92 @@ class Token(namedtuple('Token', ['type', 'string', 'start_pos', 'prefix'])):
class PythonToken(Token):
def _get_type_name(self, exact=True):
return tok_name[self.type]
def __repr__(self):
return ('TokenInfo(type=%s, string=%r, start=%r, prefix=%r)' %
self._replace(type=self._get_type_name()))
return ('TokenInfo(type=%s, string=%r, start_pos=%r, prefix=%r)' %
self._replace(type=self.type.name))
class FStringNode(object):
def __init__(self, quote):
self.quote = quote
self.parentheses_count = 0
self.previous_lines = ''
self.last_string_start_pos = None
# In the syntax there can be multiple format_spec's nested:
# {x:{y:3}}
self.format_spec_count = 0
def open_parentheses(self, character):
self.parentheses_count += 1
def close_parentheses(self, character):
self.parentheses_count -= 1
if self.parentheses_count == 0:
# No parentheses means that the format spec is also finished.
self.format_spec_count = 0
def allow_multiline(self):
return len(self.quote) == 3
def is_in_expr(self):
return self.parentheses_count > self.format_spec_count
def is_in_format_spec(self):
return not self.is_in_expr() and self.format_spec_count
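A tiny driver that mimics what the tokenizer does while scanning f"{x:{y}}", assuming FStringNode stays importable from parso.python.tokenize:
from parso.python.tokenize import FStringNode
node = FStringNode('"')
node.open_parentheses('{')       # the outer expression is opened
assert node.is_in_expr()
node.format_spec_count += 1      # what the tokenizer does when it hits the ':'
assert node.is_in_format_spec()
node.open_parentheses('{')       # nested expression inside the format spec
assert node.is_in_expr()
node.close_parentheses('}')
assert node.is_in_format_spec()
node.close_parentheses('}')      # closing the outer brace also resets the spec
assert not node.is_in_expr() and not node.is_in_format_spec()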
def _close_fstring_if_necessary(fstring_stack, string, start_pos, additional_prefix):
for fstring_stack_index, node in enumerate(fstring_stack):
if string.startswith(node.quote):
token = PythonToken(
FSTRING_END,
node.quote,
start_pos,
prefix=additional_prefix,
)
additional_prefix = ''
assert not node.previous_lines
del fstring_stack[fstring_stack_index:]
return token, '', len(node.quote)
return None, additional_prefix, 0
def _find_fstring_string(endpats, fstring_stack, line, lnum, pos):
tos = fstring_stack[-1]
allow_multiline = tos.allow_multiline()
if tos.is_in_format_spec():
if allow_multiline:
regex = fstring_format_spec_multi_line
else:
regex = fstring_format_spec_single_line
else:
if allow_multiline:
regex = fstring_string_multi_line
else:
regex = fstring_string_single_line
match = regex.match(line, pos)
if match is None:
return tos.previous_lines, pos
if not tos.previous_lines:
tos.last_string_start_pos = (lnum, pos)
string = match.group(0)
for fstring_stack_node in fstring_stack:
end_match = endpats[fstring_stack_node.quote].match(string)
if end_match is not None:
string = end_match.group(0)[:-len(fstring_stack_node.quote)]
new_pos = pos
new_pos += len(string)
if allow_multiline and (string.endswith('\n') or string.endswith('\r')):
tos.previous_lines += string
string = ''
else:
string = tos.previous_lines + string
return string, new_pos
def tokenize(code, version_info, start_pos=(1, 0)):
@@ -232,6 +355,18 @@ def tokenize(code, version_info, start_pos=(1, 0)):
return tokenize_lines(lines, version_info, start_pos=start_pos)
def _print_tokens(func):
"""
A small helper function to help debug the tokenize_lines function.
"""
def wrapper(*args, **kwargs):
for token in func(*args, **kwargs):
yield token
return wrapper
# @_print_tokens
def tokenize_lines(lines, version_info, start_pos=(1, 0)):
"""
A heavily modified Python standard library tokenizer.
@@ -240,7 +375,16 @@ def tokenize_lines(lines, version_info, start_pos=(1, 0)):
token. This idea comes from lib2to3. The prefix contains all information
that is irrelevant for the parser like newlines in parentheses or comments.
"""
pseudo_token, single_quoted, triple_quoted, endpats, always_break_tokens, = \
def dedent_if_necessary(start):
while start < indents[-1]:
if start > indents[-2]:
yield PythonToken(ERROR_DEDENT, '', (lnum, 0), '')
break
yield PythonToken(DEDENT, '', spos, '')
indents.pop()
pseudo_token, single_quoted, triple_quoted, endpats, whitespace, \
fstring_pattern_map, always_break_tokens, = \
_get_token_collection(version_info)
paren_level = 0 # count parentheses
indents = [0]
@@ -257,6 +401,7 @@ def tokenize_lines(lines, version_info, start_pos=(1, 0)):
additional_prefix = ''
first = True
lnum = start_pos[0] - 1
fstring_stack = []
for line in lines: # loop over lines in stream
lnum += 1
pos = 0
@@ -278,7 +423,9 @@ def tokenize_lines(lines, version_info, start_pos=(1, 0)):
endmatch = endprog.match(line)
if endmatch:
pos = endmatch.end(0)
yield PythonToken(STRING, contstr + line[:pos], contstr_start, prefix)
yield PythonToken(
STRING, contstr + line[:pos],
contstr_start, prefix)
contstr = ''
contline = None
else:
@@ -287,14 +434,50 @@ def tokenize_lines(lines, version_info, start_pos=(1, 0)):
continue
while pos < max:
if fstring_stack:
tos = fstring_stack[-1]
if not tos.is_in_expr():
string, pos = _find_fstring_string(endpats, fstring_stack, line, lnum, pos)
if string:
yield PythonToken(
FSTRING_STRING, string,
tos.last_string_start_pos,
# Never has a prefix because it can start anywhere and
# include whitespace.
prefix=''
)
tos.previous_lines = ''
continue
if pos == max:
break
rest = line[pos:]
fstring_end_token, additional_prefix, quote_length = _close_fstring_if_necessary(
fstring_stack,
rest,
(lnum, pos),
additional_prefix,
)
pos += quote_length
if fstring_end_token is not None:
yield fstring_end_token
continue
pseudomatch = pseudo_token.match(line, pos)
if not pseudomatch: # scan for tokens
txt = line[pos:]
if txt.endswith('\n'):
new_line = True
yield PythonToken(ERRORTOKEN, txt, (lnum, pos), additional_prefix)
match = whitespace.match(line, pos)
if pos == 0:
for t in dedent_if_necessary(match.end()):
yield t
pos = match.end()
new_line = False
yield PythonToken(
ERRORTOKEN, line[pos], (lnum, pos),
additional_prefix + match.group(0)
)
additional_prefix = ''
break
pos += 1
continue
prefix = additional_prefix + pseudomatch.group(1)
additional_prefix = ''
@@ -309,28 +492,31 @@ def tokenize_lines(lines, version_info, start_pos=(1, 0)):
break
initial = token[0]
if new_line and initial not in '\r\n#':
if new_line and initial not in '\r\n\\#':
new_line = False
if paren_level == 0:
if paren_level == 0 and not fstring_stack:
i = 0
indent_start = start
while line[i] == '\f':
i += 1
start -= 1
if start > indents[-1]:
# TODO don't we need to change spos as well?
indent_start -= 1
if indent_start > indents[-1]:
yield PythonToken(INDENT, '', spos, '')
indents.append(start)
while start < indents[-1]:
if start > indents[-2]:
yield PythonToken(ERROR_DEDENT, '', (lnum, 0), '')
break
yield PythonToken(DEDENT, '', spos, '')
indents.pop()
indents.append(indent_start)
for t in dedent_if_necessary(indent_start):
yield t
if (initial in numchars or # ordinary number
(initial == '.' and token != '.' and token != '...')):
yield PythonToken(NUMBER, token, spos, prefix)
elif initial in '\r\n':
if not new_line and paren_level == 0:
if any(not f.allow_multiline() for f in fstring_stack):
# Would use fstring_stack.clear, but that's not available
# in Python 2.
fstring_stack[:] = []
if not new_line and paren_level == 0 and not fstring_stack:
yield PythonToken(NEWLINE, token, spos, prefix)
else:
additional_prefix = prefix + token
@@ -350,10 +536,23 @@ def tokenize_lines(lines, version_info, start_pos=(1, 0)):
contstr = line[start:]
contline = line
break
# Check up to the first 3 chars of the token to see if
# they're in the single_quoted set. If so, they start
# a string.
# We're using the first 3, because we're looking for
# "rb'" (for example) at the start of the token. If
# we switch to longer prefixes, this needs to be
# adjusted.
# Note that initial == token[:1].
# Also note that single quote checking must come after
# triple quote checking (above).
elif initial in single_quoted or \
token[:2] in single_quoted or \
token[:3] in single_quoted:
if token[-1] == '\n': # continued string
if token[-1] in '\r\n': # continued string
# This means that a single quoted string ends with a
# backslash and is continued.
contstr_start = lnum, start
endprog = (endpats.get(initial) or endpats.get(token[1])
or endpats.get(token[2]))
@@ -362,37 +561,48 @@ def tokenize_lines(lines, version_info, start_pos=(1, 0)):
break
else: # ordinary string
yield PythonToken(STRING, token, spos, prefix)
elif token in fstring_pattern_map: # The start of an fstring.
fstring_stack.append(FStringNode(fstring_pattern_map[token]))
yield PythonToken(FSTRING_START, token, spos, prefix)
elif is_identifier(initial): # ordinary name
if token in always_break_tokens:
fstring_stack[:] = []
paren_level = 0
while True:
indent = indents.pop()
if indent > start:
yield PythonToken(DEDENT, '', spos, '')
else:
indents.append(indent)
break
# We only want to dedent if the token is on a new line.
if re.match(r'[ \f\t]*$', line[:start]):
while True:
indent = indents.pop()
if indent > start:
yield PythonToken(DEDENT, '', spos, '')
else:
indents.append(indent)
break
yield PythonToken(NAME, token, spos, prefix)
elif initial == '\\' and line[start:] in ('\\\n', '\\\r\n'): # continued stmt
elif initial == '\\' and line[start:] in ('\\\n', '\\\r\n', '\\\r'): # continued stmt
additional_prefix += prefix + line[start:]
break
else:
if token in '([{':
paren_level += 1
if fstring_stack:
fstring_stack[-1].open_parentheses(token)
else:
paren_level += 1
elif token in ')]}':
paren_level -= 1
if fstring_stack:
fstring_stack[-1].close_parentheses(token)
else:
if paren_level:
paren_level -= 1
elif token == ':' and fstring_stack \
and fstring_stack[-1].parentheses_count \
- fstring_stack[-1].format_spec_count == 1:
fstring_stack[-1].format_spec_count += 1
try:
# This check is needed in any case to check if it's a valid
# operator or just some random unicode character.
typ = opmap[token]
except KeyError:
typ = ERRORTOKEN
yield PythonToken(typ, token, spos, prefix)
yield PythonToken(OP, token, spos, prefix)
if contstr:
yield PythonToken(ERRORTOKEN, contstr, contstr_start, prefix)
if contstr.endswith('\n'):
if contstr.endswith('\n') or contstr.endswith('\r'):
new_line = True
end_pos = lnum, max

24
parso/python/tokenize.pyi Normal file
View File

@@ -0,0 +1,24 @@
from typing import Generator, Iterable, NamedTuple, Tuple
from parso.python.token import TokenType
from parso.utils import PythonVersionInfo
class Token(NamedTuple):
type: TokenType
string: str
start_pos: Tuple[int, int]
prefix: str
@property
def end_pos(self) -> Tuple[int, int]: ...
class PythonToken(Token):
def __repr__(self) -> str: ...
def tokenize(
code: str, version_info: PythonVersionInfo, start_pos: Tuple[int, int] = (1, 0)
) -> Generator[PythonToken, None, None]: ...
def tokenize_lines(
lines: Iterable[str],
version_info: PythonVersionInfo,
start_pos: Tuple[int, int] = (1, 0),
) -> Generator[PythonToken, None, None]: ...
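For reference, a short usage sketch matching the stub above; the exact token stream is illustrative:
from parso.python.tokenize import tokenize
from parso.utils import parse_version_string
for token in tokenize('f"hi {name}"\n', parse_version_string('3.6')):
    # token.type is now a TokenType (e.g. FSTRING_START, OP, NAME), not an int
    print(token.type, repr(token.string), token.start_pos, repr(token.prefix))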

View File

@@ -43,24 +43,25 @@ Parser Tree Classes
"""
import re
from collections import Mapping
from parso._compatibility import utf8_repr, unicode
from parso.tree import Node, BaseNode, Leaf, ErrorNode, ErrorLeaf, \
search_ancestor
from parso.python.prefix import split_prefix
from parso.utils import split_lines
_FLOW_CONTAINERS = set(['if_stmt', 'while_stmt', 'for_stmt', 'try_stmt',
'with_stmt', 'async_stmt', 'suite'])
_RETURN_STMT_CONTAINERS = set(['suite', 'simple_stmt']) | _FLOW_CONTAINERS
_FUNC_CONTAINERS = set(['suite', 'simple_stmt', 'decorated']) | _FLOW_CONTAINERS
_GET_DEFINITION_TYPES = set([
'expr_stmt', 'comp_for', 'with_stmt', 'for_stmt', 'import_name',
'expr_stmt', 'sync_comp_for', 'with_stmt', 'for_stmt', 'import_name',
'import_from', 'param'
])
_IMPORTS = set(['import_name', 'import_from'])
class DocstringMixin(object):
__slots__ = ()
@@ -125,15 +126,16 @@ class PythonLeaf(PythonMixin, Leaf):
# indent error leafs somehow? No idea how, though.
previous_leaf = self.get_previous_leaf()
if previous_leaf is not None and previous_leaf.type == 'error_leaf' \
and previous_leaf.original_type in ('indent', 'error_dedent'):
and previous_leaf.token_type in ('INDENT', 'DEDENT', 'ERROR_DEDENT'):
previous_leaf = previous_leaf.get_previous_leaf()
if previous_leaf is None:
return self.line - self.prefix.count('\n'), 0 # It's the first leaf.
if previous_leaf is None: # It's the first leaf.
lines = split_lines(self.prefix)
# + 1 is needed because split_lines always returns at least [''].
return self.line - len(lines) + 1, 0 # It's the first leaf.
return previous_leaf.end_pos
class _LeafWithoutNewlines(PythonLeaf):
"""
Simply here to optimize performance.
@@ -166,6 +168,12 @@ class EndMarker(_LeafWithoutNewlines):
__slots__ = ()
type = 'endmarker'
@utf8_repr
def __repr__(self):
return "<%s: prefix=%s end_pos=%s>" % (
type(self).__name__, repr(self.prefix), self.end_pos
)
class Newline(PythonLeaf):
"""Contains NEWLINE and ENDMARKER tokens."""
@@ -235,7 +243,6 @@ class Name(_LeafWithoutNewlines):
return None
class Literal(PythonLeaf):
__slots__ = ()
@@ -251,7 +258,7 @@ class String(Literal):
@property
def string_prefix(self):
return re.match('\w*(?=[\'"])', self.value).group(0)
return re.match(r'\w*(?=[\'"])', self.value).group(0)
def _get_payload(self):
match = re.search(
@@ -262,6 +269,33 @@ class String(Literal):
return match.group(2)[:-len(match.group(1))]
class FStringString(PythonLeaf):
"""
f-strings contain f-string expressions and normal python strings. These are
the string parts of f-strings.
"""
type = 'fstring_string'
__slots__ = ()
class FStringStart(PythonLeaf):
"""
f-strings contain f-string expressions and normal python strings. These are
the start tokens (prefix plus opening quotes) of f-strings.
"""
type = 'fstring_start'
__slots__ = ()
class FStringEnd(PythonLeaf):
"""
f-strings contain f-string expressions and normal python strings. These are
the end tokens (closing quotes) of f-strings.
"""
type = 'fstring_end'
__slots__ = ()
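A quick sketch of where these leaves end up, mirroring the f-string tests further below; the listed child types are the expected shape, not something asserted by this change:
import parso
grammar = parso.load_grammar(version='3.6')
module = grammar.parse('f"""hi {name}!"""', error_recovery=False)
fstring = module.children[0]
print(fstring.type)                                # 'fstring'
print([child.type for child in fstring.children])  # roughly: fstring_start,
                                                   # fstring_string, fstring_expr,
                                                   # fstring_string, fstring_end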
class _StringComparisonMixin(object):
def __eq__(self, other):
"""
@@ -409,7 +443,7 @@ class Module(Scope):
recurse(child)
recurse(self)
self._used_names = dct
self._used_names = UsedNamesMapping(dct)
return self._used_names
@@ -433,6 +467,9 @@ class ClassOrFunc(Scope):
:rtype: list of :class:`Decorator`
"""
decorated = self.parent
if decorated.type == 'async_funcdef':
decorated = decorated.parent
if decorated.type == 'decorated':
if decorated.children[0].type == 'decorators':
return decorated.children[0].children
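A sketch of the fixed lookup; the child indices below are assumptions about the usual parso tree shape for a decorated async function:
import parso
module = parso.parse('@decorator\nasync def foo():\n    pass\n', version='3.8')
decorated = module.children[0]          # assumed: the 'decorated' node
async_funcdef = decorated.children[1]   # assumed: the 'async_funcdef' wrapper
funcdef = async_funcdef.children[1]     # the wrapped 'funcdef'
print([d.children[1].value for d in funcdef.get_decorators()])  # expected: ['decorator']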
@@ -509,8 +546,11 @@ def _create_params(parent, argslist_list):
if child is None or child == ',':
param_children = children[start:end]
if param_children: # Could as well be comma and then end.
if param_children[0] == '*' and param_children[1] == ',' \
or check_python2_nested_param(param_children[0]):
if param_children[0] == '*' \
and (len(param_children) == 1
or param_children[1] == ',') \
or check_python2_nested_param(param_children[0]) \
or param_children[0] == '/':
for p in param_children:
p.parent = parent
new_children += param_children
@@ -626,6 +666,7 @@ class Function(ClassOrFunc):
except IndexError:
return None
class Lambda(Function):
"""
Lambdas are basically trimmed functions, so give it the same interface.
@@ -933,7 +974,7 @@ class ImportName(Import):
class KeywordStatement(PythonBaseNode):
"""
For the following statements: `assert`, `del`, `global`, `nonlocal`,
`raise`, `return`, `yield`, `return`, `yield`.
`raise`, `return`, `yield`.
`pass`, `continue` and `break` are not in there, because they are just
simple keywords and the parser reduces them to plain keywords.
@@ -1122,6 +1163,13 @@ class Param(PythonBaseNode):
index -= 2
except ValueError:
pass
try:
keyword_only_index = self.parent.children.index('/')
if index > keyword_only_index:
# Skip the ` /, `
index -= 2
except ValueError:
pass
return index - 1
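A short sketch against the 3.8 grammar added in this release; the printed values are expectations, not guarantees:
import parso
module = parso.parse('def f(a, b, /, c):\n    pass\n', version='3.8')
func = next(module.iter_funcdefs())
params = func.get_params()
print([p.name.value for p in params])              # expected: ['a', 'b', 'c'] - '/' is skipped
print([p.get_positional_index() for p in params])  # expected: [0, 1, 2] - the slash does not count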
def get_parent_function(self):
@@ -1153,8 +1201,8 @@ class Param(PythonBaseNode):
return '<%s: %s>' % (type(self).__name__, str(self._tfpdef()) + default)
class CompFor(PythonBaseNode):
type = 'comp_for'
class SyncCompFor(PythonBaseNode):
type = 'sync_comp_for'
__slots__ = ()
def get_defined_names(self):
@@ -1162,4 +1210,33 @@ class CompFor(PythonBaseNode):
Returns a list of `Name` that the comprehension defines.
"""
# allow async for
return _defined_names(self.children[self.children.index('for') + 1])
return _defined_names(self.children[1])
# This is simply here so an older Jedi version can work with this new parso
# version. Can be deleted in the next release.
CompFor = SyncCompFor
class UsedNamesMapping(Mapping):
"""
This class exists for the sole purpose of creating an immutable dict.
"""
def __init__(self, dct):
self._dict = dct
def __getitem__(self, key):
return self._dict[key]
def __len__(self):
return len(self._dict)
def __iter__(self):
return iter(self._dict)
def __hash__(self):
return id(self)
def __eq__(self, other):
# Comparing these dicts does not make sense.
return self is other
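In practice this is the object Module.get_used_names() now returns; a quick sketch:
import parso
module = parso.parse('foo = 1\nfoo\n')
used_names = module.get_used_names()
print(sorted(used_names))      # ['foo']
print(len(used_names['foo']))  # 2 - the definition and the later reference
hash(used_names)               # works now; a plain dict would raise TypeError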

View File

@@ -1,5 +1,7 @@
from abc import abstractmethod, abstractproperty
from parso._compatibility import utf8_repr, encoding, py_version
from parso.utils import split_lines
def search_ancestor(node, *node_types):
@@ -55,7 +57,6 @@ class NodeOrLeaf(object):
Returns the node immediately preceding this node in this parent's
children list. If this node does not have a previous sibling, it is
None.
None.
"""
# Can't use index(); we need to test by identity
for i, child in enumerate(self.parent.children):
@@ -194,7 +195,9 @@ class Leaf(NodeOrLeaf):
def get_start_pos_of_prefix(self):
previous_leaf = self.get_previous_leaf()
if previous_leaf is None:
return self.line - self.prefix.count('\n'), 0 # It's the first leaf.
lines = split_lines(self.prefix)
# + 1 is needed because split_lines always returns at least [''].
return self.line - len(lines) + 1, 0 # It's the first leaf.
return previous_leaf.end_pos
def get_first_leaf(self):
@@ -211,7 +214,7 @@ class Leaf(NodeOrLeaf):
@property
def end_pos(self):
lines = self.value.split('\n')
lines = split_lines(self.value)
end_pos_line = self.line + len(lines) - 1
# Check for multiline token
if self.line == end_pos_line:
@@ -230,6 +233,7 @@ class Leaf(NodeOrLeaf):
class TypedLeaf(Leaf):
__slots__ = ('type',)
def __init__(self, type, value, start_pos, prefix=''):
super(TypedLeaf, self).__init__(value, start_pos, prefix)
self.type = type
@@ -244,8 +248,6 @@ class BaseNode(NodeOrLeaf):
type = None
def __init__(self, children):
for c in children:
c.parent = self
self.children = children
"""
A list of :class:`NodeOrLeaf` child nodes.
@@ -318,7 +320,7 @@ class BaseNode(NodeOrLeaf):
@utf8_repr
def __repr__(self):
code = self.get_code().replace('\n', ' ').strip()
code = self.get_code().replace('\n', ' ').replace('\r', ' ').strip()
if not py_version >= 30:
code = code.encode(encoding, 'replace')
return "<%s: %s@%s,%s>" % \
@@ -339,7 +341,7 @@ class Node(BaseNode):
class ErrorNode(BaseNode):
"""
A node that containes valid nodes/leaves that we're follow by a token that
A node that contains valid nodes/leaves that are followed by a token that
was invalid. This basically means that the leaf after this node is where
Python would mark a syntax error.
"""
@@ -352,13 +354,13 @@ class ErrorLeaf(Leaf):
A leaf that is either completely invalid in a language (like `$` in Python)
or is invalid at that position. Like the star in `1 +* 1`.
"""
__slots__ = ('original_type',)
__slots__ = ('token_type',)
type = 'error_leaf'
def __init__(self, original_type, value, start_pos, prefix=''):
def __init__(self, token_type, value, start_pos, prefix=''):
super(ErrorLeaf, self).__init__(value, start_pos, prefix)
self.original_type = original_type
self.token_type = token_type
def __repr__(self):
return "<%s: %s:%s, %s>" % \
(type(self).__name__, self.original_type, repr(self.value), self.start_pos)
(type(self).__name__, self.token_type, repr(self.value), self.start_pos)
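The rename shows up on any recovered parse error; a small sketch that does not assert the exact token_type string:
import parso
module = parso.parse('a + ? + b')
error_leaf = module.children[1]
print(error_leaf.type)         # 'error_leaf'
print(repr(error_leaf.value))  # "'?'"
print(error_leaf.token_type)   # was `original_type` before this change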

View File

@@ -5,6 +5,20 @@ from ast import literal_eval
from parso._compatibility import unicode, total_ordering
# The following is a list of characters that are line breaks in str.splitlines, but
# not in Python. In Python only \r (Carriage Return, 0xD) and \n (Line Feed,
# 0xA) are allowed to split lines.
_NON_LINE_BREAKS = (
u'\v', # Vertical Tabulation 0xB
u'\f', # Form Feed 0xC
u'\x1C', # File Separator
u'\x1D', # Group Separator
u'\x1E', # Record Separator
u'\x85', # Next Line (NEL - Equivalent to CR+LF.
# Used to mark end-of-line on some IBM mainframes.)
u'\u2028', # Line Separator
u'\u2029', # Paragraph Separator
)
Version = namedtuple('Version', 'major, minor, micro')
@@ -26,8 +40,13 @@ def split_lines(string, keepends=False):
# We have to merge lines that were broken by form feed characters.
merge = []
for i, line in enumerate(lst):
if line.endswith('\f'):
merge.append(i)
try:
last_chr = line[-1]
except IndexError:
pass
else:
if last_chr in _NON_LINE_BREAKS:
merge.append(i)
for index in reversed(merge):
try:
@@ -41,11 +60,11 @@ def split_lines(string, keepends=False):
# The stdlib's implementation of the end is inconsistent when calling
# it with/without keepends. One time there's an empty string in the
# end, one time there's none.
if string.endswith('\n') or string == '':
if string.endswith('\n') or string.endswith('\r') or string == '':
lst.append('')
return lst
else:
return re.split('\n|\r\n', string)
return re.split(r'\n|\r\n|\r', string)
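A quick comparison with str.splitlines, following the comment block above:
from parso.utils import split_lines
print('a\fb\nc'.splitlines())  # ['a', 'b', 'c'] - form feed splits here
print(split_lines('a\fb\nc'))  # ['a\x0cb', 'c'] - but not in Python source
print(split_lines('a\r\n'))    # ['a', ''] - a trailing newline yields an empty last line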
def python_bytes_to_unicode(source, encoding='utf-8', errors='strict'):

29
parso/utils.pyi Normal file
View File

@@ -0,0 +1,29 @@
from typing import NamedTuple, Optional, Sequence, Union
class Version(NamedTuple):
major: int
minor: int
micro: int
def split_lines(string: str, keepends: bool = ...) -> Sequence[str]: ...
def python_bytes_to_unicode(
source: Union[str, bytes], encoding: str = ..., errors: str = ...
) -> str: ...
def version_info() -> Version:
"""
Returns a namedtuple of parso's version, similar to Python's
``sys.version_info``.
"""
...
class PythonVersionInfo(NamedTuple):
major: int
minor: int
def parse_version_string(version: Optional[str]) -> PythonVersionInfo:
"""
Checks for a valid version number (e.g. `3.2` or `2.7.1` or `3`) and
returns a corresponding version info that always contains just the major and
minor version (e.g. `3.2`).
"""
...
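A small usage sketch for the stub above; the printed reprs assume the NamedTuple fields shown:
from parso.utils import parse_version_string
info = parse_version_string('3.6')
print(info)                           # PythonVersionInfo(major=3, minor=6)
print(info >= (3, 6))                 # True - it compares like a plain tuple
print(parse_version_string('2.7.1'))  # PythonVersionInfo(major=2, minor=7) - micro is dropped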

View File

@@ -1,6 +1,8 @@
[pytest]
addopts = --doctest-modules
testpaths = parso test
# Ignore broken files in blackbox test directories
norecursedirs = .* docs scripts normalizer_issue_files build

View File

@@ -1,2 +1,12 @@
[bdist_wheel]
universal=1
[flake8]
max-line-length = 100
ignore =
# do not use bare 'except'
E722,
# don't know why this was ever even an option, 1+1 should be possible.
E226,
# line break before binary operator
W503,

View File

@@ -40,8 +40,16 @@ setup(name='parso',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Text Editors :: Integrated Development Environments (IDE)',
'Topic :: Utilities',
],
extras_require={
'testing': [
'pytest>=3.0.7',
'docopt',
],
},
)

View File

@@ -19,14 +19,6 @@ def build_nested(code, depth, base='def f():\n'):
FAILING_EXAMPLES = [
'1 +',
'?',
# Python/compile.c
dedent('''\
for a in [1]:
try:
pass
finally:
continue
'''), # 'continue' not supported inside 'finally' clause
'continue',
'break',
'return',
@@ -141,7 +133,7 @@ FAILING_EXAMPLES = [
# f-strings
'f"{}"',
'f"{\\}"',
r'f"{\}"',
'f"{\'\\\'}"',
'f"{#}"',
"f'{1!b}'",
@@ -154,7 +146,7 @@ FAILING_EXAMPLES = [
# Now nested parsing
"f'{continue}'",
"f'{1;1}'",
"f'{a=3}'",
"f'{a;}'",
"f'{b\"\" \"\"}'",
]
@@ -259,10 +251,6 @@ GLOBAL_NONLOCAL_ERROR = [
if sys.version_info >= (3, 6):
FAILING_EXAMPLES += GLOBAL_NONLOCAL_ERROR
FAILING_EXAMPLES += [
# Raises multiple errors in previous versions.
'async def foo():\n def nofoo():[x async for x in []]',
]
if sys.version_info >= (3, 5):
FAILING_EXAMPLES += [
# Raises different errors so just ignore them for now.
@@ -285,6 +273,14 @@ if sys.version_info >= (3,):
'b"ä"',
# combining strings and unicode is allowed in Python 2.
'"s" b""',
'"s" b"" ""',
'b"" "" b"" ""',
]
if sys.version_info >= (3, 6):
FAILING_EXAMPLES += [
# Same as above, but for f-strings.
'f"s" b""',
'b"s" f""',
]
if sys.version_info >= (2, 7):
# This is something that raises a different error in 2.6 than in the other
@@ -311,3 +307,15 @@ if sys.version_info[:2] <= (3, 4):
'a = *[1], 2',
'(*[1], 2)',
]
if sys.version_info[:2] < (3, 8):
FAILING_EXAMPLES += [
# Python/compile.c
dedent('''\
for a in [1]:
try:
pass
finally:
continue
'''), # 'continue' not supported inside 'finally' clause
]

290
test/fuzz_diff_parser.py Normal file
View File

@@ -0,0 +1,290 @@
"""
A script to find bugs in the diff parser.
This script is extremely useful if changes are made to the diff parser. By
running a few thousand iterations, we can assure that the diff parser is in
good shape.
Usage:
fuzz_diff_parser.py [--pdb|--ipdb] [-l] [-n=<nr>] [-x=<nr>] random [<path>]
fuzz_diff_parser.py [--pdb|--ipdb] [-l] redo [-o=<nr>] [-p]
fuzz_diff_parser.py -h | --help
Options:
-h --help Show this screen
-n, --maxtries=<nr> Maximum number of random tries [default: 1000]
-x, --changes=<nr> Number of changes to be made to a file per try [default: 5]
-l, --logging Prints all the logs
-o, --only-last=<nr> Only runs the last n iterations; Defaults to running all
-p, --print-code Print all test diffs
--pdb Launch pdb when error is raised
--ipdb Launch ipdb when error is raised
"""
from __future__ import print_function
import logging
import sys
import os
import random
import pickle
import parso
from parso.utils import split_lines
from test.test_diff_parser import _check_error_leaves_nodes
_latest_grammar = parso.load_grammar(version='3.8')
_python_reserved_strings = tuple(
# Keywords are usually only interesting in combination with spaces after
# them. We don't put a space before keywords, to avoid indentation errors.
s + (' ' if s.isalpha() else '')
for s in _latest_grammar._pgen_grammar.reserved_syntax_strings.keys()
)
_random_python_fragments = _python_reserved_strings + (
' ', '\t', '\n', '\r', '\f', 'f"', 'F"""', "fr'", "RF'''", '"', '"""', "'",
"'''", ';', ' some_random_word ', '\\', '#',
)
def find_python_files_in_tree(file_path):
if not os.path.isdir(file_path):
yield file_path
return
for root, dirnames, filenames in os.walk(file_path):
for name in filenames:
if name.endswith('.py'):
yield os.path.join(root, name)
def _print_copyable_lines(lines):
for line in lines:
line = repr(line)[1:-1]
if line.endswith(r'\n'):
line = line[:-2] + '\n'
print(line, end='')
def _get_first_error_start_pos_or_none(module):
error_leaf = _check_error_leaves_nodes(module)
return None if error_leaf is None else error_leaf.start_pos
class LineReplacement:
def __init__(self, line_nr, new_line):
self._line_nr = line_nr
self._new_line = new_line
def apply(self, code_lines):
# print(repr(self._new_line))
code_lines[self._line_nr] = self._new_line
class LineDeletion:
def __init__(self, line_nr):
self.line_nr = line_nr
def apply(self, code_lines):
del code_lines[self.line_nr]
class LineCopy:
def __init__(self, copy_line, insertion_line):
self._copy_line = copy_line
self._insertion_line = insertion_line
def apply(self, code_lines):
code_lines.insert(
self._insertion_line,
# Use some line from the file. This doesn't feel totally
# random, but for the diff parser it will feel like it.
code_lines[self._copy_line]
)
class FileModification:
@classmethod
def generate(cls, code_lines, change_count):
return cls(
list(cls._generate_line_modifications(code_lines, change_count)),
# work with changed trees more than with normal ones.
check_original=random.random() > 0.8,
)
@staticmethod
def _generate_line_modifications(lines, change_count):
def random_line(include_end=False):
return random.randint(0, len(lines) - (not include_end))
lines = list(lines)
for _ in range(change_count):
rand = random.randint(1, 4)
if rand == 1:
if len(lines) == 1:
# We cannot delete every line, that doesn't make sense to
# fuzz and it would be annoying to rewrite everything here.
continue
l = LineDeletion(random_line())
elif rand == 2:
# Copy / Insertion
# Make it possible to insert into the first and the last line
l = LineCopy(random_line(), random_line(include_end=True))
elif rand in (3, 4):
# Modify a line in some weird random ways.
line_nr = random_line()
line = lines[line_nr]
column = random.randint(0, len(line))
random_string = ''
for _ in range(random.randint(1, 3)):
if random.random() > 0.8:
# The lower characters cause way more issues.
unicode_range = 0x1f if random.randint(0, 1) else 0x3000
random_string += chr(random.randint(0, unicode_range))
else:
# These insertions let us understand how random
# keyword/operator insertions work. Theoretically this
# could also be done with unicode insertions, but the
# fuzzer is just way more effective here.
random_string += random.choice(_random_python_fragments)
if random.random() > 0.5:
# In this case we insert at a very random place that
# probably breaks syntax.
line = line[:column] + random_string + line[column:]
else:
# Here we have better chances to not break syntax, because
# we really replace the line with something that has
# indentation.
line = ' ' * random.randint(0, 12) + random_string + '\n'
l = LineReplacement(line_nr, line)
l.apply(lines)
yield l
def __init__(self, modification_list, check_original):
self._modification_list = modification_list
self._check_original = check_original
def _apply(self, code_lines):
changed_lines = list(code_lines)
for modification in self._modification_list:
modification.apply(changed_lines)
return changed_lines
def run(self, grammar, code_lines, print_code):
code = ''.join(code_lines)
modified_lines = self._apply(code_lines)
modified_code = ''.join(modified_lines)
if print_code:
if self._check_original:
print('Original:')
_print_copyable_lines(code_lines)
print('\nModified:')
_print_copyable_lines(modified_lines)
print()
if self._check_original:
m = grammar.parse(code, diff_cache=True)
start1 = _get_first_error_start_pos_or_none(m)
grammar.parse(modified_code, diff_cache=True)
if self._check_original:
# Also check if it's possible to "revert" the changes.
m = grammar.parse(code, diff_cache=True)
start2 = _get_first_error_start_pos_or_none(m)
assert start1 == start2, (start1, start2)
class FileTests:
def __init__(self, file_path, test_count, change_count):
self._path = file_path
with open(file_path) as f:
code = f.read()
self._code_lines = split_lines(code, keepends=True)
self._test_count = test_count
self._code_lines = self._code_lines
self._change_count = change_count
self._file_modifications = []
def _run(self, grammar, file_modifications, debugger, print_code=False):
try:
for i, fm in enumerate(file_modifications, 1):
fm.run(grammar, self._code_lines, print_code=print_code)
print('.', end='')
sys.stdout.flush()
print()
except Exception:
print("Issue in file: %s" % self._path)
if debugger:
einfo = sys.exc_info()
pdb = __import__(debugger)
pdb.post_mortem(einfo[2])
raise
def redo(self, grammar, debugger, only_last, print_code):
mods = self._file_modifications
if only_last is not None:
mods = mods[-only_last:]
self._run(grammar, mods, debugger, print_code=print_code)
def run(self, grammar, debugger):
def iterate():
for _ in range(self._test_count):
fm = FileModification.generate(self._code_lines, self._change_count)
self._file_modifications.append(fm)
yield fm
self._run(grammar, iterate(), debugger)
def main(arguments):
debugger = 'pdb' if arguments['--pdb'] else \
'ipdb' if arguments['--ipdb'] else None
redo_file = os.path.join(os.path.dirname(__file__), 'fuzz-redo.pickle')
if arguments['--logging']:
root = logging.getLogger()
root.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)
root.addHandler(ch)
grammar = parso.load_grammar()
parso.python.diff.DEBUG_DIFF_PARSER = True
if arguments['redo']:
with open(redo_file, 'rb') as f:
file_tests_obj = pickle.load(f)
only_last = arguments['--only-last'] and int(arguments['--only-last'])
file_tests_obj.redo(
grammar,
debugger,
only_last=only_last,
print_code=arguments['--print-code']
)
elif arguments['random']:
# A random file is used to do diff parser checks if no file is given.
# This helps us to find errors in a lot of different files.
file_paths = list(find_python_files_in_tree(arguments['<path>'] or '.'))
max_tries = int(arguments['--maxtries'])
tries = 0
try:
while tries < max_tries:
path = random.choice(file_paths)
print("Checking %s: %s tries" % (path, tries))
now_tries = min(1000, max_tries - tries)
file_tests_obj = FileTests(path, now_tries, int(arguments['--changes']))
file_tests_obj.run(grammar, debugger)
tries += now_tries
except Exception:
with open(redo_file, 'wb') as f:
pickle.dump(file_tests_obj, f)
raise
else:
raise NotImplementedError('Command is not implemented')
if __name__ == '__main__':
from docopt import docopt
arguments = docopt(__doc__)
main(arguments)

View File

@@ -10,6 +10,7 @@ from parso.cache import _NodeCacheItem, save_module, load_module, \
_get_hashed_path, parser_cache, _load_from_file_system, _save_to_file_system
from parso import load_grammar
from parso import cache
from parso import file_io
@pytest.fixture()
@@ -76,12 +77,13 @@ def test_modulepickling_simulate_deleted_cache(tmpdir):
path = tmpdir.dirname + '/some_path'
with open(path, 'w'):
pass
io = file_io.FileIO(path)
save_module(grammar._hashed, path, module, [])
assert load_module(grammar._hashed, path) == module
save_module(grammar._hashed, io, module, lines=[])
assert load_module(grammar._hashed, io) == module
unlink(_get_hashed_path(grammar._hashed, path))
parser_cache.clear()
cached2 = load_module(grammar._hashed, path)
cached2 = load_module(grammar._hashed, io)
assert cached2 is None

View File

@@ -1,14 +1,18 @@
# -*- coding: utf-8 -*-
from textwrap import dedent
import logging
import sys
import pytest
from parso.utils import split_lines
from parso import cache
from parso import load_grammar
from parso.python.diff import DiffParser
from parso.python.diff import DiffParser, _assert_valid_graph
from parso import parse
ANY = object()
def test_simple():
"""
@@ -21,7 +25,7 @@ def test_simple():
def _check_error_leaves_nodes(node):
if node.type in ('error_leaf', 'error_node'):
return True
return node
try:
children = node.children
@@ -29,23 +33,10 @@ def _check_error_leaves_nodes(node):
pass
else:
for child in children:
if _check_error_leaves_nodes(child):
return True
return False
def _assert_valid_graph(node):
"""
Checks if the parent/children relationship is correct.
"""
try:
children = node.children
except AttributeError:
return
for child in children:
assert child.parent == node
_assert_valid_graph(child)
x_node = _check_error_leaves_nodes(child)
if x_node is not None:
return x_node
return None
class Differ(object):
@@ -60,6 +51,8 @@ class Differ(object):
self.lines = split_lines(code, keepends=True)
self.module = parse(code, diff_cache=True, cache=True)
assert code == self.module.get_code()
_assert_valid_graph(self.module)
return self.module
def parse(self, code, copies=0, parsers=0, expect_error_leaves=False):
@@ -73,11 +66,15 @@ class Differ(object):
new_module = diff_parser.update(self.lines, lines)
self.lines = lines
assert code == new_module.get_code()
assert diff_parser._copy_count == copies
#assert diff_parser._parser_count == parsers
assert expect_error_leaves == _check_error_leaves_nodes(new_module)
_assert_valid_graph(new_module)
error_node = _check_error_leaves_nodes(new_module)
assert expect_error_leaves == (error_node is not None), error_node
if parsers is not ANY:
assert diff_parser._parser_count == parsers
if copies is not ANY:
assert diff_parser._copy_count == copies
return new_module
@@ -122,7 +119,7 @@ def test_positions(differ):
m = differ.parse('a\n\n', parsers=1)
assert m.end_pos == (3, 0)
m = differ.parse('a\n\n ', copies=1, parsers=1)
m = differ.parse('a\n\n ', copies=1, parsers=2)
assert m.end_pos == (3, 1)
m = differ.parse('a ', parsers=1)
assert m.end_pos == (1, 2)
@@ -138,7 +135,7 @@ def test_if_simple(differ):
differ.initialize(src + 'a')
differ.parse(src + else_ + "a", copies=0, parsers=1)
differ.parse(else_, parsers=1, expect_error_leaves=True)
differ.parse(else_, parsers=1, copies=1, expect_error_leaves=True)
differ.parse(src + else_, parsers=1)
@@ -208,7 +205,7 @@ def test_open_parentheses(differ):
differ.parse(new_code, parsers=1, expect_error_leaves=True)
new_code = 'a = 1\n' + new_code
differ.parse(new_code, copies=1, parsers=1, expect_error_leaves=True)
differ.parse(new_code, parsers=2, expect_error_leaves=True)
func += 'def other_func():\n pass\n'
differ.initialize('isinstance(\n' + func)
@@ -222,6 +219,7 @@ def test_open_parentheses_at_end(differ):
differ.initialize(code)
differ.parse(code, parsers=1, expect_error_leaves=True)
def test_backslash(differ):
src = dedent(r"""
a = 1\
@@ -255,7 +253,7 @@ def test_backslash(differ):
def test_full_copy(differ):
code = 'def foo(bar, baz):\n pass\n bar'
differ.initialize(code)
differ.parse(code, copies=1, parsers=1)
differ.parse(code, copies=1)
def test_wrong_whitespace(differ):
@@ -263,10 +261,10 @@ def test_wrong_whitespace(differ):
hello
'''
differ.initialize(code)
differ.parse(code + 'bar\n ', parsers=1)
differ.parse(code + 'bar\n ', parsers=3)
code += """abc(\npass\n """
differ.parse(code, parsers=1, copies=1, expect_error_leaves=True)
differ.parse(code, parsers=2, copies=1, expect_error_leaves=True)
def test_issues_with_error_leaves(differ):
@@ -367,7 +365,7 @@ def test_totally_wrong_whitespace(differ):
'''
differ.initialize(code1)
differ.parse(code2, parsers=3, copies=0, expect_error_leaves=True)
differ.parse(code2, parsers=4, copies=0, expect_error_leaves=True)
def test_node_insertion(differ):
@@ -466,6 +464,9 @@ def test_in_parentheses_newlines(differ):
b = 2""")
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=1)
def test_indentation_issue(differ):
code1 = dedent("""
@@ -483,4 +484,803 @@ def test_indentation_issue(differ):
""")
differ.initialize(code1)
differ.parse(code2, parsers=2)
differ.parse(code2, parsers=1)
def test_endmarker_newline(differ):
code1 = dedent('''\
docu = None
# some comment
result = codet
incomplete_dctassign = {
"module"
if "a":
x = 3 # asdf
''')
code2 = code1.replace('codet', 'coded')
differ.initialize(code1)
differ.parse(code2, parsers=2, copies=1, expect_error_leaves=True)
def test_newlines_at_end(differ):
differ.initialize('a\n\n')
differ.parse('a\n', copies=1)
def test_end_newline_with_decorator(differ):
code = dedent('''\
@staticmethod
def spam():
import json
json.l''')
differ.initialize(code)
module = differ.parse(code + '\n', copies=1, parsers=1)
decorated, endmarker = module.children
assert decorated.type == 'decorated'
decorator, func = decorated.children
suite = func.children[-1]
assert suite.type == 'suite'
newline, first_stmt, second_stmt = suite.children
assert first_stmt.get_code() == ' import json\n'
assert second_stmt.get_code() == ' json.l\n'
def test_invalid_to_valid_nodes(differ):
code1 = dedent('''\
def a():
foo = 3
def b():
la = 3
else:
la
return
foo
base
''')
code2 = dedent('''\
def a():
foo = 3
def b():
la = 3
if foo:
latte = 3
else:
la
return
foo
base
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=3)
def test_if_removal_and_reappearence(differ):
code1 = dedent('''\
la = 3
if foo:
latte = 3
else:
la
pass
''')
code2 = dedent('''\
la = 3
latte = 3
else:
la
pass
''')
code3 = dedent('''\
la = 3
if foo:
latte = 3
else:
la
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=4, expect_error_leaves=True)
differ.parse(code1, parsers=1, copies=1)
differ.parse(code3, parsers=1, copies=1)
def test_add_error_indentation(differ):
code = 'if x:\n 1\n'
differ.initialize(code)
differ.parse(code + ' 2\n', parsers=1, copies=0, expect_error_leaves=True)
def test_differing_docstrings(differ):
code1 = dedent('''\
def foobar(x, y):
1
return x
def bazbiz():
foobar()
lala
''')
code2 = dedent('''\
def foobar(x, y):
2
return x + y
def bazbiz():
z = foobar()
lala
''')
differ.initialize(code1)
differ.parse(code2, parsers=3, copies=1)
differ.parse(code1, parsers=3, copies=1)
def test_one_call_in_function_change(differ):
code1 = dedent('''\
def f(self):
mro = [self]
for a in something:
yield a
def g(self):
return C(
a=str,
b=self,
)
''')
code2 = dedent('''\
def f(self):
mro = [self]
def g(self):
return C(
a=str,
t
b=self,
)
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=1, expect_error_leaves=True)
differ.parse(code1, parsers=2, copies=1)
def test_function_deletion(differ):
code1 = dedent('''\
class C(list):
def f(self):
def iterate():
for x in b:
break
return list(iterate())
''')
code2 = dedent('''\
class C():
def f(self):
for x in b:
break
return list(iterate())
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=0, expect_error_leaves=True)
differ.parse(code1, parsers=1, copies=0)
def test_docstring_removal(differ):
code1 = dedent('''\
class E(Exception):
"""
1
2
3
"""
class S(object):
@property
def f(self):
return cmd
def __repr__(self):
return cmd2
''')
code2 = dedent('''\
class E(Exception):
"""
1
3
"""
class S(object):
@property
def f(self):
return cmd
return cmd2
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=2)
differ.parse(code1, parsers=2, copies=1)
def test_paren_in_strange_position(differ):
code1 = dedent('''\
class C:
""" ha """
def __init__(self, message):
self.message = message
''')
code2 = dedent('''\
class C:
""" ha """
)
def __init__(self, message):
self.message = message
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=2, expect_error_leaves=True)
differ.parse(code1, parsers=0, copies=2)
def insert_line_into_code(code, index, line):
lines = split_lines(code, keepends=True)
lines.insert(index, line)
return ''.join(lines)
def test_paren_before_docstring(differ):
code1 = dedent('''\
# comment
"""
The
"""
from parso import tree
from parso import python
''')
code2 = insert_line_into_code(code1, 1, ' ' * 16 + 'raise InternalParseError(\n')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=1, expect_error_leaves=True)
differ.parse(code1, parsers=2, copies=1)
def test_parentheses_before_method(differ):
code1 = dedent('''\
class A:
def a(self):
pass
class B:
def b(self):
if 1:
pass
''')
code2 = dedent('''\
class A:
def a(self):
pass
Exception.__init__(self, "x" %
def b(self):
if 1:
pass
''')
differ.initialize(code1)
differ.parse(code2, parsers=2, copies=1, expect_error_leaves=True)
differ.parse(code1, parsers=1, copies=1)
def test_indentation_issues(differ):
code1 = dedent('''\
class C:
def f():
1
if 2:
return 3
def g():
to_be_removed
pass
''')
code2 = dedent('''\
class C:
def f():
1
``something``, very ``weird``).
if 2:
return 3
def g():
to_be_removed
pass
''')
code3 = dedent('''\
class C:
def f():
1
if 2:
return 3
def g():
pass
''')
differ.initialize(code1)
differ.parse(code2, parsers=2, copies=2, expect_error_leaves=True)
differ.parse(code1, copies=2)
differ.parse(code3, parsers=2, copies=1)
differ.parse(code1, parsers=1, copies=2)
def test_error_dedent_issues(differ):
code1 = dedent('''\
while True:
try:
1
except KeyError:
if 2:
3
except IndexError:
4
5
''')
code2 = dedent('''\
while True:
try:
except KeyError:
1
except KeyError:
if 2:
3
except IndexError:
4
something_inserted
5
''')
differ.initialize(code1)
differ.parse(code2, parsers=6, copies=2, expect_error_leaves=True)
differ.parse(code1, parsers=1, copies=0)
def test_random_text_insertion(differ):
code1 = dedent('''\
class C:
def f():
return node
def g():
try:
1
except KeyError:
2
''')
code2 = dedent('''\
class C:
def f():
return node
Some'random text: yeah
for push in plan.dfa_pushes:
def g():
try:
1
except KeyError:
2
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, copies=1, expect_error_leaves=True)
differ.parse(code1, parsers=1, copies=1)
def test_many_nested_ifs(differ):
code1 = dedent('''\
class C:
def f(self):
def iterate():
if 1:
yield t
else:
yield
return
def g():
3
''')
code2 = dedent('''\
def f(self):
def iterate():
if 1:
yield t
hahahaha
if 2:
else:
yield
return
def g():
3
''')
differ.initialize(code1)
differ.parse(code2, parsers=2, copies=1, expect_error_leaves=True)
differ.parse(code1, parsers=1, copies=1)
@pytest.mark.skipif(sys.version_info < (3, 5), reason="Async starts working in 3.5")
@pytest.mark.parametrize('prefix', ['', 'async '])
def test_with_and_funcdef_in_call(differ, prefix):
code1 = prefix + dedent('''\
with x:
la = C(
a=1,
b=2,
c=3,
)
''')
code2 = insert_line_into_code(code1, 3, 'def y(self, args):\n')
differ.initialize(code1)
differ.parse(code2, parsers=3, expect_error_leaves=True)
differ.parse(code1, parsers=1)
def test_wrong_backslash(differ):
code1 = dedent('''\
def y():
1
for x in y:
continue
''')
code2 = insert_line_into_code(code1, 3, '\\.whl$\n')
differ.initialize(code1)
differ.parse(code2, parsers=2, copies=2, expect_error_leaves=True)
differ.parse(code1, parsers=1, copies=1)
def test_comment_change(differ):
differ.initialize('')
def test_random_unicode_characters(differ):
"""
Those issues were all found with the fuzzer.
"""
differ.initialize('')
differ.parse(u'\x1dĔBϞɛˁşʑ˳˻ȣſéÎ\x90̕ȟòwʘ\x1dĔBϞɛˁşʑ˳˻ȣſéÎ', parsers=1, expect_error_leaves=True)
differ.parse(u'\r\r', parsers=1)
differ.parse(u"˟Ę\x05À\r rúƣ@\x8a\x15r()\n", parsers=1, expect_error_leaves=True)
differ.parse(u'a\ntaǁ\rGĒōns__\n\nb', parsers=1)
s = ' if not (self, "_fi\x02\x0e\x08\n\nle"):'
differ.parse(s, parsers=1, expect_error_leaves=True)
differ.parse('')
differ.parse(s + '\n', parsers=1, expect_error_leaves=True)
differ.parse(u' result = (\r\f\x17\t\x11res)', parsers=2, expect_error_leaves=True)
differ.parse('')
differ.parse(' a( # xx\ndef', parsers=2, expect_error_leaves=True)
@pytest.mark.skipif(sys.version_info < (2, 7), reason="No set literals in Python 2.6")
def test_dedent_end_positions(differ):
code1 = dedent('''\
if 1:
if b:
2
c = {
5}
''')
code2 = dedent('''\
if 1:
if ⌟ഒᜈྡྷṭb:
2
'l': ''}
c = {
5}
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, expect_error_leaves=True)
differ.parse(code1, parsers=1)
def test_special_no_newline_ending(differ):
code1 = dedent('''\
1
''')
code2 = dedent('''\
1
is ''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True)
differ.parse(code1, copies=1, parsers=0)
def test_random_character_insertion(differ):
code1 = dedent('''\
def create(self):
1
if self.path is not None:
return
# 3
# 4
''')
code2 = dedent('''\
def create(self):
1
if 2:
x return
# 3
# 4
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=3, expect_error_leaves=True)
differ.parse(code1, copies=1, parsers=1)
def test_import_opening_bracket(differ):
code1 = dedent('''\
1
2
from bubu import (X,
''')
code2 = dedent('''\
11
2
from bubu import (X,
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=2, expect_error_leaves=True)
differ.parse(code1, copies=1, parsers=2, expect_error_leaves=True)
def test_opening_bracket_at_end(differ):
code1 = dedent('''\
class C:
1
[
''')
code2 = dedent('''\
3
class C:
1
[
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=2, expect_error_leaves=True)
differ.parse(code1, copies=1, parsers=1, expect_error_leaves=True)
def test_all_sorts_of_indentation(differ):
code1 = dedent('''\
class C:
1
def f():
'same'
if foo:
a = b
end
''')
code2 = dedent('''\
class C:
1
def f(yield await %|(
'same'
\x02\x06\x0f\x1c\x11
if foo:
a = b
end
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=4, expect_error_leaves=True)
differ.parse(code1, copies=1, parsers=3)
code3 = dedent('''\
if 1:
a
b
c
d
\x00
''')
differ.parse(code3, parsers=2, expect_error_leaves=True)
differ.parse('')
def test_dont_copy_dedents_in_beginning(differ):
code1 = dedent('''\
a
4
''')
code2 = dedent('''\
1
2
3
4
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True)
differ.parse(code1, parsers=2)
def test_dont_copy_error_leaves(differ):
code1 = dedent('''\
def f(n):
x
if 2:
3
''')
code2 = dedent('''\
def f(n):
def if 1:
indent
x
if 2:
3
''')
differ.initialize(code1)
differ.parse(code2, parsers=1, expect_error_leaves=True)
differ.parse(code1, parsers=2)
def test_error_dedent_in_between(differ):
code1 = dedent('''\
class C:
def f():
a
if something:
x
z
''')
code2 = dedent('''\
class C:
def f():
a
dedent
if other_thing:
b
if something:
x
z
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True)
differ.parse(code1, copies=1, parsers=2)
def test_some_other_indentation_issues(differ):
code1 = dedent('''\
class C:
x
def f():
""
copied
a
''')
code2 = dedent('''\
try:
de
a
b
c
d
def f():
""
copied
a
''')
differ.initialize(code1)
differ.parse(code2, copies=2, parsers=1, expect_error_leaves=True)
differ.parse(code1, copies=2, parsers=2)
def test_open_bracket_case1(differ):
code1 = dedent('''\
class C:
1
2 # ha
''')
code2 = insert_line_into_code(code1, 2, ' [str\n')
code3 = insert_line_into_code(code2, 4, ' str\n')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True)
differ.parse(code3, copies=1, parsers=1, expect_error_leaves=True)
differ.parse(code1, copies=1, parsers=1)
def test_open_bracket_case2(differ):
code1 = dedent('''\
class C:
def f(self):
(
b
c
def g(self):
d
''')
code2 = dedent('''\
class C:
def f(self):
(
b
c
self.
def g(self):
d
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=2, expect_error_leaves=True)
differ.parse(code1, copies=2, parsers=0, expect_error_leaves=True)
def test_some_weird_removals(differ):
code1 = dedent('''\
class C:
1
''')
code2 = dedent('''\
class C:
1
@property
A
return
# x
omega
''')
code3 = dedent('''\
class C:
1
;
omega
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True)
differ.parse(code3, copies=1, parsers=2, expect_error_leaves=True)
differ.parse(code1, copies=1)
@pytest.mark.skipif(sys.version_info < (3, 5), reason="Async starts working in 3.5")
def test_async_copy(differ):
code1 = dedent('''\
async def main():
x = 3
print(
''')
code2 = dedent('''\
async def main():
x = 3
print()
''')
differ.initialize(code1)
differ.parse(code2, copies=1, parsers=1)
differ.parse(code1, copies=1, parsers=1, expect_error_leaves=True)

View File

@@ -0,0 +1,85 @@
from parso import parse, load_grammar


def test_with_stmt():
    module = parse('with x: f.\na')
    assert module.children[0].type == 'with_stmt'
    w, with_item, colon, f = module.children[0].children
    assert f.type == 'error_node'
    assert f.get_code(include_prefix=False) == 'f.'

    assert module.children[2].type == 'name'


def test_one_line_function(each_version):
    module = parse('def x(): f.', version=each_version)
    assert module.children[0].type == 'funcdef'
    def_, name, parameters, colon, f = module.children[0].children
    assert f.type == 'error_node'

    module = parse('def x(a:', version=each_version)
    func = module.children[0]
    assert func.type == 'error_node'
    if each_version.startswith('2'):
        assert func.children[-1].value == 'a'
    else:
        assert func.children[-1] == ':'


def test_if_else():
    module = parse('if x:\n f.\nelse:\n g(')
    if_stmt = module.children[0]
    if_, test, colon, suite1, else_, colon, suite2 = if_stmt.children
    f = suite1.children[1]
    assert f.type == 'error_node'
    assert f.children[0].value == 'f'
    assert f.children[1].value == '.'

    g = suite2.children[1]
    assert g.children[0].value == 'g'
    assert g.children[1].value == '('


def test_if_stmt():
    module = parse('if x: f.\nelse: g(')
    if_stmt = module.children[0]
    assert if_stmt.type == 'if_stmt'
    if_, test, colon, f = if_stmt.children
    assert f.type == 'error_node'
    assert f.children[0].value == 'f'
    assert f.children[1].value == '.'

    assert module.children[1].type == 'newline'
    assert module.children[1].value == '\n'
    assert module.children[2].type == 'error_leaf'
    assert module.children[2].value == 'else'
    assert module.children[3].type == 'error_leaf'
    assert module.children[3].value == ':'

    in_else_stmt = module.children[4]
    assert in_else_stmt.type == 'error_node'
    assert in_else_stmt.children[0].value == 'g'
    assert in_else_stmt.children[1].value == '('


def test_invalid_token():
    module = parse('a + ? + b')
    error_node, q, plus_b, endmarker = module.children
    assert error_node.get_code() == 'a +'
    assert q.value == '?'
    assert q.type == 'error_leaf'
    assert plus_b.type == 'factor'
    assert plus_b.get_code() == ' + b'


def test_invalid_token_in_fstr():
    module = load_grammar(version='3.6').parse('f"{a + ? + b}"')
    error_node, q, plus_b, error1, error2, endmarker = module.children
    assert error_node.get_code() == 'f"{a +'
    assert q.value == '?'
    assert q.type == 'error_leaf'
    assert plus_b.type == 'error_node'
    assert plus_b.get_code() == ' + b'
    assert error1.value == '}'
    assert error1.type == 'error_leaf'
    assert error2.value == '"'
    assert error2.type == 'error_leaf'
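As a quick illustration of the error recovery these assertions describe (a minimal sketch, not part of the diff, using only parso.parse and the node attributes asserted above):

    import parso

    module = parso.parse('if x: f.\nelse: g(')
    for child in module.children:
        # Expect regular nodes plus error_leaf children ('else', ':') and
        # error_node children ('f.', 'g('), mirroring test_if_stmt above.
        print(child.type, repr(child.get_code()))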

View File

@@ -1,17 +1,19 @@
import pytest
from textwrap import dedent
from parso import load_grammar, ParserSyntaxError
from parso.python.fstring import tokenize
from parso.python.tokenize import tokenize
@pytest.fixture
def grammar():
return load_grammar(language="python-f-string")
return load_grammar(version='3.8')
@pytest.mark.parametrize(
'code', [
'{1}',
'{1:}',
'',
'{1!a}',
'{1!a:1}',
@@ -19,6 +21,9 @@ def grammar():
'{1:1.{32}}',
'{1::>4}',
'{foo} {bar}',
'{x:{y}}',
'{x:{y:}}',
'{x:{y:1}}',
# Escapes
'{{}}',
@@ -26,22 +31,16 @@ def grammar():
'{{{1}',
'1{{2{{3',
'}}',
'{:}}}',
# Invalid, but will be checked later.
'{}',
'{1:}',
'{:}',
'{:1}',
'{!:}',
'{!}',
'{!a}',
'{1:{}}',
'{1:{:}}',
# New Python 3.8 syntax f'{a=}'
'{a=}',
'{a()=}',
]
)
def test_valid(code, grammar):
fstring = grammar.parse(code, error_recovery=False)
code = 'f"""%s"""' % code
module = grammar.parse(code, error_recovery=False)
fstring = module.children[0]
assert fstring.type == 'fstring'
assert fstring.get_code() == code
@@ -52,24 +51,52 @@ def test_valid(code, grammar):
'{',
'{1!{a}}',
'{!{a}}',
'{}',
'{:}',
'{:}}}',
'{:1}',
'{!:}',
'{!}',
'{!a}',
'{1:{}}',
'{1:{:}}',
]
)
def test_invalid(code, grammar):
code = 'f"""%s"""' % code
with pytest.raises(ParserSyntaxError):
grammar.parse(code, error_recovery=False)
# It should work with error recovery.
#grammar.parse(code, error_recovery=True)
grammar.parse(code, error_recovery=True)
@pytest.mark.parametrize(
('code', 'start_pos', 'positions'), [
('code', 'positions'), [
# 2 times 2, 5 because python expr and endmarker.
('}{', (2, 3), [(2, 3), (2, 4), (2, 5), (2, 5)]),
(' :{ 1 : } ', (1, 0), [(1, 2), (1, 3), (1, 6), (1, 8), (1, 10)]),
('\n{\nfoo\n }', (2, 1), [(3, 0), (3, 1), (5, 1), (5, 2)]),
('f"}{"', [(1, 0), (1, 2), (1, 3), (1, 4), (1, 5)]),
('f" :{ 1 : } "', [(1, 0), (1, 2), (1, 4), (1, 6), (1, 8), (1, 9),
(1, 10), (1, 11), (1, 12), (1, 13)]),
('f"""\n {\nfoo\n }"""', [(1, 0), (1, 4), (2, 1), (3, 0), (4, 1),
(4, 2), (4, 5)]),
]
)
def test_tokenize_start_pos(code, start_pos, positions):
tokens = tokenize(code, start_pos)
def test_tokenize_start_pos(code, positions):
tokens = list(tokenize(code, version_info=(3, 6)))
assert positions == [p.start_pos for p in tokens]
@pytest.mark.parametrize(
'code', [
dedent("""\
f'''s{
str.uppe
'''
"""),
'f"foo',
'f"""foo',
]
)
def test_roundtrip(grammar, code):
tree = grammar.parse(code)
assert tree.get_code() == code
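In other words, f-strings are now parsed by the regular Python grammar instead of the separate "python-f-string" grammar; a minimal sketch of what the reworked fixture exercises, reusing one of the valid cases above:

    from parso import load_grammar

    grammar = load_grammar(version='3.8')
    module = grammar.parse('f"""{x:{y:1}}"""', error_recovery=False)
    fstring = module.children[0]
    assert fstring.type == 'fstring'
    assert fstring.get_code() == 'f"""{x:{y:1}}"""'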

View File

@@ -106,14 +106,15 @@ def test_end_newlines():
@pytest.mark.parametrize(('code', 'types'), [
('\r', ['error_leaf', 'endmarker']),
('\n\r', ['error_leaf', 'endmarker'])
('\r', ['endmarker']),
('\n\r', ['endmarker'])
])
def test_carriage_return_at_end(code, types):
"""
By adding an artificial newline this creates weird side effects for
\r at the end of files that would normally be error leafs.
By adding an artificial newline this created weird side effects for
\r at the end of files.
"""
tree = parse(code)
assert tree.get_code() == code
assert [c.type for c in tree.children] == types
assert tree.end_pos == (len(code) + 1, 0)
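A sketch of the behaviour the updated expectations describe: a lone \r at the end of a file now ends up in the endmarker instead of producing an error leaf.

    from parso import parse

    tree = parse('\r')
    assert tree.get_code() == '\r'
    assert [c.type for c in tree.children] == ['endmarker']
    assert tree.end_pos == (2, 0)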

View File

@@ -32,3 +32,16 @@ def test_split_params_with_stars():
assert_params(u'x, *args', x=None, args=None)
assert_params(u'**kwargs', kwargs=None)
assert_params(u'*args, **kwargs', args=None, kwargs=None)
def test_kw_only_no_kw(works_ge_py3):
    """
    Parsing this should work. CPython's parser also accepts it; only a later
    step (the AST validation) complains.
    """
    module = works_ge_py3.parse('def test(arg, *):\n pass')
    if module is not None:
        func = module.children[0]
        open_, p1, asterisk, close = func._get_param_nodes()
        assert p1.get_code('arg,')
        assert asterisk.value == '*'
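A sketch of the contrast the docstring mentions (the grammar version '3.7' is an arbitrary Python 3 choice, and '<example>' is just a placeholder filename):

    import parso

    code = 'def test(arg, *):\n pass'
    module = parso.load_grammar(version='3.7').parse(code)
    print(module.children[0].type)  # expected: 'funcdef', i.e. the parser accepts it
    try:
        compile(code, '<example>', 'exec')
    except SyntaxError as e:
        print('CPython complains in a later step:', e.msg)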

View File

@@ -189,3 +189,22 @@ def test_no_error_nodes(each_version):
check(child)
check(parse("if foo:\n bar", version=each_version))
def test_named_expression(works_ge_py38):
    works_ge_py38.parse("(a := 1, a + 1)")


@pytest.mark.parametrize(
    'param_code', [
        'a=1, /',
        'a, /',
        'a=1, /, b=3',
        'a, /, b',
        'a, /, b',
        'a, /, *, b',
        'a, /, **kwargs',
    ]
)
def test_positional_only_arguments(works_ge_py38, param_code):
    works_ge_py38.parse("def x(%s): pass" % param_code)

View File

@@ -12,6 +12,8 @@ import pytest
from parso import load_grammar
from parso import ParserSyntaxError
from parso.pgen2 import generate_grammar
from parso.python import tokenize
def _parse(code, version=None):
@@ -188,6 +190,19 @@ def test_old_octal_notation(works_in_py2):
works_in_py2.parse("07")
def test_long_notation(works_in_py2):
works_in_py2.parse("0xFl")
works_in_py2.parse("0xFL")
works_in_py2.parse("0b1l")
works_in_py2.parse("0B1L")
works_in_py2.parse("0o7l")
works_in_py2.parse("0O7L")
works_in_py2.parse("0l")
works_in_py2.parse("0L")
works_in_py2.parse("10l")
works_in_py2.parse("10L")
def test_new_binary_notation(each_version):
_parse("""0b101010""", each_version)
_invalid_syntax("""0b0101021""", each_version)
@@ -270,3 +285,19 @@ def py_br(each_version):
def test_py3_rb(works_ge_py3):
works_ge_py3.parse("rb'1'")
works_ge_py3.parse("RB'1'")
def test_left_recursion():
    with pytest.raises(ValueError, match='left recursion'):
        generate_grammar('foo: foo NAME\n', tokenize.PythonTokenTypes)


def test_ambiguities():
    with pytest.raises(ValueError, match='ambiguous'):
        generate_grammar('foo: bar | baz\nbar: NAME\nbaz: NAME\n', tokenize.PythonTokenTypes)

    with pytest.raises(ValueError, match='ambiguous'):
        generate_grammar('''foo: bar | baz\nbar: 'x'\nbaz: "x"\n''', tokenize.PythonTokenTypes)

    with pytest.raises(ValueError, match='ambiguous'):
        generate_grammar('''foo: bar | 'x'\nbar: 'x'\n''', tokenize.PythonTokenTypes)

View File

@@ -41,6 +41,29 @@ def test_python_exception_matches(code):
assert line_nr is None or line_nr == error.start_pos[0]
def test_non_async_in_async():
    """
    This example doesn't work with FAILING_EXAMPLES, because the line numbers
    are not always the same (and are sometimes incorrect) in Python 3.8.
    """
    if sys.version_info[:2] < (3, 5):
        pytest.skip()

    # Raises multiple errors in previous versions.
    code = 'async def foo():\n def nofoo():[x async for x in []]'
    wanted, line_nr = _get_actual_exception(code)

    errors = _get_error_list(code)
    if errors:
        error, = errors
        actual = error.message
        assert actual in wanted
        if sys.version_info[:2] < (3, 8):
            assert line_nr == error.start_pos[0]
        else:
            assert line_nr == 0  # For whatever reason this is zero in Python 3.8+
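The comparison idea used above, as a standalone sketch; _get_actual_exception and _get_error_list are helpers defined in this test file, and the cpython_exception function below is a hypothetical stand-in that only mimics the former:

    def cpython_exception(code):
        # Ask CPython itself for the SyntaxError message and line number.
        try:
            compile(code, '<string>', 'exec')
        except SyntaxError as e:
            return 'SyntaxError: ' + str(e.msg), e.lineno
        return None, None

    print(cpython_exception('async def foo():\n def nofoo():[x async for x in []]'))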
@pytest.mark.parametrize(
('code', 'positions'), [
('1 +', [(1, 3)]),
@@ -103,7 +126,8 @@ def _get_actual_exception(code):
# The python 3.5+ way, a bit nicer.
wanted = 'SyntaxError: positional argument follows keyword argument'
elif wanted == 'SyntaxError: assignment to keyword':
return [wanted, "SyntaxError: can't assign to keyword"], line_nr
return [wanted, "SyntaxError: can't assign to keyword",
'SyntaxError: cannot assign to __debug__'], line_nr
elif wanted == 'SyntaxError: assignment to None':
# Python 2.6 has a slightly different error.
wanted = 'SyntaxError: cannot assign to None'
@@ -114,6 +138,22 @@ def _get_actual_exception(code):
# Python 3.3/3.4 have a bit of a different warning than 3.5/3.6 in
# certain places. But in others this error makes sense.
return [wanted, "SyntaxError: can't use starred expression here"], line_nr
elif wanted == 'SyntaxError: f-string: unterminated string':
wanted = 'SyntaxError: EOL while scanning string literal'
elif wanted == 'SyntaxError: f-string expression part cannot include a backslash':
return [
wanted,
"SyntaxError: EOL while scanning string literal",
"SyntaxError: unexpected character after line continuation character",
], line_nr
elif wanted == "SyntaxError: f-string: expecting '}'":
wanted = 'SyntaxError: EOL while scanning string literal'
elif wanted == 'SyntaxError: f-string: empty expression not allowed':
wanted = 'SyntaxError: invalid syntax'
elif wanted == "SyntaxError: f-string expression part cannot include '#'":
wanted = 'SyntaxError: invalid syntax'
elif wanted == "SyntaxError: f-string: single '}' is not allowed":
wanted = 'SyntaxError: invalid syntax'
return [wanted], line_nr
@@ -242,6 +282,11 @@ def test_too_many_levels_of_indentation():
@pytest.mark.parametrize(
'code', [
"f'{*args,}'",
r'f"\""',
r'f"\\\""',
r'fr"\""',
r'fr"\\\""',
r"print(f'Some {x:.2f} and some {y}')",
]
)
def test_valid_fstrings(code):
@@ -251,6 +296,8 @@ def test_valid_fstrings(code):
@pytest.mark.parametrize(
('code', 'message'), [
("f'{1+}'", ('invalid syntax')),
(r'f"\"', ('invalid syntax')),
(r'fr"\"', ('invalid syntax')),
]
)
def test_invalid_fstrings(code, message):
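A sketch of what the new escaped-quote entries assert, using the public error API (the version string '3.6' is an arbitrary f-string-capable choice):

    from parso import load_grammar

    grammar = load_grammar(version='3.6')
    valid = grammar.parse(r'f"\""')
    assert not list(grammar.iter_errors(valid))

    invalid = grammar.parse(r'f"\"')  # unterminated f-string
    assert list(grammar.iter_errors(invalid))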

View File

@@ -6,16 +6,30 @@ import pytest
from parso._compatibility import py_version
from parso.utils import split_lines, parse_version_string
from parso.python.token import (
NAME, NEWLINE, STRING, INDENT, DEDENT, ERRORTOKEN, ENDMARKER, ERROR_DEDENT)
from parso.python.token import PythonTokenTypes
from parso.python import tokenize
from parso import parse
from parso.python.tokenize import PythonToken
def _get_token_list(string):
# To make it easier to access some of the token types, just put them here.
NAME = PythonTokenTypes.NAME
NEWLINE = PythonTokenTypes.NEWLINE
STRING = PythonTokenTypes.STRING
INDENT = PythonTokenTypes.INDENT
DEDENT = PythonTokenTypes.DEDENT
ERRORTOKEN = PythonTokenTypes.ERRORTOKEN
OP = PythonTokenTypes.OP
ENDMARKER = PythonTokenTypes.ENDMARKER
ERROR_DEDENT = PythonTokenTypes.ERROR_DEDENT
FSTRING_START = PythonTokenTypes.FSTRING_START
FSTRING_STRING = PythonTokenTypes.FSTRING_STRING
FSTRING_END = PythonTokenTypes.FSTRING_END
def _get_token_list(string, version=None):
# Load the current version.
version_info = parse_version_string()
version_info = parse_version_string(version)
return list(tokenize.tokenize(string, version_info))
@@ -126,7 +140,7 @@ def test_identifier_contains_unicode():
else:
# Unicode tokens in Python 2 seem to be identified as operators.
# They will be ignored in the parser; that's ok.
assert unicode_token[0] == tokenize.ERRORTOKEN
assert unicode_token[0] == OP
def test_quoted_strings():
@@ -162,8 +176,9 @@ def test_ur_literals():
token_list = _get_token_list(literal)
typ, result_literal, _, _ = token_list[0]
if is_literal:
assert typ == STRING
assert result_literal == literal
if typ != FSTRING_START:
assert typ == STRING
assert result_literal == literal
else:
assert typ == NAME
@@ -175,6 +190,7 @@ def test_ur_literals():
# Starting with Python 3.3 this ordering is also possible.
if py_version >= 33:
check('Rb""')
# Starting with Python 3.6, format strings were introduced.
check('fr""', is_literal=py_version >= 36)
check('rF""', is_literal=py_version >= 36)
@@ -183,18 +199,18 @@ def test_ur_literals():
def test_error_literal():
error_token, endmarker = _get_token_list('"\n')
assert error_token.type == tokenize.ERRORTOKEN
assert endmarker.prefix == ''
assert error_token.string == '"\n'
assert endmarker.type == tokenize.ENDMARKER
error_token, newline, endmarker = _get_token_list('"\n')
assert error_token.type == ERRORTOKEN
assert error_token.string == '"'
assert newline.type == NEWLINE
assert endmarker.type == ENDMARKER
assert endmarker.prefix == ''
bracket, error_token, endmarker = _get_token_list('( """')
assert error_token.type == tokenize.ERRORTOKEN
assert error_token.type == ERRORTOKEN
assert error_token.prefix == ' '
assert error_token.string == '"""'
assert endmarker.type == tokenize.ENDMARKER
assert endmarker.type == ENDMARKER
assert endmarker.prefix == ''
@@ -224,3 +240,105 @@ def test_endmarker_end_pos():
def test_indentation(code, types):
actual_types = [t.type for t in _get_token_list(code)]
assert actual_types == types + [ENDMARKER]
def test_error_string():
t1, newline, endmarker = _get_token_list(' "\n')
assert t1.type == ERRORTOKEN
assert t1.prefix == ' '
assert t1.string == '"'
assert newline.type == NEWLINE
assert endmarker.prefix == ''
assert endmarker.string == ''
def test_indent_error_recovery():
code = dedent("""\
str(
from x import a
def
""")
lst = _get_token_list(code)
expected = [
# `str(`
INDENT, NAME, OP,
# `from x`
NAME, NAME,
# `import a` on the same line as the previous `from x`
NAME, NAME, NEWLINE,
# Dedent happens, because there's an import now and the import
# statement "breaks" out of the opening paren on the first line.
DEDENT,
# `def`
NAME, NEWLINE, ENDMARKER]
assert [t.type for t in lst] == expected
def test_error_token_after_dedent():
code = dedent("""\
class C:
pass
$foo
""")
lst = _get_token_list(code)
expected = [
NAME, NAME, OP, NEWLINE, INDENT, NAME, NEWLINE, DEDENT,
# $foo\n
ERRORTOKEN, NAME, NEWLINE, ENDMARKER
]
assert [t.type for t in lst] == expected
def test_brackets_no_indentation():
"""
There used to be an issue that the parentheses counting would go below
zero. This should not happen.
"""
code = dedent("""\
}
{
}
""")
lst = _get_token_list(code)
assert [t.type for t in lst] == [OP, NEWLINE, OP, OP, NEWLINE, ENDMARKER]
def test_form_feed():
error_token, endmarker = _get_token_list(dedent('''\
\f"""'''))
assert error_token.prefix == '\f'
assert error_token.string == '"""'
assert endmarker.prefix == ''
def test_carriage_return():
lst = _get_token_list(' =\\\rclass')
assert [t.type for t in lst] == [INDENT, OP, DEDENT, NAME, ENDMARKER]
def test_backslash():
code = '\\\n# 1 \n'
endmarker, = _get_token_list(code)
assert endmarker.prefix == code
@pytest.mark.parametrize(
('code', 'types'), [
('f"', [FSTRING_START]),
('f""', [FSTRING_START, FSTRING_END]),
('f" {}"', [FSTRING_START, FSTRING_STRING, OP, OP, FSTRING_END]),
('f" "{}', [FSTRING_START, FSTRING_STRING, FSTRING_END, OP, OP]),
(r'f"\""', [FSTRING_START, FSTRING_STRING, FSTRING_END]),
(r'f"\""', [FSTRING_START, FSTRING_STRING, FSTRING_END]),
(r'f"Some {x:.2f}{y}"', [FSTRING_START, FSTRING_STRING, OP, NAME, OP,
FSTRING_STRING, OP, OP, NAME, OP, FSTRING_END]),
(r'print(f"Some {x:.2f}a{y}")', [
NAME, OP, FSTRING_START, FSTRING_STRING, OP, NAME, OP,
FSTRING_STRING, OP, FSTRING_STRING, OP, NAME, OP, FSTRING_END, OP
]),
]
)
def test_fstring(code, types, version_ge_py36):
actual_types = [t.type for t in _get_token_list(code, version_ge_py36)]
assert types + [ENDMARKER] == actual_types
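For reference, a short sketch that prints the raw FSTRING_* token stream behind the parametrized cases above; tokenize and parse_version_string are the same helpers this test file already imports:

    from parso.python.tokenize import tokenize
    from parso.utils import parse_version_string

    for token in tokenize('f"Some {x:.2f}"', parse_version_string('3.6')):
        print(token.type, repr(token.string), token.start_pos)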

View File

@@ -3,21 +3,42 @@ from codecs import BOM_UTF8
from parso.utils import split_lines, python_bytes_to_unicode
import parso
def test_split_lines_no_keepends():
assert split_lines('asd\r\n') == ['asd', '']
assert split_lines('asd\r\n\f') == ['asd', '\f']
assert split_lines('\fasd\r\n') == ['\fasd', '']
assert split_lines('') == ['']
assert split_lines('\n') == ['', '']
import pytest
def test_split_lines_keepends():
assert split_lines('asd\r\n', keepends=True) == ['asd\r\n', '']
assert split_lines('asd\r\n\f', keepends=True) == ['asd\r\n', '\f']
assert split_lines('\fasd\r\n', keepends=True) == ['\fasd\r\n', '']
assert split_lines('', keepends=True) == ['']
assert split_lines('\n', keepends=True) == ['\n', '']
@pytest.mark.parametrize(
    ('string', 'expected_result', 'keepends'), [
        ('asd\r\n', ['asd', ''], False),
        ('asd\r\n', ['asd\r\n', ''], True),
        ('asd\r', ['asd', ''], False),
        ('asd\r', ['asd\r', ''], True),
        ('asd\n', ['asd', ''], False),
        ('asd\n', ['asd\n', ''], True),
        ('asd\r\n\f', ['asd', '\f'], False),
        ('asd\r\n\f', ['asd\r\n', '\f'], True),
        ('\fasd\r\n', ['\fasd', ''], False),
        ('\fasd\r\n', ['\fasd\r\n', ''], True),
        ('', [''], False),
        ('', [''], True),
        ('\n', ['', ''], False),
        ('\n', ['\n', ''], True),
        ('\r', ['', ''], False),
        ('\r', ['\r', ''], True),
        # Invalid line breaks
        ('a\vb', ['a\vb'], False),
        ('a\vb', ['a\vb'], True),
        ('\x1C', ['\x1C'], False),
        ('\x1C', ['\x1C'], True),
    ]
)
def test_split_lines(string, expected_result, keepends):
    assert split_lines(string, keepends=keepends) == expected_result
def test_python_bytes_to_unicode_unicode_text():
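The semantics pinned down by the new table, in short (a small sketch): universal newlines (\r, \n, \r\n) split, while form feeds and other "invalid" breaks such as \v or \x1C do not.

    from parso.utils import split_lines

    assert split_lines('asd\r\n\f') == ['asd', '\f']
    assert split_lines('asd\r\n\f', keepends=True) == ['asd\r\n', '\f']
    assert split_lines('a\vb') == ['a\vb']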

20
tox.ini
View File

@@ -1,20 +1,16 @@
[tox]
envlist = py26, py27, py33, py34, py35, py36
envlist = {py26,py27,py33,py34,py35,py36,py37}
[testenv]
extras = testing
deps =
pytest>=3.0.7
# For --lf and --ff.
pytest-cache
py26,py33: pytest>=3.0.7,<3.3
py26,py33: setuptools<37
coverage: coverage
setenv =
# https://github.com/tomchristie/django-rest-framework/issues/1957
# tox corrupts __pycache__, solution from here:
PYTHONDONTWRITEBYTECODE=1
coverage: TOX_TESTENV_COMMAND=coverage run -m pytest
commands =
pytest {posargs:parso test}
[testenv:cov]
deps =
coverage
{[testenv]deps}
commands =
coverage run --source parso -m pytest
coverage report
{env:TOX_TESTENV_COMMAND:pytest} {posargs}
coverage: coverage report
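Usage note (derived from the config above): in a plain env the command expands to "pytest {posargs}", while in a coverage-factored env TOX_TESTENV_COMMAND is set, so tox runs "coverage run -m pytest {posargs}" followed by "coverage report".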