390 Commits

Author SHA1 Message Date
Dave Halter
a06af3d989 Remove the old deploy script. 2017-09-20 22:23:50 +02:00
Dave Halter
f2855ebb11 Change the date of the change log. 2017-09-20 20:33:52 +02:00
Dave Halter
55d7c2acff A bit of a different import. 2017-09-20 18:32:16 +02:00
Dave Halter
84ec5eda4c Remove two internal deprecations that don't seem to matter. 2017-09-20 18:28:46 +02:00
Dave Halter
d6a04b2928 Remove the deprecated attributes from Jedi. 2017-09-20 18:27:29 +02:00
Dave Halter
0c01a3b823 The sys.modules implementation did not work properly with newly created files.
Fixes #886.
2017-09-20 10:06:02 +02:00
Dave Halter
03584ff3f3 Imports can be executed twice without this. 2017-09-19 18:17:07 +02:00
Dave Halter
260aef943a Increase Python's recursion limit
Currently there is still the possibility that Jedi fails with a recursion error,
because the stack is too small (see #861). By increasing it we improve the
situation.

Probably we should just be switching away from this extreme amount of recursion
and move to queueing, which would also allow using other algorithms such as
breadth-first search.
2017-09-18 10:26:42 +02:00
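For context, the whole change amounts to raising CPython's frame limit once at import time; a minimal sketch (the concrete limit Jedi settles on is visible in the api.py hunk near the end of this diff):

    import sys

    # CPython defaults to a limit of 1000 stack frames. Jedi's recursive
    # type inference can legitimately go deeper than that, so the limit is
    # raised once when the library is imported.
    sys.setrecursionlimit(3000)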
Dave Halter
c7dbf95344 Fix recursion issues.
Completely refactored the recursion checker and changed some settings.

Fixes #861.
2017-09-17 21:54:09 +02:00
Dave Halter
0516a8bd35 Get rid of the settings module in recursions. 2017-09-17 14:18:56 +02:00
Dave Halter
2eb715dae8 Mention in the changelog that the recursion settings have been moved. 2017-09-17 14:10:26 +02:00
Dave Halter
f4ba71f6a3 Move the recursion limit settings to the recursion module. 2017-09-17 14:08:39 +02:00
Dave Halter
f2d24f0259 Remove inspecting the stack in the debugger.
This feature wasn't used anymore and it made debugging slower by a factor of 10-10000.
2017-09-17 03:03:12 +02:00
Dave Halter
c51634b8d4 dict_values should be accessible for CompiledObjects. 2017-09-17 02:48:09 +02:00
Dave Halter
96ad254fcc Typo. 2017-09-17 02:15:49 +02:00
Dave Halter
4b4b2c2122 Fix a small issue surrounding old school classes in Python 2. 2017-09-17 02:09:39 +02:00
Dave Halter
8fcb468539 Jedi was able to go crazy and loop endlessly in certain if/self assignment combinations.
Here we limit type inference per tree scope. I'm still not sure this is the way
to go, but it looks okay for now and we can still go another way in the future.
Tests are there.

Fixes #929.
2017-09-17 02:04:42 +02:00
Dave Halter
9dd2027299 Way better support for instantiated classes in REPL
Fixes several issues:

- It was not possible to correctly trace where instances were coming from in a
  REPL. This led to them being pretty much ignored.
- Instances were then just treated as classes and not as actual instances in
  MixedObjects. (However since they were ignored in the first place this
  wasn't really an issue).
- Avoiding the repr bug https://github.com/python/cpython/pull/2132/files in
  Jedi now works a bit differently. We're just never accessing Objects
  directly. This should work around 99.99% of the cases where people are using
  this stuff.

Fixes #872
2017-09-15 01:55:18 +02:00
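The REPL scenario this fixes can be exercised through the Interpreter API; a hedged sketch with illustrative names:

    import jedi

    class Flower(object):
        def water(self):
            pass

    plant = Flower()  # a live instance in the interpreter namespace

    # Completion on the instance used to fall back to treating it as a
    # class (or ignore it entirely); it should now offer 'water' here.
    print(jedi.Interpreter('plant.wat', [locals()]).completions())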
Dave Halter
63edbdcc5b Better context completions for finally/except/else/elif
Fixes #837
2017-09-15 00:48:56 +02:00
Dave Halter
e389c61377 Relative imports with more than one level did not work
Fixes #784.
2017-09-14 22:06:08 +02:00
Dave Halter
ab84030ad2 full_name was buggy when used on import error names
Fixes #873.
2017-09-14 20:41:25 +02:00
Dave Halter
2210b11778 Fix some issues with import completion
Fixes #759
2017-09-14 20:09:13 +02:00
Dave Halter
4c2d1ea7e7 Understand context managers correctly
Fixes #812.
2017-09-13 11:00:34 +02:00
Dave Halter
5ff7e3dbbe Actually do goto when follow_imports is used
Fixes #945.
2017-09-13 00:28:49 +02:00
Dave Halter
5a8b9541a7 Add operator.itemgetter support for Python <= 3.3.
Also fixes namedtuple support for these versions.
2017-09-12 23:18:32 +02:00
Dave Halter
a8a15114ac Fix namedtuple support
There were a couple of issues:
 - namedtuple with one member didn't work
 - namedtuple content access was never possible
 - operator.itemgetter didn't work properly. Corrected py__bool__ for FakeSequence

Fixes #730.
2017-09-12 11:06:39 +02:00
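Illustrative snippets for the cases above, using only the standard library:

    from collections import namedtuple
    import operator

    Point = namedtuple('Point', ['x'])  # a one-member namedtuple
    p = Point(1)
    p.x    # member access: inference on this was previously impossible
    # namedtuple's generated field properties are built on
    # operator.itemgetter, which is why itemgetter support mattered here.
    first = operator.itemgetter(0)
    first(p)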
Dave Halter
4a544c29ea Fix a follow_imports (goto) issue. 2017-09-11 23:32:10 +02:00
Dave Halter
619acbd2ca Goto didn't work well on imports in __init__.py files.
Fixes #956.
2017-09-11 21:48:37 +02:00
Dave Halter
c05f1d3ccc Completion after as in imports should not be possible.
Fixes #841.
2017-09-10 11:27:57 +02:00
Dave Halter
c25a4a00df readlines should be completable.
Fixes #921.
2017-09-10 01:54:50 +02:00
Dave Halter
80284fb14b Gracefully fail in 2.7 because inspect.signature is not available. 2017-09-10 01:36:32 +02:00
Dave Halter
5c6f8bda01 Fix inspect.signature for Python3.4. 2017-09-10 01:34:15 +02:00
Dave Halter
d1c85191a0 Start using inspect.signature for CompiledObject params.
Fixes #917 and #924.
2017-09-09 22:29:00 +02:00
Dave Halter
c7f225439d Comprehensions can also define self variables.
Also related to #932.
2017-09-09 20:20:05 +02:00
Dave Halter
40f4f032c6 Fix class/def/class nesting definitions
Fixes #932.
2017-09-09 20:13:03 +02:00
Dave Halter
236b860cc7 Add the numpy docstring changes to the changelog. 2017-09-09 19:27:11 +02:00
Dave Halter
d47804edef Don't use literal_eval
Using it without control over the input leads to various possible exceptions.
Refs #868.
2017-09-09 19:23:06 +02:00
Dave Halter
3bceef075a Merge branch 'numpydoc' of https://github.com/bcolsen/jedi 2017-09-09 18:50:19 +02:00
Dave Halter
381fedddb4 Fix get_line_code().
Fixes #948.
2017-09-09 18:28:05 +02:00
Dave Halter
ef6a1ca10f Fix an issue with choosing the right lines in get_line_code. Refs #948. 2017-09-09 18:10:53 +02:00
Dave Halter
46f306aa11 Add a TODO. 2017-09-09 17:59:53 +02:00
Dave Halter
078b5802d2 Remove unused code. 2017-09-09 17:58:06 +02:00
Dave Halter
077bccadc7 Remove AnonymousFunctionExecution and simplify everything. 2017-09-09 17:58:06 +02:00
Dave Halter
37ec79241c Remove the only param for AnonymousArguments. 2017-09-09 17:58:06 +02:00
Dave Halter
04c4313dc7 Start refactoring arguments. 2017-09-09 17:58:06 +02:00
Dave Halter
2f213f89e5 Remove code that was scheduled for removal. 2017-09-09 17:58:06 +02:00
Guglielmo Saggiorato
12a6a388cd removed reference to autocomplete-python
kept only ref to autocomplete-python-jedi
2017-09-07 10:58:13 +02:00
Guglielmo Saggiorato
06fac596d9 corrected typo in docs/docs/usage.rst 2017-09-07 10:58:13 +02:00
Guglielmo Saggiorato
7c4a96fbfa Citing autocomplete-python-jedi alongside autocomplete-python 2017-09-07 10:58:13 +02:00
Dave Halter
4841b8d491 Merge branch 'master' of github.com:davidhalter/jedi 2017-09-07 10:46:15 +02:00
Dave Halter
794880b8a8 Prepare for version 0.11.0. 2017-09-07 10:43:40 +02:00
Dave Halter
c4601b835f Don't go crazy with big lists. 2017-09-07 01:26:53 +02:00
Dave Halter
a0bf465aee Fix an issue in stdlib path checking. 2017-09-07 01:10:54 +02:00
Dave Halter
d2b4e0511f Ignore stdlib paths for dynamic param inference. 2017-09-07 00:09:14 +02:00
Dave Halter
8d06e9f9c9 Do some parser tree caching. This might be important for recursions. 2017-09-05 19:00:49 +02:00
Dave Halter
16ad43922f Also change CachedMetaClass a bit to use the same memoize decorator. 2017-09-05 18:52:12 +02:00
Dave Halter
e85000b798 Replace memoize_default with two nicer functions. 2017-09-05 18:46:16 +02:00
Dave Halter
e81486894f Prepare for eventual cache changes. 2017-09-05 18:38:32 +02:00
Dave Halter
2aa5da8682 Parso was finally released. 2017-09-05 18:19:10 +02:00
Jakub Wilk
6c85ec1a6d Fix typos. 2017-09-05 00:34:27 +02:00
Dave Halter
882f8029ea Use split_lines and python_bytes_to_unicode directly. 2017-09-03 18:38:00 +02:00
Dave Halter
ef89593896 Disable more tests in Python2.6, because of set literals that don't exist there. 2017-09-03 02:01:43 +02:00
Dave Halter
957f2cedf4 Disable some tests that don't run in 2.6, because its syntax doesn't support them. 2017-09-03 01:23:54 +02:00
Dave Halter
245ad9d581 Bump parso version. 2017-09-03 01:10:22 +02:00
Dave Halter
65c02a2332 A bit of shuffling code around get_definition. 2017-09-03 01:05:53 +02:00
Dave Halter
f69d8f1f29 _get_definition -> get_definition in parso. 2017-09-03 00:50:52 +02:00
Dave Halter
4795ed9071 More refactoring. 2017-09-03 00:39:15 +02:00
Dave Halter
6fb2f73f88 Some more refactorings. 2017-09-03 00:37:20 +02:00
Dave Halter
b64690afb8 Param defaults were not correctly followed when goto was used on them. 2017-09-03 00:22:59 +02:00
Dave Halter
e85816cc85 Simplify getting code for completions. 2017-09-03 00:11:23 +02:00
Dave Halter
fc8326bca1 Finally get rid of the last get_definition. 2017-09-03 00:07:14 +02:00
Dave Halter
333babea39 get_definition now has a new option. 2017-09-02 23:56:00 +02:00
Dave Halter
747e0aa7c4 Remove a get_definition usage. 2017-09-02 23:23:09 +02:00
Dave Halter
4a04bf78c7 Move some code around. 2017-09-02 22:45:23 +02:00
Dave Halter
9663e343c2 Almost the last switch to _get_definition. 2017-09-02 22:42:01 +02:00
Dave Halter
03da6b5655 get_definition change in finder. 2017-09-02 21:46:03 +02:00
Dave Halter
6419534417 Some more _get_definition fixes 2017-09-02 21:37:59 +02:00
Dave Halter
ee6d68c3a8 Remove a get_definition usage. 2017-09-02 17:59:09 +02:00
Dave Halter
7e19e49200 Start replacing get_definitions. 2017-09-02 17:48:01 +02:00
Dave Halter
9cac7462d6 Return statements should be handled correctly if the return_stmt is only a bare return without an expression after it. 2017-09-02 14:03:54 +02:00
Dave Halter
c47f5ca68c Fix issues with yield. 2017-09-01 18:38:19 +02:00
Dave Halter
e2d53f51b0 test for yields in expressions. 2017-09-01 18:08:52 +02:00
Dave Halter
16f1eb417a One more parso rename. 2017-09-01 18:05:19 +02:00
Dave Halter
2b08c0ac88 Bump parso to 0.0.3 2017-08-31 22:54:09 +02:00
Dave Halter
3789709ec0 Add the deployment script from parso. 2017-08-31 22:45:27 +02:00
Dave Halter
fe9be9fe09 source_to_unicode -> python_bytes_to_unicode. 2017-08-15 20:09:48 +02:00
Dave Halter
f9e31dc941 Refactor splitlines -> split_lines. 2017-08-15 19:55:50 +02:00
Dave Halter
e3ca1e87ff Simplify Contributing.md. 2017-08-13 22:31:05 +02:00
Dave Halter
a37201bc1d Finally fixing the Python 2 issues with static_getattr. 2017-08-13 22:24:50 +02:00
Dave Halter
13a0d63091 Add Python 2 compatibility. 2017-08-12 23:15:16 +02:00
Dave Halter
88cfb2cb91 Remove side effects when accessing jedi from the interpreter.
Note that there is http://bugs.python.org/issue31184.
Fixes #925.
2017-08-12 22:49:05 +02:00
Dave Halter
b26b8a1749 Merge branch 'dev' 2017-08-12 22:46:24 +02:00
Dave Halter
997cb2d366 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-08-12 22:45:47 +02:00
Yariv Kenan
9a43c35a4d fix old_files method
Returns old files instead of new files info.
Probably not what it was meant for.
2017-08-10 00:14:07 +02:00
bcolsen
3422b21c62 Added Yields test 2017-08-09 00:37:29 -06:00
bcolsen
38a690b4e4 add numpydoc to cov in tox.ini 2017-08-08 23:41:08 -06:00
bcolsen
77d6de0ae5 fix test skip and py3.6 2017-08-08 23:30:02 -06:00
bcolsen
4f96cdb3b0 Numpydocs doesn't support 2.6 or 3.3 2017-08-08 23:13:16 -06:00
bcolsen
d19a97f53a Numpydocs and compiled objects return types 2017-08-08 22:46:33 -06:00
Dave Halter
ff001e07a6 In parso params is now get_params(). 2017-08-06 17:35:05 +02:00
Dave Halter
39cbd003c0 A small change in parso changed the normalize API. 2017-08-06 16:43:47 +02:00
Dave Halter
8d6732c28c Remove a print statement. 2017-07-16 22:16:13 +02:00
Dave Halter
7e4504efbd Fix ellipsis issues of python2. 2017-07-16 20:07:49 +02:00
Dave Halter
54490be1b2 parso.load_grammar now needs version as a keyword argument. 2017-07-16 17:16:37 +02:00
Dave Halter
2fcd2f8f89 Fix some more stuff because of newer parso changes. 2017-07-14 18:21:52 +02:00
micbou
175e57214e Fix instance docstring 2017-07-14 00:59:55 +02:00
micbou
f5248250d8 Fix keyword docstring 2017-07-14 00:22:27 +02:00
Dave Halter
945a2ba405 Dedent some code to avoid issues with parso. 2017-07-09 00:27:23 +02:00
Dave Halter
72b4c8bd9f The normalize function is private for now. 2017-07-08 18:56:42 +02:00
denfromufa
270f70ea7e more precise SO link 2017-06-25 21:39:11 +02:00
Dave Halter
e0485b032e Fix some stuff to make parso work again. 2017-06-02 00:00:31 +02:00
Dave Halter
af26cc9f05 Remove the license parts that are not needed anymore, because they are now part of parso. 2017-06-01 23:59:04 +02:00
Dave Halter
5d657500d1 Use the new normalize function instead of get_code(normalize=True) that was removed in parso. 2017-05-27 13:12:11 -04:00
Dave Halter
b9271cf5a5 Use the parser_cache correctly. 2017-05-26 13:43:18 -04:00
Dave Halter
76529ca34d The parser_cache contents have changed. Therefore adapt. 2017-05-26 12:52:52 -04:00
Dave Halter
35e248091e Some more parso API changes. 2017-05-26 12:02:39 -04:00
Dave Halter
3015e7e60f Remove use_exact_op_types because the default changed. 2017-05-26 11:35:47 -04:00
Dave Halter
24cd603fcf Some more parso adaptations. 2017-05-26 09:08:34 -04:00
Dave Halter
f94ef63ff2 Remove load_python_grammar for tests as well. 2017-05-25 13:36:40 -04:00
Dave Halter
3f36824a94 Parso changed load_python_grammar to load_grammar. 2017-05-25 12:41:19 -04:00
Dave Halter
d0127a7f61 Fix a warning that happened if there was no valid Python function in a place. 2017-05-25 12:26:07 -04:00
Dave Halter
ef2e2f343e Fix some warnings. 2017-05-25 12:24:21 -04:00
Dave Halter
6a320147ac Catch an importlib warning. 2017-05-24 23:43:27 -04:00
Dave Halter
4b0b07791e Bump parso version. 2017-05-24 00:42:06 -04:00
Dave Halter
7173559182 Move a test to parso. 2017-05-24 00:41:55 -04:00
Dave Halter
cd8932fbfc Add a latest grammar to the evaluator and use it to avoid `from parso import parse`. 2017-05-24 00:37:36 +02:00
Dave Halter
b90589b62e Some changes because parso has changed. 2017-05-22 15:42:42 -04:00
Dave Halter
91e753e07a The deploy script should create versions prefixed with v. 2017-05-20 18:01:33 -04:00
Dave Halter
d6f695b3bb Use the ast module instead of a jedi import to get the jedi version.
With dependencies it's not possible to do this by importing jedi anymore. It's now just a bit more complicated. Gosh I hate setup.py.
2017-05-20 17:53:11 -04:00
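A minimal sketch of the ast-based version lookup, assuming __version__ is a plain string assignment in jedi/__init__.py:

    import ast

    # Parse jedi/__init__.py without importing it, so setup.py works even
    # before dependencies such as parso are installed.
    with open('jedi/__init__.py') as f:
        module = ast.parse(f.read())

    version = next(
        node.value.s  # (.s is the 2017-era spelling; newer Pythons use node.value.value)
        for node in module.body
        if isinstance(node, ast.Assign)
        and getattr(node.targets[0], 'id', None) == '__version__'
    )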
Dave Halter
c7984c0710 Add a requirements.txt.
Also use it within setup.py. It doesn't seem possible to define dependencies for tox with install_requires.
2017-05-20 17:22:34 -04:00
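Sketch of how requirements.txt can be shared between tox and setup.py (a hypothetical minimal setup call):

    from setuptools import setup

    # Keep dependencies in one file; tox installs from requirements.txt
    # and setup.py reads the same list for install_requires.
    with open('requirements.txt') as f:
        requirements = f.read().splitlines()

    setup(
        name='jedi',
        install_requires=requirements,
    )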
Dave Halter
fdff9396dd Move an import. 2017-05-20 16:08:43 -04:00
Dave Halter
aec86c6c80 distutils doesn't support install_requires. 2017-05-20 16:07:38 -04:00
Dave Halter
f35f1b9676 Add the cache_path parameter to parso calls. 2017-05-20 10:08:48 -04:00
Dave Halter
50c7137437 splitlines and source_to_unicode are utils of parso. 2017-05-20 09:55:16 -04:00
Dave Halter
0f4b7db56a Move jedi parser cache tests to parso. 2017-05-19 15:04:28 -04:00
Dave Halter
3c2b10a2a0 Remove a test that wasn't used for a long time. 2017-05-19 14:45:36 -04:00
Dave Halter
576c8cb433 Remove a star import cache test (the star import cache doesn't exist anymore). 2017-05-19 14:24:48 -04:00
Dave Halter
9bca3d39f5 Actually use parso now instead of Jedi. 2017-05-19 14:20:14 -04:00
Dave Halter
ccbaa12143 Add parso as a dependency in setup.py. 2017-05-19 10:29:32 -04:00
Dave Halter
32432b1cd1 Remove the parser packages from setup.py. 2017-05-19 10:27:26 -04:00
Dave Halter
f92e675400 Remove the whole parser. 2017-05-19 10:26:24 -04:00
Dave Halter
fb1c208985 Remove the tests that have been moved to parso. 2017-05-19 10:23:56 -04:00
Dave Halter
3c57f781dd Move another few tests. 2017-05-15 15:18:42 -04:00
Dave Halter
f4548d127c Some simplifications for the parsers. 2017-05-15 15:02:45 -04:00
Dave Halter
882ddbf8ac Move some more parser tests. 2017-05-15 15:00:34 -04:00
Dave Halter
0a8c96cd22 Remove a test that is really not necessary anymore, because the issues that it was covering back then are not issues anymore with the new infrastructure. 2017-05-15 14:53:50 -04:00
Dave Halter
6848762f7c Move some more tests. 2017-05-15 14:51:25 -04:00
Dave Halter
f8b5aab6f4 Move some parser tests. 2017-05-15 13:57:26 -04:00
Dave Halter
90b531a3b3 Correcting a sentence. 2017-05-15 11:10:22 -04:00
Dave Halter
0da875281b Remove an unused compatibility function that was overridden by one with the same name lower in the same file. 2017-05-11 16:22:11 -04:00
Dave Halter
0b3590ce20 Python 3.6 was not tested in the default configuration of tox. 2017-05-08 19:55:35 +02:00
Dave Halter
9fb7fb66da Move another test to delete a file. 2017-05-07 16:39:32 +02:00
Dave Halter
3b033bb276 Remove two tests that are not necessary anymore because the code that made them necessary was removed (some import hacks). 2017-05-07 16:33:24 +02:00
Dave Halter
ab71c943ee Move a parser test to the correct place. 2017-05-07 16:29:48 +02:00
Dave Halter
d717c3bf40 Merge some import tests. 2017-05-07 16:20:49 +02:00
Dave Halter
f9f60177bf Move an analysis test. 2017-05-07 16:14:21 +02:00
Dave Halter
6b7376bc5d Move some stdlib tests. 2017-05-07 16:06:01 +02:00
Dave Halter
6c95f73d77 Remove a function that was not really needed. 2017-05-07 16:00:08 +02:00
Dave Halter
84d8279089 Import.paths -> Import.get_paths. 2017-05-07 15:47:34 +02:00
Dave Halter
9bf66b6149 Make Import.aliases private. 2017-05-07 15:38:03 +02:00
Dave Halter
66b28ca840 Small cleanup. 2017-05-07 15:22:45 +02:00
Dave Halter
fe49fc9b99 Add slots to the PythonMixin. 2017-05-07 15:06:34 +02:00
Dave Halter
536e62e67d Move is_scope and get_parent_scope out of the parser. 2017-05-07 14:58:53 +02:00
Dave Halter
0882849e65 Don't do a simple_stmt error recovery in the parser, because it makes it more complicated. 2017-05-07 14:52:46 +02:00
Dave Halter
30a02587a7 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-05-07 14:46:47 +02:00
Élie Gouzien
80fbdec1da Corrected test class name. 2017-05-06 19:40:36 +02:00
Élie Gouzien
405a339719 Add author Élie Gouzien. 2017-05-06 19:40:36 +02:00
Élie Gouzien
9d5cc0be06 Test that no repr() can slow down completion.
Was reported with issue #919.
2017-05-06 19:40:36 +02:00
Élie Gouzien
a78769954d Check whether inspect.getfile will raise a TypeError and anticipate it.
Anticipate the TypeError from inspect.getfile to prevent the computation of repr() for an error message which is never used.
Useful for some big pandas arrays.
Tentative fix for #919.
2017-05-06 19:40:36 +02:00
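Sketched version of the anticipation trick (the exact call site in Jedi may differ):

    import inspect
    import types

    GETFILE_TYPES = (types.ModuleType, type, types.FunctionType,
                     types.MethodType, types.TracebackType,
                     types.FrameType, types.CodeType)

    def safe_getfile(obj):
        # inspect.getfile() raises TypeError for anything else, and building
        # that exception's message formats repr(obj), which can take a long
        # time for big pandas arrays. Checking the type first avoids both.
        if isinstance(obj, GETFILE_TYPES):
            return inspect.getfile(obj)
        return None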
Dave Halter
14eeb60240 Remove is_scope from CompiledObject. It's not needed according to tests. 2017-05-05 09:23:50 +02:00
Dave Halter
f916b9b054 More docstrings. 2017-05-05 09:21:42 +02:00
Dave Halter
336b8a46d0 search_ancestor now takes *node_types as a parameter instead of accepting either a tuple or a plain string, the way isinstance does. 2017-05-02 19:19:07 +02:00
Dave Halter
6ea06fdfd8 Even if static analysis is not working well, we can at least write it correctly. 2017-05-02 08:59:07 +02:00
Dave Halter
5c836a72b6 Lambda and Function docstrings render better. 2017-05-02 08:57:03 +02:00
Dave Halter
fc7cc1c814 Docstrings for get_defined_names. 2017-05-02 08:50:52 +02:00
Dave Halter
e96bb29d18 Param docstring. 2017-05-02 08:43:46 +02:00
Dave Halter
c1c3f35e08 Docstring for Param.get_code(). 2017-05-01 02:26:24 +02:00
Dave Halter
63679aabd9 Replace Param.get_description with get_code and a parameter include_coma. 2017-05-01 02:19:42 +02:00
Dave Halter
e0b0343a78 Remove expanduser from the parser path. Not sure if that makes sense so I'd rather remove it. 2017-04-30 15:23:43 +02:00
Dave Halter
e2f88db3c2 Trying to make coveralls work again. 2017-04-30 14:19:53 +02:00
Dave Halter
0f1570f682 position_nr -> position_index 2017-04-30 14:12:30 +02:00
Dave Halter
2383f5c0a0 docstrings for the parser tree. 2017-04-30 14:06:57 +02:00
Dave Halter
a1454e3e69 Fix a docstring test. 2017-04-30 03:11:09 +02:00
Dave Halter
78fd3ad861 is_generator is not needed in lambdas. 2017-04-30 03:07:48 +02:00
Dave Halter
1295d73efd path_for_name -> get_path_for_name 2017-04-30 03:03:58 +02:00
Dave Halter
e2d6c39ede Remove yields from lambda. It was previously removed from Function. 2017-04-30 02:59:09 +02:00
Dave Halter
076eea12bd Some minor refactoring of the python tree. 2017-04-30 02:56:44 +02:00
Dave Halter
8165e1a27f Add Module.iter_future_import_names to make checking for future imports easier. 2017-04-30 02:44:02 +02:00
Dave Halter
f2a77e58d8 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-04-30 02:34:38 +02:00
Dave Halter
d8761e6310 Use names instead of the isinstance checks in _search_in_scope 2017-04-30 02:33:51 +02:00
Dave Halter
6e9911daa3 Scope.imports -> iter_imports. 2017-04-30 02:31:30 +02:00
Dave Halter
42fe1aeaa1 Move yields -> iter_yield_exprs. 2017-04-30 02:13:25 +02:00
Dave Halter
606871eb62 returns -> iter_return_stmts 2017-04-30 01:45:59 +02:00
Dave Halter
b4039872bd Replace Scope.subscopes with iter_funcdefs and iter_classdefs. 2017-04-30 01:36:17 +02:00
Matthias Bussonnier
6f1ee0cfa8 Use stacklevel in warnings, or filters don't work.
In particular, with the right stacklevel IPython will display the warning
if the code is directly entered by the user. Without this info it does not.

Use the opportunity to state in the warning since when things are
deprecated. This means one less lookup of information for the user.
2017-04-29 20:13:19 +02:00
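What such a warning looks like after the change; a sketch based on the deprecated source_path parameter visible in the api.py hunk further below:

    import warnings

    # stacklevel=2 attributes the warning to the caller's line, so warning
    # filters match user code and IPython actually displays it. The message
    # also states since when the API has been deprecated.
    warnings.warn(
        "Use path instead of source_path (deprecated since version X.Y).",
        # X.Y is a placeholder for the actual deprecation release.
        DeprecationWarning,
        stacklevel=2,
    )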
Dave Halter
3e05061f3b Remove old unused code. 2017-04-28 18:34:02 +02:00
Dave Halter
ad536a837c A small change. 2017-04-28 18:29:35 +02:00
Dave Halter
b328e727ea Remove Scope.walk, because it was never used. 2017-04-28 18:20:07 +02:00
Dave Halter
eaa5100372 Removed Scope.statements from the parser tree. 2017-04-28 18:18:58 +02:00
Dave Halter
307adc2026 Scope.flows is never used so remove it. 2017-04-28 00:23:47 +02:00
Dave Halter
3cf4c66112 Change some more docstring stuff. 2017-04-28 00:23:28 +02:00
Dave Halter
bc4c5fafb7 Start creating documentation for the parser. 2017-04-27 21:50:31 +02:00
Dave Halter
02a8443541 search_ancestor docstring 2017-04-27 21:47:39 +02:00
Dave Halter
a846e687c3 Move search_ancestor to jedi.parser.tree. 2017-04-27 21:41:24 +02:00
Simon Ruggier
338ea42ed9 docstrings: fix "Sphinx param with type" pattern (#807)
* docstrings: fix "Sphinx param with type" pattern

Previously, the pattern only matched if the parameter description
followed on the same line, like so:

    :param type foo: A param named foo.

However, it's also valid for the parameter description to be wrapped
onto the next line, like so:

    :param type foo:
        A param named foo.

This change updates the pattern to match the second example as well, and
adds a test to verify this behaviour.

Fixes #806.

* Add Simon Ruggier to the AUTHORS file
2017-04-27 20:05:48 +02:00
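An illustrative before/after of the pattern change (not Jedi's literal regex):

    import re

    doc = ':param type foo:\n    A param named foo.\n'

    # Old behaviour: a description was required on the same line, so the
    # wrapped form above never matched and 'type' was never inferred.
    old = re.compile(r':param\s+(\w+)\s+foo:[ \t]*\S')
    # New behaviour: anchoring on the colon alone also accepts a
    # description wrapped onto the following, indented line.
    new = re.compile(r':param\s+(\w+)\s+foo:')

    print(old.search(doc))           # None
    print(new.search(doc).group(1))  # 'type'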
Dave Halter
800bf4bbe2 _NodeOrLeaf -> NodeOrLeaf. 2017-04-27 19:59:30 +02:00
Dave Halter
e8cfb99ada Fix a docs issue. 2017-04-27 19:59:09 +02:00
Dave Halter
8bd41ee887 Better documentation of get_code. 2017-04-27 19:48:00 +02:00
Dave Halter
e8718c6ce5 Docs for IPython completion which depends now on Jedi. 2017-04-27 19:31:50 +02:00
Dave Halter
0474854037 More docstrings of a few _BaseOrLeaf methods/properties. 2017-04-27 17:39:46 +02:00
Dave Halter
e998a18d8e More docstrings. 2017-04-27 09:14:23 +02:00
Dave Halter
819e9f607e Move get_following_comment_same_line out of the parser tree. 2017-04-27 08:56:11 +02:00
Dave Halter
cc4681ec54 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-04-26 18:45:33 +02:00
Dave Halter
e8b32e358b Remove 'move' from the parser tree. 2017-04-26 18:45:18 +02:00
Dave Halter
dea09b096d Some docstrings for the parser. 2017-04-26 18:16:50 +02:00
Dave Halter
c124fc91ca Remove further clean_scope_docstring usages. 2017-04-26 09:52:18 +02:00
Dave Halter
bea28fd33f Give ExecutionParams a better way of knowing what called them. 2017-04-26 09:32:47 +02:00
Matthias Bussonnier
b0f10081d4 Fix: Jedi did not complete numpy arrays in dictionaries
Fixes ipython/ipython#10468
2017-04-21 13:14:07 +02:00
Dave Halter
f136745a8a follow_param -> infer_param. 2017-04-20 18:09:00 +02:00
Dave Halter
ea1905f121 Refactor the docstring input. 2017-04-20 18:06:40 +02:00
Dave Halter
fbde21166b find_return_types -> infer_return_types. 2017-04-20 09:56:16 +02:00
Dave Halter
ac8ed62a77 Remove FakeName since it's not actually used anymore. 2017-04-20 09:52:31 +02:00
Dave Halter
db683acfc1 One more docstring test. 2017-04-20 09:47:30 +02:00
Dave Halter
7ca62578e1 Add py__doc__ as a better approach to docstrings. 2017-04-20 09:45:15 +02:00
Dave Halter
b4631d6dd4 Progress in removing the docstring/call signature logic from the parser. 2017-04-18 18:48:05 +02:00
Dave Halter
deb028c3fb Move get_statement_of_position out of the parser tree. 2017-04-15 02:23:08 +02:00
Dave Halter
1cfe5c2945 Python3Method is not needed anymore in the parser. 2017-04-15 01:53:58 +02:00
Dave Halter
4bd3c91622 Fix Python 2 tests. 2017-04-15 01:49:20 +02:00
Dave Halter
c4e51f9969 Use object for Python 2 classes. 2017-04-15 01:47:48 +02:00
Dave Halter
d6d25db9a2 Remove __str__ from name. 2017-04-12 23:06:11 +02:00
Dave Halter
73a38267cf Simplify the Operator/Keyword string comparison. 2017-04-12 19:11:14 +02:00
Dave Halter
a0b65b52c6 used_names -> get_used_names(). 2017-04-12 08:56:11 +02:00
Dave Halter
b0ac07228b Restructure/Refactor has_absolute_import a bit. 2017-04-12 08:47:30 +02:00
Dave Halter
c056105502 get_except_clauses -> get_except_clause_tests 2017-04-12 08:40:27 +02:00
Dave Halter
7e560bffe8 Move in_which_test_node -> get_corresponding_test_node. 2017-04-12 08:35:48 +02:00
Dave Halter
6190a65f23 The Lambda type should be lambdef, not lambda. Use the grammar types. 2017-04-11 18:28:25 +02:00
Dave Halter
685e630c03 Simple refactoring. 2017-04-11 18:20:44 +02:00
Dave Halter
afa6427861 Fix the remaining lambda issue. 2017-04-11 18:18:31 +02:00
Dave Halter
5cd26615e8 Removed the name attribute from lambda. It doesn't exist so don't fake it. 2017-04-11 18:10:35 +02:00
Dave Halter
e675715357 Rename a few IfStmt methods. 2017-04-10 22:46:06 +02:00
Dave Halter
797953df39 More Flow cleanups. 2017-04-10 10:05:21 +02:00
Dave Halter
218e715553 Make some names more concise in the parser tree. 2017-04-10 09:44:08 +02:00
Dave Halter
769cc80d6b Cleanup with_stmt. 2017-04-09 21:20:33 +02:00
Dave Halter
f855c2bb70 More parser tree simplifications. 2017-04-09 13:24:17 +02:00
Dave Halter
ff82763e6b get_annotation -> annotation (property). 2017-04-08 15:29:29 +02:00
Dave Halter
545cb26f78 stars -> star_count. 2017-04-08 15:26:57 +02:00
Dave Halter
1625834f81 Move get_comp_fors out of the parser. 2017-04-08 14:16:00 +02:00
Dave Halter
4cd7f40e3b Simplify get_executable_nodes. 2017-04-08 14:05:18 +02:00
Dave Halter
65a6c61dc6 Remove nodes_to_execute in favor of a function in parser_utils. 2017-04-08 12:59:49 +02:00
Dave Halter
8542047e5c Adding a tag should be part of the deployment script. 2017-04-06 18:21:20 +02:00
Dave Halter
053020c449 Bump version to 0.10.3. 2017-04-06 18:13:06 +02:00
Dave Halter
8bdf7e32ef Remove the dist folder before deploying. 2017-04-05 20:54:22 +02:00
Dave Halter
5427b02712 Merge branch 'dev' 2017-04-05 20:00:16 +02:00
Felipe Lacerda
aa2dfa9446 Fix path for grammar files in MANIFEST 2017-04-05 19:59:00 +02:00
Dave Halter
3cc97f4b73 Changelog for 0.10.2. 2017-04-05 18:04:41 +02:00
Dave Halter
536ad4c5f1 Added the forgotten changelog for 0.10.1. 2017-04-05 18:03:45 +02:00
Dave Halter
54242049d2 Bump version to 0.10.2. 2017-04-05 01:42:02 +02:00
Dave Halter
eb37f82411 Add memoization where it needs to be. Fixes #894. 2017-04-05 01:06:48 +02:00
Dave Halter
4b841370e4 Test full name for os.path imports. Fixes #873. 2017-04-05 01:00:20 +02:00
Dave Halter
fe5eaaf56c Add a better debugging message for import fails. 2017-04-04 23:27:45 +02:00
Dave Halter
fb8ed61b87 Add a way to cwd into a tmpdir. 2017-04-04 21:03:45 +02:00
Dave Halter
0117f83809 Forgot to include a test for #844. 2017-04-04 20:35:32 +02:00
Dave Halter
e660a5a703 Forgot to include the test for #884. 2017-04-04 20:31:27 +02:00
Dave Halter
947d91f792 Refactor the ClassName to allow inheritance in different modules. Fixes #884. 2017-04-04 20:11:07 +02:00
Dave Halter
d41e036427 Keyword-only arguments were not usable. Fixes #883 and #856. 2017-04-03 18:18:21 +02:00
Dave Halter
632072000e Fix the builtin docstring issue that we've had. Fixes #859. 2017-04-03 00:27:31 +02:00
Dave Halter
47c1b8fa07 Fix bug #844. 2017-04-02 22:21:57 +02:00
Dave Halter
9f1dda04c0 Remove print statements that are not needed. 2017-04-02 21:43:30 +02:00
Dave Halter
7ecaf19b59 Fix _remove_last_newline. Fixes #863. 2017-04-02 21:29:48 +02:00
Dave Halter
3a6d815e9e Another conversion. 2017-04-01 18:12:53 +02:00
Dave Halter
ed8370fa68 isinstance to type conversions. 2017-04-01 18:08:59 +02:00
Dave Halter
bd779655ae Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-04-01 17:51:36 +02:00
Dave Halter
d6d1a39bf2 Remove some print statements. 2017-04-01 17:50:47 +02:00
Dave Halter
4cc467123c Use PythonNode and not Node in the evaluator. 2017-04-01 17:39:52 +02:00
Andy Lee
1624f6945e Fix api.usages so it finds cross-module usages 2017-04-01 15:52:22 +02:00
Andy Lee
3e36238da3 Add test for cross-module usages 2017-04-01 15:52:22 +02:00
Dave Halter
281d6a87a0 Remove a few print statements. 2017-04-01 12:43:57 +02:00
Dave Halter
1fd10d978d Replace a few isinstance calls with the type. 2017-04-01 00:26:22 +02:00
Dave Halter
a6829ca546 Use the cache variables in a more straightforward fashion. 2017-03-31 23:10:39 +02:00
Dave Halter
b708b7f07d Remove other crap code in the diff parser. 2017-03-31 21:48:30 +02:00
Dave Halter
b15aa197fd Remove CachedFastParser, not needed anymore. 2017-03-31 21:46:57 +02:00
Dave Halter
1bb0c89f46 Remove source parameter from parser. 2017-03-31 21:44:43 +02:00
Dave Halter
a687910368 Also remove _parsed from all parsers. 2017-03-31 21:42:11 +02:00
Dave Halter
d2d165267d Remove unused get_root_node on the parser. 2017-03-31 21:41:36 +02:00
Dave Halter
7a51dbea08 Fix an issue in Python 2. 2017-03-31 21:03:15 +02:00
Dave Halter
f6f2765ab9 Rename the profile script to profile_output to avoid name clashes with the stdlib's profile.py. 2017-03-31 17:27:16 +02:00
Dave Halter
36b2fce030 The profile scripts default had to be changed. 2017-03-31 17:26:24 +02:00
Dave Halter
7e45ee3096 Refactor our parser caching a bit more. 2017-03-30 18:41:51 +02:00
Dave Halter
35fd1c70bd Rename parser.utils to parser.cache. 2017-03-30 01:57:48 +02:00
Dave Halter
db364bc44d Move underscore memoization. 2017-03-30 01:53:18 +02:00
Dave Halter
54d69fb9f4 Remove the ParserPickling class. 2017-03-30 01:50:50 +02:00
Dave Halter
8059c3c2c8 Save a module instead of a parser when pickling. 2017-03-30 00:55:04 +02:00
Dave Halter
932703f04a Remove an import that is not needed anymore. 2017-03-28 02:09:38 +02:00
Dave Halter
ee47be0140 Merge Parser and ParserWithRecovery. 2017-03-28 02:08:16 +02:00
Dave Halter
1d0796ac07 Remove a usage of the old module path. 2017-03-28 01:43:40 +02:00
Dave Halter
6a9c2f8795 Start using ContextualizedNode for py__iter__. 2017-03-28 01:34:07 +02:00
Dave Halter
bb9ea54402 Remove ImplicitTuple. 2017-03-27 23:18:06 +02:00
Dave Halter
8a35a04439 Remove the module path from the parser tree.
Some static analysis tests are still failing.
2017-03-27 18:13:32 +02:00
Dave Halter
b60ec024fa Remove start_parsing completely from the Parsers. 2017-03-26 12:52:37 +02:00
Dave Halter
63cafeaa87 Remove all usages of start_parsing=True in the fast parser. 2017-03-26 12:49:40 +02:00
Dave Halter
3d27d06781 Use the new parse method instead of a Parser. 2017-03-26 11:49:17 +02:00
Dave Halter
5c54650216 Code to source. 2017-03-26 01:50:19 +01:00
Dave Halter
aff0cbd68c Remove the last usage of save/load_parser in jedi. 2017-03-26 01:48:45 +01:00
Dave Halter
fb8ffde32e Fix an issue in the parse call. 2017-03-26 01:26:01 +01:00
Dave Halter
7874026ee5 A lot of work toward a better diff parser API. 2017-03-25 01:51:03 +01:00
Dave Halter
ac0d0869c9 Start using the parse function for caching as well. 2017-03-24 01:52:55 +01:00
Dave Halter
fb4cff8ef9 A small buildout script refactoring. 2017-03-23 14:22:27 -07:00
Dave Halter
5aa379945e Merge the FileNotFoundError cache. 2017-03-23 14:22:19 -07:00
Andy Lee
eb9af19559 Add test for loading deleted cache file 2017-03-23 08:17:11 -07:00
Andy Lee
3a851aac8c Catch FileNotFoundError when opening file cache 2017-03-23 08:16:51 -07:00
Dave Halter
6fef385774 Clean the path in pickling. 2017-03-23 08:52:25 +01:00
Dave Halter
26cce4d078 Add the grammar as an argument to saving the parser.
This makes collisions of different grammars when loading from the cache impossible.
2017-03-22 18:32:49 +01:00
Dave Halter
c41bee4253 Trying out ideas to reshape the diff parser APIs. 2017-03-22 09:38:06 +01:00
Dave Halter
2cb565561d Replace the diff parser imports with the modified path. 2017-03-21 22:10:01 +01:00
Dave Halter
3a2811fbe8 Move the diff parser. 2017-03-21 22:03:58 +01:00
Dave Halter
6f01264ed3 Restructure import's module loading. 2017-03-21 17:20:10 +01:00
Dave Halter
ff90beca6b Remove some documentation that was not necessary. 2017-03-20 21:10:49 +01:00
Dave Halter
d218acee6b Create a default implementation of leafs. 2017-03-20 19:34:48 +01:00
Dave Halter
c6811675b6 Rename ast_mapping to node_map. 2017-03-20 08:55:18 +01:00
Dave Halter
2d7fd30111 Remove _remove_last_newline from the parser. 2017-03-20 08:49:30 +01:00
Dave Halter
9dedb9ff68 Don't start parsing in our own API. 2017-03-20 08:44:52 +01:00
Dave Halter
53b4e78a9b Make some stuff private in the pgen parser API. 2017-03-20 08:36:11 +01:00
Dave Halter
689af9fc4e Remove tokens initialization parameter from the parser api. 2017-03-20 08:34:07 +01:00
Dave Halter
42e8861798 Simplify the parse method. 2017-03-19 22:15:01 +01:00
Dave Halter
b4af42ddb3 Some minor parser changes to get ready for bigger stuff. 2017-03-19 21:30:41 +01:00
Dave Halter
3163f4d821 Trying to start moving more stuff to the BaseParser. 2017-03-19 21:06:51 +01:00
Dave Halter
dad40597c5 Start moving stuff to the parser. 2017-03-18 15:01:34 +01:00
Dave Halter
52d855118a Remove get_parsed_node from the parser as well. 2017-03-18 03:55:23 +01:00
Dave Halter
0f66a3c7a8 Remove the module attribute from the parser. 2017-03-18 03:53:34 +01:00
Dave Halter
d0b6d41e99 Remove the old star import cache, because it's not even used. 2017-03-18 03:30:23 +01:00
Dave Halter
aaf6c61e69 Make remove_last_newline private. 2017-03-18 03:07:01 +01:00
Dave Halter
519fa9cfb5 Remove complicated merging of used names from the parser.
It's a lot of complicated code and a lot can go wrong. It also didn't speed up anything. If anything it made things like 5% slower. I have tested this with:

./scripts/diff_parser_profile.py wx._core.py

wx._core.py is not part of Jedi.
2017-03-16 22:00:01 +01:00
Dave Halter
ce41119051 Fix some stuff in a diff profile test script. 2017-03-16 21:45:51 +01:00
Dave Halter
8156a6b8a2 Remove used_names from the parser step. It's a separate iteration, now. 2017-03-16 21:28:42 +01:00
Dave Halter
fd50146f92 Simple cleanup. 2017-03-16 20:20:58 +01:00
Dave Halter
96c67cee26 Simplify the leaf with newlines stuff. 2017-03-16 20:18:30 +01:00
Dave Halter
4573ab19f4 Separate the python syntax tree stuff from the non python stuff. 2017-03-16 19:54:08 +01:00
Dave Halter
448bfd0992 Move the python parser tree. 2017-03-16 17:20:32 +01:00
Dave Halter
b136800cfc Don't use as in imports when not needed. 2017-03-16 08:45:12 +01:00
Dave Halter
06702d2a40 Move the python parser. 2017-03-16 08:40:19 +01:00
Dave Halter
a83b43ccfd Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-03-15 19:12:51 +01:00
Dave Halter
93f14157a6 Cleanup the ParseError stuff. 2017-03-15 18:41:58 +01:00
Dave Halter
0effd348e8 Add a note about the grammar creation. 2017-03-15 18:18:06 +01:00
Dave Halter
c332fba488 Fix a namespace packages related issue. 2017-03-15 08:59:24 +01:00
Dave Halter
375749c5c3 Small restructuring. 2017-03-15 08:56:49 +01:00
Dave Halter
55c9fd3227 Fix an issue in the fake parser 2017-03-15 08:44:49 +01:00
Matthias Bussonnier
9a851165ad Lookup optimisation
Avoid a couple of dynamic lookups in the core of the parsing loop.
The performance improvement will be minimal, but saving a little, many
times over, can add up.
2017-03-14 23:55:03 +01:00
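The optimisation is the classic local-binding trick; a generic sketch rather than the actual parser code:

    import re

    def tokenize_fast(source, token_re):
        tokens = []
        append = tokens.append   # bind the methods once, outside the loop,
        match = token_re.match   # instead of an attribute lookup per token
        pos = 0
        while pos < len(source):
            m = match(source, pos)
            if m is None:
                break
            append(m.group())
            pos = m.end()
        return tokens

    print(tokenize_fast('a b c', re.compile(r'\s*\w+')))  # ['a', ' b', ' c']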
Matthias Bussonnier
bb8fe0b24c A missing docstring. 2017-03-14 23:54:38 +01:00
Dave Halter
68a7365a0a A few docstring improvements. 2017-03-14 21:03:03 +01:00
Dave Halter
9efb3f0af2 More direct parser usage removals. 2017-03-14 19:31:54 +01:00
Dave Halter
717bfeb574 Remove an occurrence of the complicated parser creation. 2017-03-14 19:27:03 +01:00
Dave Halter
97fc3bc23c Refactored the parser calls. Now it's possible to use jedi.parser.python.parse to quickly parse something. 2017-03-14 00:38:58 +01:00
Dave Halter
9b5e6d16da Improved grammar loading API. 2017-03-13 20:33:29 +01:00
Dave Halter
595ffc24d4 Move some more stuff to a python directory in the parser. 2017-03-13 00:54:39 +01:00
Dave Halter
922c480e2e Moved the parser to a new file. 2017-03-12 21:33:41 +01:00
Dave Halter
a635b6839a Remove unused code. 2017-03-12 21:28:32 +01:00
Dave Halter
af9b0ba8d6 Merge branch 'master' into dev 2017-03-12 20:51:17 +01:00
Alex Wiltschko
82d165a723 Missing paren 2017-03-12 20:41:17 +01:00
Dave Halter
a7b1e3fe70 Fixed another diff parser error. 2017-03-12 15:58:14 +01:00
Dave Halter
6e3b00802c Another endless while loop issue, add an assert. 2017-03-11 14:54:44 +01:00
Dave Halter
818fb4f60c Fix a bug that might have caused an endless while loop a while ago. Fixes #878. 2017-03-09 21:47:16 +01:00
Dave Halter
ccef008376 Python 2 compatibility. 2017-03-09 21:06:20 +01:00
Dave Halter
c7a74e6d1c Make the tokenizer a generator. 2017-03-09 18:53:09 +01:00
Dave Halter
989e4bac89 Speed up splitlines.
We use the python function again with the modifications we need.
I ran it with:

    python3 -m timeit  -n 10000 -s 'from jedi.common import splitlines; x = open("test_regression.py").read()'

The speed difference is quite remarkable; it's ~3 times faster:

    10000 loops, best of 3: 52.1 usec per loop

vs. the old:

    10000 loops, best of 3: 148 usec per loop

We might need to speed up splitlines further as well. It's probably
also a factor of 2-3 slower than it should be.
2017-03-09 08:58:57 +01:00
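A minimal sketch of the approach, assuming the needed modification is keeping the trailing empty line that str.splitlines() drops:

    def split_lines(string):
        # Delegate to the C-level str.splitlines for speed, then restore
        # the trailing empty line so joining with '\n' round-trips. Note
        # that str.splitlines also splits on characters such as \x0b and
        # \x85, which a strictly line-oriented caller may need to handle.
        lines = string.splitlines()
        if not string or string[-1] in '\r\n':
            lines.append('')
        return lines

    assert split_lines('a\nb\n') == ['a', 'b', '']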
Dave Halter
b814a91f29 Avoid endless loop with an assert in the diff parser. 2017-03-08 18:33:38 +01:00
Dave Halter
5c9769c5a3 Merge remote-tracking branch 'origin/master' into dev 2017-03-07 19:01:53 +01:00
Dave Halter
ee98eab64c Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-03-07 19:01:42 +01:00
Dave Halter
05e05252fa Make python -m jedi.parser.tokenize possible for debugging purposes. 2017-03-07 18:31:12 +01:00
micbou
a859add6d7 Only resolve names for actual modules
A name can be part of an import statement without being a module.
2017-03-01 21:06:21 +01:00
Matthias Bussonnier
fc27ca1b6a 'fix a couple of error locations' 2017-02-24 13:03:03 +01:00
Matthias Bussonnier
784de85b36 Add test for handling of newlines in multiline strings 2017-02-24 00:05:38 +01:00
Matthias Bussonnier
0fb386d7e2 Make sure error tokens set the new_line flag when necessary
Should solve #855
2017-02-24 00:05:38 +01:00
daniel
5513f72987 Added support for implicit namespace packages, with tests 2017-02-23 23:53:14 +01:00
Matthias Bussonnier
ef1b1f41e4 Use start and end position in repr for simpler debugging. 2017-02-14 23:53:53 +01:00
Daniel M. Capella
68c6f8dd03 readme: Add maralla/completor.vim
https://github.com/maralla/completor.vim
2017-02-14 22:51:17 +01:00
Matthias Bussonnier
b72aa41019 Missing assert 2017-02-08 23:40:23 +01:00
Thomas Kluyver
adc08785b6 Build universal wheels
This tells wheel that the packages are compatible with Python 2 and 3.

The wheel currently on PyPI is only tagged 'py2', so installing on
Python 3 uses the sdist.
2017-02-04 18:13:18 +01:00
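Presumably done with the standard setup.cfg stanza, which is the usual way to tag a pure-Python package as py2.py3 compatible:

    [bdist_wheel]
    universal = 1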
Dave Halter
8131f19751 Correct an issue in the tests with the last commit. 2017-02-04 18:11:54 +01:00
Dave Halter
b6e61133d8 Move the tests for the last PR #848. 2017-02-04 18:11:14 +01:00
Mathias Rav
37d7b85ed1 Add tests for decorator completion 2017-02-04 18:05:15 +01:00
Mathias Rav
c6cd18802b Add myself as contributor 2017-02-04 18:05:15 +01:00
Mathias Rav
c809aad67f Complete dotted names properly in decorators
Outside decorators, a dotted name is parsed by the grammar as
stmt -> test -> power -> atom trailer -> (NAME) ('.' NAME)
where the first NAME is an 'atom' and the second NAME is a 'trailer'.

Thus, testing if the most recent variable is a 'trailer' and the most
recent node is a '.' is almost always enough for Jedi to properly
complete dotted names.

However, the grammar for decorators is more restricted and doesn't allow
arbitrary atoms and trailers; instead, a dotted name in a decorator is
decorator -> '@' dotted_name -> '@' (NAME '.' NAME),
meaning the most recent variable will be 'dotted_name', not 'trailer'.

Besides in decorators, the 'dotted_name' variable is only used in import
statements which are handled previously in _get_context_completions,
so checking for 'dotted_name' in this arm of the if only covers decorators
and not inadvertently anything else.
2017-02-04 18:05:15 +01:00
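A concrete instance of the decorator case described above (the decorator itself is illustrative):

    import functools

    # Inside a decorator, 'functools.lru_cache' parses as
    #     decorator: '@' dotted_name ...
    # so after the '.' the enclosing node is a dotted_name rather than a
    # trailer, and completion has to accept that node type as well.
    @functools.lru_cache()
    def cached():
        pass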
Dave Halter
1d64a5caa1 Replace first_leaf and last_leaf with get_first_leaf and get_last_leaf. 2017-02-03 17:35:53 +01:00
Dave Halter
90fffd883e Clean up the function docstring. 2017-02-03 17:30:58 +01:00
Dave Halter
647aec11a6 Return None instead of raising an IndexError in get_next_leaf. 2017-02-03 17:26:02 +01:00
Dave Halter
c5071f9f49 Change get_previous_leaf to return None if there is no leaf anymore (at the start of the file). 2017-02-03 17:23:15 +01:00
Dave Halter
445bf6c419 Get rid of get_parent_until. 2017-02-03 09:59:32 +01:00
Dave Halter
b3cb7b5490 Remove the def isinstance from the parser. It was a really bad pattern. 2017-02-03 09:37:59 +01:00
Dave Halter
6ccac94162 Add a deploy script. 2017-02-03 00:40:19 +01:00
Dave Halter
f2b41b1752 Update already the version number so we don't forget it. 2017-02-03 00:38:08 +01:00
154 changed files with 3713 additions and 8255 deletions


@@ -24,7 +24,7 @@ script:
- tox
after_script:
- if [ $TOXENV == "cov" ]; then
pip install --quiet --use-mirrors coveralls;
pip install --quiet coveralls;
coveralls;
fi


@@ -40,6 +40,10 @@ Guido van Rossum (@gvanrossum) <guido@python.org>
Dmytro Sadovnychyi (@sadovnychyi) <jedi@dmit.ro>
Cristi Burcă (@scribu)
bstaint (@bstaint)
Mathias Rav (@Mortal) <rav@cs.au.dk>
Daniel Fiterman (@dfit99) <fitermandaniel2@gmail.com>
Simon Ruggier (@sruggier)
Élie Gouzien (@ElieGouzien)
Note: (@user) means a github user name.


@@ -3,6 +3,25 @@
Changelog
---------
0.11.0 (2017-09-20)
+++++++++++++++++++
- Split Jedi's parser into a separate project called ``parso``.
- Avoiding side effects in REPL completion.
- Numpy docstring support should be much better.
- Moved the `settings.*recursion*` away, they are no longer usable.
0.10.2 (2017-04-05)
+++++++++++++++++++
- Python Packaging sucks. Some files were not included in 0.10.1.
0.10.1 (2017-04-05)
+++++++++++++++++++
- Fixed a few very annoying bugs.
- Prepared the parser to be factored out of Jedi.
0.10.0 (2017-02-03)
+++++++++++++++++++


@@ -1,28 +1,8 @@
Pull Requests are great (on the **dev** branch)! Readme/Documentation changes
are ok in the master branch.
Pull Requests are great.
1. Fork the Repo on github.
2. If you are adding functionality or fixing a bug, please add a test!
3. Add your name to AUTHORS.txt
4. Push to your fork and submit a **pull request to the dev branch**.
My **master** branch is a 100% stable (should be). I only push to it after I am
certain that things are working out. Many people are using Jedi directly from
the github master branch.
4. Push to your fork and submit a pull request.
**Try to use the PEP8 style guide.**
Changing Issues to Pull Requests (Github)
-----------------------------------------
If you have have previously filed a GitHub issue and want to contribute code
that addresses that issue, we prefer it if you use
[hub](https://github.com/github/hub) to convert your existing issue to a pull
request. To do that, first push the changes to a separate branch in your fork
and then issue the following command:
hub pull-request -b davidhalter:dev -i <issue-number> -h <your-github-username>:<your-branch-name>
It's no strict requirement though, if you don't have hub installed or prefer to
use the web interface, then feel free to post a traditional pull request.


@@ -1,14 +1,5 @@
All contributions towards Jedi are MIT licensed.
Some Python files have been taken from the standard library and are therefore
PSF licensed. Modifications on these files are dual licensed (both MIT and
PSF). These files are:
- jedi/parser/pgen2
- jedi/parser/tokenize.py
- jedi/parser/token.py
- test/test_parser/test_pgen2.py
-------------------------------------------------------------------------------
The MIT License (MIT)
@@ -31,52 +22,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
-------------------------------------------------------------------------------
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved"
are retained in Python alone or in any derivative version prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.


@@ -7,8 +7,9 @@ include sith.py
include conftest.py
include pytest.ini
include tox.ini
include requirements.txt
include jedi/evaluate/compiled/fake/*.pym
include jedi/parser/grammar*.txt
include jedi/parser/python/grammar*.txt
recursive-include test *
recursive-include docs *
recursive-exclude * *.pyc


@@ -12,7 +12,7 @@ Jedi - an awesome autocompletion/static analysis library for Python
*If you have specific questions, please add an issue or ask on* `stackoverflow
<https://stackoverflow.com>`_ *with the label* ``python-jedi``.
<https://stackoverflow.com/questions/tagged/python-jedi>`_ *with the label* ``python-jedi``.
Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its
@@ -32,19 +32,20 @@ It's really easy.
Jedi can currently be used with the following editors/projects:
- Vim (jedi-vim_, YouCompleteMe_, deoplete-jedi_)
- Vim (jedi-vim_, YouCompleteMe_, deoplete-jedi_, completor.vim_)
- Emacs (Jedi.el_, company-mode_, elpy_, anaconda-mode_, ycmd_)
- Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3])
- TextMate_ (Not sure if it's actually working)
- Kate_ version 4.13+ supports it natively, you have to enable it, though. [`proof
<https://projects.kde.org/projects/kde/applications/kate/repository/show?rev=KDE%2F4.13>`_]
- Atom_ (autocomplete-python_)
- Atom_ (autocomplete-python-jedi_)
- SourceLair_
- `GNOME Builder`_ (with support for GObject Introspection)
- `Visual Studio Code`_ (via `Python Extension <https://marketplace.visualstudio.com/items?itemName=donjayamanne.python>`_)
- Gedit (gedi_)
- wdb_ - Web Debugger
- `Eric IDE`_ (Available as a plugin)
- `Ipython 6.0.0+ <http://ipython.readthedocs.io/en/stable/whatsnew/version6.html>`_
and many more!
@@ -122,8 +123,11 @@ The returned objects are very powerful and really all you might need.
Autocompletion in your REPL (IPython, etc.)
-------------------------------------------
Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion
in IPython is therefore possible without additional configuration.
It's possible to have Jedi autocompletion in REPL modes - `example video <https://vimeo.com/122332037>`_.
This means that IPython and others are `supported
This means that in Python you can enable tab completion in a `REPL
<https://jedi.readthedocs.org/en/latest/docs/usage.html#tab-completion-in-the-python-shell>`_.
@@ -191,6 +195,7 @@ Acknowledgements
.. _jedi-vim: https://github.com/davidhalter/jedi-vim
.. _youcompleteme: http://valloric.github.io/YouCompleteMe/
.. _deoplete-jedi: https://github.com/zchee/deoplete-jedi
.. _completor.vim: https://github.com/maralla/completor.vim
.. _Jedi.el: https://github.com/tkf/emacs-jedi
.. _company-mode: https://github.com/syohex/emacs-company-jedi
.. _elpy: https://github.com/jorgenschaefer/elpy
@@ -202,7 +207,7 @@ Acknowledgements
.. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle
.. _Kate: http://kate-editor.org
.. _Atom: https://atom.io/
.. _autocomplete-python: https://atom.io/packages/autocomplete-python
.. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi
.. _SourceLair: https://www.sourcelair.com
.. _GNOME Builder: https://wiki.gnome.org/Apps/Builder
.. _Visual Studio Code: https://code.visualstudio.com/

deploy-master.sh Normal file

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
# The script creates a separate folder in build/ and creates tags there, pushes
# them and then uploads the package to PyPI.
set -eu -o pipefail
BASE_DIR=$(dirname $(readlink -f "$0"))
cd $BASE_DIR
git fetch --tags
PROJECT_NAME=jedi
BRANCH=master
BUILD_FOLDER=build
[ -d $BUILD_FOLDER ] || mkdir $BUILD_FOLDER
# Remove the previous deployment first.
# Checkout the right branch
cd $BUILD_FOLDER
rm -rf $PROJECT_NAME
git clone .. $PROJECT_NAME
cd $PROJECT_NAME
git checkout $BRANCH
# Test first.
tox
# Create tag
tag=v$(python -c "import $PROJECT_NAME; print($PROJECT_NAME.__version__)")
master_ref=$(git show-ref -s heads/$BRANCH)
tag_ref=$(git show-ref -s $tag || true)
if [[ $tag_ref ]]; then
if [[ $tag_ref != $master_ref ]]; then
echo 'Cannot tag something that has already been tagged with another commit.'
exit 1
fi
else
git tag $tag
git push --tags
fi
# Package and upload to PyPI
#rm -rf dist/ - Not needed anymore, because the folder is never reused.
echo `pwd`
python setup.py sdist bdist_wheel
# Maybe do a pip install twine before.
twine upload dist/*
cd $BASE_DIR
# Back in the development directory fetch tags.
git fetch --tags


@@ -14,7 +14,7 @@ Introduction
------------
This page tries to address the fundamental demand for documentation of the
|jedi| interals. Understanding a dynamic language is a complex task. Especially
|jedi| internals. Understanding a dynamic language is a complex task. Especially
because type inference in Python can be a very recursive task. Therefore |jedi|
couldn't get rid of complexity. I know that **simple is better than complex**,
but unfortunately it sometimes requires complex solutions to understand complex
@@ -161,7 +161,7 @@ Parameter completion (evaluate/dynamic.py)
Diff Parser (parser/diff.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.parser.diff
.. automodule:: jedi.parser.python.diff
.. _docstrings:


@@ -109,7 +109,7 @@ option than to execute those modules. However: Execute isn't that critical (as
e.g. in pythoncomplete, which used to execute *every* import!), because it
means one import and no more. So basically the only dangerous thing is using
the import itself. If your ``c_builtin`` uses some strange initializations, it
might be dangerous. But if it does you're screwed anyways, because eventualy
might be dangerous. But if it does you're screwed anyways, because eventually
you're going to execute your code, which executes the import.

docs/docs/parser.rst Normal file

@@ -0,0 +1,36 @@
.. _xxx:
Parser Tree
===========
Usage
-----
.. automodule:: jedi.parser.python
:members:
:undoc-members:
Parser Tree Base Class
----------------------
All nodes and leaves have these methods/properties:
.. autoclass:: jedi.parser.tree.NodeOrLeaf
:members:
:undoc-members:
Python Parser Tree
------------------
.. automodule:: jedi.parser.python.tree
:members:
:undoc-members:
:show-inheritance:
Utility
-------
.. autofunction:: jedi.parser.tree.search_ancestor


@@ -54,7 +54,7 @@ Visual Studio Code:
Atom:
- autocomplete-python_
- autocomplete-python-jedi_
SourceLair:
@@ -82,9 +82,12 @@ and many more!
.. _repl-completion:
Tab completion in the Python Shell
Tab Completion in the Python Shell
----------------------------------
Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion
in IPython is therefore possible without additional configuration.
There are two different options how you can use Jedi autocompletion in
your Python interpreter. One with your custom ``$HOME/.pythonrc.py`` file
and one that uses ``PYTHONSTARTUP``.
@@ -111,7 +114,7 @@ Using a custom ``$HOME/.pythonrc.py``
.. _wdb: https://github.com/Kozea/wdb
.. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle
.. _kate: http://kate-editor.org/
.. _autocomplete-python: https://atom.io/packages/autocomplete-python
.. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi
.. _SourceLair: https://www.sourcelair.com
.. _GNOME Builder: https://wiki.gnome.org/Apps/Builder/
.. _gedi: https://github.com/isamert/gedi


@@ -36,8 +36,8 @@ As you see Jedi is pretty simple and allows you to concentrate on writing a
good text editor, while still having very good IDE features for Python.
"""
__version__ = '0.10.0'
__version__ = '0.11.0'
from jedi.api import Script, Interpreter, NotFoundError, set_debug_function
from jedi.api import preload_module, defined_names, names
from jedi.api import Script, Interpreter, set_debug_function, \
preload_module, names
from jedi import settings


@@ -7,6 +7,7 @@ import imp
import os
import re
import pkgutil
import warnings
try:
import importlib
except ImportError:
@@ -34,12 +35,39 @@ class DummyFile(object):
del self.loader
def find_module_py33(string, path=None):
loader = importlib.machinery.PathFinder.find_module(string, path)
def find_module_py34(string, path=None, fullname=None):
implicit_namespace_pkg = False
spec = None
loader = None
spec = importlib.machinery.PathFinder.find_spec(string, path)
if hasattr(spec, 'origin'):
origin = spec.origin
implicit_namespace_pkg = origin == 'namespace'
# We try to disambiguate implicit namespace pkgs with non implicit namespace pkgs
if implicit_namespace_pkg:
fullname = string if not path else fullname
implicit_ns_info = ImplicitNSInfo(fullname, spec.submodule_search_locations._path)
return None, implicit_ns_info, False
# we have found the tail end of the dotted path
if hasattr(spec, 'loader'):
loader = spec.loader
return find_module_py33(string, path, loader)
def find_module_py33(string, path=None, loader=None, fullname=None):
loader = loader or importlib.machinery.PathFinder.find_module(string, path)
if loader is None and path is None: # Fallback to find builtins
try:
loader = importlib.find_loader(string)
with warnings.catch_warnings(record=True):
# Mute "DeprecationWarning: Use importlib.util.find_spec()
# instead." While we should replace that in the future, it's
# probably good to wait until we deprecate Python 3.3, since
# it was added in Python 3.4 and find_loader hasn't been
# removed in 3.6.
loader = importlib.find_loader(string)
except ValueError as e:
# See #491. Importlib might raise a ValueError, to avoid this, we
# just raise an ImportError to fix the issue.
@@ -81,7 +109,7 @@ def find_module_py33(string, path=None):
return module_file, module_path, is_package
def find_module_pre_py33(string, path=None):
def find_module_pre_py33(string, path=None, fullname=None):
try:
module_file, module_path, description = imp.find_module(string, path)
module_type = description[2]
@@ -121,6 +149,7 @@ def find_module_pre_py33(string, path=None):
find_module = find_module_py33 if is_py33 else find_module_pre_py33
find_module = find_module_py34 if is_py34 else find_module
find_module.__doc__ = """
Provides information about a module.
@@ -132,20 +161,18 @@ if the module is contained in a package.
"""
class ImplicitNSInfo(object):
"""Stores information returned from an implicit namespace spec"""
def __init__(self, name, paths):
self.name = name
self.paths = paths
# unicode function
try:
unicode = unicode
except NameError:
unicode = str
if is_py3:
u = lambda s: s
else:
u = lambda s: s.decode('utf-8')
u.__doc__ = """
Decode a raw string into a unicode object. Do nothing in Python 3.
"""
# exec function
if is_py3:
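To see what ``find_module_py34`` above receives, here is a standalone sketch of ``PathFinder.find_spec`` for a regular package (plain importlib behaviour on Python 3.4-3.6, not Jedi code)::

    import importlib.machinery

    spec = importlib.machinery.PathFinder.find_spec('json')
    print(spec.origin)  # filesystem path of json/__init__.py
    print(spec.loader)  # a SourceFileLoader, handed on to find_module_py33
    # For an implicit namespace package, spec.origin is the string
    # 'namespace' on these Python versions, which is what the check
    # above relies on.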

View File

@@ -10,16 +10,15 @@ arguments.
.. warning:: Please, note that Jedi is **not thread safe**.
"""
import os
import warnings
import sys
from jedi.parser import load_grammar
from jedi.parser import tree
from jedi.parser.diff import FastParser
from jedi.parser.utils import save_parser
import parso
from parso.python import tree
from parso import python_bytes_to_unicode, split_lines
from jedi.parser_utils import get_executable_nodes, get_statement_of_position
from jedi import debug
from jedi import settings
from jedi import common
from jedi import cache
from jedi.api import classes
from jedi.api import interpreter
@@ -30,24 +29,14 @@ from jedi.evaluate import Evaluator
from jedi.evaluate import representation as er
from jedi.evaluate import imports
from jedi.evaluate.param import try_iter_content
from jedi.evaluate.helpers import get_module_names
from jedi.evaluate.sys_path import get_venv_path
from jedi.evaluate.helpers import get_module_names, evaluate_call_of_leaf
from jedi.evaluate.sys_path import get_venv_path, dotted_path_in_sys_path
from jedi.evaluate.iterable import unpack_tuple_to_dict
from jedi.evaluate.filters import TreeNameDefinition
# Jedi uses lots and lots of recursion. By setting this a little bit higher, we
# can remove some "maximum recursion depth" errors.
sys.setrecursionlimit(2000)
class NotFoundError(Exception):
"""A custom error to avoid catching the wrong exceptions.
.. deprecated:: 0.9.0
Not in use anymore, Jedi just returns no goto result if you're not on a
valid name.
.. todo:: Remove!
"""
sys.setrecursionlimit(3000)
class Script(object):
@@ -90,15 +79,7 @@ class Script(object):
"""
def __init__(self, source=None, line=None, column=None, path=None,
encoding='utf-8', source_path=None, source_encoding=None,
sys_path=None):
if source_path is not None:
warnings.warn("Use path instead of source_path.", DeprecationWarning)
path = source_path
if source_encoding is not None:
warnings.warn("Use encoding instead of source_encoding.", DeprecationWarning)
encoding = source_encoding
encoding='utf-8', sys_path=None):
self._orig_path = path
# An empty path (also empty string) should always result in no path.
self.path = os.path.abspath(path) if path else None
@@ -108,8 +89,9 @@ class Script(object):
with open(path, 'rb') as f:
source = f.read()
self._source = common.source_to_unicode(source, encoding)
self._code_lines = common.splitlines(self._source)
# TODO do we really want that?
self._source = python_bytes_to_unicode(source, encoding, errors='replace')
self._code_lines = split_lines(self._source)
line = max(len(self._code_lines), 1) if line is None else line
if not (0 < line <= len(self._code_lines)):
raise ValueError('`line` parameter is not in a valid range.')
@@ -123,7 +105,9 @@ class Script(object):
cache.clear_time_caches()
debug.reset_time()
self._grammar = load_grammar(version='%s.%s' % sys.version_info[:2])
# Load the Python grammar of the current interpreter.
self._grammar = parso.load_grammar()
if sys_path is None:
venv = os.getenv('VIRTUAL_ENV')
if venv:
@@ -133,28 +117,27 @@ class Script(object):
@cache.memoize_method
def _get_module_node(self):
cache.invalidate_star_import_cache(self._path)
parser = FastParser(self._grammar, self._source, self.path)
save_parser(self.path, parser, pickling=False)
return parser.module
return self._grammar.parse(
code=self._source,
path=self.path,
cache=False, # No disk cache, because the current script often changes.
diff_cache=True,
cache_path=settings.cache_directory
)
@cache.memoize_method
def _get_module(self):
module = er.ModuleContext(self._evaluator, self._get_module_node())
imports.add_module(self._evaluator, module.name.string_name, module)
module = er.ModuleContext(
self._evaluator,
self._get_module_node(),
self.path
)
if self.path is not None:
name = dotted_path_in_sys_path(self._evaluator.sys_path, self.path)
if name is not None:
imports.add_module(self._evaluator, name, module)
return module
@property
def source_path(self):
"""
.. deprecated:: 0.7.0
Use :attr:`.path` instead.
.. todo:: Remove!
"""
warnings.warn("Use path instead of source_path.", DeprecationWarning)
return self.path
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, repr(self._orig_path))
@@ -188,7 +171,7 @@ class Script(object):
:rtype: list of :class:`classes.Definition`
"""
module_node = self._get_module_node()
leaf = module_node.name_for_position(self._pos)
leaf = module_node.get_name_of_position(self._pos)
if leaf is None:
leaf = module_node.get_leaf_for_position(self._pos)
if leaf is None:
@@ -213,17 +196,25 @@ class Script(object):
:rtype: list of :class:`classes.Definition`
"""
def filter_follow_imports(names):
def filter_follow_imports(names, check):
for name in names:
if isinstance(name, (imports.ImportName, TreeNameDefinition)):
for context in name.infer():
yield context.name
if check(name):
for result in filter_follow_imports(name.goto(), check):
yield result
else:
yield name
names = self._goto()
if follow_imports:
names = filter_follow_imports(names)
def check(name):
if isinstance(name, er.ModuleName):
return False
return name.api_type == 'module'
else:
def check(name):
return isinstance(name, imports.SubModuleName)
names = filter_follow_imports(names, check)
defs = [classes.Definition(self._evaluator, d) for d in set(names)]
return helpers.sorted_definitions(defs)
@@ -232,7 +223,7 @@ class Script(object):
"""
Used for goto_assignments and usages.
"""
name = self._get_module_node().name_for_position(self._pos)
name = self._get_module_node().get_name_of_position(self._pos)
if name is None:
return []
context = self._evaluator.create_context(self._get_module(), name)
@@ -253,13 +244,13 @@ class Script(object):
settings.dynamic_flow_information, False
try:
module_node = self._get_module_node()
user_stmt = module_node.get_statement_for_position(self._pos)
user_stmt = get_statement_of_position(module_node, self._pos)
definition_names = self._goto()
if not definition_names and isinstance(user_stmt, tree.Import):
# For undefined imports (goto doesn't find anything), we take the
# name as a definition. This is enough, because every name
# points to it.
name = user_stmt.name_for_position(self._pos)
name = user_stmt.get_name_of_position(self._pos)
if name is None:
# Must be syntax
return []
@@ -305,14 +296,13 @@ class Script(object):
self._get_module(),
call_signature_details.bracket_leaf
)
with common.scale_speed_settings(settings.scale_call_signatures):
definitions = helpers.cache_call_signatures(
self._evaluator,
context,
call_signature_details.bracket_leaf,
self._code_lines,
self._pos
)
definitions = helpers.cache_call_signatures(
self._evaluator,
context,
call_signature_details.bracket_leaf,
self._code_lines,
self._pos
)
debug.speed('func_call followed')
return [classes.CallSignature(self._evaluator, d.name,
@@ -326,7 +316,7 @@ class Script(object):
module_node = self._get_module_node()
self._evaluator.analysis_modules = [module_node]
try:
for node in module_node.nodes_to_execute():
for node in get_executable_nodes(module_node):
context = self._get_module().create_context(node)
if node.type in ('funcdef', 'classdef'):
# TODO This is stupid, should be private
@@ -336,16 +326,20 @@ class Script(object):
elif isinstance(node, tree.Import):
import_names = set(node.get_defined_names())
if node.is_nested():
import_names |= set(path[-1] for path in node.paths())
import_names |= set(path[-1] for path in node.get_paths())
for n in import_names:
imports.infer_import(context, n)
elif node.type == 'expr_stmt':
types = context.eval_node(node)
for testlist in node.children[:-1:2]:
# Iterate tuples.
unpack_tuple_to_dict(self._evaluator, types, testlist)
unpack_tuple_to_dict(context, types, testlist)
else:
try_iter_content(self._evaluator.goto_definitions(context, node))
if node.type == 'name':
defs = self._evaluator.goto_definitions(context, node)
else:
defs = evaluate_call_of_leaf(context, node)
try_iter_content(defs)
self._evaluator.reset_recursion_limitations()
ana = [a for a in self._evaluator.analysis if self.path == a.path]
@@ -397,30 +391,11 @@ class Interpreter(Script):
return interpreter.MixedModuleContext(
self._evaluator,
parser_module,
self.namespaces
self.namespaces,
path=self.path
)
def defined_names(source, path=None, encoding='utf-8'):
"""
Get all definitions in `source` sorted by its position.
This function can be used for listing functions, classes and
data defined in a file. This can be useful if you want to list
them in a "sidebar". Each element in the returned list also has a
`defined_names` method which can be used to get sub-definitions
(e.g., methods in a class).
:rtype: list of classes.Definition
.. deprecated:: 0.9.0
Use :func:`names` instead.
.. todo:: Remove!
"""
warnings.warn("Use names instead.", DeprecationWarning)
return names(source, path, encoding)
def names(source=None, path=None, encoding='utf-8', all_scopes=False,
definitions=True, references=False):
"""
@@ -448,7 +423,7 @@ def names(source=None, path=None, encoding='utf-8', all_scopes=False,
classes.Definition(
script._evaluator,
TreeNameDefinition(
module_context.create_context(name.parent),
module_context.create_context(name if name.parent.type == 'file_input' else name.parent),
name
)
) for name in get_module_names(script._get_module_node(), all_scopes)
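The surviving module-level helper is easy to demo; a minimal sketch::

    import jedi

    source = 'import json\ndef f(x):\n    return x\n'
    for definition in jedi.names(source, all_scopes=True):
        print(definition.name, definition.type)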

View File

@@ -3,19 +3,21 @@ The :mod:`jedi.api.classes` module contains the return classes of the API.
These classes are the much bigger part of the whole API, because they contain
the interesting information about completion and goto operations.
"""
import warnings
import re
from parso.cache import parser_cache
from parso.python.tree import search_ancestor
from jedi._compatibility import u
from jedi import settings
from jedi import common
from jedi.parser.utils import load_parser
from jedi.cache import memoize_method
from jedi.evaluate import representation as er
from jedi.evaluate import instance
from jedi.evaluate import imports
from jedi.evaluate import compiled
from jedi.evaluate.filters import ParamName
from jedi.evaluate.imports import ImportName
from jedi.api.keywords import KeywordName
@@ -59,7 +61,7 @@ class BaseDefinition(object):
self._evaluator = evaluator
self._name = name
"""
An instance of :class:`jedi.parser.representation.Name` subclass.
An instance of :class:`parso.python.tree.Name` subclass.
"""
self.is_keyword = isinstance(self._name, KeywordName)
@@ -138,8 +140,8 @@ class BaseDefinition(object):
if tree_name is not None:
# TODO move this to their respective names.
definition = tree_name.get_definition()
if definition.type == 'import_from' and \
tree_name in definition.get_defined_names():
if definition is not None and definition.type == 'import_from' and \
tree_name.is_definition():
resolve = True
if isinstance(self._name, imports.SubModuleName) or resolve:
@@ -158,9 +160,17 @@ class BaseDefinition(object):
pass
if name.api_type == 'module':
module_context, = name.infer()
for n in reversed(module_context.py__name__().split('.')):
yield n
module_contexts = name.infer()
if module_contexts:
module_context, = module_contexts
for n in reversed(module_context.py__name__().split('.')):
yield n
else:
# We don't really know anything about the path here. This
# module is just an import that would lead to an
# ImportError. So simply return the name.
yield name.string_name
return
else:
yield name.string_name
@@ -244,30 +254,7 @@ class BaseDefinition(object):
the ``foo.docstring(fast=False)`` on every object, because it
parses all libraries starting with ``a``.
"""
if raw:
return _Help(self._name).raw(fast=fast)
else:
return _Help(self._name).full(fast=fast)
@property
def doc(self):
"""
.. deprecated:: 0.8.0
Use :meth:`.docstring` instead.
.. todo:: Remove!
"""
warnings.warn("Use docstring() instead.", DeprecationWarning)
return self.docstring()
@property
def raw_doc(self):
"""
.. deprecated:: 0.8.0
Use :meth:`.docstring` instead.
.. todo:: Remove!
"""
warnings.warn("Use docstring() instead.", DeprecationWarning)
return self.docstring(raw=True)
return _Help(self._name).docstring(fast=fast, raw=raw)
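The consolidated call is now just ``docstring(raw=..., fast=...)``; a usage sketch::

    import jedi

    definitions = jedi.Script('import json\njson.dumps', 2, 10).goto_definitions()
    if definitions:
        print(definitions[0].docstring(raw=False)[:60])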
@property
def description(self):
@@ -360,7 +347,7 @@ class BaseDefinition(object):
raise AttributeError()
context = followed[0] # only check the first one.
return [_Param(self._evaluator, n) for n in get_param_names(context)]
return [Definition(self._evaluator, n) for n in get_param_names(context)]
def parent(self):
context = self._name.parent_context
@@ -391,12 +378,11 @@ class BaseDefinition(object):
return ''
path = self._name.get_root_context().py__file__()
parser = load_parser(path)
lines = common.splitlines(parser.source)
lines = parser_cache[self._evaluator.grammar._hashed][path].lines
line_nr = self._name.start_pos[0]
start_line_nr = line_nr - before
return '\n'.join(lines[start_line_nr:line_nr + after + 1])
index = self._name.start_pos[0] - 1
start_index = max(index - before, 0)
return ''.join(lines[start_index:index + after + 1])
class Completion(BaseDefinition):
@@ -421,7 +407,7 @@ class Completion(BaseDefinition):
append = '('
if isinstance(self._name, ParamName) and self._stack is not None:
node_names = list(self._stack.get_node_names(self._evaluator.grammar))
node_names = list(self._stack.get_node_names(self._evaluator.grammar._pgen_grammar))
if 'trailer' in node_names and 'argument' not in node_names:
append += '='
@@ -472,7 +458,7 @@ class Completion(BaseDefinition):
# In this case we can just resolve the like name, because we
# wouldn't load like > 100 Python modules anymore.
fast = False
return super(Completion, self,).docstring(raw, fast)
return super(Completion, self).docstring(raw=raw, fast=fast)
@property
def description(self):
@@ -541,9 +527,14 @@ class Definition(BaseDefinition):
typ = 'def'
return typ + ' ' + u(self._name.string_name)
elif typ == 'param':
return typ + ' ' + tree_name.get_definition().get_description()
code = search_ancestor(tree_name, 'param').get_code(
include_prefix=False,
include_comma=False
)
return typ + ' ' + code
definition = tree_name.get_definition()
definition = tree_name.get_definition() or tree_name
# Remove the prefix, because that's not what we want for get_code
# here.
txt = definition.get_code(include_prefix=False)
@@ -628,7 +619,7 @@ class CallSignature(Definition):
if self.params:
param_name = self.params[-1]._name
if param_name.tree_name is not None:
if param_name.tree_name.get_definition().stars == 2:
if param_name.tree_name.get_definition().star_count == 2:
return i
return None
@@ -637,7 +628,7 @@ class CallSignature(Definition):
tree_name = param._name.tree_name
if tree_name is not None:
# *args case
if tree_name.get_definition().stars == 1:
if tree_name.get_definition().star_count == 1:
return i
return None
return self._index
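Since ``star_count`` now drives the ``*args``/``**kwargs`` detection above, a quick sketch of the surrounding API::

    import jedi

    source = 'def f(a, *args):\n    pass\nf(1, 2, '
    signature, = jedi.Script(source, 3, 8).call_signatures()
    print(signature.name)   # -> 'f'
    print(signature.index)  # parameter index under the cursor (here: *args)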
@@ -650,48 +641,11 @@ class CallSignature(Definition):
"""
return self._bracket_start_pos
@property
def call_name(self):
"""
.. deprecated:: 0.8.0
Use :attr:`.name` instead.
.. todo:: Remove!
The name (e.g. 'isinstance') as a string.
"""
warnings.warn("Use name instead.", DeprecationWarning)
return self.name
@property
def module(self):
"""
.. deprecated:: 0.8.0
Use :attr:`.module_name` for the module name.
.. todo:: Remove!
"""
return self._executable.get_parent_until()
def __repr__(self):
return '<%s: %s index %s>' % \
(type(self).__name__, self._name.string_name, self.index)
class _Param(Definition):
"""
Just here for backwards compatibility.
"""
def get_code(self):
"""
.. deprecated:: 0.8.0
Use :attr:`.description` and :attr:`.name` instead.
.. todo:: Remove!
A function to get the whole code of the param.
"""
warnings.warn("Use description instead.", DeprecationWarning)
return self.description
class _Help(object):
"""
Temporary implementation, will be used as `Script.help()` or something in
@@ -701,35 +655,24 @@ class _Help(object):
self._name = definition
@memoize_method
def _get_node(self, fast):
if self._name.api_type == 'module' and not fast:
followed = self._name.infer()
if followed:
# TODO: Use all of the followed objects as input to Documentation.
context = next(iter(followed))
return context.tree_node
if self._name.tree_name is None:
return None
return self._name.tree_name.get_definition()
def _get_contexts(self, fast):
if isinstance(self._name, ImportName) and fast:
return {}
def full(self, fast=True):
node = self._get_node(fast)
try:
return node.doc
except AttributeError:
return self.raw(fast)
if self._name.api_type == 'statement':
return {}
def raw(self, fast=True):
return self._name.infer()
def docstring(self, fast=True, raw=True):
"""
The raw docstring ``__doc__`` for any object.
The docstring ``__doc__`` for any object.
See :attr:`doc` for example.
"""
node = self._get_node(fast)
if node is None:
return ''
# TODO: Use all of the followed objects as output. Possibly dividing
# them by a few dashes.
for context in self._get_contexts(fast=fast):
return context.py__doc__(include_call_signature=not raw)
try:
return node.raw_doc
except AttributeError:
return ''
return ''

View File

@@ -1,5 +1,7 @@
from jedi.parser import token
from jedi.parser import tree
from parso.python import token
from parso.python import tree
from parso.tree import search_ancestor, Leaf
from jedi import debug
from jedi import settings
from jedi.api import classes
@@ -8,6 +10,7 @@ from jedi.evaluate import imports
from jedi.api import keywords
from jedi.evaluate.helpers import evaluate_call_of_leaf
from jedi.evaluate.filters import get_global_filters
from jedi.parser_utils import get_statement_of_position
def get_call_signature_param_names(call_signatures):
@@ -22,7 +25,7 @@ def get_call_signature_param_names(call_signatures):
# public API and we don't want to make the internal
# Name object public.
tree_param = tree.search_ancestor(tree_name, 'param')
if tree_param.stars == 0: # no *args/**kwargs
if tree_param.star_count == 0: # no *args/**kwargs
yield p._name
@@ -51,7 +54,7 @@ def get_user_scope(module_context, position):
"""
Returns the scope in which the user resides. This includes flows.
"""
user_stmt = module_context.tree_node.get_statement_for_position(position)
user_stmt = get_statement_of_position(module_context.tree_node, position)
if user_stmt is None:
def scan(scope):
for s in scope.children:
@@ -134,37 +137,63 @@ class Completion:
return self._global_completions()
allowed_keywords, allowed_tokens = \
helpers.get_possible_completion_types(grammar, self.stack)
helpers.get_possible_completion_types(grammar._pgen_grammar, self.stack)
if 'if' in allowed_keywords:
leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
previous_leaf = leaf.get_previous_leaf()
indent = self._position[1]
if not (leaf.start_pos <= self._position <= leaf.end_pos):
indent = leaf.start_pos[1]
if previous_leaf is not None:
stmt = previous_leaf
while True:
stmt = search_ancestor(
stmt, 'if_stmt', 'for_stmt', 'while_stmt', 'try_stmt',
'error_node',
)
if stmt is None:
break
type_ = stmt.type
if type_ == 'error_node':
first = stmt.children[0]
if isinstance(first, Leaf):
type_ = first.value + '_stmt'
# Compare indents
if stmt.start_pos[1] == indent:
if type_ == 'if_stmt':
allowed_keywords += ['elif', 'else']
elif type_ == 'try_stmt':
allowed_keywords += ['except', 'finally', 'else']
elif type_ == 'for_stmt':
allowed_keywords.append('else')
completion_names = list(self._get_keyword_completion_names(allowed_keywords))
if token.NAME in allowed_tokens or token.INDENT in allowed_tokens:
# This means that we actually have to do type inference.
symbol_names = list(self.stack.get_node_names(grammar))
symbol_names = list(self.stack.get_node_names(grammar._pgen_grammar))
nodes = list(self.stack.get_nodes())
if "import_stmt" in symbol_names:
level = 0
only_modules = True
level, names = self._parse_dotted_names(nodes)
if "import_from" in symbol_names:
if 'import' in nodes:
only_modules = False
else:
assert "import_name" in symbol_names
completion_names += self._get_importer_names(
names,
level,
only_modules
)
elif nodes and nodes[-1] in ('as', 'def', 'class'):
if nodes and nodes[-1] in ('as', 'def', 'class'):
# No completions for ``with x as foo`` and ``import x as foo``.
# Also true for defining names as a class or function.
return list(self._get_class_context_completions(is_function=True))
elif symbol_names[-1] == 'trailer' and nodes[-1] == '.':
elif "import_stmt" in symbol_names:
level, names = self._parse_dotted_names(nodes, "import_from" in symbol_names)
only_modules = not ("import_from" in symbol_names and 'import' in nodes)
completion_names += self._get_importer_names(
names,
level,
only_modules=only_modules,
)
elif symbol_names[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
dot = self._module_node.get_leaf_for_position(self._position)
completion_names += self._trailer_completions(dot.get_previous_leaf())
else:
@@ -210,7 +239,7 @@ class Completion:
completion_names += filter.values()
return completion_names
def _parse_dotted_names(self, nodes):
def _parse_dotted_names(self, nodes, is_import_from):
level = 0
names = []
for node in nodes[1:]:
@@ -221,7 +250,12 @@ class Completion:
names += node.children[::2]
elif node.type == 'name':
names.append(node)
elif node == ',':
if not is_import_from:
names = []
else:
# If the keyword `import` comes along, it stops checking
# for names.
break
return level, names
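The effect of the elif/else/except completion logic above is easiest to verify from the outside; a sketch::

    import jedi

    source = 'try:\n    pass\nel'
    # At the same indentation as the ``try``, 'else'/'except'/'finally'
    # are now offered (here filtered by the 'el' prefix to 'else').
    print([c.name for c in jedi.Script(source, 3, 2).completions()])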
@@ -235,7 +269,7 @@ class Completion:
Autocomplete inherited methods when overriding in child class.
"""
leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
cls = leaf.get_parent_until(tree.Class)
cls = tree.search_ancestor(leaf, 'classdef')
if isinstance(cls, (tree.Class, tree.Function)):
# Complete the methods that are defined in the super classes.
random_context = self._module_context.create_context(

View File

@@ -3,14 +3,15 @@ Helpers for the API
"""
import re
from collections import namedtuple
from textwrap import dedent
from parso.python.parser import Parser
from parso.python import tree
from parso import split_lines
from jedi._compatibility import u
from jedi.evaluate.helpers import evaluate_call_of_leaf
from jedi import parser
from jedi.parser import tree
from jedi.parser import tokenize
from jedi.cache import time_cache
from jedi import common
CompletionParts = namedtuple('CompletionParts', ['path', 'has_dot', 'name'])
@@ -52,7 +53,7 @@ class OnErrorLeaf(Exception):
def _is_on_comment(leaf, position):
comment_lines = common.splitlines(leaf.prefix)
comment_lines = split_lines(leaf.prefix)
difference = leaf.start_pos[0] - position[0]
prefix_start_pos = leaf.get_start_pos_of_prefix()
if difference == 0:
@@ -74,16 +75,14 @@ def _get_code_for_stack(code_lines, module_node, position):
return u('')
# If we're not on a comment simply get the previous leaf and proceed.
try:
leaf = leaf.get_previous_leaf()
except IndexError:
leaf = leaf.get_previous_leaf()
if leaf is None:
return u('') # At the beginning of the file.
is_after_newline = leaf.type == 'newline'
while leaf.type == 'newline':
try:
leaf = leaf.get_previous_leaf()
except IndexError:
leaf = leaf.get_previous_leaf()
if leaf is None:
return u('')
if leaf.type == 'error_leaf' or leaf.type == 'string':
@@ -95,11 +94,10 @@ def _get_code_for_stack(code_lines, module_node, position):
# impossible.
raise OnErrorLeaf(leaf)
else:
if leaf == ';':
user_stmt = leaf.parent
else:
user_stmt = leaf.get_definition()
if user_stmt.parent.type == 'simple_stmt':
user_stmt = leaf
while True:
if user_stmt.parent.type in ('file_input', 'suite', 'simple_stmt'):
break
user_stmt = user_stmt.parent
if is_after_newline:
@@ -120,23 +118,26 @@ def get_stack_at_position(grammar, code_lines, module_node, pos):
pass
def tokenize_without_endmarker(code):
tokens = tokenize.source_tokens(code, use_exact_op_types=True)
# TODO For now this is not an official parso API; it exists purely
# for Jedi.
tokens = grammar._tokenize(code)
for token_ in tokens:
if token_.string == safeword:
raise EndMarkerReached()
else:
yield token_
code = _get_code_for_stack(code_lines, module_node, pos)
# The code might be indented; just remove the indentation.
code = dedent(_get_code_for_stack(code_lines, module_node, pos))
# We use a word to tell Jedi when we have reached the start of the
# completion.
# Use Z as a prefix because it's not part of a number suffix.
safeword = 'ZZZ_USER_WANTS_TO_COMPLETE_HERE_WITH_JEDI'
code = code + safeword
p = parser.ParserWithRecovery(grammar, code, start_parsing=False)
p = Parser(grammar._pgen_grammar, error_recovery=True)
try:
p.parse(tokenizer=tokenize_without_endmarker(code))
p.parse(tokens=tokenize_without_endmarker(code))
except EndMarkerReached:
return Stack(p.pgen_parser.stack)
raise SystemError("This really shouldn't happen. There's a bug in Jedi.")
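The sentinel trick itself is independent of parso; a self-contained sketch with the stdlib tokenizer (the ``SAFEWORD`` value mirrors the code above)::

    import io
    import tokenize

    SAFEWORD = 'ZZZ_USER_WANTS_TO_COMPLETE_HERE_WITH_JEDI'
    code = 'foo(bar, ' + SAFEWORD

    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.string == SAFEWORD:
            print('reached the completion position')
            break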
@@ -153,7 +154,7 @@ class Stack(list):
yield node
def get_possible_completion_types(grammar, stack):
def get_possible_completion_types(pgen_grammar, stack):
def add_results(label_index):
try:
grammar_labels.append(inversed_tokens[label_index])
@@ -161,17 +162,17 @@ def get_possible_completion_types(grammar, stack):
try:
keywords.append(inversed_keywords[label_index])
except KeyError:
t, v = grammar.labels[label_index]
t, v = pgen_grammar.labels[label_index]
assert t >= 256
# See if it's a symbol and if we're in its first set
inversed_keywords
itsdfa = grammar.dfas[t]
itsdfa = pgen_grammar.dfas[t]
itsstates, itsfirst = itsdfa
for first_label_index in itsfirst.keys():
add_results(first_label_index)
inversed_keywords = dict((v, k) for k, v in grammar.keywords.items())
inversed_tokens = dict((v, k) for k, v in grammar.tokens.items())
inversed_keywords = dict((v, k) for k, v in pgen_grammar.keywords.items())
inversed_tokens = dict((v, k) for k, v in pgen_grammar.tokens.items())
keywords = []
grammar_labels = []
@@ -244,6 +245,8 @@ def _get_call_signature_details_from_error_node(node, position):
# until the parenthesis is enough.
children = node.children[index:]
name = element.get_previous_leaf()
if name is None:
continue
if name.type == 'name' or name.parent.type in ('trailer', 'atom'):
return CallSignatureDetails(
element,
@@ -255,9 +258,8 @@ def get_call_signature_details(module, position):
leaf = module.get_leaf_for_position(position, include_prefixes=True)
if leaf.start_pos >= position:
# Whitespace / comments after the leaf count towards the previous leaf.
try:
leaf = leaf.get_previous_leaf()
except IndexError:
leaf = leaf.get_previous_leaf()
if leaf is None:
return None
if leaf == ')':
@@ -281,6 +283,8 @@ def get_call_signature_details(module, position):
if node.type == 'trailer' and node.children[0] == '(':
leaf = node.get_previous_leaf()
if leaf is None:
return None
return CallSignatureDetails(
node.children[0], *_get_index_and_key(node.children, position))
@@ -299,7 +303,7 @@ def cache_call_signatures(evaluator, context, bracket_leaf, code_lines, user_pos
whole = '\n'.join(other_lines + [before_cursor])
before_bracket = re.match(r'.*\(', whole, re.DOTALL)
module_path = bracket_leaf.get_parent_until().path
module_path = context.get_root_context().py__file__()
if module_path is None:
yield None # Don't cache!
else:

View File

@@ -8,16 +8,21 @@ from jedi.evaluate.compiled import mixed
from jedi.evaluate.context import Context
class NamespaceObject(object):
def __init__(self, dct):
self.__dict__ = dct
class MixedModuleContext(Context):
resets_positions = True
type = 'mixed_module'
def __init__(self, evaluator, tree_module, namespaces):
def __init__(self, evaluator, tree_module, namespaces, path):
self.evaluator = evaluator
self._namespaces = namespaces
self._namespace_objects = [type('jedi_namespace', (), n) for n in namespaces]
self._module_context = ModuleContext(evaluator, tree_module)
self._namespace_objects = [NamespaceObject(n) for n in namespaces]
self._module_context = ModuleContext(evaluator, tree_module, path=path)
self.tree_node = tree_module
def get_node(self):
@@ -33,7 +38,7 @@ class MixedModuleContext(Context):
self.evaluator,
parent_context=self,
compiled_object=compiled_object,
tree_name=self.tree_node.name
tree_context=self._module_context
)
for filter in mixed_object.get_filters(*args, **kwargs):
yield filter

View File

@@ -4,7 +4,8 @@ import keyword
from jedi._compatibility import is_py3, is_py35
from jedi import common
from jedi.evaluate.filters import AbstractNameDefinition
from jedi.parser.tree import Leaf
from parso.python.tree import Leaf
try:
from pydoc_data import topics as pydoc_topics
except ImportError:
@@ -75,12 +76,16 @@ class KeywordName(AbstractNameDefinition):
api_type = 'keyword'
def __init__(self, evaluator, name):
self.evaluator = evaluator
self.string_name = name
self.parent_context = evaluator.BUILTINS
def eval(self):
return set()
def infer(self):
return [Keyword(self.evaluator, self.string_name, (0, 0))]
class Keyword(object):
api_type = 'keyword'
@@ -90,9 +95,6 @@ class Keyword(object):
self.start_pos = pos
self.parent = evaluator.BUILTINS
def get_parent_until(self):
return self.parent
@property
def only_valid_as_leaf(self):
return self.name.value in keywords_only_valid_as_leaf
@@ -102,9 +104,8 @@ class Keyword(object):
""" For a `parsing.Name`-like comparison """
return [self.name]
@property
def docstr(self):
return imitate_pydoc(self.name)
def py__doc__(self, include_call_signature=False):
return imitate_pydoc(self.name.string_name)
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self.name)
@@ -138,6 +139,6 @@ def imitate_pydoc(string):
return ''
try:
return pydoc_topics.topics[label] if pydoc_topics else ''
return pydoc_topics.topics[label].strip() if pydoc_topics else ''
except KeyError:
return ''

View File

@@ -1,10 +1,14 @@
from jedi.api import classes
from jedi.parser import tree
from parso.python import tree
from jedi.evaluate import imports
from jedi.evaluate.filters import TreeNameDefinition
from jedi.evaluate.representation import ModuleContext
def compare_contexts(c1, c2):
return c1 == c2 or (c1[1] == c2[1] and c1[0].tree_node == c2[0].tree_node)
def usages(evaluator, definition_names, mods):
"""
:param definitions: list of Name
@@ -14,8 +18,9 @@ def usages(evaluator, definition_names, mods):
if name.api_type == 'module':
found = False
for context in name.infer():
found = True
yield context.name
if isinstance(context, ModuleContext):
found = True
yield context.name
if not found:
yield name
else:
@@ -36,10 +41,12 @@ def usages(evaluator, definition_names, mods):
definition_names = set(resolve_names(definition_names))
for m in imports.get_modules_containing_name(evaluator, mods, search_name):
if isinstance(m, ModuleContext):
for name_node in m.tree_node.used_names.get(search_name, []):
for name_node in m.tree_node.get_used_names().get(search_name, []):
context = evaluator.create_context(m, name_node)
result = evaluator.goto(context, name_node)
if [c for c in compare_array(result) if c in compare_definitions]:
if any(compare_contexts(c1, c2)
for c1 in compare_array(result)
for c2 in compare_definitions):
name = TreeNameDefinition(context, name_node)
definition_names.add(name)
# Previous definitions might be imports, so include them

View File

@@ -3,8 +3,6 @@ This caching is very important for speed and memory optimizations. There's
nothing really spectacular, just some decorators. The following cache types are
available:
- module caching (`load_parser` and `save_parser`), which uses pickle and is
really important to assure low load times of modules like ``numpy``.
- ``time_cache`` can be used to cache something for just a limited time span,
which can be useful if there's user interaction and the user cannot react
faster than a certain time.
@@ -14,14 +12,48 @@ there are global variables, which are holding the cache information. Some of
these variables are being cleaned after every API usage.
"""
import time
import inspect
from jedi import settings
from jedi.parser.utils import parser_cache
from jedi.parser.utils import underscore_memoization
from parso.cache import parser_cache
_time_caches = {}
def underscore_memoization(func):
"""
Decorator for methods::
class A(object):
    def x(self):
        try:
            return self._x
        except AttributeError:
            self._x = 10
            return self._x

Becomes::

class A(object):
    @underscore_memoization
    def x(self):
        return 10
A now has an attribute ``_x`` written by this decorator.
"""
name = '_' + func.__name__
def wrapper(self):
try:
return getattr(self, name)
except AttributeError:
result = func(self)
if inspect.isgenerator(result):
result = list(result)
setattr(self, name, result)
return result
return wrapper
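A tiny usage sketch (the class is hypothetical)::

    class Channel(object):
        @underscore_memoization
        def values(self):
            print('computed once')
            return 10

    c = Channel()
    c.values()
    c.values()  # prints nothing; the result is cached in ``c._values``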
def clear_time_caches(delete_all=False):
""" Jedi caches many things that should be cleared after each completion
finishes.
@@ -90,31 +122,3 @@ def memoize_method(method):
dct[key] = result
return result
return wrapper
def _invalidate_star_import_cache_module(module, only_main=False):
""" Important if some new modules are being reparsed """
try:
t, modules = _time_caches['star_import_cache_validity'][module]
except KeyError:
pass
else:
del _time_caches['star_import_cache_validity'][module]
# This stuff was part of load_parser. However since we're most likely
# not going to use star import caching anymore, just ignore it.
#else:
# In case there is already a module cached and this module
# has to be reparsed, we also need to invalidate the import
# caches.
# _invalidate_star_import_cache_module(parser_cache_item.parser.module)
def invalidate_star_import_cache(path):
"""On success returns True."""
try:
parser_cache_item = parser_cache[path]
except KeyError:
pass
else:
_invalidate_star_import_cache_module(parser_cache_item.parser.module)

View File

@@ -2,10 +2,8 @@
import sys
import contextlib
import functools
import re
from ast import literal_eval
from jedi._compatibility import unicode, reraise
from jedi._compatibility import reraise
from jedi import settings
@@ -80,19 +78,6 @@ class PushBackIterator(object):
return self.current
@contextlib.contextmanager
def scale_speed_settings(factor):
a = settings.max_executions
b = settings.max_until_execution_unique
settings.max_executions *= factor
settings.max_until_execution_unique *= factor
try:
yield
finally:
settings.max_executions = a
settings.max_until_execution_unique = b
def indent_block(text, indention=' '):
"""This function indents a text block with a default of four spaces."""
temp = ''
@@ -115,72 +100,6 @@ def ignored(*exceptions):
pass
def source_to_unicode(source, encoding=None):
def detect_encoding():
"""
For the implementation of encoding definitions in Python, look at:
- http://www.python.org/dev/peps/pep-0263/
- http://docs.python.org/2/reference/lexical_analysis.html#encoding-declarations
"""
byte_mark = literal_eval(r"b'\xef\xbb\xbf'")
if source.startswith(byte_mark):
# UTF-8 byte-order mark
return 'utf-8'
first_two_lines = re.match(br'(?:[^\n]*\n){0,2}', source).group(0)
possible_encoding = re.search(br"coding[=:]\s*([-\w.]+)",
first_two_lines)
if possible_encoding:
return possible_encoding.group(1)
else:
# the default if nothing else has been set -> PEP 263
return encoding if encoding is not None else 'utf-8'
if isinstance(source, unicode):
# only cast str/bytes
return source
encoding = detect_encoding()
if not isinstance(encoding, unicode):
encoding = unicode(encoding, 'utf-8', 'replace')
# cast to unicode by default
return unicode(source, encoding, 'replace')
def splitlines(string, keepends=False):
"""
A splitlines for Python code. In contrast to Python's ``str.splitlines``,
this treats form feeds and other special characters as normal text and
only splits on ``\n`` and ``\r\n``.
Also different: returns ``['']`` for an empty string input.
In Python 2.7 form feeds are treated as normal characters by
``str.splitlines``; in Python 3, however, it splits on form feeds as well.
"""
if keepends:
# If capturing parentheses are used in pattern, then the text of all
# groups in the pattern are also returned as part of the resulting
# list.
lst = re.split('(\n|\r\n)', string)
# Need to merge the new lines with the actual lines.
odd = False
lines = []
for string in lst:
if odd:
line += string
lines.append(line)
else:
line = string
odd = not odd
if odd:
lines.append(line)
return lines
else:
return re.split('\n|\r\n', string)
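The replacement, ``parso.split_lines`` (imported in the diffs above), keeps the same contract::

    from parso import split_lines

    print(split_lines('a\x0cb\nc'))  # -> ['a\x0cb', 'c'] -- form feed kept
    print('a\x0cb\nc'.splitlines())  # -> ['a', 'b', 'c'] -- Python 3 splits on \x0c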
def unite(iterable):
"""Turns a two-dimensional array into a one-dimensional one."""
return set(typ for types in iterable for typ in types)

View File

@@ -1,5 +1,4 @@
from jedi._compatibility import encoding, is_py3, u
import inspect
import os
import time
@@ -61,7 +60,6 @@ enable_notice = False
# callback, interface: level, str
debug_function = None
ignored_modules = ['jedi.parser']
_debug_indent = 0
_start_time = time.time()
@@ -91,12 +89,9 @@ def dbg(message, *args, **kwargs):
assert color
if debug_function and enable_notice:
frm = inspect.stack()[1]
mod = inspect.getmodule(frm[0])
if not (mod.__name__ in ignored_modules):
i = ' ' * _debug_indent
_lazy_colorama_init()
debug_function(color, i + 'dbg: ' + message % tuple(u(repr(a)) for a in args))
i = ' ' * _debug_indent
_lazy_colorama_init()
debug_function(color, i + 'dbg: ' + message % tuple(u(repr(a)) for a in args))
def warning(message, *args, **kwargs):

View File

@@ -63,14 +63,16 @@ that are not used are just being ignored.
import copy
import sys
from jedi.parser import tree
from parso.python import tree
import parso
from jedi import debug
from jedi.common import unite
from jedi.evaluate import representation as er
from jedi.evaluate import imports
from jedi.evaluate import recursion
from jedi.evaluate import iterable
from jedi.evaluate.cache import memoize_default
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate import stdlib
from jedi.evaluate import finder
from jedi.evaluate import compiled
@@ -80,16 +82,43 @@ from jedi.evaluate import helpers
from jedi.evaluate import pep0484
from jedi.evaluate.filters import TreeNameDefinition, ParamName
from jedi.evaluate.instance import AnonymousInstance, BoundMethod
from jedi.evaluate.context import ContextualizedName, ContextualizedNode
from jedi import parser_utils
def _limit_context_infers(func):
"""
For now this is how we keep type inference from going wild. There are
other ways to ensure recursion limits as well. This is mostly necessary
because of instance (self) access, which can be quite tricky to limit.
I'm still not sure this is the way to go, but it looks okay for now and
we can still go another way in the future. Tests are there. ~ dave
"""
def wrapper(evaluator, context, *args, **kwargs):
n = context.tree_node
try:
evaluator.inferred_element_counts[n] += 1
if evaluator.inferred_element_counts[n] > 300:
debug.warning('In context %s there were too many inferences.', n)
return set()
except KeyError:
evaluator.inferred_element_counts[n] = 1
return func(evaluator, context, *args, **kwargs)
return wrapper
class Evaluator(object):
def __init__(self, grammar, sys_path=None):
self.grammar = grammar
self.latest_grammar = parso.load_grammar(version='3.6')
self.memoize_cache = {} # for memoize decorators
# To memorize modules -> equals `sys.modules`.
self.modules = {} # like `sys.modules`.
self.compiled_cache = {} # see `evaluate.compiled.create()`
self.mixed_cache = {} # see `evaluate.compiled.mixed.create()`
self.inferred_element_counts = {}
self.mixed_cache = {} # see `evaluate.compiled.mixed._create()`
self.analysis = []
self.dynamic_params_depth = 0
self.is_analysis = False
@@ -113,7 +142,7 @@ class Evaluator(object):
self.execution_recursion_detector = recursion.ExecutionRecursionDetector(self)
def find_types(self, context, name_or_str, name_context, position=None,
search_global=False, is_goto=False):
search_global=False, is_goto=False, analysis_errors=True):
"""
This is the search function. The most important part to debug.
`remove_statements` and `filter_statements` really are the core part of
@@ -122,19 +151,21 @@ class Evaluator(object):
:param position: Position of the last statement -> tuple of line, column
:return: List of Names. Their parents are the types.
"""
f = finder.NameFinder(self, context, name_context, name_or_str, position)
f = finder.NameFinder(self, context, name_context, name_or_str,
position, analysis_errors=analysis_errors)
filters = f.get_filters(search_global)
if is_goto:
return f.filter_name(filters)
return f.find(filters, attribute_lookup=not search_global)
@_limit_context_infers
def eval_statement(self, context, stmt, seek_name=None):
with recursion.execution_allowed(self, stmt) as allowed:
if allowed or context.get_root_context() == self.BUILTINS:
return self._eval_stmt(context, stmt, seek_name)
return set()
#@memoize_default(default=[], evaluator_is_first_arg=True)
#@evaluator_function_cache(default=[])
@debug.increase_indent
def _eval_stmt(self, context, stmt, seek_name=None):
"""
@@ -150,29 +181,30 @@ class Evaluator(object):
types = self.eval_element(context, rhs)
if seek_name:
types = finder.check_tuple_assignments(self, types, seek_name)
c_node = ContextualizedName(context, seek_name)
types = finder.check_tuple_assignments(self, c_node, types)
first_operation = stmt.first_operation()
if first_operation not in ('=', None) and first_operation.type == 'operator':
first_operator = next(stmt.yield_operators(), None)
if first_operator not in ('=', None) and first_operator.type == 'operator':
# `=` is always the last character in aug assignments -> -1
operator = copy.copy(first_operation)
operator = copy.copy(first_operator)
operator.value = operator.value[:-1]
name = str(stmt.get_defined_names()[0])
name = stmt.get_defined_names()[0].value
left = context.py__getattribute__(
name, position=stmt.start_pos, search_global=True)
for_stmt = stmt.get_parent_until(tree.ForStmt)
if isinstance(for_stmt, tree.ForStmt) and types \
and for_stmt.defines_one_name():
for_stmt = tree.search_ancestor(stmt, 'for_stmt')
if for_stmt is not None and for_stmt.type == 'for_stmt' and types \
and parser_utils.for_stmt_defines_one_name(for_stmt):
# Iterate through the result and add the values; that's only possible
# in for loops without clutter, because they are predictable. Also
# only do it if the variable is not a tuple.
node = for_stmt.get_input_node()
for_iterables = self.eval_element(context, node)
ordered = list(iterable.py__iter__(self, for_iterables, node))
node = for_stmt.get_testlist()
cn = ContextualizedNode(context, node)
ordered = list(iterable.py__iter__(self, cn.infer(), cn))
for lazy_context in ordered:
dct = {str(for_stmt.children[1]): lazy_context.infer()}
dct = {for_stmt.children[1].value: lazy_context.infer()}
with helpers.predefine_names(context, for_stmt, dct):
t = self.eval_element(context, rhs)
left = precedence.calculate(self, context, left, operator, t)
@@ -191,7 +223,7 @@ class Evaluator(object):
if_stmt = if_stmt.parent
if if_stmt.type in ('if_stmt', 'for_stmt'):
break
if if_stmt.is_scope():
if parser_utils.is_scope(if_stmt):
if_stmt = None
break
predefined_if_name_dict = context.predefined_names.get(if_stmt)
@@ -207,8 +239,8 @@ class Evaluator(object):
# names in the suite.
if_names = helpers.get_names_of_node(if_stmt_test)
element_names = helpers.get_names_of_node(element)
str_element_names = [str(e) for e in element_names]
if any(str(i) in str_element_names for i in if_names):
str_element_names = [e.value for e in element_names]
if any(i.value in str_element_names for i in if_names):
for if_name in if_names:
definitions = self.goto_definitions(context, if_name)
# Every name that has multiple different definitions
@@ -229,12 +261,12 @@ class Evaluator(object):
new_name_dicts = list(original_name_dicts)
for i, name_dict in enumerate(new_name_dicts):
new_name_dicts[i] = name_dict.copy()
new_name_dicts[i][str(if_name)] = set([definition])
new_name_dicts[i][if_name.value] = set([definition])
name_dicts += new_name_dicts
else:
for name_dict in name_dicts:
name_dict[str(if_name)] = definitions
name_dict[if_name.value] = definitions
if len(name_dicts) > 1:
result = set()
for name_dict in name_dicts:
@@ -261,26 +293,28 @@ class Evaluator(object):
return self._eval_element_not_cached(context, element)
return self._eval_element_cached(context, element)
@memoize_default(default=set(), evaluator_is_first_arg=True)
@evaluator_function_cache(default=set())
def _eval_element_cached(self, context, element):
return self._eval_element_not_cached(context, element)
@debug.increase_indent
@_limit_context_infers
def _eval_element_not_cached(self, context, element):
debug.dbg('eval_element %s@%s', element, element.start_pos)
types = set()
if isinstance(element, (tree.Name, tree.Literal)) or element.type == 'atom':
typ = element.type
if typ in ('name', 'number', 'string', 'atom'):
types = self.eval_atom(context, element)
elif isinstance(element, tree.Keyword):
elif typ == 'keyword':
# For False/True/None
if element.value in ('False', 'True', 'None'):
types.add(compiled.builtin_from_name(self, element.value))
# else: print e.g. could be evaluated like this in Python 2.7
elif isinstance(element, tree.Lambda):
elif typ == 'lambdef':
types = set([er.FunctionContext(self, context, element)])
elif element.type == 'expr_stmt':
elif typ == 'expr_stmt':
types = self.eval_statement(context, element)
elif element.type in ('power', 'atom_expr'):
elif typ in ('power', 'atom_expr'):
first_child = element.children[0]
if not (first_child.type == 'keyword' and first_child.value == 'await'):
types = self.eval_atom(context, first_child)
@@ -290,22 +324,24 @@ class Evaluator(object):
types = set(precedence.calculate(self, context, types, trailer, right))
break
types = self.eval_trailer(context, types, trailer)
elif element.type in ('testlist_star_expr', 'testlist',):
elif typ in ('testlist_star_expr', 'testlist',):
# The implicit tuple in statements.
types = set([iterable.SequenceLiteralContext(self, context, element)])
elif element.type in ('not_test', 'factor'):
elif typ in ('not_test', 'factor'):
types = self.eval_element(context, element.children[-1])
for operator in element.children[:-1]:
types = set(precedence.factor_calculate(self, types, operator))
elif element.type == 'test':
elif typ == 'test':
# `x if foo else y` case.
types = (self.eval_element(context, element.children[0]) |
self.eval_element(context, element.children[-1]))
elif element.type == 'operator':
elif typ == 'operator':
# Must be an ellipsis, other operators are not evaluated.
assert element.value == '...'
# In Python 2 an ellipsis is coded as three single-dot tokens, not
# as one three-dot token.
assert element.value in ('.', '...')
types = set([compiled.create(self, Ellipsis)])
elif element.type == 'dotted_name':
elif typ == 'dotted_name':
types = self.eval_atom(context, element.children[0])
for next_name in element.children[2::2]:
# TODO add search_global=True?
@@ -314,12 +350,10 @@ class Evaluator(object):
for typ in types
)
types = types
elif element.type == 'eval_input':
elif typ == 'eval_input':
types = self._eval_element_not_cached(context, element.children[0])
elif element.type == 'annassign':
print(element.children[1])
elif typ == 'annassign':
types = pep0484._evaluate_for_annotation(context, element.children[1])
print('xxx')
else:
types = precedence.calculate_children(self, context, element.children)
debug.dbg('eval_element result %s', types)
@@ -331,14 +365,12 @@ class Evaluator(object):
generate the node (because it has just one child). In that case an atom
might be a name or a literal as well.
"""
if isinstance(atom, tree.Name):
if atom.type == 'name':
# This is the first global lookup.
stmt = atom.get_definition()
if isinstance(stmt, tree.CompFor):
stmt = stmt.get_parent_until((tree.ClassOrFunc, tree.ExprStmt))
if stmt.type != 'expr_stmt':
# We only need to adjust the start_pos for statements, because
# there the name cannot be used.
stmt = tree.search_ancestor(
atom, 'expr_stmt', 'lambdef'
) or atom
if stmt.type == 'lambdef':
stmt = atom
return context.py__getattribute__(
name_or_str=atom,
@@ -346,7 +378,8 @@ class Evaluator(object):
search_global=True
)
elif isinstance(atom, tree.Literal):
return set([compiled.create(self, atom.eval())])
string = parser_utils.safe_literal_eval(atom.value)
return set([compiled.create(self, string)])
else:
c = atom.children
if c[0].type == 'string':
@@ -412,10 +445,6 @@ class Evaluator(object):
@debug.increase_indent
def execute(self, obj, arguments):
if not isinstance(arguments, param.AbstractArguments):
raise NotImplementedError
arguments = param.Arguments(self, arguments)
if self.is_analysis:
arguments.eval_all()
@@ -438,30 +467,50 @@ class Evaluator(object):
return types
def goto_definitions(self, context, name):
def_ = name.get_definition()
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
if name.parent.type == 'classdef' and name.parent.name == name:
def_ = name.get_definition(import_name_always=True)
if def_ is not None:
type_ = def_.type
if type_ == 'classdef':
return [er.ClassContext(self, name.parent, context)]
elif name.parent.type == 'funcdef':
elif type_ == 'funcdef':
return [er.FunctionContext(self, context, name.parent)]
elif name.parent.type == 'file_input':
raise NotImplementedError
if def_.type == 'expr_stmt' and name in def_.get_defined_names():
return self.eval_statement(context, def_, name)
elif def_.type == 'for_stmt':
if type_ == 'expr_stmt':
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
return self.eval_statement(context, def_, name)
if type_ == 'for_stmt':
container_types = self.eval_element(context, def_.children[3])
for_types = iterable.py__iter__types(self, container_types, def_.children[3])
return finder.check_tuple_assignments(self, for_types, name)
elif def_.type in ('import_from', 'import_name'):
cn = ContextualizedNode(context, def_.children[3])
for_types = iterable.py__iter__types(self, container_types, cn)
c_node = ContextualizedName(context, name)
return finder.check_tuple_assignments(self, c_node, for_types)
if type_ in ('import_from', 'import_name'):
return imports.infer_import(context, name)
return helpers.evaluate_call_of_leaf(context, name)
def goto(self, context, name):
stmt = name.get_definition()
definition = name.get_definition(import_name_always=True)
if definition is not None:
type_ = definition.type
if type_ == 'expr_stmt':
# Only take the parent, because if it's more complicated than just
# a name it's something you can "goto" again.
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
return [TreeNameDefinition(context, name)]
elif type_ == 'param':
return [ParamName(context, name)]
elif type_ in ('funcdef', 'classdef'):
return [TreeNameDefinition(context, name)]
elif type_ in ('import_from', 'import_name'):
module_names = imports.infer_import(context, name, is_goto=True)
return module_names
par = name.parent
if par.type == 'argument' and par.children[1] == '=' and par.children[0] == name:
typ = par.type
if typ == 'argument' and par.children[1] == '=' and par.children[0] == name:
# Named param goto.
trailer = par.parent
if trailer.type == 'arglist':
@@ -486,18 +535,7 @@ class Evaluator(object):
if param_name.string_name == name.value:
param_names.append(param_name)
return param_names
elif isinstance(par, tree.ExprStmt) and name in par.get_defined_names():
# Only take the parent, because if it's more complicated than just
# a name it's something you can "goto" again.
return [TreeNameDefinition(context, name)]
elif par.type == 'param' and par.name:
return [ParamName(context, name)]
elif isinstance(par, (tree.Param, tree.Function, tree.Class)) and par.name is name:
return [TreeNameDefinition(context, name)]
elif isinstance(stmt, tree.Import):
module_names = imports.infer_import(context, name, is_goto=True)
return module_names
elif par.type == 'dotted_name': # Is a decorator.
elif typ == 'dotted_name': # Is a decorator.
index = par.children.index(name)
if index > 0:
new_dotted = helpers.deep_ast_copy(par)
@@ -508,16 +546,17 @@ class Evaluator(object):
for value in values
)
if par.type == 'trailer' and par.children[0] == '.':
if typ == 'trailer' and par.children[0] == '.':
values = helpers.evaluate_call_of_leaf(context, name, cut_own_trailer=True)
return unite(
value.py__getattribute__(name, name_context=context, is_goto=True)
for value in values
)
else:
if stmt.type != 'expr_stmt':
# We only need to adjust the start_pos for statements, because
# there the name cannot be used.
stmt = tree.search_ancestor(
name, 'expr_stmt', 'lambdef'
) or name
if stmt.type == 'lambdef':
stmt = name
return context.py__getattribute__(
name,
@@ -530,7 +569,7 @@ class Evaluator(object):
while True:
node = node.parent
if node.is_scope():
if parser_utils.is_scope(node):
return node
elif node.type in ('argument', 'testlist_comp'):
if node.children[1].type == 'comp_for':
@@ -545,8 +584,8 @@ class Evaluator(object):
if scope_node == base_node:
return base_context
is_funcdef = scope_node.type in ('funcdef', 'lambda')
parent_scope = scope_node.get_parent_scope()
is_funcdef = scope_node.type in ('funcdef', 'lambdef')
parent_scope = parser_utils.get_parent_scope(scope_node)
parent_context = from_scope_node(parent_scope, child_is_funcdef=is_funcdef)
if is_funcdef:
@@ -579,10 +618,10 @@ class Evaluator(object):
base_node = base_context.tree_node
if node_is_context and node.is_scope():
if node_is_context and parser_utils.is_scope(node):
scope_node = node
else:
if node.parent.type in ('funcdef', 'classdef'):
if node.parent.type in ('funcdef', 'classdef') and node.parent.name == node:
# When we're on class/function names/leafs that define the
# object itself and not its contents.
node = node.parent

View File

@@ -2,11 +2,9 @@
Module for static analysis.
"""
from jedi import debug
from jedi.parser import tree
from parso.python import tree
from jedi.evaluate.compiled import CompiledObject
from jedi.common import unite
CODES = {
'attribute-error': (1, AttributeError, 'Potential AttributeError.'),
@@ -82,7 +80,9 @@ def add(node_context, error_name, node, message=None, typ=Error, payload=None):
if _check_for_exception_catch(node_context, node, exception, payload):
return
module_path = node.get_root_node().path
# TODO this path is probably not right
module_context = node_context.get_root_context()
module_path = module_context.py__file__()
instance = typ(error_name, module_path, node.start_pos, message)
debug.warning(str(instance), format=False)
node_context.evaluator.analysis.append(instance)
@@ -99,7 +99,7 @@ def _check_for_setattr(instance):
node = module.tree_node
try:
stmts = node.used_names['setattr']
stmts = node.get_used_names()['setattr']
except KeyError:
return False
@@ -153,7 +153,7 @@ def _check_for_exception_catch(node_context, jedi_name, exception, payload=None)
and not (branch_type.start_pos < jedi_name.start_pos <= suite.end_pos):
return False
for node in obj.except_clauses():
for node in obj.get_except_clause_tests():
if node is None:
return True # An exception block that catches everything.
else:
@@ -190,7 +190,7 @@ def _check_for_exception_catch(node_context, jedi_name, exception, payload=None)
key, lazy_context = args[1]
names = list(lazy_context.infer())
assert len(names) == 1 and isinstance(names[0], CompiledObject)
assert names[0].obj == str(payload[1])
assert names[0].obj == payload[1].value
# Check objects
key, lazy_context = args[0]
@@ -203,10 +203,10 @@ def _check_for_exception_catch(node_context, jedi_name, exception, payload=None)
while obj is not None and not isinstance(obj, (tree.Function, tree.Class)):
if isinstance(obj, tree.Flow):
# try/except catch check
if obj.isinstance(tree.TryStmt) and check_try_for_except(obj, exception):
if obj.type == 'try_stmt' and check_try_for_except(obj, exception):
return True
# hasattr check
if exception == AttributeError and obj.isinstance(tree.IfStmt, tree.WhileStmt):
if exception == AttributeError and obj.type in ('if_stmt', 'while_stmt'):
if check_hasattr(obj.children[1], obj.children[3]):
return True
obj = obj.parent
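For reference, these are the two source patterns the check above recognizes (illustrative user code, runnable as-is):

x = object()
try:
    x.attribute_that_may_not_exist  # no attribute-error report: the except catches it
except AttributeError:
    pass

if hasattr(x, 'maybe'):
    x.maybe  # no report either: the access is guarded by hasattr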

View File

@@ -1,15 +1,15 @@
"""
- the popular ``memoize_default`` works like a typical memoize and returns the
- the popular ``_memoize_default`` works like a typical memoize and returns the
default otherwise.
- ``CachedMetaClass`` uses ``memoize_default`` to do the same with classes.
- ``CachedMetaClass`` uses ``_memoize_default`` to do the same with classes.
"""
import inspect
NO_DEFAULT = object()
_NO_DEFAULT = object()
def memoize_default(default=NO_DEFAULT, evaluator_is_first_arg=False, second_arg_is_evaluator=False):
def _memoize_default(default=_NO_DEFAULT, evaluator_is_first_arg=False, second_arg_is_evaluator=False):
""" This is a typical memoization decorator, BUT there is one difference:
To prevent recursion it sets defaults.
@@ -19,10 +19,11 @@ def memoize_default(default=NO_DEFAULT, evaluator_is_first_arg=False, second_arg
"""
def func(function):
def wrapper(obj, *args, **kwargs):
# TODO These checks are kind of ugly and slow.
if evaluator_is_first_arg:
cache = obj.memoize_cache
elif second_arg_is_evaluator: # needed for meta classes
cache = args[0].memoize_cache
elif second_arg_is_evaluator:
cache = args[0].memoize_cache # needed for meta classes
else:
cache = obj.evaluator.memoize_cache
@@ -36,7 +37,7 @@ def memoize_default(default=NO_DEFAULT, evaluator_is_first_arg=False, second_arg
if key in memo:
return memo[key]
else:
if default is not NO_DEFAULT:
if default is not _NO_DEFAULT:
memo[key] = default
rv = function(obj, *args, **kwargs)
if inspect.isgenerator(rv):
@@ -44,15 +45,37 @@ def memoize_default(default=NO_DEFAULT, evaluator_is_first_arg=False, second_arg
memo[key] = rv
return rv
return wrapper
return func
def evaluator_function_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default, evaluator_is_first_arg=True)(func)
return decorator
def evaluator_method_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default)(func)
return decorator
def _memoize_meta_class():
def decorator(call):
return _memoize_default(second_arg_is_evaluator=True)(call)
return decorator
class CachedMetaClass(type):
"""
This is basically the same as the decorator above; it just caches
class initializations. Either you do it this way or with decorators, but
with decorators you lose class access (isinstance, etc).
"""
@memoize_default(None, second_arg_is_evaluator=True)
@_memoize_meta_class()
def __call__(self, *args, **kwargs):
return super(CachedMetaClass, self).__call__(*args, **kwargs)
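A hedged sketch of how the renamed helpers above might be used; the class and its `_expensive_inference` method are hypothetical stand-ins for jedi's Context subclasses:

from jedi.evaluate.cache import evaluator_method_cache

class SomeContext(object):
    def __init__(self, evaluator):
        # The decorator stores results on evaluator.memoize_cache.
        self.evaluator = evaluator

    @evaluator_method_cache(default=set())
    def infer(self):
        # While this call is still running, recursive calls receive set()
        # instead of recursing forever; afterwards the result is cached.
        return self._expensive_inference()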

View File

@@ -5,16 +5,16 @@ import inspect
import re
import sys
import os
import types
from functools import partial
from jedi._compatibility import builtins as _builtins, unicode
from jedi._compatibility import builtins as _builtins, unicode, py_version
from jedi import debug
from jedi.cache import underscore_memoization, memoize_method
from jedi.parser.tree import Param, Operator
from jedi.evaluate.helpers import FakeName
from jedi.evaluate.filters import AbstractFilter, AbstractNameDefinition, \
ContextNameMixin
from jedi.evaluate.context import Context, LazyKnownContext
from jedi.evaluate.compiled.getattr_static import getattr_static
from . import fake
@@ -24,6 +24,23 @@ if os.path.altsep is not None:
_path_re = re.compile('(?:\.[^{0}]+|[{0}]__init__\.py)$'.format(re.escape(_sep)))
del _sep
# Those types don't exist in typing.
MethodDescriptorType = type(str.replace)
WrapperDescriptorType = type(set.__iter__)
# `object.__subclasshook__` is an already executed descriptor.
object_class_dict = type.__dict__["__dict__"].__get__(object)
ClassMethodDescriptorType = type(object_class_dict['__subclasshook__'])
ALLOWED_DESCRIPTOR_ACCESS = (
types.FunctionType,
types.GetSetDescriptorType,
types.MemberDescriptorType,
MethodDescriptorType,
WrapperDescriptorType,
ClassMethodDescriptorType,
staticmethod,
classmethod,
)
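These aliases are easy to sanity-check in a plain CPython REPL (standard interpreter behavior, nothing jedi-specific):

>>> type(str.replace).__name__
'method_descriptor'
>>> type(set.__iter__).__name__
'wrapper_descriptor'
>>> type(object.__dict__['__subclasshook__']).__name__
'classmethod_descriptor'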
class CheckAttribute(object):
"""Raises an AttributeError if the attribute X isn't available."""
@@ -50,7 +67,7 @@ class CheckAttribute(object):
class CompiledObject(Context):
path = None # modules have this attribute - set it to None.
used_names = {} # To be consistent with modules.
used_names = lambda self: {} # To be consistent with modules.
def __init__(self, evaluator, obj, parent_context=None, faked_class=None):
super(CompiledObject, self).__init__(evaluator, parent_context)
@@ -94,45 +111,53 @@ class CompiledObject(Context):
def is_class(self):
return inspect.isclass(self.obj)
@property
def doc(self):
def py__doc__(self, include_call_signature=False):
return inspect.getdoc(self.obj) or ''
@property
def get_params(self):
return [] # TODO Fix me.
params_str, ret = self._parse_function_doc()
tokens = params_str.split(',')
if inspect.ismethoddescriptor(self.obj):
tokens.insert(0, 'self')
params = []
for p in tokens:
parts = [FakeName(part) for part in p.strip().split('=')]
if len(parts) > 1:
parts.insert(1, Operator('=', (0, 0)))
params.append(Param(parts, self))
return params
def get_param_names(self):
params_str, ret = self._parse_function_doc()
tokens = params_str.split(',')
if inspect.ismethoddescriptor(self.obj):
tokens.insert(0, 'self')
for p in tokens:
parts = p.strip().split('=')
if len(parts) > 1:
parts.insert(1, Operator('=', (0, 0)))
yield UnresolvableParamName(self, parts[0])
obj = self.obj
try:
if py_version < 33:
raise ValueError("inspect.signature was introduced in 3.3")
if py_version == 34:
# In 3.4 inspect.signature is wrong for str and int. This has
# been fixed in 3.5. The signature of object is returned,
# because no signature was found for str. Here we imitate 3.5
# logic and just ignore the signature if the magic methods
# don't match object.
# 3.3 doesn't even have the logic and returns nothing for str
# and classes that inherit from object.
user_def = inspect._signature_get_user_defined_method
if (inspect.isclass(obj)
and not user_def(type(obj), '__init__')
and not user_def(type(obj), '__new__')
and (obj.__init__ != object.__init__
or obj.__new__ != object.__new__)):
raise ValueError
signature = inspect.signature(obj)
except ValueError: # Has no signature
params_str, ret = self._parse_function_doc()
tokens = params_str.split(',')
if inspect.ismethoddescriptor(obj):
tokens.insert(0, 'self')
for p in tokens:
parts = p.strip().split('=')
yield UnresolvableParamName(self, parts[0])
else:
for signature_param in signature.parameters.values():
yield SignatureParamName(self, signature_param)
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, repr(self.obj))
@underscore_memoization
def _parse_function_doc(self):
if self.doc is None:
doc = self.py__doc__()
if doc is None:
return '', ''
return _parse_function_doc(self.doc)
return _parse_function_doc(doc)
@property
def api_type(self):
@@ -192,12 +217,6 @@ class CompiledObject(Context):
"""
return CompiledObjectFilter(self.evaluator, self, is_instance)
def get_subscope_by_name(self, name):
if name in dir(self.obj):
return CompiledName(self.evaluator, self, name).parent
else:
raise KeyError("CompiledObject doesn't have an attribute '%s'." % name)
@CheckAttribute
def py__getitem__(self, index):
if type(self.obj) not in (str, list, tuple, unicode, bytes, bytearray, dict):
@@ -212,7 +231,10 @@ class CompiledObject(Context):
# Get rid of side effects, we won't call custom `__getitem__`s.
return
for part in self.obj:
for i, part in enumerate(self.obj):
if i > 20:
# Should not go crazy with large iterators
break
yield LazyKnownContext(create(self.evaluator, part))
def py__name__(self):
@@ -230,9 +252,9 @@ class CompiledObject(Context):
return CompiledContextName(self, name)
def _execute_function(self, params):
from jedi.evaluate import docstrings
if self.type != 'funcdef':
return
for name in self._parse_function_doc()[1].split():
try:
bltn_obj = getattr(_builtins, name)
@@ -246,9 +268,8 @@ class CompiledObject(Context):
bltn_obj = create(self.evaluator, bltn_obj)
for result in self.evaluator.execute(bltn_obj, params):
yield result
def is_scope(self):
return True
for type_ in docstrings.infer_return_types(self):
yield type_
def get_self_attributes(self):
return [] # Instance compatibility
@@ -256,6 +277,9 @@ class CompiledObject(Context):
def get_imports(self):
return [] # Builtins don't have imports
def dict_values(self):
return set(create(self.evaluator, v) for v in self.obj.values())
class CompiledName(AbstractNameDefinition):
def __init__(self, evaluator, parent_context, name):
@@ -280,6 +304,29 @@ class CompiledName(AbstractNameDefinition):
return [_create_from_name(self._evaluator, module, self.parent_context, self.string_name)]
class SignatureParamName(AbstractNameDefinition):
api_type = 'param'
def __init__(self, compiled_obj, signature_param):
self.parent_context = compiled_obj.parent_context
self._signature_param = signature_param
@property
def string_name(self):
return self._signature_param.name
def infer(self):
p = self._signature_param
evaluator = self.parent_context.evaluator
types = set()
if p.default is not p.empty:
types.add(create(evaluator, p.default))
if p.annotation is not p.empty:
annotation = create(evaluator, p.annotation)
types |= annotation.execute_evaluated()
return types
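For illustration, the inspect data this draws from, on a hypothetical function in plain Python 3:

import inspect

def f(x: int = 3):
    pass

param = inspect.signature(f).parameters['x']
param.default     # 3 -> wrapped via create(evaluator, 3)
param.annotation  # <class 'int'> -> created, then executed to an instance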
class UnresolvableParamName(AbstractNameDefinition):
api_type = 'param'
@@ -325,16 +372,17 @@ class CompiledObjectFilter(AbstractFilter):
name = str(name)
obj = self._compiled_object.obj
try:
getattr(obj, name)
if self._is_instance and name not in dir(obj):
return []
attr, is_get_descriptor = getattr_static(obj, name)
except AttributeError:
return []
except Exception:
# This is a bit ugly. We're basically returning this to make
# lookups possible without having the actual attribute. This way
# proper completion is still possible.
return [EmptyCompiledName(self._evaluator, name)]
else:
if is_get_descriptor \
and not type(attr) in ALLOWED_DESCRIPTOR_ACCESS:
# In case of descriptors that have get methods we cannot return
# its value, because that would mean code execution.
return [EmptyCompiledName(self._evaluator, name)]
if self._is_instance and name not in dir(obj):
return []
return [self._create_name(name)]
def values(self):
@@ -346,7 +394,7 @@ class CompiledObjectFilter(AbstractFilter):
is_instance = self._is_instance or fake.is_class_instance(obj)
# ``dir`` doesn't include the type names.
if not inspect.ismodule(obj) and obj != type and not is_instance:
if not inspect.ismodule(obj) and (obj is not type) and not is_instance:
for filter in create(self._evaluator, type).get_filters():
names += filter.values()
return names
@@ -413,7 +461,7 @@ def load_module(evaluator, path=None, name=None):
raise
except ImportError:
# If a module is "corrupt" or not really a Python module or whatever.
debug.warning('Module %s not importable.', path)
debug.warning('Module %s not importable in path %s.', dotted_path, path)
return None
finally:
sys.path = temp
@@ -575,7 +623,7 @@ def create(evaluator, obj, parent_context=None, module=None, faked=None):
# Modules don't have parents, be careful with caching: recurse.
return create(evaluator, obj)
else:
if parent_context is None and obj != _builtins:
if parent_context is None and obj is not _builtins:
return create(evaluator, obj, create(evaluator, _builtins))
try:

View File

@@ -7,10 +7,11 @@ mixing in Python code, the autocompletion should work much better for builtins.
import os
import inspect
import types
from itertools import chain
from parso.python import tree
from jedi._compatibility import is_py3, builtins, unicode, is_py34
from jedi.parser import ParserWithRecovery, load_grammar
from jedi.parser import tree as pt
modules = {}
@@ -46,7 +47,7 @@ class FakeDoesNotExist(Exception):
pass
def _load_faked_module(module):
def _load_faked_module(grammar, module):
module_name = module.__name__
if module_name == '__builtin__' and not is_py3:
module_name = 'builtins'
@@ -61,22 +62,20 @@ def _load_faked_module(module):
except IOError:
modules[module_name] = None
return
grammar = load_grammar(version='3.4')
module = ParserWithRecovery(grammar, unicode(source), module_name).module
modules[module_name] = module
modules[module_name] = m = grammar.parse(unicode(source))
if module_name == 'builtins' and not is_py3:
# There are two implementations of `open` for either python 2/3.
# -> Rename the python2 version (look at `fake/builtins.pym`).
open_func = _search_scope(module, 'open')
open_func = _search_scope(m, 'open')
open_func.children[1].value = 'open_python3'
open_func = _search_scope(module, 'open_python2')
open_func = _search_scope(m, 'open_python2')
open_func.children[1].value = 'open'
return module
return m
def _search_scope(scope, obj_name):
for s in scope.subscopes:
for s in chain(scope.iter_classdefs(), scope.iter_funcdefs()):
if s.name.value == obj_name:
return s
@@ -106,16 +105,16 @@ def get_module(obj):
return builtins
def _faked(module, obj, name):
def _faked(grammar, module, obj, name):
# Crazy underscore actions to try to escape all the internal madness.
if module is None:
module = get_module(obj)
faked_mod = _load_faked_module(module)
faked_mod = _load_faked_module(grammar, module)
if faked_mod is None:
return None, None
# Having the module as a `parser.tree.Module`, we need to scan
# Having the module as a `parser.python.tree.Module`, we need to scan
# for methods.
if name is None:
if inspect.isbuiltin(obj) or inspect.isclass(obj):
@@ -132,7 +131,7 @@ def _faked(module, obj, name):
return None, None
return _search_scope(cls, obj.__name__), faked_mod
else:
if obj == module:
if obj is module:
return _search_scope(faked_mod, name), faked_mod
else:
try:
@@ -156,7 +155,7 @@ def memoize_faked(obj):
key = (obj, args, frozenset(kwargs.items()))
try:
result = cache[key]
except TypeError:
except (TypeError, ValueError):
return obj(*args, **kwargs)
except KeyError:
result = obj(*args, **kwargs)
@@ -169,8 +168,8 @@ def memoize_faked(obj):
@memoize_faked
def _get_faked(module, obj, name=None):
result, fake_module = _faked(module, obj, name)
def _get_faked(grammar, module, obj, name=None):
result, fake_module = _faked(grammar, module, obj, name)
if result is None:
# We're not interested in classes. What we want is functions.
raise FakeDoesNotExist
@@ -182,9 +181,9 @@ def _get_faked(module, obj, name=None):
assert result.type == 'funcdef'
doc = '"""%s"""' % obj.__doc__ # TODO need escapes.
suite = result.children[-1]
string = pt.String(doc, (0, 0), '')
new_line = pt.Newline('\n', (0, 0))
docstr_node = pt.Node('simple_stmt', [string, new_line])
string = tree.String(doc, (0, 0), '')
new_line = tree.Newline('\n', (0, 0))
docstr_node = tree.PythonNode('simple_stmt', [string, new_line])
suite.children.insert(1, docstr_node)
return result, fake_module
@@ -198,9 +197,9 @@ def get_faked(evaluator, module, obj, name=None, parent_context=None):
else:
raise FakeDoesNotExist
faked, fake_module = _get_faked(module and module.obj, obj, name)
faked, fake_module = _get_faked(evaluator.latest_grammar, module and module.obj, obj, name)
if module is not None:
module.used_names = fake_module.used_names
module.get_used_names = fake_module.get_used_names
return faked

View File

@@ -32,9 +32,16 @@ def range(start, stop=None, step=1):
class file():
def __iter__(self):
yield ''
def next(self):
return ''
def readlines(self):
return ['']
def __enter__(self):
return self
class xrange():
# Attention: this name doesn't exist in Py3k (there it is range).

View File

@@ -4,3 +4,9 @@ class TextIOWrapper():
def __iter__(self):
yield str()
def readlines(self):
return ['']
def __enter__(self):
return self

View File

@@ -0,0 +1,33 @@
# Just copied this code from Python 3.6.
class itemgetter:
"""
Return a callable object that fetches the given item(s) from its operand.
After f = itemgetter(2), the call f(r) returns r[2].
After g = itemgetter(2, 5, 3), the call g(r) returns (r[2], r[5], r[3])
"""
__slots__ = ('_items', '_call')
def __init__(self, item, *items):
if not items:
self._items = (item,)
def func(obj):
return obj[item]
self._call = func
else:
self._items = items = (item,) + items
def func(obj):
return tuple(obj[i] for i in items)
self._call = func
def __call__(self, obj):
return self._call(obj)
def __repr__(self):
return '%s.%s(%s)' % (self.__class__.__module__,
self.__class__.__name__,
', '.join(map(repr, self._items)))
def __reduce__(self):
return self.__class__, self._items
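Usage matches the real operator.itemgetter (standard-library semantics, shown only to document what this fake mirrors):

>>> itemgetter(2)('ABCDE')
'C'
>>> itemgetter(2, 5, 3)('ABCDEFG')
('C', 'F', 'D')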

View File

@@ -0,0 +1,175 @@
"""
A static version of getattr.
This is a backport of the Python 3 code with a little bit of additional
information returned to enable Jedi to make decisions.
"""
import types
from jedi._compatibility import py_version
_sentinel = object()
def _check_instance(obj, attr):
instance_dict = {}
try:
instance_dict = object.__getattribute__(obj, "__dict__")
except AttributeError:
pass
return dict.get(instance_dict, attr, _sentinel)
def _check_class(klass, attr):
for entry in _static_getmro(klass):
if _shadowed_dict(type(entry)) is _sentinel:
try:
return entry.__dict__[attr]
except KeyError:
pass
return _sentinel
def _is_type(obj):
try:
_static_getmro(obj)
except TypeError:
return False
return True
def _shadowed_dict_newstyle(klass):
dict_attr = type.__dict__["__dict__"]
for entry in _static_getmro(klass):
try:
class_dict = dict_attr.__get__(entry)["__dict__"]
except KeyError:
pass
else:
if not (type(class_dict) is types.GetSetDescriptorType and
class_dict.__name__ == "__dict__" and
class_dict.__objclass__ is entry):
return class_dict
return _sentinel
def _static_getmro_newstyle(klass):
return type.__dict__['__mro__'].__get__(klass)
if py_version >= 30:
_shadowed_dict = _shadowed_dict_newstyle
_get_type = type
_static_getmro = _static_getmro_newstyle
else:
def _shadowed_dict(klass):
"""
In Python 2 __dict__ is not overwritable:
class Foo(object): pass
setattr(Foo, '__dict__', 4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __dict__ must be a dictionary object
It applies to both newstyle and oldstyle classes:
class Foo(object): pass
setattr(Foo, '__dict__', 4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: attribute '__dict__' of 'type' objects is not writable
It also applies to instances of those objects. However, to keep things
straightforward, newstyle classes always use the complicated way of
accessing it, while oldstyle classes just use getattr.
"""
if type(klass) is _oldstyle_class_type:
return getattr(klass, '__dict__', _sentinel)
return _shadowed_dict_newstyle(klass)
class _OldStyleClass():
pass
_oldstyle_instance_type = type(_OldStyleClass())
_oldstyle_class_type = type(_OldStyleClass)
def _get_type(obj):
type_ = object.__getattribute__(obj, '__class__')
if type_ is _oldstyle_instance_type:
# Somehow for old style classes we need to access it directly.
return obj.__class__
return type_
def _static_getmro(klass):
if type(klass) is _oldstyle_class_type:
def oldstyle_mro(klass):
"""
Oldstyle mro is a really simplistic way of looking up the mro:
https://stackoverflow.com/questions/54867/what-is-the-difference-between-old-style-and-new-style-classes-in-python
"""
yield klass
for base in klass.__bases__:
for yield_from in oldstyle_mro(base):
yield yield_from
return oldstyle_mro(klass)
return _static_getmro_newstyle(klass)
def _safe_hasattr(obj, name):
return _check_class(_get_type(obj), name) is not _sentinel
def _safe_is_data_descriptor(obj):
return (_safe_hasattr(obj, '__set__') or _safe_hasattr(obj, '__delete__'))
def getattr_static(obj, attr, default=_sentinel):
"""Retrieve attributes without triggering dynamic lookup via the
descriptor protocol, __getattr__ or __getattribute__.
Note: this function may not be able to retrieve all attributes
that getattr can fetch (like dynamically created attributes)
and may find attributes that getattr can't (like descriptors
that raise AttributeError). It can also return descriptor objects
instead of instance members in some cases. See the
documentation for details.
Returns a tuple `(attr, is_get_descriptor)`. `is_get_descriptor` means that
the attribute is a descriptor that has a `__get__` attribute.
"""
instance_result = _sentinel
if not _is_type(obj):
klass = _get_type(obj)
dict_attr = _shadowed_dict(klass)
if (dict_attr is _sentinel or
type(dict_attr) is types.MemberDescriptorType):
instance_result = _check_instance(obj, attr)
else:
klass = obj
klass_result = _check_class(klass, attr)
if instance_result is not _sentinel and klass_result is not _sentinel:
if _safe_hasattr(klass_result, '__get__') \
and _safe_is_data_descriptor(klass_result):
# A get/set descriptor has priority over everything.
return klass_result, True
if instance_result is not _sentinel:
return instance_result, False
if klass_result is not _sentinel:
return klass_result, _safe_hasattr(klass_result, '__get__')
if obj is klass:
# for types we check the metaclass too
for entry in _static_getmro(type(klass)):
if _shadowed_dict(type(entry)) is _sentinel:
try:
return entry.__dict__[attr], False
except KeyError:
pass
if default is not _sentinel:
return default, False
raise AttributeError(attr)
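A minimal sketch of the difference from plain getattr, using a hypothetical class with a property; the descriptor object itself comes back instead of being executed:

class WithProperty(object):
    @property
    def attr(self):
        raise RuntimeError('side effect!')

attr, is_get_descriptor = getattr_static(WithProperty(), 'attr')
assert isinstance(attr, property)  # the descriptor itself, not its value
assert is_get_descriptor           # it has __get__, so executing it is unsafe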

View File

@@ -5,19 +5,20 @@ Used only for REPL Completion.
import inspect
import os
from jedi import common
from jedi.parser.diff import FastParser
from jedi import settings
from jedi.evaluate import compiled
from jedi.cache import underscore_memoization
from jedi.evaluate import imports
from jedi.evaluate.context import Context
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate.compiled.getattr_static import getattr_static
class MixedObject(object):
"""
A ``MixedObject`` is used in two ways:
1. It uses the default logic of ``parser.tree`` objects,
1. It uses the default logic of ``parser.python.tree`` objects,
2. except for getattr calls. The names dicts are generated in a fashion
like ``CompiledObject``.
@@ -30,25 +31,12 @@ class MixedObject(object):
fewer special cases, because in Python you don't have the same freedoms
to modify the runtime.
"""
def __init__(self, evaluator, parent_context, compiled_object, tree_name):
def __init__(self, evaluator, parent_context, compiled_object, tree_context):
self.evaluator = evaluator
self.parent_context = parent_context
self.compiled_object = compiled_object
self._context = tree_context
self.obj = compiled_object.obj
self._tree_name = tree_name
name_module = tree_name.get_root_node()
if parent_context.tree_node.get_root_node() != name_module:
from jedi.evaluate.representation import ModuleContext
module_context = ModuleContext(evaluator, name_module)
name = compiled_object.get_root_context().py__name__()
imports.add_module(evaluator, name, module_context)
else:
module_context = parent_context.get_root_context()
self._context = module_context.create_context(
tree_name.parent,
node_is_context=True,
node_is_object=True
)
# We have to overwrite everything that has to do with trailers, name
# lookups and filters to make it possible to route name lookups towards
@@ -90,13 +78,14 @@ class MixedName(compiled.CompiledName):
def infer(self):
obj = self.parent_context.obj
try:
# TODO use logic from compiled.CompiledObjectFilter
obj = getattr(obj, self.string_name)
except AttributeError:
# Happens e.g. in properties of
# PyQt4.QtGui.QStyleOptionComboBox.currentText
# -> just set it to None
obj = None
return [create(self._evaluator, obj, parent_context=self.parent_context)]
return [_create(self._evaluator, obj, parent_context=self.parent_context)]
@property
def api_type(self):
@@ -115,30 +104,47 @@ class MixedObjectFilter(compiled.CompiledObjectFilter):
#return MixedName(self._evaluator, self._compiled_object, name)
def parse(grammar, path):
with open(path) as f:
source = f.read()
source = common.source_to_unicode(source)
return FastParser(grammar, source, path)
@evaluator_function_cache()
def _load_module(evaluator, path, python_object):
module = parse(evaluator.grammar, path).module
module = evaluator.grammar.parse(
path=path,
cache=True,
diff_cache=True,
cache_path=settings.cache_directory
).get_root_node()
python_module = inspect.getmodule(python_object)
evaluator.modules[python_module.__name__] = module
return module
def _get_object_to_check(python_object):
"""Check if inspect.getfile has a chance to find the source."""
if (inspect.ismodule(python_object) or
inspect.isclass(python_object) or
inspect.ismethod(python_object) or
inspect.isfunction(python_object) or
inspect.istraceback(python_object) or
inspect.isframe(python_object) or
inspect.iscode(python_object)):
return python_object
try:
return python_object.__class__
except AttributeError:
raise TypeError # Prevents computation of `repr` within inspect.
def find_syntax_node_name(evaluator, python_object):
try:
python_object = _get_object_to_check(python_object)
path = inspect.getsourcefile(python_object)
except TypeError:
# The type might not be known (e.g. class_with_dict.__weakref__)
return None
return None, None
if path is None or not os.path.exists(path):
# The path might not exist or be e.g. <stdin>.
return None
return None, None
module = _load_module(evaluator, path, python_object)
@@ -146,17 +152,22 @@ def find_syntax_node_name(evaluator, python_object):
# We don't need to check names for modules, because there's not really
# a way to write a module in a module in Python (and also __name__ can
# be something like ``email.utils``).
return module.name
return module, path
try:
name_str = python_object.__name__
except AttributeError:
# Stuff like python_function.__code__.
return None, None
name_str = python_object.__name__
if name_str == '<lambda>':
return None # It's too hard to find lambdas.
return None, None # It's too hard to find lambdas.
# Doesn't always work (e.g. os.stat_result)
try:
names = module.used_names[name_str]
names = module.get_used_names()[name_str]
except KeyError:
return None
return None, None
names = [n for n in names if n.is_definition()]
try:
@@ -173,22 +184,49 @@ def find_syntax_node_name(evaluator, python_object):
# There's a chance that the object is not available anymore, because
# the code has changed in the background.
if line_names:
return line_names[-1]
return line_names[-1].parent, path
# It's really hard to actually get the right definition; as a last
# resort we just return the last one. This choice might lead to odd
# completions at some points but will lead to mostly correct type
# inference, because people tend to define a public name in a module only
# once.
return names[-1]
return names[-1].parent, path
@compiled.compiled_objects_cache('mixed_cache')
def create(evaluator, obj, parent_context=None, *args):
tree_name = find_syntax_node_name(evaluator, obj)
def _create(evaluator, obj, parent_context=None, *args):
tree_node, path = find_syntax_node_name(evaluator, obj)
compiled_object = compiled.create(
evaluator, obj, parent_context=parent_context.compiled_object)
if tree_name is None:
if tree_node is None:
return compiled_object
return MixedObject(evaluator, parent_context, compiled_object, tree_name)
module_node = tree_node.get_root_node()
if parent_context.tree_node.get_root_node() == module_node:
module_context = parent_context.get_root_context()
else:
from jedi.evaluate.representation import ModuleContext
module_context = ModuleContext(evaluator, module_node, path=path)
# TODO this __name__ is probably wrong.
name = compiled_object.get_root_context().py__name__()
imports.add_module(evaluator, name, module_context)
tree_context = module_context.create_context(
tree_node,
node_is_context=True,
node_is_object=True
)
if tree_node.type == 'classdef':
if not inspect.isclass(obj):
# Is an instance, not a class.
tree_context, = tree_context.execute_evaluated()
return MixedObject(
evaluator,
parent_context,
compiled_object,
tree_context=tree_context
)

View File

@@ -1,9 +1,14 @@
from jedi._compatibility import Python3Method
from jedi.common import unite
from parso.python.tree import ExprStmt, CompFor
from jedi.parser_utils import clean_scope_docstring, get_doc_with_call_signature
class Context(object):
api_type = None
"""
Should be defined, otherwise the API returns empty types.
"""
"""
To be defined by subclasses.
"""
@@ -14,8 +19,11 @@ class Context(object):
self.evaluator = evaluator
self.parent_context = parent_context
def get_parent_flow_context(self):
return self.parent_context
@property
def api_type(self):
# By default just lower name of the class. Can and should be
# overwritten.
return self.__class__.__name__.lower()
def get_root_context(self):
context = self
@@ -41,17 +49,18 @@ class Context(object):
def eval_stmt(self, stmt, seek_name=None):
return self.evaluator.eval_statement(self, stmt, seek_name)
@Python3Method
def eval_trailer(self, types, trailer):
return self.evaluator.eval_trailer(self, types, trailer)
@Python3Method
def py__getattribute__(self, name_or_str, name_context=None, position=None,
search_global=False, is_goto=False):
search_global=False, is_goto=False,
analysis_errors=True):
if name_context is None:
name_context = self
return self.evaluator.find_types(
self, name_or_str, name_context, position, search_global, is_goto)
self, name_or_str, name_context, position, search_global, is_goto,
analysis_errors)
def create_context(self, node, node_is_context=False, node_is_object=False):
return self.evaluator.create_context(self, node, node_is_context, node_is_object)
@@ -66,6 +75,18 @@ class Context(object):
"""
return True
def py__doc__(self, include_call_signature=False):
try:
self.tree_node.get_doc_node
except AttributeError:
return ''
else:
if include_call_signature:
return get_doc_with_call_signature(self.tree_node)
else:
return clean_scope_docstring(self.tree_node)
return None
class TreeContext(Context):
def __init__(self, evaluator, parent_context=None):
@@ -76,12 +97,6 @@ class TreeContext(Context):
return '<%s: %s>' % (self.__class__.__name__, self.tree_node)
class FlowContext(TreeContext):
def get_parent_flow_context(self):
if 1:
return self.parent_context
class AbstractLazyContext(object):
def __init__(self, data):
self.data = data
@@ -141,3 +156,51 @@ class MergedLazyContexts(AbstractLazyContext):
"""data is a list of lazy contexts."""
def infer(self):
return unite(l.infer() for l in self.data)
class ContextualizedNode(object):
def __init__(self, context, node):
self.context = context
self._node = node
def get_root_context(self):
return self.context.get_root_context()
def infer(self):
return self.context.eval_node(self._node)
class ContextualizedName(ContextualizedNode):
# TODO merge with TreeNameDefinition?!
@property
def name(self):
return self._node
def assignment_indexes(self):
"""
Returns an array of tuple(int, node) of the indexes that are used in
tuple assignments.
For example if the name is ``y`` in the following code::
x, (y, z) = 2, ''
would result in ``[(1, xyz_node), (0, yz_node)]``.
"""
indexes = []
node = self._node.parent
compare = self._node
while node is not None:
if node.type in ('testlist', 'testlist_comp', 'testlist_star_expr', 'exprlist'):
for i, child in enumerate(node.children):
if child == compare:
indexes.insert(0, (int(i / 2), node))
break
else:
raise LookupError("Couldn't find the assignment.")
elif isinstance(node, (ExprStmt, CompFor)):
break
compare = node
node = node.parent
return indexes
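The same walk can be sketched standalone on parso nodes (assumes parso is installed; find_name is a hypothetical helper, since ContextualizedName normally wraps a context):

import parso

def find_name(node, value):
    # Depth-first search for the Name leaf with the given value.
    try:
        children = node.children
    except AttributeError:
        return node if node.type == 'name' and node.value == value else None
    for child in children:
        found = find_name(child, value)
        if found is not None:
            return found

y = find_name(parso.parse("x, (y, z) = 2, ''"), 'y')
# Walking up from `y` as in assignment_indexes() yields
# [(1, xyz_node), (0, yz_node)], matching the docstring above.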

View File

@@ -1,11 +1,12 @@
"""
Docstrings are another source of information for functions and classes.
:mod:`jedi.evaluate.dynamic` tries to find all executions of functions, while
the docstring parsing is much easier. There are two different types of
the docstring parsing is much easier. There are three different types of
docstrings that |jedi| understands:
- `Sphinx <http://sphinx-doc.org/markup/desc.html#info-field-lists>`_
- `Epydoc <http://epydoc.sourceforge.net/manual-fields.html>`_
- `Numpydoc <https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_
For example, the sphinx annotation ``:type foo: str`` clearly states that the
type of ``foo`` is ``str``.
@@ -14,23 +15,22 @@ As an addition to parameter searching, this module also provides return
annotations.
"""
from ast import literal_eval
import re
from textwrap import dedent
from parso import parse
from jedi._compatibility import u
from jedi.common import unite
from jedi.evaluate import context
from jedi.evaluate.cache import memoize_default
from jedi.parser import ParserWithRecovery, load_grammar
from jedi.parser.tree import search_ancestor
from jedi.evaluate.cache import evaluator_method_cache
from jedi.common import indent_block
from jedi.evaluate.iterable import SequenceLiteralContext, FakeSequence
DOCSTRING_PARAM_PATTERNS = [
r'\s*:type\s+%s:\s*([^\n]+)', # Sphinx
r'\s*:param\s+(\w+)\s+%s:[^\n]+', # Sphinx param with type
r'\s*:param\s+(\w+)\s+%s:[^\n]*', # Sphinx param with type
r'\s*@type\s+%s:\s*([^\n]+)', # Epydoc
]
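For example, the first Sphinx pattern above matches a docstring like the following (hypothetical function), letting jedi treat foo as a str:

def process(foo):
    '''
    :type foo: str
    '''
    # Completing `foo.` here should offer str methods such as upper().
    return foo.upper()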
@@ -47,23 +47,78 @@ try:
except ImportError:
def _search_param_in_numpydocstr(docstr, param_str):
return []
def _search_return_in_numpydocstr(docstr):
return []
else:
def _search_param_in_numpydocstr(docstr, param_str):
"""Search `docstr` (in numpydoc format) for type(-s) of `param_str`."""
params = NumpyDocString(docstr)._parsed_data['Parameters']
try:
# This is a non-public API. If it ever changes we should be
# prepared and return gracefully.
params = NumpyDocString(docstr)._parsed_data['Parameters']
except (KeyError, AttributeError):
return []
for p_name, p_type, p_descr in params:
if p_name == param_str:
m = re.match('([^,]+(,[^,]+)*?)(,[ ]*optional)?$', p_type)
if m:
p_type = m.group(1)
if p_type.startswith('{'):
types = set(type(x).__name__ for x in literal_eval(p_type))
return list(types)
else:
return [p_type]
return list(_expand_typestr(p_type))
return []
def _search_return_in_numpydocstr(docstr):
"""
Search `docstr` (in numpydoc format) for type(-s) of function returns.
"""
doc = NumpyDocString(docstr)
try:
# This is a non-public API. If it ever changes we should be
# prepared and return gracefully.
returns = doc._parsed_data['Returns']
returns += doc._parsed_data['Yields']
except (KeyError, AttributeError):
raise StopIteration
for r_name, r_type, r_descr in returns:
# Return names are optional; if omitted, the type is in the name field.
if not r_type:
r_type = r_name
for type_ in _expand_typestr(r_type):
yield type_
def _expand_typestr(type_str):
"""
Attempts to interpret the possible types in `type_str`
"""
# Check if alternative types are specified with 'or'
if re.search('\\bor\\b', type_str):
for t in type_str.split('or'):
yield t.split('of')[0].strip()
# Check for containers like "list of `type`" and use the container type
elif re.search('\\bof\\b', type_str):
yield type_str.split('of')[0]
# Check if the type is a set of valid literal values, e.g. {'C', 'F', 'A'}
elif type_str.startswith('{'):
node = parse(type_str, version='3.6').children[0]
if node.type == 'atom':
for leaf in node.children[1].children:
if leaf.type == 'number':
if '.' in leaf.value:
yield 'float'
else:
yield 'int'
elif leaf.type == 'string':
if 'b' in leaf.string_prefix.lower():
yield 'bytes'
else:
yield 'str'
# Ignore everything else.
# Otherwise just work with what we have.
else:
yield type_str
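Expected outputs of these heuristics, shown as comments (note the 'of' branch yields the raw split, trailing space included):

list(_expand_typestr('int or str'))   # ['int', 'str']
list(_expand_typestr('list of int'))  # ['list ']
list(_expand_typestr('ndarray'))      # ['ndarray']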
def _search_param_in_docstr(docstr, param_str):
"""
@@ -119,7 +174,11 @@ def _strip_rst_role(type_str):
def _evaluate_for_statement_string(module_context, string):
code = dedent(u("""
def pseudo_docstring_stuff():
# Create a pseudo function for docstring statements.
'''
Create a pseudo function for docstring statements.
Need this docstring so that if the below part is not valid Python this
is still a function.
'''
{0}
"""))
if string is None:
@@ -133,25 +192,23 @@ def _evaluate_for_statement_string(module_context, string):
# Take the default grammar here, if we load the Python 2.7 grammar here, it
# will be impossible to use `...` (Ellipsis) as a token. Docstring types
# don't need to conform with the current grammar.
p = ParserWithRecovery(load_grammar(), code.format(indent_block(string)))
grammar = module_context.evaluator.latest_grammar
module = grammar.parse(code.format(indent_block(string)))
try:
funcdef = p.module.subscopes[0]
funcdef = next(module.iter_funcdefs())
# First pick suite, then simple_stmt and then the node,
# which is also not the last item, because there's a newline.
stmt = funcdef.children[-1].children[-1].children[-2]
except (AttributeError, IndexError):
return []
from jedi.evaluate.param import ValuesArguments
from jedi.evaluate.representation import FunctionContext
function_context = FunctionContext(
module_context.evaluator,
module_context,
funcdef
)
func_execution_context = function_context.get_function_execution(
ValuesArguments([])
)
func_execution_context = function_context.get_function_execution()
# Use the module of the param.
# TODO this module is not the module of the param in case of a function
# call. In that case it's the module of the function call.
@@ -184,30 +241,42 @@ def _execute_array_values(evaluator, array):
return array.execute_evaluated()
@memoize_default()
def follow_param(module_context, param):
@evaluator_method_cache()
def infer_param(execution_context, param):
from jedi.evaluate.instance import AnonymousInstanceFunctionExecution
def eval_docstring(docstring):
return set(
[p for param_str in _search_param_in_docstr(docstring, str(param.name))
for p in _evaluate_for_statement_string(module_context, param_str)]
p
for param_str in _search_param_in_docstr(docstring, param.name.value)
for p in _evaluate_for_statement_string(module_context, param_str)
)
module_context = execution_context.get_root_context()
func = param.get_parent_function()
types = eval_docstring(func.raw_doc)
if func.name.value == '__init__':
cls = search_ancestor(func, 'classdef')
if cls is not None:
types |= eval_docstring(cls.raw_doc)
if func.type == 'lambdef':
return set()
types = eval_docstring(execution_context.py__doc__())
if isinstance(execution_context, AnonymousInstanceFunctionExecution) and \
execution_context.function_context.name.string_name == '__init__':
class_context = execution_context.instance.class_context
types |= eval_docstring(class_context.py__doc__())
return types
@memoize_default()
def find_return_types(module_context, func):
@evaluator_method_cache()
def infer_return_types(function_context):
def search_return_in_docstr(code):
for p in DOCSTRING_RETURN_PATTERNS:
match = p.search(code)
if match:
return _strip_rst_role(match.group(1))
yield _strip_rst_role(match.group(1))
# Check for numpy style return hint
for type_ in _search_return_in_numpydocstr(code):
yield type_
for type_str in search_return_in_docstr(function_context.py__doc__()):
for type_eval in _evaluate_for_statement_string(function_context.get_root_context(), type_str):
yield type_eval
type_str = search_return_in_docstr(func.raw_doc)
return _evaluate_for_statement_string(module_context, type_str)

View File

@@ -17,13 +17,15 @@ It works as follows:
- execute these calls and check the input. This work with a ``ParamListener``.
"""
from jedi.parser import tree
from parso.python import tree
from jedi import settings
from jedi import debug
from jedi.evaluate.cache import memoize_default
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate import imports
from jedi.evaluate.param import TreeArguments, create_default_param
from jedi.evaluate.param import TreeArguments, create_default_params
from jedi.evaluate.helpers import is_stdlib_path
from jedi.common import to_list, unite
from jedi.parser_utils import get_parent_scope
MAX_PARAM_SEARCHES = 20
@@ -52,7 +54,7 @@ class MergedExecutedParams(object):
@debug.increase_indent
def search_params(evaluator, parent_context, funcdef):
def search_params(evaluator, execution_context, funcdef):
"""
A dynamic search for param values. If you try to complete a type:
@@ -66,12 +68,21 @@ def search_params(evaluator, parent_context, funcdef):
is.
"""
if not settings.dynamic_params:
return set()
return create_default_params(execution_context, funcdef)
evaluator.dynamic_params_depth += 1
try:
path = execution_context.get_root_context().py__file__()
if path is not None and is_stdlib_path(path):
# We don't want to search for usages in the stdlib. Usually people
# don't work with it (except if you are a core maintainer, sorry).
# This makes everything slower. Just disable it and run the tests,
# you will see the slowdown, especially in 3.6.
return create_default_params(execution_context, funcdef)
debug.dbg('Dynamic param search in %s.', funcdef.name.value, color='MAGENTA')
module_context = parent_context.get_root_context()
module_context = execution_context.get_root_context()
function_executions = _search_function_executions(
evaluator,
module_context,
@@ -85,14 +96,14 @@ def search_params(evaluator, parent_context, funcdef):
params = [MergedExecutedParams(executed_params) for executed_params in zipped_params]
# Evaluate the ExecutedParams to types.
else:
params = [create_default_param(parent_context, p) for p in funcdef.params]
return create_default_params(execution_context, funcdef)
debug.dbg('Dynamic param result finished', color='MAGENTA')
return params
finally:
evaluator.dynamic_params_depth -= 1
@memoize_default([], evaluator_is_first_arg=True)
@evaluator_function_cache(default=[])
@to_list
def _search_function_executions(evaluator, module_context, funcdef):
"""
@@ -103,7 +114,7 @@ def _search_function_executions(evaluator, module_context, funcdef):
func_string_name = funcdef.name.value
compare_node = funcdef
if func_string_name == '__init__':
cls = funcdef.get_parent_scope()
cls = get_parent_scope(funcdef)
if isinstance(cls, tree.Class):
func_string_name = cls.name.value
compare_node = cls
@@ -137,7 +148,7 @@ def _search_function_executions(evaluator, module_context, funcdef):
def _get_possible_nodes(module_context, func_string_name):
try:
names = module_context.tree_node.used_names[func_string_name]
names = module_context.tree_node.get_used_names()[func_string_name]
except KeyError:
return

View File

@@ -4,9 +4,10 @@ are needed for name resolution.
"""
from abc import abstractmethod
from jedi.parser.tree import search_ancestor
from parso.tree import search_ancestor
from jedi.evaluate import flow_analysis
from jedi.common import to_list, unite
from jedi.parser_utils import get_parent_scope
class AbstractNameDefinition(object):
@@ -19,6 +20,12 @@ class AbstractNameDefinition(object):
def infer(self):
raise NotImplementedError
@abstractmethod
def goto(self):
# Typically names are already definitions and therefore a goto on that
# name will always result in itself.
return set([self])
def get_root_context(self):
return self.parent_context.get_root_context()
@@ -43,6 +50,9 @@ class AbstractTreeName(AbstractNameDefinition):
self.parent_context = parent_context
self.tree_name = tree_name
def goto(self):
return self.parent_context.evaluator.goto(self.parent_context, self.tree_name)
@property
def string_name(self):
return self.tree_name.value
@@ -73,8 +83,13 @@ class ContextName(ContextNameMixin, AbstractTreeName):
class TreeNameDefinition(AbstractTreeName):
def get_parent_flow_context(self):
return self.parent_context
_API_TYPES = dict(
import_name='module',
import_from='module',
funcdef='function',
param='param',
classdef='class',
)
def infer(self):
# Refactor this, should probably be here.
@@ -83,14 +98,10 @@ class TreeNameDefinition(AbstractTreeName):
@property
def api_type(self):
definition = self.tree_name.get_definition()
return dict(
import_name='module',
import_from='module',
funcdef='function',
param='param',
classdef='class',
).get(definition.type, 'statement')
definition = self.tree_name.get_definition(import_name_always=True)
if definition is None:
return 'statement'
return self._API_TYPES.get(definition.type, 'statement')
class ParamName(AbstractTreeName):
@@ -106,13 +117,15 @@ class ParamName(AbstractTreeName):
def get_param(self):
params = self.parent_context.get_params()
param_node = search_ancestor(self.tree_name, 'param')
return params[param_node.position_nr]
return params[param_node.position_index]
class AnonymousInstanceParamName(ParamName):
def infer(self):
param_node = search_ancestor(self.tree_name, 'param')
if param_node.position_nr == 0:
# TODO I think this should not belong here. It's not even really true,
# because classmethod and other descriptors can change it.
if param_node.position_index == 0:
# This is a speed optimization, to return the self param (because
# it's known). This only affects anonymous instances.
return set([self.parent_context.instance])
@@ -142,7 +155,7 @@ class AbstractUsedNamesFilter(AbstractFilter):
def __init__(self, context, parser_scope):
self._parser_scope = parser_scope
self._used_names = self._parser_scope.get_root_node().used_names
self._used_names = self._parser_scope.get_root_node().get_used_names()
self.context = context
def get(self, name):
@@ -192,7 +205,7 @@ class ParserTreeFilter(AbstractUsedNamesFilter):
if parent.type == 'trailer':
return False
base_node = parent if parent.type in ('classdef', 'funcdef') else name
return base_node.get_parent_scope() == self._parser_scope
return get_parent_scope(base_node) == self._parser_scope
def _check_flows(self, names):
for name in sorted(names, key=lambda name: name.start_pos, reverse=True):
@@ -278,7 +291,7 @@ def get_global_filters(evaluator, context, until_position, origin_scope):
... y = None
... '''))
>>> module_node = script._get_module_node()
>>> scope = module_node.subscopes[0]
>>> scope = next(module_node.iter_funcdefs())
>>> scope
<Function: func@3-5>
>>> context = script._get_module().create_context(scope)
@@ -287,7 +300,7 @@ def get_global_filters(evaluator, context, until_position, origin_scope):
First we get the names from the function scope.
>>> no_unicode_pprint(filters[0])
<ParserTreeFilter: <ModuleContext: <Module: None@2-5>>>
<ParserTreeFilter: <ModuleContext: @2-5>>
>>> sorted(str(n) for n in filters[0].values())
['<TreeNameDefinition: func@(3, 4)>', '<TreeNameDefinition: x@(2, 0)>']
>>> filters[0]._until_position

View File

@@ -15,7 +15,8 @@ Unfortunately every other thing is being ignored (e.g. a == '' would be easy to
check for -> a is a string). There's big potential in these checks.
"""
from jedi.parser import tree
from parso.python import tree
from parso.tree import search_ancestor
from jedi import debug
from jedi.common import unite
from jedi import settings
@@ -29,11 +30,14 @@ from jedi.evaluate import analysis
from jedi.evaluate import flow_analysis
from jedi.evaluate import param
from jedi.evaluate import helpers
from jedi.evaluate.filters import get_global_filters
from jedi.evaluate.filters import get_global_filters, TreeNameDefinition
from jedi.evaluate.context import ContextualizedName, ContextualizedNode
from jedi.parser_utils import is_scope, get_parent_scope
class NameFinder(object):
def __init__(self, evaluator, context, name_context, name_or_str, position=None):
def __init__(self, evaluator, context, name_context, name_or_str,
position=None, analysis_errors=True):
self._evaluator = evaluator
# Make sure that it's not just a syntax tree node.
self._context = context
@@ -45,6 +49,7 @@ class NameFinder(object):
self._string_name = name_or_str
self._position = position
self._found_predefined_types = None
self._analysis_errors = analysis_errors
@debug.increase_indent
def find(self, filters, attribute_lookup):
@@ -62,7 +67,7 @@ class NameFinder(object):
types = self._names_to_types(names, attribute_lookup)
if not names and not types \
if not names and self._analysis_errors and not types \
and not (isinstance(self._name, tree.Name) and
isinstance(self._name.parent.parent, tree.Param)):
if isinstance(self._name, tree.Name):
@@ -78,7 +83,13 @@ class NameFinder(object):
def _get_origin_scope(self):
if isinstance(self._name, tree.Name):
return self._name.get_parent_until(tree.Scope, reverse=True)
scope = self._name
while scope.parent is not None:
# TODO why if classes?
if not isinstance(scope, tree.Scope):
break
scope = scope.parent
return scope
else:
return None
@@ -98,7 +109,7 @@ class NameFinder(object):
if self._context.predefined_names:
# TODO is this ok? node might not always be a tree.Name
node = self._name
while node is not None and not node.is_scope():
while node is not None and not is_scope(node):
node = node.parent
if node.type in ("if_stmt", "for_stmt", "comp_for"):
try:
@@ -111,9 +122,21 @@ class NameFinder(object):
break
for filter in filters:
names = filter.get(self._name)
names = filter.get(self._string_name)
if names:
if len(names) == 1:
n, = names
if isinstance(n, TreeNameDefinition):
# Something somewhere went terribly wrong. This
# typically happens when using goto on an import in an
# __init__ file. I think we need a better solution, but
# it's kind of hard, because for Jedi it's not clear
# that the name has not been defined yet.
if n.tree_name == self._name:
if self._name.get_definition().type == 'import_from':
continue
break
debug.dbg('finder.filter_name "%s" in (%s): %s@%s', self._string_name,
self._context, names, self._position)
return list(names)
@@ -152,7 +175,7 @@ class NameFinder(object):
if base_node.type == 'comp_for':
return types
while True:
flow_scope = flow_scope.get_parent_scope(include_flows=True)
flow_scope = get_parent_scope(flow_scope, include_flows=True)
n = _check_flow_information(self._name_context, flow_scope,
self._name, self._position)
if n is not None:
@@ -164,39 +187,51 @@ class NameFinder(object):
def _name_to_types(evaluator, context, tree_name):
types = []
node = tree_name.get_definition()
if node.isinstance(tree.ForStmt):
node = tree_name.get_definition(import_name_always=True)
if node is None:
node = tree_name.parent
if node.type == 'global_stmt':
context = evaluator.create_context(context, tree_name)
finder = NameFinder(evaluator, context, context, tree_name.value)
filters = finder.get_filters(search_global=True)
# For global_stmt lookups, we only need the first possible scope,
# which means the function itself.
filters = [next(filters)]
return finder.find(filters, attribute_lookup=False)
elif node.type not in ('import_from', 'import_name'):
raise ValueError("Should not happen.")
typ = node.type
if typ == 'for_stmt':
types = pep0484.find_type_from_comment_hint_for(context, node, tree_name)
if types:
return types
if node.isinstance(tree.WithStmt):
if typ == 'with_stmt':
types = pep0484.find_type_from_comment_hint_with(context, node, tree_name)
if types:
return types
if node.type in ('for_stmt', 'comp_for'):
if typ in ('for_stmt', 'comp_for'):
try:
types = context.predefined_names[node][tree_name.value]
except KeyError:
container_types = context.eval_node(node.children[3])
for_types = iterable.py__iter__types(evaluator, container_types, node.children[3])
types = check_tuple_assignments(evaluator, for_types, tree_name)
elif node.isinstance(tree.ExprStmt):
cn = ContextualizedNode(context, node.children[3])
for_types = iterable.py__iter__types(evaluator, cn.infer(), cn)
c_node = ContextualizedName(context, tree_name)
types = check_tuple_assignments(evaluator, c_node, for_types)
elif typ == 'expr_stmt':
types = _remove_statements(evaluator, context, node, tree_name)
elif node.isinstance(tree.WithStmt):
types = context.eval_node(node.node_from_name(tree_name))
elif isinstance(node, tree.Import):
elif typ == 'with_stmt':
context_managers = context.eval_node(node.get_test_node_from_name(tree_name))
enter_methods = unite(
context_manager.py__getattribute__('__enter__')
for context_manager in context_managers
)
types = unite(method.execute_evaluated() for method in enter_methods)
elif typ in ('import_from', 'import_name'):
types = imports.infer_import(context, tree_name)
elif node.type in ('funcdef', 'classdef'):
elif typ in ('funcdef', 'classdef'):
types = _apply_decorators(evaluator, context, node)
elif node.type == 'global_stmt':
context = evaluator.create_context(context, tree_name)
finder = NameFinder(evaluator, context, context, str(tree_name))
filters = finder.get_filters(search_global=True)
# For global_stmt lookups, we only need the first possible scope,
# which means the function itself.
filters = [next(filters)]
types += finder.find(filters, attribute_lookup=False)
elif isinstance(node, tree.TryStmt):
elif typ == 'try_stmt':
# TODO an exception can also be a tuple. Check for those.
# TODO check for types that are not classes and add it to
# the static analysis report.
@@ -234,7 +269,7 @@ def _apply_decorators(evaluator, context, node):
trailer_nodes = dec.children[2:-1]
if trailer_nodes:
# Create a trailer and evaluate it.
trailer = tree.Node('trailer', trailer_nodes)
trailer = tree.PythonNode('trailer', trailer_nodes)
trailer.parent = dec
dec_values = evaluator.eval_trailer(context, dec_values, trailer)
@@ -271,8 +306,7 @@ def _remove_statements(evaluator, context, stmt, name):
if check_instance is not None:
# class renames
types = set([er.get_instance_el(evaluator, check_instance, a, True)
if isinstance(a, (er.Function, tree.Function))
else a for a in types])
if isinstance(a, er.Function) else a for a in types])
return types
@@ -289,11 +323,11 @@ def _check_flow_information(context, flow, search_name, pos):
return None
result = None
if flow.is_scope():
if is_scope(flow):
# Check for asserts.
module_node = flow.get_root_node()
try:
names = module_node.used_names[search_name.value]
names = module_node.get_used_names()[search_name.value]
except KeyError:
return None
names = reversed([
@@ -302,13 +336,13 @@ def _check_flow_information(context, flow, search_name, pos):
])
for name in names:
ass = tree.search_ancestor(name, 'assert_stmt')
ass = search_ancestor(name, 'assert_stmt')
if ass is not None:
result = _check_isinstance_type(context, ass.assertion(), search_name)
result = _check_isinstance_type(context, ass.assertion, search_name)
if result is not None:
return result
if isinstance(flow, (tree.IfStmt, tree.WhileStmt)):
if flow.type in ('if_stmt', 'while_stmt'):
potential_ifs = [c for c in flow.children[1::4] if c != ':']
for if_test in reversed(potential_ifs):
if search_name.start_pos > if_test.end_pos:
@@ -322,7 +356,7 @@ def _check_isinstance_type(context, element, search_name):
# this might be removed if we analyze and, etc
assert len(element.children) == 2
first, trailer = element.children
assert isinstance(first, tree.Name) and first.value == 'isinstance'
assert first.type == 'name' and first.value == 'isinstance'
assert trailer.type == 'trailer' and trailer.children[0] == '('
assert len(trailer.children) == 3
@@ -338,7 +372,8 @@ def _check_isinstance_type(context, element, search_name):
is_instance_call = helpers.call_of_leaf(lazy_context_object.data)
# Do a simple get_code comparison. They should just have the same code,
# and everything will be all right.
assert is_instance_call.get_code(normalized=True) == call.get_code(normalized=True)
normalize = context.evaluator.grammar._normalize
assert normalize(is_instance_call) == normalize(call)
except AssertionError:
return None
@@ -354,13 +389,14 @@ def _check_isinstance_type(context, element, search_name):
return result
def check_tuple_assignments(evaluator, types, name):
def check_tuple_assignments(evaluator, contextualized_name, types):
"""
Checks if tuples are assigned.
"""
lazy_context = None
for index, node in name.assignment_indexes():
iterated = iterable.py__iter__(evaluator, types, node)
for index, node in contextualized_name.assignment_indexes():
cn = ContextualizedNode(contextualized_name.context, node)
iterated = iterable.py__iter__(evaluator, types, cn)
for _ in range(index + 1):
try:
lazy_context = next(iterated)

View File

@@ -1,3 +1,6 @@
from jedi.parser_utils import get_flow_branch_keyword, is_scope, get_parent_scope
class Status(object):
lookup_table = {}
@@ -31,14 +34,14 @@ UNSURE = Status(None, 'unsure')
def _get_flow_scopes(node):
while True:
node = node.get_parent_scope(include_flows=True)
if node is None or node.is_scope():
node = get_parent_scope(node, include_flows=True)
if node is None or is_scope(node):
return
yield node
def reachability_check(context, context_scope, node, origin_scope=None):
first_flow_scope = node.get_parent_scope(include_flows=True)
first_flow_scope = get_parent_scope(node, include_flows=True)
if origin_scope is not None:
origin_flow_scopes = list(_get_flow_scopes(origin_scope))
node_flow_scopes = list(_get_flow_scopes(node))
@@ -46,8 +49,8 @@ def reachability_check(context, context_scope, node, origin_scope=None):
branch_matches = True
for flow_scope in origin_flow_scopes:
if flow_scope in node_flow_scopes:
node_keyword = flow_scope.get_branch_keyword(node)
origin_keyword = flow_scope.get_branch_keyword(origin_scope)
node_keyword = get_flow_branch_keyword(flow_scope, node)
origin_keyword = get_flow_branch_keyword(flow_scope, origin_scope)
branch_matches = node_keyword == origin_keyword
if flow_scope.type == 'if_stmt':
if not branch_matches:
@@ -76,14 +79,14 @@ def reachability_check(context, context_scope, node, origin_scope=None):
def _break_check(context, context_scope, flow_scope, node):
reachable = REACHABLE
if flow_scope.type == 'if_stmt':
if flow_scope.node_after_else(node):
for check_node in flow_scope.check_nodes():
if flow_scope.is_node_after_else(node):
for check_node in flow_scope.get_test_nodes():
reachable = _check_if(context, check_node)
if reachable in (REACHABLE, UNSURE):
break
reachable = reachable.invert()
else:
flow_node = flow_scope.node_in_which_check_node(node)
flow_node = flow_scope.get_corresponding_test_node(node)
if flow_node is not None:
reachable = _check_if(context, flow_node)
elif flow_scope.type in ('try_stmt', 'while_stmt'):
@@ -94,7 +97,7 @@ def _break_check(context, context_scope, flow_scope, node):
return reachable
if context_scope != flow_scope and context_scope != flow_scope.parent:
flow_scope = flow_scope.get_parent_scope(include_flows=True)
flow_scope = get_parent_scope(flow_scope, include_flows=True)
return reachable & _break_check(context, context_scope, flow_scope, node)
else:
return reachable

View File

@@ -1,8 +1,23 @@
import copy
import sys
import re
import os
from itertools import chain
from contextlib import contextmanager
from jedi.parser import tree
from parso.python import tree
from jedi.parser_utils import get_parent_scope
def is_stdlib_path(path):
# Python standard library paths look like this:
# /usr/lib/python3.5/...
# TODO The implementation below is probably incorrect and not complete.
if 'dist-packages' in path or 'site-packages' in path:
return False
base_path = os.path.join(sys.prefix, 'lib', 'python')
return bool(re.match(re.escape(base_path) + '\d.\d', path))
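For illustration, a self-contained sketch of the stdlib-path heuristic above; `/usr` as `sys.prefix` is an assumption, and the dot is escaped here, which the hunk above still leaves loose:

import os
import re
import sys

def is_stdlib_path(path):
    # Third-party install locations are excluded first; anything under
    # <sys.prefix>/lib/pythonX.Y then counts as standard library.
    if 'dist-packages' in path or 'site-packages' in path:
        return False
    base_path = os.path.join(sys.prefix, 'lib', 'python')
    return bool(re.match(re.escape(base_path) + r'\d\.\d', path))

# With sys.prefix == '/usr':
# is_stdlib_path('/usr/lib/python3.5/os.py')                  -> True
# is_stdlib_path('/usr/lib/python3.5/site-packages/flask.py') -> False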
def deep_ast_copy(obj):
@@ -137,36 +152,16 @@ def get_module_names(module, all_scopes):
Returns a dictionary with name parts as keys and their call paths as
values.
"""
names = chain.from_iterable(module.used_names.values())
names = chain.from_iterable(module.get_used_names().values())
if not all_scopes:
# We have to filter all the names that don't have the module as a
# parent_scope. There's None as a parent, because nodes directly in the
# module have the module itself as their parent instead of a suite like
# all the others. Therefore it's important to catch that case.
names = [n for n in names if n.get_parent_scope().parent in (module, None)]
names = [n for n in names if get_parent_scope(n).parent in (module, None)]
return names
class FakeName(tree.Name):
def __init__(self, name_str, parent=None, start_pos=(0, 0), is_definition=None):
"""
In case is_definition is defined (not None), that bool value will be
returned.
"""
super(FakeName, self).__init__(name_str, start_pos)
self.parent = parent
self._is_definition = is_definition
def get_definition(self):
return self.parent
def is_definition(self):
if self._is_definition is None:
return super(FakeName, self).is_definition()
else:
return self._is_definition
@contextmanager
def predefine_names(context, flow_scope, dct):
predefined = context.predefined_names

View File

@@ -16,28 +16,30 @@ import os
import pkgutil
import sys
from jedi._compatibility import find_module, unicode
from parso.python import tree
from parso.tree import search_ancestor
from parso.cache import parser_cache
from parso import python_bytes_to_unicode
from jedi._compatibility import find_module, unicode, ImplicitNSInfo
from jedi import debug
from jedi import settings
from jedi.common import source_to_unicode, unite
from jedi.parser.diff import FastParser
from jedi.parser import tree
from jedi.parser.utils import save_parser, load_parser, parser_cache
from jedi.common import unite
from jedi.evaluate import sys_path
from jedi.evaluate import helpers
from jedi.evaluate import compiled
from jedi.evaluate import analysis
from jedi.evaluate.cache import memoize_default
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.filters import AbstractNameDefinition
# This memoization is needed, because otherwise we will infinitely loop on
# certain imports.
@memoize_default(default=set())
@evaluator_method_cache(default=set())
def infer_import(context, tree_name, is_goto=False):
module_context = context.get_root_context()
import_node = tree_name.get_parent_until(tree.Import)
import_path = import_node.path_for_name(tree_name)
import_node = search_ancestor(tree_name, 'import_name', 'import_from')
import_path = import_node.get_path_for_name(tree_name)
from_import_name = None
evaluator = context.evaluator
try:
@@ -60,12 +62,16 @@ def infer_import(context, tree_name, is_goto=False):
#if import_node.is_nested() and not self.nested_resolve:
# scopes = [NestedImportModule(module, import_node)]
if not types:
return set()
if from_import_name is not None:
types = unite(
t.py__getattribute__(
unicode(from_import_name),
from_import_name,
name_context=context,
is_goto=is_goto
is_goto=is_goto,
analysis_errors=False
) for t in types
)
@@ -138,6 +144,7 @@ def get_init_path(directory_path):
class ImportName(AbstractNameDefinition):
start_pos = (1, 0)
_level = 0
def __init__(self, parent_context, string_name):
self.parent_context = parent_context
@@ -148,8 +155,12 @@ class ImportName(AbstractNameDefinition):
self.parent_context.evaluator,
[self.string_name],
self.parent_context,
level=self._level,
).follow()
def goto(self):
return [m.name for m in self.infer()]
def get_root_context(self):
# Not sure if this is correct.
return self.parent_context.get_root_context()
@@ -160,13 +171,7 @@ class ImportName(AbstractNameDefinition):
class SubModuleName(ImportName):
def infer(self):
return Importer(
self.parent_context.evaluator,
[self.string_name],
self.parent_context,
level=1
).follow()
_level = 1
class Importer(object):
@@ -201,28 +206,45 @@ class Importer(object):
path = module_context.py__file__()
if path is not None:
import_path = list(import_path)
p = path
for i in range(level):
path = os.path.dirname(path)
dir_name = os.path.basename(path)
p = os.path.dirname(p)
dir_name = os.path.basename(p)
# This is not the proper way to do relative imports. However, since
# Jedi cannot be sure about the entry point, we just calculate an
# absolute path here.
if dir_name:
import_path.insert(0, dir_name)
# TODO those sys.modules modifications are getting
# really stupid. this is the 3rd time that we're using
# this. We should probably refactor.
if path.endswith(os.path.sep + 'os.py'):
import_path.insert(0, 'os')
else:
import_path.insert(0, dir_name)
else:
_add_error(module_context, import_path[-1])
import_path = []
# TODO add import error.
debug.warning('Attempted relative import beyond top-level package.')
# If no path is defined in the module we have no idea where we
# are in the file system. Therefore we cannot know what to do.
# In this case we just leave the path as it is and ignore that it's
# a relative path. Not sure if that's a good idea.
else:
# Here we basically rewrite the level to 0.
import_path = tuple(base) + tuple(import_path)
base = tuple(base)
if level > 1:
base = base[:-level + 1]
import_path = base + tuple(import_path)
self.import_path = import_path
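A simplified, hypothetical sketch of the relative-import rewrite above: walk `level` directories up from the importing module's file and prepend that package name:

import os

def to_absolute(path, import_path, level):
    # Strip `level` path components, then use the remaining directory
    # name as the base package of the now-absolute import path.
    for _ in range(level):
        path = os.path.dirname(path)
    dir_name = os.path.basename(path)
    if not dir_name:
        return []  # relative import beyond the top-level package
    return [dir_name] + list(import_path)

# `from .. import bar` (level=2) inside /proj/pkg/sub/mod.py:
print(to_absolute('/proj/pkg/sub/mod.py', ['bar'], 2))  # ['pkg', 'bar']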
@property
def str_import_path(self):
"""Returns the import path as pure strings instead of `Name`."""
return tuple(str(name) for name in self.import_path)
return tuple(
name.value if isinstance(name, tree.Name) else name
for name in self.import_path)
def sys_path_with_modifications(self):
in_path = []
@@ -255,7 +277,10 @@ class Importer(object):
"""
This method is very similar to importlib's `_gcd_import`.
"""
import_parts = [str(i) for i in import_path]
import_parts = [
i.value if isinstance(i, tree.Name) else i
for i in import_path
]
# Handle "magic" Flask extension imports:
# ``flask.ext.foo`` is really ``flask_foo`` or ``flaskext.foo``.
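A hypothetical illustration of that Flask rewrite: both spellings are tried for the deprecated flask.ext namespace:

def expand_flask_ext(import_parts):
    # ['flask', 'ext', 'foo'] is looked up as flask_foo and flaskext.foo.
    if import_parts[:2] == ['flask', 'ext'] and len(import_parts) > 2:
        rest = import_parts[2:]
        return [['flask_' + rest[0]] + rest[1:], ['flaskext'] + rest]
    return [import_parts]

print(expand_flask_ext(['flask', 'ext', 'foo']))
# [['flask_foo'], ['flaskext', 'foo']]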
@@ -290,14 +315,14 @@ class Importer(object):
# ``os.path``, because it's a very important one in Python
# that is being achieved by messing with ``sys.modules`` in
# ``os``.
if [str(i) for i in import_path] == ['os', 'path']:
if import_parts == ['os', 'path']:
return parent_module.py__getattribute__('path')
try:
method = parent_module.py__path__
except AttributeError:
# The module is not a package.
_add_error(parent_module, import_path[-1])
_add_error(self.module_context, import_path[-1])
return set()
else:
paths = method()
@@ -306,13 +331,15 @@ class Importer(object):
# At the moment we are only using one path. So this is
# not important to be correct.
try:
if not isinstance(path, list):
path = [path]
module_file, module_path, is_pkg = \
find_module(import_parts[-1], [path])
find_module(import_parts[-1], path, fullname=module_name)
break
except ImportError:
module_path = None
if module_path is None:
_add_error(parent_module, import_path[-1])
_add_error(self.module_context, import_path[-1])
return set()
else:
parent_module = None
@@ -323,7 +350,7 @@ class Importer(object):
sys.path, temp = sys_path, sys.path
try:
module_file, module_path, is_pkg = \
find_module(import_parts[-1])
find_module(import_parts[-1], fullname=module_name)
finally:
sys.path = temp
except ImportError:
@@ -331,22 +358,27 @@ class Importer(object):
_add_error(self.module_context, import_path[-1])
return set()
source = None
code = None
if is_pkg:
# In this case, we don't have a file yet. Search for the
# __init__ file.
if module_path.endswith(('.zip', '.egg')):
source = module_file.loader.get_source(module_name)
code = module_file.loader.get_source(module_name)
else:
module_path = get_init_path(module_path)
elif module_file:
source = module_file.read()
code = module_file.read()
module_file.close()
if module_file is None and not module_path.endswith(('.py', '.zip', '.egg')):
if isinstance(module_path, ImplicitNSInfo):
from jedi.evaluate.representation import ImplicitNamespaceContext
fullname, paths = module_path.name, module_path.paths
module = ImplicitNamespaceContext(self._evaluator, fullname=fullname)
module.paths = paths
elif module_file is None and not module_path.endswith(('.py', '.zip', '.egg')):
module = compiled.load_module(self._evaluator, module_path)
else:
module = _load_module(self._evaluator, module_path, source, sys_path, parent_module)
module = _load_module(self._evaluator, module_path, code, sys_path, parent_module)
if module is None:
# The file might raise an ImportError e.g. and therefore not be
@@ -384,7 +416,7 @@ class Importer(object):
:param only_modules: Indicates whether it's possible to import a
definition that is not defined in a module.
"""
from jedi.evaluate.representation import ModuleContext
from jedi.evaluate.representation import ModuleContext, ImplicitNamespaceContext
names = []
if self.import_path:
# flask
@@ -405,20 +437,23 @@ class Importer(object):
# Non-modules are not completable.
if context.api_type != 'module': # not a module
continue
# namespace packages
if isinstance(context, ModuleContext) and \
context.py__file__().endswith('__init__.py'):
if isinstance(context, ModuleContext) and context.py__file__().endswith('__init__.py'):
paths = context.py__path__()
names += self._get_module_names(paths, in_module=context)
# implicit namespace packages
elif isinstance(context, ImplicitNamespaceContext):
paths = context.paths
names += self._get_module_names(paths)
if only_modules:
# In the case of an import like `from x.` we don't need to
# add all the variables.
if ('os',) == self.str_import_path and not self.level:
# os.path is a hardcoded exception, because it's a
# ``sys.modules`` modification.
names.append(self._generate_name('path'))
names.append(self._generate_name('path', context))
continue
@@ -438,31 +473,22 @@ class Importer(object):
return names
def _load_module(evaluator, path=None, source=None, sys_path=None, parent_module=None):
def load(source):
dotted_path = path and compiled.dotted_from_fs_path(path, sys_path)
if path is not None and path.endswith(('.py', '.zip', '.egg')) \
and dotted_path not in settings.auto_import_modules:
if source is None:
with open(path, 'rb') as f:
source = f.read()
else:
return compiled.load_module(evaluator, path)
p = path
p = FastParser(evaluator.grammar, source_to_unicode(source), p)
save_parser(path, p)
return p.module
def _load_module(evaluator, path=None, code=None, sys_path=None, parent_module=None):
if sys_path is None:
sys_path = evaluator.sys_path
cached = load_parser(path)
module_node = load(source) if cached is None else cached.module
if isinstance(module_node, compiled.CompiledObject):
return module_node
dotted_path = path and compiled.dotted_from_fs_path(path, sys_path)
if path is not None and path.endswith(('.py', '.zip', '.egg')) \
and dotted_path not in settings.auto_import_modules:
from jedi.evaluate.representation import ModuleContext
return ModuleContext(evaluator, module_node)
module_node = evaluator.grammar.parse(
code=code, path=path, cache=True, diff_cache=True,
cache_path=settings.cache_directory)
from jedi.evaluate.representation import ModuleContext
return ModuleContext(evaluator, module_node, path=path)
else:
return compiled.load_module(evaluator, path)
def add_module(evaluator, module_name, module):
@@ -482,22 +508,26 @@ def get_modules_containing_name(evaluator, modules, name):
def check_python_file(path):
try:
parser_cache_item = parser_cache[path]
# TODO I don't think we should use the cache here?!
node_cache_item = parser_cache[evaluator.grammar._hashed][path]
except KeyError:
try:
return check_fs(path)
except IOError:
return None
else:
return er.ModuleContext(evaluator, parser_cache_item.parser.module)
module_node = node_cache_item.node
return er.ModuleContext(evaluator, module_node, path=path)
def check_fs(path):
with open(path, 'rb') as f:
source = source_to_unicode(f.read())
if name in source:
module_name = os.path.basename(path)[:-3] # Remove `.py`.
module = _load_module(evaluator, path, source)
add_module(evaluator, module_name, module)
code = python_bytes_to_unicode(f.read(), errors='replace')
if name in code:
module = _load_module(evaluator, path, code)
module_name = sys_path.dotted_path_in_sys_path(evaluator.sys_path, path)
if module_name is not None:
add_module(evaluator, module_name, module)
return module
# skip non python modules

View File

@@ -6,11 +6,31 @@ from jedi import debug
from jedi.evaluate import compiled
from jedi.evaluate import filters
from jedi.evaluate.context import Context, LazyKnownContext, LazyKnownContexts
from jedi.evaluate.cache import memoize_default
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.param import AbstractArguments, AnonymousArguments
from jedi.cache import memoize_method
from jedi.evaluate import representation as er
from jedi.evaluate.dynamic import search_params
from jedi.evaluate import iterable
from jedi.parser_utils import get_parent_scope
class InstanceFunctionExecution(er.FunctionExecutionContext):
def __init__(self, instance, parent_context, function_context, var_args):
self.instance = instance
var_args = InstanceVarArgs(self, var_args)
super(InstanceFunctionExecution, self).__init__(
instance.evaluator, parent_context, function_context, var_args)
class AnonymousInstanceFunctionExecution(er.FunctionExecutionContext):
function_execution_filter = filters.AnonymousInstanceFunctionExecutionFilter
def __init__(self, instance, parent_context, function_context, var_args):
self.instance = instance
super(AnonymousInstanceFunctionExecution, self).__init__(
instance.evaluator, parent_context, function_context, var_args)
class AbstractInstanceContext(Context):
@@ -18,6 +38,7 @@ class AbstractInstanceContext(Context):
This class is used to evaluate instances.
"""
api_type = 'instance'
function_execution_cls = InstanceFunctionExecution
def __init__(self, evaluator, parent_context, class_context, var_args):
super(AbstractInstanceContext, self).__init__(evaluator, parent_context)
@@ -135,7 +156,7 @@ class AbstractInstanceContext(Context):
bound_method = BoundMethod(
self.evaluator, self, class_context, self.parent_context, func_node
)
return InstanceFunctionExecution(
return self.function_execution_cls(
self,
class_context.parent_context,
bound_method,
@@ -147,11 +168,11 @@ class AbstractInstanceContext(Context):
if isinstance(name, LazyInstanceName):
yield self._create_init_execution(name.class_context, name.tree_name.parent)
@memoize_default()
@evaluator_method_cache()
def create_instance_context(self, class_context, node):
if node.parent.type in ('funcdef', 'classdef'):
node = node.parent
scope = node.get_parent_scope()
scope = get_parent_scope(node)
if scope == class_context.tree_node:
return class_context
else:
@@ -162,9 +183,15 @@ class AbstractInstanceContext(Context):
else:
bound_method = BoundMethod(
self.evaluator, self, class_context,
self.parent_context, scope
parent_context, scope
)
return bound_method.get_function_execution()
elif scope.type == 'classdef':
class_context = er.ClassContext(self.evaluator, scope, parent_context)
return class_context
elif scope.type == 'comp_for':
# Comprehensions currently don't have a special scope in Jedi.
return self.create_instance_context(class_context, scope)
else:
raise NotImplementedError
return class_context
@@ -189,25 +216,32 @@ class CompiledInstance(AbstractInstanceContext):
return compiled.CompiledContextName(self, self.class_context.name.string_name)
def create_instance_context(self, class_context, node):
if node.get_parent_scope().type == 'classdef':
if get_parent_scope(node).type == 'classdef':
return class_context
else:
return super(CompiledInstance, self).create_instance_context(class_context, node)
class TreeInstance(AbstractInstanceContext):
def __init__(self, evaluator, parent_context, class_context, var_args):
super(TreeInstance, self).__init__(evaluator, parent_context,
class_context, var_args)
self.tree_node = class_context.tree_node
@property
def name(self):
return filters.ContextName(self, self.class_context.name.tree_name)
class AnonymousInstance(TreeInstance):
function_execution_cls = AnonymousInstanceFunctionExecution
def __init__(self, evaluator, parent_context, class_context):
super(AnonymousInstance, self).__init__(
evaluator,
parent_context,
class_context,
var_args=None
var_args=AnonymousArguments(),
)
@@ -258,8 +292,9 @@ class BoundMethod(er.FunctionContext):
def get_function_execution(self, arguments=None):
if arguments is None:
arguments = AnonymousArguments()
return AnonymousInstanceFunctionExecution(
self._instance, self.parent_context, self)
self._instance, self.parent_context, self, arguments)
else:
return InstanceFunctionExecution(
self._instance, self.parent_context, self, arguments)
@@ -332,7 +367,7 @@ class InstanceClassFilter(filters.ParserTreeFilter):
while node is not None:
if node == self._parser_scope or node == self.context:
return True
node = node.get_parent_scope()
node = get_parent_scope(node)
return False
def _access_possible(self, name):
@@ -372,74 +407,27 @@ class SelfNameFilter(InstanceClassFilter):
return names
class ParamArguments(object):
"""
TODO This seems like a strange class, clean up?
"""
class LazyParamContext(object):
def __init__(self, fucking_param):
self._param = fucking_param
def infer(self):
return self._param.infer()
def __init__(self, class_context, funcdef):
self._class_context = class_context
self._funcdef = funcdef
def unpack(self, func=None):
params = search_params(
self._class_context.evaluator,
self._class_context,
self._funcdef
)
is_first = True
for p in params:
# TODO Yeah, here at last, the class seems to be really wrong.
if is_first:
is_first = False
continue
yield None, self.LazyParamContext(p)
class InstanceVarArgs(object):
def __init__(self, instance, funcdef, var_args):
self._instance = instance
self._funcdef = funcdef
class InstanceVarArgs(AbstractArguments):
def __init__(self, execution_context, var_args):
self._execution_context = execution_context
self._var_args = var_args
@memoize_method
def _get_var_args(self):
if self._var_args is None:
# TODO this parent_context might be wrong. test?!
return ParamArguments(self._instance.class_context, self._funcdef)
return self._var_args
@property
def argument_node(self):
return self._var_args.argument_node
@property
def trailer(self):
return self._var_args.trailer
def unpack(self, func=None):
yield None, LazyKnownContext(self._instance)
yield None, LazyKnownContext(self._execution_context.instance)
for values in self._get_var_args().unpack(func):
yield values
def get_calling_nodes(self):
return self._get_var_args().get_calling_nodes()
def __getattr__(self, name):
return getattr(self._var_args, name)
class InstanceFunctionExecution(er.FunctionExecutionContext):
def __init__(self, instance, parent_context, function_context, var_args):
self.instance = instance
var_args = InstanceVarArgs(instance, function_context.tree_node, var_args)
super(InstanceFunctionExecution, self).__init__(
instance.evaluator, parent_context, function_context, var_args)
class AnonymousInstanceFunctionExecution(InstanceFunctionExecution):
function_execution_filter = filters.AnonymousInstanceFunctionExecutionFilter
def __init__(self, instance, parent_context, function_context):
super(AnonymousInstanceFunctionExecution, self).__init__(
instance, parent_context, function_context, None)
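The InstanceVarArgs.unpack above injects the instance as the first argument, which is exactly the bound-method rule; a plain-Python illustration:

class A(object):
    def f(self, x):
        return (self, x)

a = A()
# Calling through the instance and passing the instance explicitly
# produce the same argument tuple.
print(a.f(1) == A.f(a, 1))  # True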

View File

@@ -32,9 +32,10 @@ from jedi.evaluate import pep0484
from jedi.evaluate import context
from jedi.evaluate import precedence
from jedi.evaluate import recursion
from jedi.evaluate.cache import memoize_default
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.filters import DictFilter, AbstractNameDefinition, \
ParserTreeFilter
from jedi.parser_utils import get_comp_fors
class AbstractSequence(context.Context):
@@ -164,7 +165,6 @@ class GeneratorMixin(object):
class Generator(GeneratorMixin, context.Context):
"""Handling of `yield` functions."""
def __init__(self, evaluator, func_execution_context):
super(Generator, self).__init__(evaluator, parent_context=evaluator.BUILTINS)
self._func_execution_context = func_execution_context
@@ -228,7 +228,7 @@ class Comprehension(AbstractSequence):
"""
return self._get_comprehension().children[index]
@memoize_default()
@evaluator_method_cache()
def _get_comp_for_context(self, parent_context, comp_for):
# TODO shouldn't this be part of create_context?
return CompForContext.from_comp_for(parent_context, comp_for)
@@ -240,30 +240,31 @@ class Comprehension(AbstractSequence):
parent_context = parent_context or self._defining_context
input_types = parent_context.eval_node(input_node)
iterated = py__iter__(evaluator, input_types, input_node)
cn = context.ContextualizedNode(parent_context, input_node)
iterated = py__iter__(evaluator, input_types, cn)
exprlist = comp_for.children[1]
for i, lazy_context in enumerate(iterated):
types = lazy_context.infer()
dct = unpack_tuple_to_dict(evaluator, types, exprlist)
context = self._get_comp_for_context(
dct = unpack_tuple_to_dict(parent_context, types, exprlist)
context_ = self._get_comp_for_context(
parent_context,
comp_for,
)
with helpers.predefine_names(context, comp_for, dct):
with helpers.predefine_names(context_, comp_for, dct):
try:
for result in self._nested(comp_fors[1:], context):
for result in self._nested(comp_fors[1:], context_):
yield result
except IndexError:
iterated = context.eval_node(self._eval_node())
iterated = context_.eval_node(self._eval_node())
if self.array_type == 'dict':
yield iterated, context.eval_node(self._eval_node(2))
yield iterated, context_.eval_node(self._eval_node(2))
else:
yield iterated
@memoize_default(default=[])
@evaluator_method_cache(default=[])
@common.to_list
def _iterate(self):
comp_fors = tuple(self._get_comp_for().get_comp_fors())
comp_fors = tuple(get_comp_fors(self._get_comp_for()))
for result in self._nested(comp_fors):
yield result
@@ -492,16 +493,6 @@ class _FakeArray(SequenceLiteralContext):
# TODO is this class really needed?
class ImplicitTuple(_FakeArray):
def __init__(self, evaluator, testlist):
super(ImplicitTuple, self).__init__(evaluator, testlist, 'tuple')
raise NotImplementedError
self._testlist = testlist
def _items(self):
return self._testlist.children[::2]
class FakeSequence(_FakeArray):
def __init__(self, evaluator, array_type, lazy_context_list):
"""
@@ -510,16 +501,15 @@ class FakeSequence(_FakeArray):
super(FakeSequence, self).__init__(evaluator, None, array_type)
self._lazy_context_list = lazy_context_list
def _items(self):
raise DeprecationWarning
return self._context_list
def py__getitem__(self, index):
return set(self._lazy_context_list[index].infer())
def py__iter__(self):
return self._lazy_context_list
def py__bool__(self):
return bool(len(self._lazy_context_list))
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self._lazy_context_list)
@@ -539,12 +529,6 @@ class FakeDict(_FakeArray):
def dict_values(self):
return unite(lazy_context.infer() for lazy_context in self._dct.values())
def _items(self):
raise DeprecationWarning
for key, values in self._dct.items():
# TODO this is not proper. The values could be multiple values?!
yield key, values[0]
def exact_key_items(self):
return self._dct.items()
@@ -571,33 +555,33 @@ class MergedArray(_FakeArray):
return sum(len(a) for a in self._arrays)
def unpack_tuple_to_dict(evaluator, types, exprlist):
def unpack_tuple_to_dict(context, types, exprlist):
"""
Unpacking tuple assignments in for statements and expr_stmts.
"""
if exprlist.type == 'name':
return {exprlist.value: types}
elif exprlist.type == 'atom' and exprlist.children[0] in '([':
return unpack_tuple_to_dict(evaluator, types, exprlist.children[1])
return unpack_tuple_to_dict(context, types, exprlist.children[1])
elif exprlist.type in ('testlist', 'testlist_comp', 'exprlist',
'testlist_star_expr'):
dct = {}
parts = iter(exprlist.children[::2])
n = 0
for lazy_context in py__iter__(evaluator, types, exprlist):
for lazy_context in py__iter__(context.evaluator, types, exprlist):
n += 1
try:
part = next(parts)
except StopIteration:
# TODO this context is probably not right.
analysis.add(next(iter(types)), 'value-error-too-many-values', part,
analysis.add(context, 'value-error-too-many-values', part,
message="ValueError: too many values to unpack (expected %s)" % n)
else:
dct.update(unpack_tuple_to_dict(evaluator, lazy_context.infer(), part))
dct.update(unpack_tuple_to_dict(context, lazy_context.infer(), part))
has_parts = next(parts, None)
if types and has_parts is not None:
# TODO this context is probably not right.
analysis.add(next(iter(types)), 'value-error-too-few-values', has_parts,
analysis.add(context, 'value-error-too-few-values', has_parts,
message="ValueError: need more than %s values to unpack" % n)
return dct
elif exprlist.type == 'power' or exprlist.type == 'atom_expr':
@@ -611,17 +595,19 @@ def unpack_tuple_to_dict(evaluator, types, exprlist):
raise NotImplementedError
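A plain-Python analogue of unpack_tuple_to_dict, assuming names and values are already flat lists; the error messages mirror the analysis entries above:

def unpack(names, values):
    # Map each assignment target to the value it would receive.
    if len(values) > len(names):
        raise ValueError('too many values to unpack (expected %s)' % len(names))
    if len(values) < len(names):
        raise ValueError('need more than %s values to unpack' % len(values))
    return dict(zip(names, values))

print(unpack(['a', 'b'], [1, 2]))  # {'a': 1, 'b': 2}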
def py__iter__(evaluator, types, node=None):
def py__iter__(evaluator, types, contextualized_node=None):
debug.dbg('py__iter__')
type_iters = []
for typ in types:
try:
iter_method = typ.py__iter__
except AttributeError:
if node is not None:
# TODO this context is probably not right.
analysis.add(typ, 'type-error-not-iterable', node,
message="TypeError: '%s' object is not iterable" % typ)
if contextualized_node is not None:
analysis.add(
contextualized_node.context,
'type-error-not-iterable',
contextualized_node._node,
message="TypeError: '%s' object is not iterable" % typ)
else:
type_iters.append(iter_method())
@@ -631,12 +617,15 @@ def py__iter__(evaluator, types, node=None):
)
def py__iter__types(evaluator, types, node=None):
def py__iter__types(evaluator, types, contextualized_node=None):
"""
Calls `py__iter__`, but ignores the ordering in the end and just returns
all types that it contains.
"""
return unite(lazy_context.infer() for lazy_context in py__iter__(evaluator, types, node))
return unite(
lazy_context.infer()
for lazy_context in py__iter__(evaluator, types, contextualized_node)
)
def py__getitem__(evaluator, context, types, trailer):
@@ -665,7 +654,7 @@ def py__getitem__(evaluator, context, types, trailer):
if isinstance(index, (compiled.CompiledObject, Slice)):
index = index.obj
if type(index) not in (float, int, str, unicode, slice):
if type(index) not in (float, int, str, unicode, slice, type(Ellipsis)):
# If the index is not clearly defined, we have to get all the
# possibilities.
for typ in list(types):
@@ -702,7 +691,7 @@ def check_array_additions(context, sequence):
return _check_array_additions(context, sequence)
@memoize_default(default=set())
@evaluator_method_cache(default=set())
@debug.increase_indent
def _check_array_additions(context, sequence):
"""
@@ -741,7 +730,7 @@ def _check_array_additions(context, sequence):
added_types = set()
for add_name in search_names:
try:
possible_names = module_context.tree_node.used_names[add_name]
possible_names = module_context.tree_node.get_used_names()[add_name]
except KeyError:
continue
else:
@@ -872,8 +861,11 @@ def create_index_types(evaluator, context, index):
if index == ':':
# Like array[:]
return set([Slice(context, None, None, None)])
elif index.type == 'subscript': # subscript is a slice operation.
# Like array[:3]
elif index.type == 'subscript' and not index.children[0] == '.':
# subscript basically implies a slice operation, except for Python 2's
# Ellipsis.
# e.g. array[:3]
result = []
for el in index.children:
if el == ':':
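For reference, the subscript forms the branch above distinguishes, shown on a real list; illustrative only:

array = [10, 20, 30]
print(array[:])    # [10, 20, 30] -- Slice(None, None, None)
print(array[:2])   # [10, 20]     -- a subscript with an upper bound
print(array[1])    # 20           -- a plain literal index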

View File

@@ -3,7 +3,7 @@ from collections import defaultdict
from jedi._compatibility import zip_longest
from jedi import debug
from jedi import common
from jedi.parser import tree
from parso.python import tree
from jedi.evaluate import iterable
from jedi.evaluate import analysis
from jedi.evaluate import context
@@ -61,7 +61,7 @@ class AbstractArguments():
raise ValueError
yield values
def eval_all(self, func=None):
def eval_all(self, funcdef=None):
"""
Evaluates all arguments as a support for static analysis
(normally Jedi).
@@ -70,6 +70,25 @@ class AbstractArguments():
types = lazy_context.infer()
try_iter_content(types)
def get_calling_nodes(self):
raise NotImplementedError
def unpack(self, funcdef=None):
raise NotImplementedError
def get_params(self, execution_context):
return get_params(execution_context, self)
class AnonymousArguments(AbstractArguments):
def get_params(self, execution_context):
from jedi.evaluate.dynamic import search_params
return search_params(
execution_context.evaluator,
execution_context,
execution_context.tree_node
)
class TreeArguments(AbstractArguments):
def __init__(self, evaluator, context, argument_node, trailer=None):
@@ -110,12 +129,12 @@ class TreeArguments(AbstractArguments):
else:
yield 0, child
def unpack(self, func=None):
def unpack(self, funcdef=None):
named_args = []
for stars, el in self._split():
if stars == 1:
for star_count, el in self._split():
if star_count == 1:
arrays = self.context.eval_node(el)
iterators = [_iterate_star_args(self.context, a, el, func)
iterators = [_iterate_star_args(self.context, a, el, funcdef)
for a in arrays]
iterators = list(iterators)
for values in list(zip_longest(*iterators)):
@@ -124,10 +143,10 @@ class TreeArguments(AbstractArguments):
yield None, context.get_merged_lazy_context(
[v for v in values if v is not None]
)
elif stars == 2:
elif star_count == 2:
arrays = self._evaluator.eval_element(self.context, el)
for dct in arrays:
for key, values in _star_star_dict(self.context, dct, el, func):
for key, values in _star_star_dict(self.context, dct, el, funcdef):
yield key, values
else:
if el.type == 'argument':
@@ -148,12 +167,12 @@ class TreeArguments(AbstractArguments):
yield named_arg
def as_tree_tuple_objects(self):
for stars, argument in self._split():
for star_count, argument in self._split():
if argument.type == 'argument':
argument, default = argument.children[::2]
else:
default = None
yield argument, default, stars
yield argument, default, star_count
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.argument_node)
@@ -168,8 +187,8 @@ class TreeArguments(AbstractArguments):
break
old_arguments_list.append(arguments)
for name, default, stars in reversed(list(arguments.as_tree_tuple_objects())):
if not stars or not isinstance(name, tree.Name):
for name, default, star_count in reversed(list(arguments.as_tree_tuple_objects())):
if not star_count or not isinstance(name, tree.Name):
continue
names = self._evaluator.goto(arguments.context, name)
@@ -195,7 +214,7 @@ class ValuesArguments(AbstractArguments):
def __init__(self, values_list):
self._values_list = values_list
def unpack(self, func=None):
def unpack(self, funcdef=None):
for values in self._values_list:
yield None, context.LazyKnownContexts(values)
@@ -208,43 +227,44 @@ class ValuesArguments(AbstractArguments):
class ExecutedParam(object):
"""Fake a param and give it values."""
def __init__(self, var_args_context, original_param, var_args, lazy_context):
self._root_context = var_args_context.get_root_context()
self._original_param = original_param
self.var_args = var_args
def __init__(self, execution_context, param_node, lazy_context):
self._execution_context = execution_context
self._param_node = param_node
self._lazy_context = lazy_context
self.string_name = self._original_param.name.value
self.string_name = param_node.name.value
def infer(self):
pep0484_hints = pep0484.follow_param(self._root_context, self._original_param)
doc_params = docstrings.follow_param(self._root_context, self._original_param)
pep0484_hints = pep0484.infer_param(self._execution_context, self._param_node)
doc_params = docstrings.infer_param(self._execution_context, self._param_node)
if pep0484_hints or doc_params:
return list(set(pep0484_hints) | set(doc_params))
return self._lazy_context.infer()
@property
def position_nr(self):
# Need to use the original logic here, because it uses the parent.
return self._original_param.position_nr
def var_args(self):
return self._execution_context.var_args
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.string_name)
def get_params(evaluator, parent_context, func, var_args):
def get_params(execution_context, var_args):
result_params = []
param_dict = {}
for param in func.params:
param_dict[str(param.name)] = param
unpacked_va = list(var_args.unpack(func))
funcdef = execution_context.tree_node
parent_context = execution_context.parent_context
for param in funcdef.get_params():
param_dict[param.name.value] = param
unpacked_va = list(var_args.unpack(funcdef))
var_arg_iterator = common.PushBackIterator(iter(unpacked_va))
non_matching_keys = defaultdict(lambda: [])
keys_used = {}
keys_only = False
had_multiple_value_error = False
for param in func.params:
for param in funcdef.get_params():
# The value and key can both be null. There, the defaults apply.
# args / kwargs will just be empty arrays / dicts, respectively.
# Wrong value count is just ignored. If you try to test cases that are
@@ -260,12 +280,12 @@ def get_params(evaluator, parent_context, func, var_args):
if key in keys_used:
had_multiple_value_error = True
m = ("TypeError: %s() got multiple values for keyword argument '%s'."
% (func.name, key))
% (funcdef.name, key))
for node in var_args.get_calling_nodes():
analysis.add(parent_context, 'type-error-multiple-values',
node, message=m)
else:
keys_used[key] = ExecutedParam(parent_context, key_param, var_args, argument)
keys_used[key] = ExecutedParam(execution_context, key_param, argument)
key, argument = next(var_arg_iterator, (None, None))
try:
@@ -274,7 +294,7 @@ def get_params(evaluator, parent_context, func, var_args):
except KeyError:
pass
if param.stars == 1:
if param.star_count == 1:
# *args param
lazy_context_list = []
if argument is not None:
@@ -285,11 +305,11 @@ def get_params(evaluator, parent_context, func, var_args):
var_arg_iterator.push_back((key, argument))
break
lazy_context_list.append(argument)
seq = iterable.FakeSequence(evaluator, 'tuple', lazy_context_list)
seq = iterable.FakeSequence(execution_context.evaluator, 'tuple', lazy_context_list)
result_arg = context.LazyKnownContext(seq)
elif param.stars == 2:
elif param.star_count == 2:
# **kwargs param
dct = iterable.FakeDict(evaluator, dict(non_matching_keys))
dct = iterable.FakeDict(execution_context.evaluator, dict(non_matching_keys))
result_arg = context.LazyKnownContext(dct)
non_matching_keys = {}
else:
@@ -300,7 +320,7 @@ def get_params(evaluator, parent_context, func, var_args):
result_arg = context.LazyUnknownContext()
if not keys_only:
for node in var_args.get_calling_nodes():
m = _error_argument_count(func, len(unpacked_va))
m = _error_argument_count(funcdef, len(unpacked_va))
analysis.add(parent_context, 'type-error-too-few-arguments',
node, message=m)
else:
@@ -308,7 +328,7 @@ def get_params(evaluator, parent_context, func, var_args):
else:
result_arg = argument
result_params.append(ExecutedParam(parent_context, param, var_args, result_arg))
result_params.append(ExecutedParam(execution_context, param, result_arg))
if not isinstance(result_arg, context.LazyUnknownContext):
keys_used[param.name.value] = result_params[-1]
@@ -320,16 +340,16 @@ def get_params(evaluator, parent_context, func, var_args):
param = param_dict[k]
if not (non_matching_keys or had_multiple_value_error or
param.stars or param.default):
param.star_count or param.default):
# add a warning only if there's not another one.
for node in var_args.get_calling_nodes():
m = _error_argument_count(func, len(unpacked_va))
m = _error_argument_count(funcdef, len(unpacked_va))
analysis.add(parent_context, 'type-error-too-few-arguments',
node, message=m)
for key, lazy_context in non_matching_keys.items():
m = "TypeError: %s() got an unexpected keyword argument '%s'." \
% (func.name, key)
% (funcdef.name, key)
add_argument_issue(
parent_context,
'type-error-keyword-argument',
@@ -339,7 +359,7 @@ def get_params(evaluator, parent_context, func, var_args):
remaining_arguments = list(var_arg_iterator)
if remaining_arguments:
m = _error_argument_count(func, len(unpacked_va))
m = _error_argument_count(funcdef, len(unpacked_va))
# Just report an error for the first param that is not needed (like
# cPython).
first_key, lazy_context = remaining_arguments[0]
@@ -349,21 +369,21 @@ def get_params(evaluator, parent_context, func, var_args):
return result_params
def _iterate_star_args(context, array, input_node, func=None):
def _iterate_star_args(context, array, input_node, funcdef=None):
try:
iter_ = array.py__iter__
except AttributeError:
if func is not None:
# TODO this func should not be needed.
if funcdef is not None:
# TODO this funcdef should not be needed.
m = "TypeError: %s() argument after * must be a sequence, not %s" \
% (func.name.value, array)
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star', input_node, message=m)
else:
for lazy_context in iter_():
yield lazy_context
def _star_star_dict(context, array, input_node, func):
def _star_star_dict(context, array, input_node, funcdef):
from jedi.evaluate.instance import CompiledInstance
if isinstance(array, CompiledInstance) and array.name.string_name == 'dict':
# For now ignore this case. In the future add proper iterators and just
@@ -372,35 +392,42 @@ def _star_star_dict(context, array, input_node, func):
elif isinstance(array, iterable.AbstractSequence) and array.array_type == 'dict':
return array.exact_key_items()
else:
if func is not None:
if funcdef is not None:
m = "TypeError: %s argument after ** must be a mapping, not %s" \
% (func.name.value, array)
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star-star', input_node, message=m)
return {}
def _error_argument_count(func, actual_count):
default_arguments = sum(1 for p in func.params if p.default or p.stars)
def _error_argument_count(funcdef, actual_count):
params = funcdef.get_params()
default_arguments = sum(1 for p in params if p.default or p.star_count)
if default_arguments == 0:
before = 'exactly '
else:
before = 'from %s to ' % (len(func.params) - default_arguments)
before = 'from %s to ' % (len(params) - default_arguments)
return ('TypeError: %s() takes %s%s arguments (%s given).'
% (func.name, before, len(func.params), actual_count))
% (funcdef.name, before, len(params), actual_count))
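A worked example of the count message, using a stripped-down copy of the logic above (boolean flags stand in for p.default or p.star_count):

def error_argument_count(name, param_flags, actual_count):
    # param_flags: one bool per parameter, True if it has a default/star.
    default_arguments = sum(1 for has_default in param_flags if has_default)
    if default_arguments == 0:
        before = 'exactly '
    else:
        before = 'from %s to ' % (len(param_flags) - default_arguments)
    return ('TypeError: %s() takes %s%s arguments (%s given).'
            % (name, before, len(param_flags), actual_count))

# def f(a, b, c=1): called with a single positional argument:
print(error_argument_count('f', [False, False, True], 1))
# TypeError: f() takes from 2 to 3 arguments (1 given).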
def create_default_param(parent_context, param):
if param.stars == 1:
def _create_default_param(execution_context, param):
if param.star_count == 1:
result_arg = context.LazyKnownContext(
iterable.FakeSequence(parent_context.evaluator, 'tuple', [])
iterable.FakeSequence(execution_context.evaluator, 'tuple', [])
)
elif param.stars == 2:
elif param.star_count == 2:
result_arg = context.LazyKnownContext(
iterable.FakeDict(parent_context.evaluator, {})
iterable.FakeDict(execution_context.evaluator, {})
)
elif param.default is None:
result_arg = context.LazyUnknownContext()
else:
result_arg = context.LazyTreeContext(parent_context, param.default)
return ExecutedParam(parent_context, param, None, result_arg)
result_arg = context.LazyTreeContext(execution_context.parent_context, param.default)
return ExecutedParam(execution_context, param, result_arg)
def create_default_params(execution_context, funcdef):
return [_create_default_param(execution_context, p)
for p in funcdef.get_params()]

View File

@@ -0,0 +1,6 @@
from jedi.evaluate.cache import evaluator_function_cache
@evaluator_function_cache()
def get_yield_exprs(evaluator, funcdef):
return list(funcdef.iter_yield_exprs())

View File

@@ -20,17 +20,19 @@ x support for type hint comments for functions, `# type: (int, str) -> int`.
"""
import itertools
import os
from jedi.parser import \
Parser, load_grammar, ParseError, ParserWithRecovery, tree
import re
from parso import ParserSyntaxError
from parso.python import tree
from jedi.common import unite
from jedi.evaluate.cache import memoize_default
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate import compiled
from jedi.evaluate.context import LazyTreeContext
from jedi import debug
from jedi import _compatibility
import re
from jedi import parser_utils
def _evaluate_for_annotation(context, annotation, index=None):
@@ -62,50 +64,54 @@ def _fix_forward_reference(context, node):
if isinstance(evaled_node, compiled.CompiledObject) and \
isinstance(evaled_node.obj, str):
try:
p = Parser(load_grammar(), _compatibility.unicode(evaled_node.obj),
start_symbol='eval_input')
new_node = p.get_parsed_node()
except ParseError:
new_node = context.evaluator.grammar.parse(
_compatibility.unicode(evaled_node.obj),
start_symbol='eval_input',
error_recovery=False
)
except ParserSyntaxError:
debug.warning('Annotation not parsed: %s' % evaled_node.obj)
return node
else:
module = node.get_parent_until()
new_node.move(module.end_pos[0])
module = node.get_root_node()
parser_utils.move(new_node, module.end_pos[0])
new_node.parent = context.tree_node
return new_node
else:
return node
@memoize_default()
def follow_param(context, param):
annotation = param.annotation()
return _evaluate_for_annotation(context, annotation)
@evaluator_method_cache()
def infer_param(execution_context, param):
annotation = param.annotation
module_context = execution_context.get_root_context()
return _evaluate_for_annotation(module_context, annotation)
def py__annotations__(funcdef):
return_annotation = funcdef.annotation()
return_annotation = funcdef.annotation
if return_annotation:
dct = {'return': return_annotation}
else:
dct = {}
for function_param in funcdef.params:
param_annotation = function_param.annotation()
for function_param in funcdef.get_params():
param_annotation = function_param.annotation
if param_annotation is not None:
dct[function_param.name.value] = param_annotation
return dct
@memoize_default()
def find_return_types(context, func):
annotation = py__annotations__(func).get("return", None)
return _evaluate_for_annotation(context, annotation)
@evaluator_method_cache()
def infer_return_types(function_context):
annotation = py__annotations__(function_context.tree_node).get("return", None)
module_context = function_context.get_root_context()
return _evaluate_for_annotation(module_context, annotation)
_typing_module = None
def _get_typing_replacement_module():
def _get_typing_replacement_module(grammar):
"""
The idea is to return our jedi replacement for the PEP-0484 typing module
as discussed at https://github.com/davidhalter/jedi/issues/663
@@ -116,8 +122,7 @@ def _get_typing_replacement_module():
os.path.abspath(os.path.join(__file__, "../jedi_typing.py"))
with open(typing_path) as f:
code = _compatibility.unicode(f.read())
p = ParserWithRecovery(load_grammar(), code)
_typing_module = p.module
_typing_module = grammar.parse(code)
return _typing_module
@@ -149,7 +154,11 @@ def py__getitem__(context, typ, node):
return context.eval_node(nodes[0])
from jedi.evaluate.representation import ModuleContext
typing = ModuleContext(context.evaluator, _get_typing_replacement_module())
typing = ModuleContext(
context.evaluator,
module_node=_get_typing_replacement_module(context.evaluator.latest_grammar),
path=None
)
factories = typing.py__getattribute__("factory")
assert len(factories) == 1
factory = list(factories)[0]
@@ -190,7 +199,7 @@ def find_type_from_comment_hint_assign(context, node, name):
def _find_type_from_comment_hint(context, node, varlist, name):
index = None
if varlist.type in ("testlist_star_expr", "exprlist"):
if varlist.type in ("testlist_star_expr", "exprlist", "testlist"):
# something like "a, b = 1, 2"
index = 0
for child in varlist.children:
@@ -202,7 +211,7 @@ def _find_type_from_comment_hint(context, node, varlist, name):
else:
return []
comment = node.get_following_comment_same_line()
comment = parser_utils.get_following_comment_same_line(node)
if comment is None:
return []
match = re.match(r"^#\s*type:\s*([^#]*)", comment)
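That regex captures everything after the `type:` marker up to a trailing comment; a quick, self-contained check:

import re

comment = '# type: (int, str) -> int'
match = re.match(r"^#\s*type:\s*([^#]*)", comment)
print(match.group(1))  # '(int, str) -> int'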

View File

@@ -4,7 +4,6 @@ Handles operator precedence.
import operator as op
from jedi._compatibility import unicode
from jedi.parser import tree
from jedi import debug
from jedi.evaluate.compiled import CompiledObject, create, builtin_from_name
from jedi.evaluate import analysis
@@ -46,7 +45,7 @@ def calculate_children(evaluator, context, children):
for operator in iterator:
right = next(iterator)
if operator.type == 'comp_op': # not in / is not
operator = ' '.join(str(c.value) for c in operator.children)
operator = ' '.join(c.value for c in operator.children)
# handle lazy evaluation of and/or here.
if operator in ('and', 'or'):

View File

@@ -6,13 +6,48 @@ the right time. You can read more about them :ref:`here <settings-recursion>`.
Next to :mod:`jedi.evaluate.cache` this module also makes |jedi| not
thread-safe. Why? ``execution_recursion_decorator`` uses class variables to
count the function calls.
.. _settings-recursion:
Settings
~~~~~~~~~~
Recursion settings are important if you don't want extremely
recursive Python code to go absolutely crazy.
The default values are based on experiments while completing the |jedi| library
itself (inception!). But I don't think there's any other Python library that
uses recursion in a similarly extreme way. Completion should also be fast and
therefore the quality might not always be maximal.
.. autodata:: recursion_limit
.. autodata:: total_function_execution_limit
.. autodata:: per_function_execution_limit
.. autodata:: per_function_recursion_limit
"""
from contextlib import contextmanager
from jedi import debug
from jedi import settings
recursion_limit = 15
"""
Like ``sys.getrecursionlimit()``, just for |jedi|.
"""
total_function_execution_limit = 200
"""
This is a hard limit of how many non-builtin functions can be executed.
"""
per_function_execution_limit = 6
"""
The maximal amount of times a specific function may be executed.
"""
per_function_recursion_limit = 2
"""
A function may not be executed more than this number of times recursively.
"""
class RecursionDetector(object):
def __init__(self):
self.pushed_nodes = []
@@ -58,47 +93,42 @@ class ExecutionRecursionDetector(object):
Catches recursions of executions.
"""
def __init__(self, evaluator):
self.recursion_level = 0
self.parent_execution_funcs = []
self.execution_funcs = set()
self.execution_count = 0
self._evaluator = evaluator
def __call__(self, execution):
debug.dbg('Execution recursions: %s', execution, self.recursion_level,
self.execution_count, len(self.execution_funcs))
if self.check_recursion(execution):
result = set()
else:
result = self.func(execution)
self.pop_execution()
return result
self._recursion_level = 0
self._parent_execution_funcs = []
self._funcdef_execution_counts = {}
self._execution_count = 0
def pop_execution(self):
self.parent_execution_funcs.pop()
self.recursion_level -= 1
self._parent_execution_funcs.pop()
self._recursion_level -= 1
def push_execution(self, execution):
in_par_execution_funcs = execution.tree_node in self.parent_execution_funcs
in_execution_funcs = execution.tree_node in self.execution_funcs
self.recursion_level += 1
self.execution_count += 1
self.execution_funcs.add(execution.tree_node)
self.parent_execution_funcs.append(execution.tree_node)
funcdef = execution.tree_node
if self.execution_count > settings.max_executions:
return True
# These two will be undone in pop_execution.
self._recursion_level += 1
self._parent_execution_funcs.append(funcdef)
module = execution.get_root_context()
if module == self._evaluator.BUILTINS:
# We have control over builtins so we know they are not recursing
# like crazy. Therefore we just let them execute always, because
# they usually just help a lot with getting good results.
return False
if in_par_execution_funcs:
if self.recursion_level > settings.max_function_recursion_level:
return True
if in_execution_funcs and \
len(self.execution_funcs) > settings.max_until_execution_unique:
if self._recursion_level > recursion_limit:
return True
if self.execution_count > settings.max_executions_without_builtins:
if self._execution_count >= total_function_execution_limit:
return True
self._execution_count += 1
if self._funcdef_execution_counts.setdefault(funcdef, 0) >= per_function_execution_limit:
return True
self._funcdef_execution_counts[funcdef] += 1
if self._parent_execution_funcs.count(funcdef) > per_function_recursion_limit:
return True
return False
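A condensed, hypothetical sketch of how the four limits above cooperate; MiniDetector is not Jedi API:

class MiniDetector(object):
    def __init__(self):
        self._parents = []   # funcdefs currently on the execution stack
        self._counts = {}    # executions per funcdef
        self._total = 0      # executions overall

    def push(self, funcdef):
        # Returns True when the execution should be cut off.
        self._parents.append(funcdef)
        if len(self._parents) > 15:                    # recursion_limit
            return True
        if self._total >= 200:                         # total_function_execution_limit
            return True
        self._total += 1
        if self._counts.setdefault(funcdef, 0) >= 6:   # per_function_execution_limit
            return True
        self._counts[funcdef] += 1
        if self._parents.count(funcdef) > 2:           # per_function_recursion_limit
            return True
        return False

    def pop(self):
        self._parents.pop()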

View File

@@ -1,5 +1,5 @@
"""
Like described in the :mod:`jedi.parser.tree` module,
As described in the :mod:`parso.python.tree` module,
there's a need for an AST-like module to represent the states of parsed
modules.
@@ -32,6 +32,8 @@ py__package__() Only on modules. For the import system.
py__path__() Only on modules. For the import system.
py__get__(call_object) Only on instances. Simulates
descriptors.
py__doc__(include_call_signature: Returns the docstring for a context.
bool)
====================================== ========================================
"""
@@ -39,12 +41,14 @@ import os
import pkgutil
import imp
import re
from itertools import chain
from parso.python import tree
from parso import python_bytes_to_unicode
from jedi._compatibility import use_metaclass
from jedi.parser import tree
from jedi import debug
from jedi import common
from jedi.evaluate.cache import memoize_default, CachedMetaClass, NO_DEFAULT
from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass
from jedi.evaluate import compiled
from jedi.evaluate import recursion
from jedi.evaluate import iterable
@@ -56,9 +60,12 @@ from jedi.evaluate import imports
from jedi.evaluate import helpers
from jedi.evaluate.filters import ParserTreeFilter, FunctionExecutionFilter, \
GlobalNameFilter, DictFilter, ContextName, AbstractNameDefinition, \
ParamName, AnonymousInstanceParamName, TreeNameDefinition
from jedi.evaluate.dynamic import search_params
ParamName, AnonymousInstanceParamName, TreeNameDefinition, \
ContextNameMixin
from jedi.evaluate import context
from jedi.evaluate.context import ContextualizedNode
from jedi import parser_utils
from jedi.evaluate.parser_cache import get_yield_exprs
def apply_py__get__(context, base_context):
@@ -72,8 +79,19 @@ def apply_py__get__(context, base_context):
class ClassName(TreeNameDefinition):
def __init__(self, parent_context, tree_name, name_context):
super(ClassName, self).__init__(parent_context, tree_name)
self._name_context = name_context
def infer(self):
for result_context in super(ClassName, self).infer():
# TODO this _name_to_types might get refactored and be a part of the
# parent class. Once it is, we can probably just overwrite method to
# achieve this.
from jedi.evaluate.finder import _name_to_types
inferred = _name_to_types(
self.parent_context.evaluator, self._name_context, self.tree_name)
for result_context in inferred:
for c in apply_py__get__(result_context, self.parent_context):
yield c
@@ -81,6 +99,10 @@ class ClassName(TreeNameDefinition):
class ClassFilter(ParserTreeFilter):
name_class = ClassName
def _convert_names(self, names):
return [self.name_class(self.context, name, self._node_context)
for name in names]
class ClassContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
@@ -93,7 +115,7 @@ class ClassContext(use_metaclass(CachedMetaClass, context.TreeContext)):
super(ClassContext, self).__init__(evaluator, parent_context=parent_context)
self.tree_node = classdef
@memoize_default(default=())
@evaluator_method_cache(default=())
def py__mro__(self):
def add(cls):
if cls not in mro:
@@ -129,7 +151,7 @@ class ClassContext(use_metaclass(CachedMetaClass, context.TreeContext)):
add(cls_new)
return tuple(mro)
@memoize_default(default=())
@evaluator_method_cache(default=())
def py__bases__(self):
arglist = self.tree_node.get_super_arglist()
if arglist:
@@ -148,7 +170,7 @@ class ClassContext(use_metaclass(CachedMetaClass, context.TreeContext)):
def get_params(self):
from jedi.evaluate.instance import AnonymousInstance
anon = AnonymousInstance(self.evaluator, self.parent_context, self)
return [AnonymousInstanceParamName(anon, param.name) for param in self.funcdef.params]
return [AnonymousInstanceParamName(anon, param.name) for param in self.funcdef.get_params()]
def get_filters(self, search_global, until_position=None, origin_scope=None, is_instance=False):
if search_global:
@@ -159,26 +181,18 @@ class ClassContext(use_metaclass(CachedMetaClass, context.TreeContext)):
origin_scope=origin_scope
)
else:
for scope in self.py__mro__():
if isinstance(scope, compiled.CompiledObject):
for filter in scope.get_filters(is_instance=is_instance):
for cls in self.py__mro__():
if isinstance(cls, compiled.CompiledObject):
for filter in cls.get_filters(is_instance=is_instance):
yield filter
else:
yield ClassFilter(
self.evaluator, self, node_context=scope,
self.evaluator, self, node_context=cls,
origin_scope=origin_scope)
def is_class(self):
return True
def get_subscope_by_name(self, name):
raise DeprecationWarning
for s in self.py__mro__():
for sub in reversed(s.subscopes):
if sub.name.value == name:
return sub
raise KeyError("Couldn't find subscope.")
def get_function_slot_names(self, name):
for filter in self.get_filters(search_global=False):
names = filter.get(name)
@@ -202,6 +216,20 @@ class ClassContext(use_metaclass(CachedMetaClass, context.TreeContext)):
return ContextName(self, self.tree_node.name)
class LambdaName(AbstractNameDefinition):
string_name = '<lambda>'
def __init__(self, lambda_context):
self._lambda_context = lambda_context
self.parent_context = lambda_context.parent_context
def start_pos(self):
return self._lambda_context.tree_node.start_pos
def infer(self):
return set([self._lambda_context])
class FunctionContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
Needed because of decorators. Decorators are evaluated here.
@@ -230,17 +258,17 @@ class FunctionContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
Created to be used by inheritance.
"""
if self.tree_node.is_generator():
yield_exprs = get_yield_exprs(self.evaluator, self.tree_node)
if yield_exprs:
return set([iterable.Generator(self.evaluator, function_execution)])
else:
return function_execution.get_return_values()
def get_function_execution(self, arguments=None):
e = self.evaluator
if arguments is None:
return AnonymousFunctionExecution(e, self.parent_context, self)
else:
return FunctionExecutionContext(e, self.parent_context, self, arguments)
arguments = param.AnonymousArguments()
return FunctionExecutionContext(self.evaluator, self.parent_context, self, arguments)
def py__call__(self, arguments):
function_execution = self.get_function_execution(arguments)
@@ -249,7 +277,7 @@ class FunctionContext(use_metaclass(CachedMetaClass, context.TreeContext)):
def py__class__(self):
# This differentiation is only necessary for Python2. Python3 does not
# use a different method class.
if isinstance(self.tree_node.get_parent_scope(), tree.Class):
if isinstance(parser_utils.get_parent_scope(self.tree_node), tree.Class):
name = 'METHOD_CLASS'
else:
name = 'FUNCTION_CLASS'
@@ -257,11 +285,14 @@ class FunctionContext(use_metaclass(CachedMetaClass, context.TreeContext)):
@property
def name(self):
if self.tree_node.type == 'lambdef':
return LambdaName(self)
return ContextName(self, self.tree_node.name)
def get_param_names(self):
function_execution = self.get_function_execution()
return [ParamName(function_execution, param.name) for param in self.tree_node.params]
return [ParamName(function_execution, param.name)
for param in self.tree_node.get_params()]
class FunctionExecutionContext(context.TreeContext):
@@ -281,20 +312,20 @@ class FunctionExecutionContext(context.TreeContext):
self.tree_node = function_context.tree_node
self.var_args = var_args
@memoize_default(default=set())
@evaluator_method_cache(default=set())
@recursion.execution_recursion_decorator()
def get_return_values(self, check_yields=False):
funcdef = self.tree_node
if funcdef.type == 'lambda':
if funcdef.type == 'lambdef':
return self.evaluator.eval_element(self, funcdef.children[-1])
if check_yields:
types = set()
returns = funcdef.yields
returns = get_yield_exprs(self.evaluator, funcdef)
else:
returns = funcdef.returns
types = set(docstrings.find_return_types(self.get_root_context(), funcdef))
types |= set(pep0484.find_return_types(self.get_root_context(), funcdef))
returns = funcdef.iter_return_stmts()
types = set(docstrings.infer_return_types(self.function_context))
types |= set(pep0484.infer_return_types(self.function_context))
for r in returns:
check = flow_analysis.reachability_check(self, funcdef, r)
@@ -304,26 +335,36 @@ class FunctionExecutionContext(context.TreeContext):
if check_yields:
types |= set(self._eval_yield(r))
else:
types |= self.eval_node(r.children[1])
try:
children = r.children
except AttributeError:
types.add(compiled.create(self.evaluator, None))
else:
types |= self.eval_node(children[1])
if check is flow_analysis.REACHABLE:
debug.dbg('Return reachable: %s', r)
break
return types
def _eval_yield(self, yield_expr):
if yield_expr.type == 'keyword':
# `yield` just yields None.
yield context.LazyKnownContext(compiled.create(self.evaluator, None))
return
node = yield_expr.children[1]
if node.type == 'yield_arg': # It must be a yield from.
yield_from_types = self.eval_node(node.children[1])
for lazy_context in iterable.py__iter__(self.evaluator, yield_from_types, node):
cn = ContextualizedNode(self, node.children[1])
for lazy_context in iterable.py__iter__(self.evaluator, cn.infer(), cn):
yield lazy_context
else:
yield context.LazyTreeContext(self, node)
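A quick illustration, as hypothetical user code, of the three yield shapes _eval_yield distinguishes:

def gen():
    yield              # bare keyword: infers to None
    yield 42           # plain expression: wrapped as a LazyTreeContext
    yield from [1, 2]  # 'yield_arg' child: iterated via iterable.py__iter__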
@recursion.execution_recursion_decorator(default=iter([]))
def get_yield_values(self):
for_parents = [(y, tree.search_ancestor(y, ('for_stmt', 'funcdef',
'while_stmt', 'if_stmt')))
for y in self.tree_node.yields]
for_parents = [(y, tree.search_ancestor(y, 'for_stmt', 'funcdef',
'while_stmt', 'if_stmt'))
for y in get_yield_exprs(self.evaluator, self.tree_node)]
# Calculate if the yields are placed within the same for loop.
yields_order = []
@@ -335,7 +376,7 @@ class FunctionExecutionContext(context.TreeContext):
if parent.type == 'suite':
parent = parent.parent
if for_stmt.type == 'for_stmt' and parent == self.tree_node \
and for_stmt.defines_one_name(): # Simplicity for now.
and parser_utils.for_stmt_defines_one_name(for_stmt): # Simplicity for now.
if for_stmt == last_for_stmt:
yields_order[-1][1].append(yield_)
else:
@@ -357,12 +398,12 @@ class FunctionExecutionContext(context.TreeContext):
for result in self._eval_yield(yield_):
yield result
else:
input_node = for_stmt.get_input_node()
for_types = self.eval_node(input_node)
ordered = iterable.py__iter__(evaluator, for_types, input_node)
input_node = for_stmt.get_testlist()
cn = ContextualizedNode(self, input_node)
ordered = iterable.py__iter__(evaluator, cn.infer(), cn)
ordered = list(ordered)
for lazy_context in ordered:
dct = {str(for_stmt.children[1]): lazy_context.infer()}
dct = {str(for_stmt.children[1].value): lazy_context.infer()}
with helpers.predefine_names(self, for_stmt, dct):
for yield_in_same_for_stmt in yields:
for result in self._eval_yield(yield_in_same_for_stmt):
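The loop above evaluates each yield with the loop name predefined; a sketch of the user code shape this targets:

def pairs():
    # The yield sits directly inside a for loop of this function and the
    # loop defines exactly one name, so each inferred element of the
    # literal list is predefined as `c` before the yield is evaluated.
    for c in ['a', 'b']:
        yield (c, c.upper())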
@@ -373,20 +414,9 @@ class FunctionExecutionContext(context.TreeContext):
until_position=until_position,
origin_scope=origin_scope)
@memoize_default(default=NO_DEFAULT)
@evaluator_method_cache()
def get_params(self):
return param.get_params(self.evaluator, self.parent_context, self.tree_node, self.var_args)
class AnonymousFunctionExecution(FunctionExecutionContext):
def __init__(self, evaluator, parent_context, function_context):
super(AnonymousFunctionExecution, self).__init__(
evaluator, parent_context, function_context, var_args=None)
@memoize_default(default=NO_DEFAULT)
def get_params(self):
# We need to do a dynamic search here.
return search_params(self.evaluator, self.parent_context, self.tree_node)
return self.var_args.get_params(self)
class ModuleAttributeName(AbstractNameDefinition):
@@ -405,13 +435,26 @@ class ModuleAttributeName(AbstractNameDefinition):
)
class ModuleName(ContextNameMixin, AbstractNameDefinition):
start_pos = 1, 0
def __init__(self, context, name):
self._context = context
self._name = name
@property
def string_name(self):
return self._name
class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
api_type = 'module'
parent_context = None
def __init__(self, evaluator, module_node):
def __init__(self, evaluator, module_node, path):
super(ModuleContext, self).__init__(evaluator, parent_context=None)
self.tree_node = module_node
self._path = path
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield ParserTreeFilter(
@@ -429,12 +472,12 @@ class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
# I'm not sure if the star import cache is really that effective anymore
# with all the other really fast import caches. Recheck. Also we would need
# to push the star imports into Evaluator.modules, if we reenable this.
@memoize_default([])
@evaluator_method_cache([])
def star_imports(self):
modules = []
for i in self.tree_node.imports:
for i in self.tree_node.iter_imports():
if i.is_star_import():
name = i.star_import_name()
name = i.get_paths()[-1][-1]
new = imports.infer_import(self, name)
for module in new:
if isinstance(module, ModuleContext):
@@ -442,16 +485,27 @@ class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
modules += new
return modules
@memoize_default()
@evaluator_method_cache()
def _module_attributes_dict(self):
names = ['__file__', '__package__', '__doc__', '__name__']
# All the additional module attributes are strings.
return dict((n, ModuleAttributeName(self, n)) for n in names)
@property
@memoize_default()
def _string_name(self):
""" This is used for the goto functions. """
if self._path is None:
return '' # no path -> empty name
else:
sep = (re.escape(os.path.sep),) * 2
r = re.search(r'([^%s]*?)(%s__init__)?(\.py|\.so)?$' % sep, self._path)
# Remove PEP 3149 names
return re.sub(r'\.[a-z]+-\d{2}[mud]{0,3}$', '', r.group(1))
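To make the two regular expressions concrete, a standalone sketch of what _string_name computes for a few paths; the helper name is ours and the asserts assume POSIX separators:

import os
import re

def module_name_from_path(path):
    # Mirrors ModuleContext._string_name: strip a trailing __init__ and
    # the .py/.so suffix, then drop a PEP 3149 ABI tag if present.
    sep = (re.escape(os.path.sep),) * 2
    r = re.search(r'([^%s]*?)(%s__init__)?(\.py|\.so)?$' % sep, path)
    return re.sub(r'\.[a-z]+-\d{2}[mud]{0,3}$', '', r.group(1))

assert module_name_from_path('/usr/lib/python3.6/json/decoder.py') == 'decoder'
assert module_name_from_path('/usr/lib/python3.6/json/__init__.py') == 'json'
assert module_name_from_path('fastmath.cpython-36m.so') == 'fastmath'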
@property
@evaluator_method_cache()
def name(self):
return ContextName(self, self.tree_node.name)
return ModuleName(self, self._string_name)
def _get_init_directory(self):
"""
@@ -477,10 +531,10 @@ class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
In contrast to Python's __file__, this can be None.
"""
if self.tree_node.path is None:
if self._path is None:
return None
return os.path.abspath(self.tree_node.path)
return os.path.abspath(self._path)
def py__package__(self):
if self._get_init_directory() is None:
@@ -493,7 +547,7 @@ class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
init_path = self.py__file__()
if os.path.basename(init_path) == '__init__.py':
with open(init_path, 'rb') as f:
content = common.source_to_unicode(f.read())
content = python_bytes_to_unicode(f.read(), errors='replace')
# These strings mark namespace packages: the first one is used by
# ``pkgutil``, the second by ``pkg_resources``.
options = ('declare_namespace(__name__)', 'extend_path(__path__')
@@ -532,13 +586,13 @@ class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
else:
return self._py__path__
@memoize_default()
@evaluator_method_cache()
def _sub_modules_dict(self):
"""
Lists modules in the directory of this module (if this module is a
package).
"""
path = self.tree_node.path
path = self._path
names = {}
if path is not None and path.endswith(os.path.sep + '__init__.py'):
mods = pkgutil.iter_modules([os.path.dirname(path)])
@@ -557,3 +611,74 @@ class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
def py__class__(self):
return compiled.get_special_object(self.evaluator, 'MODULE_CLASS')
def __repr__(self):
return "<%s: %s@%s-%s>" % (
self.__class__.__name__, self._string_name,
self.tree_node.start_pos[0], self.tree_node.end_pos[0])
class ImplicitNSName(AbstractNameDefinition):
"""
Accessing names for implicit namespace packages should infer to nothing.
This object prevents Jedi from raising exceptions.
"""
def __init__(self, implicit_ns_context, string_name):
self.implicit_ns_context = implicit_ns_context
self.string_name = string_name
def infer(self):
return []
def get_root_context(self):
return self.implicit_ns_context
class ImplicitNamespaceContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
Provides support for implicit namespace packages
"""
api_type = 'module'
parent_context = None
def __init__(self, evaluator, fullname):
super(ImplicitNamespaceContext, self).__init__(evaluator, parent_context=None)
self.evaluator = evaluator
self.fullname = fullname
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield DictFilter(self._sub_modules_dict())
@property
@evaluator_method_cache()
def name(self):
string_name = self.py__package__().rpartition('.')[-1]
return ImplicitNSName(self, string_name)
def py__file__(self):
return None
def py__package__(self):
"""Return the fullname
"""
return self.fullname
@property
def py__path__(self):
return lambda: [self.paths]
@evaluator_method_cache()
def _sub_modules_dict(self):
names = {}
paths = self.paths
file_names = chain.from_iterable(os.listdir(path) for path in paths)
mods = [
file_name.rpartition('.')[0] if '.' in file_name else file_name
for file_name in file_names
if file_name != '__pycache__'
]
for name in mods:
names[name] = imports.SubModuleName(self, name)
return names
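To picture what this filter ends up exposing, a minimal standalone sketch of the same trimming logic, assuming an invented PEP 420 layout (the paths are hypothetical):

import os
from itertools import chain

# Two sys.path entries both contain a directory shared/ without an
# __init__.py, forming one implicit namespace package.
paths = ['/opt/a/shared', '/opt/b/shared']

file_names = chain.from_iterable(os.listdir(p) for p in paths)
mods = [name.rpartition('.')[0] if '.' in name else name
        for name in file_names
        if name != '__pycache__']
# With plugin_x.py and plugin_y.py in those directories, mods becomes
# ['plugin_x', 'plugin_y'], and each entry is wrapped in a SubModuleName.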


@@ -12,18 +12,35 @@ compiled module that returns the types for C-builtins.
import collections
import re
from jedi._compatibility import unicode
from jedi.common import unite
from jedi.evaluate import compiled
from jedi.evaluate import representation as er
from jedi.evaluate.instance import InstanceFunctionExecution, \
AbstractInstanceContext, CompiledInstance, BoundMethod
AbstractInstanceContext, CompiledInstance, BoundMethod, \
AnonymousInstanceFunctionExecution
from jedi.evaluate import iterable
from jedi.parser import ParserWithRecovery
from jedi import debug
from jedi.evaluate import precedence
from jedi.evaluate import param
from jedi.evaluate import analysis
from jedi.evaluate.context import LazyTreeContext, ContextualizedNode
# Now this is all part of fake tuples in Jedi. However, super doesn't work on
# __init__ and __new__ doesn't work at all, so adding this to namedtuples is
# just the easiest way.
_NAMEDTUPLE_INIT = """
def __init__(_cls, {arg_list}):
'A helper function for namedtuple.'
self.__iterable = ({arg_list})
def __iter__(self):
for i in self.__iterable:
yield i
def __getitem__(self, y):
return self.__iterable[y]
"""
class NotInStdLib(LookupError):
@@ -164,7 +181,8 @@ class SuperInstance(AbstractInstanceContext):
@argument_clinic('[type[, obj]], /', want_context=True)
def builtins_super(evaluator, types, objects, context):
# TODO make this able to detect multiple inheritance super
if isinstance(context, InstanceFunctionExecution):
if isinstance(context, (InstanceFunctionExecution,
AnonymousInstanceFunctionExecution)):
su = context.instance.py__class__().py__bases__()
return unite(context.execute_evaluated() for context in su[0].infer())
return set()
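A small runnable example of the pattern this supports (single inheritance only, per the TODO):

class Base:
    def greet(self):
        return 'hi'

class Child(Base):
    def greet(self):
        # Jedi sees the surrounding (anonymous) instance function
        # execution, takes the first base of the instance's class and
        # evaluates it, so `super().` completes Base's attributes.
        return super().greet().upper()

print(Child().greet())  # HI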
@@ -176,7 +194,11 @@ def builtins_reversed(evaluator, sequences, obj, arguments):
# want static analysis to work well. Therefore we need to generate the
# values again.
key, lazy_context = next(arguments.unpack())
ordered = list(iterable.py__iter__(evaluator, sequences, lazy_context.data))
cn = None
if isinstance(lazy_context, LazyTreeContext):
# TODO access private
cn = ContextualizedNode(lazy_context._context, lazy_context.data)
ordered = list(iterable.py__iter__(evaluator, sequences, cn))
rev = list(reversed(ordered))
# Repack iterator values and then run it the normal way. This is
@@ -215,11 +237,12 @@ def builtins_isinstance(evaluator, objects, types, arguments):
bool_results.add(any(cls in mro for cls in classes))
else:
_, lazy_context = list(arguments.unpack())[1]
node = lazy_context.data
message = 'TypeError: isinstance() arg 2 must be a ' \
'class, type, or tuple of classes and types, ' \
'not %s.' % cls_or_tup
analysis.add(cls_or_tup, 'type-error-isinstance', node, message)
if isinstance(lazy_context, LazyTreeContext):
node = lazy_context.data
message = 'TypeError: isinstance() arg 2 must be a ' \
'class, type, or tuple of classes and types, ' \
'not %s.' % cls_or_tup
analysis.add(lazy_context._context, 'type-error-isinstance', node, message)
return set(compiled.create(evaluator, x) for x in bool_results)
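The static message mirrors the runtime failure; a quick runnable reminder of the code shape that triggers the 'type-error-isinstance' warning:

try:
    isinstance(3, 'int')  # second argument is a str, not a class or tuple
except TypeError as e:
    print(e)  # isinstance() arg 2 must be a type or tuple of types...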
@@ -253,20 +276,24 @@ def collections_namedtuple(evaluator, obj, arguments):
else:
return set()
base = collections._class_template
base += _NAMEDTUPLE_INIT
# Build source
source = collections._class_template.format(
source = base.format(
typename=name,
field_names=fields,
field_names=tuple(fields),
num_fields=len(fields),
arg_list=', '.join(fields),
arg_list = repr(tuple(fields)).replace("'", "")[1:-1],
repr_fmt=', '.join(collections._repr_template.format(name=name) for name in fields),
field_defs='\n'.join(collections._field_template.format(index=index, name=name)
for index, name in enumerate(fields))
)
# Parse source
generated_class = ParserWithRecovery(evaluator.grammar, unicode(source)).module.subscopes[0]
return set([er.ClassContext(evaluator, generated_class, evaluator.BUILTINS)])
module = evaluator.grammar.parse(source)
generated_class = next(module.iter_classdefs())
parent_context = er.ModuleContext(evaluator, module, '')
return set([er.ClassContext(evaluator, generated_class, parent_context)])
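For reference, a sketch of how the pieces above combine, assuming a pre-3.7 Python where collections still exposes the private _class_template/_repr_template/_field_template attributes, and reusing _NAMEDTUPLE_INIT from above:

import collections

fields = ['x', 'y']
base = collections._class_template + _NAMEDTUPLE_INIT
source = base.format(
    typename='Point',
    field_names=tuple(fields),
    num_fields=len(fields),
    arg_list=repr(tuple(fields)).replace("'", "")[1:-1],  # -> "x, y"
    repr_fmt=', '.join(collections._repr_template.format(name=name)
                       for name in fields),
    field_defs='\n'.join(collections._field_template.format(index=index, name=name)
                         for index, name in enumerate(fields)),
)
# `source` is now a complete "class Point(tuple): ..." module; Jedi
# parses it and wraps the classdef in a ClassContext.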
@argument_clinic('first, /')


@@ -1,16 +1,16 @@
import glob
import os
import sys
import imp
from jedi.evaluate.site import addsitedir
from jedi._compatibility import exec_function, unicode
from jedi.parser import tree
from jedi.parser import ParserWithRecovery
from jedi.evaluate.cache import memoize_default
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate.compiled import CompiledObject
from jedi.evaluate.context import ContextualizedNode
from jedi import settings
from jedi import debug
from jedi import common
from jedi.evaluate.compiled import CompiledObject
from jedi.parser.utils import load_parser, save_parser
def get_venv_path(venv):
@@ -122,8 +122,8 @@ def _paths_from_assignment(module_context, expr_stmt):
from jedi.evaluate.iterable import py__iter__
from jedi.evaluate.precedence import is_string
types = module_context.create_context(expr_stmt).eval_node(expr_stmt)
for lazy_context in py__iter__(module_context.evaluator, types, expr_stmt):
cn = ContextualizedNode(module_context.create_context(expr_stmt), expr_stmt)
for lazy_context in py__iter__(module_context.evaluator, cn.infer(), cn):
for context in lazy_context.infer():
if is_string(context):
yield context.obj
@@ -156,10 +156,10 @@ def _check_module(module_context):
power = name.parent.parent
if power.type in ('power', 'atom_expr'):
c = power.children
if isinstance(c[0], tree.Name) and c[0].value == 'sys' \
if c[0].type == 'name' and c[0].value == 'sys' \
and c[1].type == 'trailer':
n = c[1].children[1]
if isinstance(n, tree.Name) and n.value == 'path':
if n.type == 'name' and n.value == 'path':
yield name, power
sys_path = list(module_context.evaluator.sys_path) # copy
@@ -167,26 +167,24 @@ def _check_module(module_context):
return sys_path
try:
possible_names = module_context.tree_node.used_names['path']
possible_names = module_context.tree_node.get_used_names()['path']
except KeyError:
# module.used_names is a MergedNamesDict whose __getitem__ never raises
# a KeyError, so this check is superfluous.
pass
else:
for name, power in get_sys_path_powers(possible_names):
stmt = name.get_definition()
expr_stmt = power.parent
if len(power.children) >= 4:
sys_path.extend(
_paths_from_list_modifications(
module_context.py__file__(), *power.children[2:4]
)
)
elif name.get_definition().type == 'expr_stmt':
sys_path.extend(_paths_from_assignment(module_context, stmt))
elif expr_stmt is not None and expr_stmt.type == 'expr_stmt':
sys_path.extend(_paths_from_assignment(module_context, expr_stmt))
return sys_path
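To ground the two branches, a small sample of (hypothetical) user code whose sys.path changes this detection picks up:

import sys

# A power node with enough trailers (len(power.children) >= 4):
# handled by _paths_from_list_modifications.
sys.path.insert(0, '/home/user/project/libs')

# An expr_stmt assigning to sys.path: handled by _paths_from_assignment.
sys.path = ['/home/user/project/src'] + sys.path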
@memoize_default(evaluator_is_first_arg=True, default=[])
@evaluator_function_cache(default=[])
def sys_path_with_modifications(evaluator, module_context):
path = module_context.py__file__()
if path is None:
@@ -203,34 +201,27 @@ def sys_path_with_modifications(evaluator, module_context):
result = _check_module(module_context)
result += _detect_django_path(path)
for buildout_script in _get_buildout_scripts(path):
for path in _get_paths_from_buildout_script(evaluator, buildout_script):
for buildout_script_path in _get_buildout_script_paths(path):
for path in _get_paths_from_buildout_script(evaluator, buildout_script_path):
buildout_script_paths.add(path)
# cleanup, back to old directory
os.chdir(curdir)
return list(result) + list(buildout_script_paths)
def _get_paths_from_buildout_script(evaluator, buildout_script):
def load(buildout_script):
try:
with open(buildout_script, 'rb') as f:
source = common.source_to_unicode(f.read())
except IOError:
debug.dbg('Error trying to read buildout_script: %s', buildout_script)
return
p = ParserWithRecovery(evaluator.grammar, source, buildout_script)
save_parser(buildout_script, p)
return p.module
cached = load_parser(buildout_script)
module_node = cached and cached.module or load(buildout_script)
if module_node is None:
def _get_paths_from_buildout_script(evaluator, buildout_script_path):
try:
module_node = evaluator.grammar.parse(
path=buildout_script_path,
cache=True,
cache_path=settings.cache_directory
)
except IOError:
debug.warning('Error trying to read buildout_script: %s', buildout_script_path)
return
from jedi.evaluate.representation import ModuleContext
for path in _check_module(ModuleContext(evaluator, module_node)):
for path in _check_module(ModuleContext(evaluator, module_node, buildout_script_path)):
yield path
@@ -262,7 +253,7 @@ def _detect_django_path(module_path):
return result
def _get_buildout_scripts(module_path):
def _get_buildout_script_paths(module_path):
"""
if there is a 'buildout.cfg' file in one of the parent directories of the
given module it will return a list of all files in the buildout bin
@@ -291,3 +282,33 @@ def _get_buildout_scripts(module_path):
debug.warning(unicode(e))
continue
return extra_module_paths
def dotted_path_in_sys_path(sys_path, module_path):
"""
Returns the dotted module path for a file below one of the sys.path entries.
"""
# First remove the suffix.
for suffix, _, _ in imp.get_suffixes():
if module_path.endswith(suffix):
module_path = module_path[:-len(suffix)]
break
else:
# There should always be a suffix in a valid Python file on the path.
return None
for p in sys_path:
if module_path.startswith(p):
rest = module_path[len(p):]
if rest.startswith(os.path.sep):
# The paths in sys.path usually don't end with a slash, so strip
# the leading separator from the remainder instead.
rest = rest[1:]
if rest:
split = rest.split(os.path.sep)
for string in split:
if not string or '.' in string:
return None
return '.'.join(split)
return None
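A usage sketch with invented paths (the exact suffix handling depends on what imp.get_suffixes() reports):

sys_path = ['/usr/lib/python3.6', '/home/user/project']
dotted_path_in_sys_path(sys_path, '/usr/lib/python3.6/json/decoder.py')
# -> 'json.decoder'
dotted_path_in_sys_path(sys_path, '/home/user/project/pkg/mod.py')
# -> 'pkg.mod'
dotted_path_in_sys_path(sys_path, '/somewhere/else/script.py')
# -> None, the file is not below any of the given entries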


@@ -1,355 +0,0 @@
"""
The ``Parser`` tries to convert the available Python code in an easy to read
format, something like an abstract syntax tree. The classes who represent this
tree, are sitting in the :mod:`jedi.parser.tree` module.
The Python module ``tokenize`` is a very important part in the ``Parser``,
because it splits the code into different words (tokens). Sometimes it looks a
bit messy. Sorry for that! You might ask now: "Why didn't you use the ``ast``
module for this?" Well, ``ast`` does a very good job understanding proper Python
code, but fails to work as soon as there's a single line of broken code.
There's one important optimization that needs to be known: Statements are not
being parsed completely. ``Statement`` is just a representation of the tokens
within the statement. This lowers memory usage and CPU time and reduces the
complexity of the ``Parser`` (there's another parser sitting inside
``Statement``, which produces ``Array`` and ``Call``).
"""
import os
import re
from jedi._compatibility import FileNotFoundError
from jedi.parser import tree as pt
from jedi.parser import tokenize
from jedi.parser.token import (DEDENT, INDENT, ENDMARKER, NEWLINE, NUMBER,
STRING, tok_name)
from jedi.parser.pgen2.pgen import generate_grammar
from jedi.parser.pgen2.parse import PgenParser
OPERATOR_KEYWORDS = 'and', 'for', 'if', 'else', 'in', 'is', 'lambda', 'not', 'or'
# Not used yet. In the future I intend to add something like KeywordStatement
STATEMENT_KEYWORDS = 'assert', 'del', 'global', 'nonlocal', 'raise', \
'return', 'yield', 'pass', 'continue', 'break'
_loaded_grammars = {}
class ParseError(Exception):
"""
Signals you that the code you fed the Parser was not correct Python code.
"""
def load_grammar(version='3.6'):
# For now we only support two different Python syntax versions: The latest
# Python 3 and Python 2. This may change.
if version in ('3.2', '3.3'):
version = '3.4'
elif version == '2.6':
version = '2.7'
file = 'grammar' + version + '.txt'
global _loaded_grammars
path = os.path.join(os.path.dirname(__file__), file)
try:
return _loaded_grammars[path]
except KeyError:
try:
return _loaded_grammars.setdefault(path, generate_grammar(path))
except FileNotFoundError:
# Just load the default if the file does not exist.
return load_grammar()
class ParserSyntaxError(object):
def __init__(self, message, position):
self.message = message
self.position = position
class Parser(object):
AST_MAPPING = {
'expr_stmt': pt.ExprStmt,
'classdef': pt.Class,
'funcdef': pt.Function,
'file_input': pt.Module,
'import_name': pt.ImportName,
'import_from': pt.ImportFrom,
'break_stmt': pt.KeywordStatement,
'continue_stmt': pt.KeywordStatement,
'return_stmt': pt.ReturnStmt,
'raise_stmt': pt.KeywordStatement,
'yield_expr': pt.YieldExpr,
'del_stmt': pt.KeywordStatement,
'pass_stmt': pt.KeywordStatement,
'global_stmt': pt.GlobalStmt,
'nonlocal_stmt': pt.KeywordStatement,
'print_stmt': pt.KeywordStatement,
'assert_stmt': pt.AssertStmt,
'if_stmt': pt.IfStmt,
'with_stmt': pt.WithStmt,
'for_stmt': pt.ForStmt,
'while_stmt': pt.WhileStmt,
'try_stmt': pt.TryStmt,
'comp_for': pt.CompFor,
'decorator': pt.Decorator,
'lambdef': pt.Lambda,
'old_lambdef': pt.Lambda,
'lambdef_nocond': pt.Lambda,
}
def __init__(self, grammar, source, start_symbol='file_input',
tokenizer=None, start_parsing=True):
# Todo Remove start_parsing (with False)
self._used_names = {}
self.source = source
self._added_newline = False
# The Python grammar needs a newline at the end of each statement.
if not source.endswith('\n') and start_symbol == 'file_input':
source += '\n'
self._added_newline = True
self._start_symbol = start_symbol
self._grammar = grammar
self._parsed = None
if start_parsing:
if tokenizer is None:
tokenizer = tokenize.source_tokens(source, use_exact_op_types=True)
self.parse(tokenizer)
def parse(self, tokenizer):
if self._parsed is not None:
return self._parsed
start_number = self._grammar.symbol2number[self._start_symbol]
self.pgen_parser = PgenParser(
self._grammar, self.convert_node, self.convert_leaf,
self.error_recovery, start_number
)
self._parsed = self.pgen_parser.parse(tokenizer)
if self._start_symbol == 'file_input' != self._parsed.type:
# If there's only one statement, we get back a non-module. That's
# not what we want, we want a module, so we add it here:
self._parsed = self.convert_node(self._grammar,
self._grammar.symbol2number['file_input'],
[self._parsed])
if self._added_newline:
self.remove_last_newline()
# The stack is empty now, we don't need it anymore.
del self.pgen_parser
return self._parsed
def get_parsed_node(self):
# TODO remove in favor of get_root_node
return self._parsed
def get_root_node(self):
return self._parsed
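For context, this is roughly how the module being removed here was driven; a sketch against the code as shown, not a supported API:

from jedi.parser import Parser, load_grammar

grammar = load_grammar('3.6')
parser = Parser(grammar, 'def f(x):\n    return x\n')  # parses immediately
module = parser.get_root_node()  # a pt.Module ('file_input') node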
def error_recovery(self, grammar, stack, arcs, typ, value, start_pos, prefix,
add_token_callback):
raise ParseError
def convert_node(self, grammar, type, children):
"""
Convert raw node information to a Node instance.
This is passed to the parser driver which calls it whenever a reduction of a
grammar rule produces a new complete node, so that the tree is built
strictly bottom-up.
"""
symbol = grammar.number2symbol[type]
try:
return Parser.AST_MAPPING[symbol](children)
except KeyError:
if symbol == 'suite':
# We don't want the INDENT/DEDENT in our parser tree. Those
# leaves are just cancer. They are virtual leaves and not real
# ones and therefore have pseudo start/end positions and no
# prefixes. Just ignore them.
children = [children[0]] + children[2:-1]
return pt.Node(symbol, children)
def convert_leaf(self, grammar, type, value, prefix, start_pos):
# print('leaf', repr(value), token.tok_name[type])
if type == tokenize.NAME:
if value in grammar.keywords:
return pt.Keyword(value, start_pos, prefix)
else:
name = pt.Name(value, start_pos, prefix)
# Keep a listing of all used names
arr = self._used_names.setdefault(name.value, [])
arr.append(name)
return name
elif type == STRING:
return pt.String(value, start_pos, prefix)
elif type == NUMBER:
return pt.Number(value, start_pos, prefix)
elif type == NEWLINE:
return pt.Newline(value, start_pos, prefix)
elif type == ENDMARKER:
return pt.EndMarker(value, start_pos, prefix)
else:
return pt.Operator(value, start_pos, prefix)
def remove_last_newline(self):
endmarker = self._parsed.children[-1]
# The newline is either in the endmarker as a prefix or the previous
# leaf as a newline token.
prefix = endmarker.prefix
if prefix.endswith('\n'):
endmarker.prefix = prefix = prefix[:-1]
last_end = 0
if '\n' not in prefix:
# Basically, if the last line doesn't end with a newline, we
# have to add the previous line's end position.
try:
last_end = endmarker.get_previous_leaf().end_pos[1]
except IndexError:
pass
last_line = re.sub('.*\n', '', prefix)
endmarker.start_pos = endmarker.line - 1, last_end + len(last_line)
else:
try:
newline = endmarker.get_previous_leaf()
except IndexError:
return # This means that the parser is empty.
assert newline.value.endswith('\n')
newline.value = newline.value[:-1]
endmarker.start_pos = \
newline.start_pos[0], newline.start_pos[1] + len(newline.value)
class ParserWithRecovery(Parser):
"""
This class is used to parse a Python file; it then divides it into a
class structure of different scopes.
:param grammar: The grammar object of pgen2. Loaded by load_grammar.
:param source: The codebase for the parser. Must be unicode.
:param module_path: The path of the module in the file system, may be None.
:type module_path: str
"""
def __init__(self, grammar, source, module_path=None, tokenizer=None,
start_parsing=True):
self.syntax_errors = []
self._omit_dedent_list = []
self._indent_counter = 0
self._module_path = module_path
# TODO do print absolute import detection here.
# try:
# del python_grammar_no_print_statement.keywords["print"]
# except KeyError:
# pass # Doesn't exist in the Python 3 grammar.
# if self.options["print_function"]:
# python_grammar = pygram.python_grammar_no_print_statement
# else:
super(ParserWithRecovery, self).__init__(
grammar, source,
tokenizer=tokenizer,
start_parsing=start_parsing
)
def parse(self, tokenizer):
root_node = super(ParserWithRecovery, self).parse(self._tokenize(tokenizer))
self.module = root_node
self.module.used_names = self._used_names
self.module.path = self._module_path
return root_node
def error_recovery(self, grammar, stack, arcs, typ, value, start_pos, prefix,
add_token_callback):
"""
This parser is written in a dynamic way, meaning that this parser
allows using different grammars (even non-Python). However, error
recovery is purely written for Python.
"""
def current_suite(stack):
# For now just discard everything that is not a suite or
# file_input, if we detect an error.
for index, (dfa, state, (type_, nodes)) in reversed(list(enumerate(stack))):
# `suite` can sometimes be only simple_stmt, not stmt.
symbol = grammar.number2symbol[type_]
if symbol == 'file_input':
break
elif symbol == 'suite' and len(nodes) > 1:
# suites without an indent in them get discarded.
break
elif symbol == 'simple_stmt' and len(nodes) > 1:
# simple_stmt can just be turned into a Node, if there are
# enough statements. Ignore the rest after that.
break
return index, symbol, nodes
index, symbol, nodes = current_suite(stack)
if symbol == 'simple_stmt':
index -= 2
(_, _, (type_, suite_nodes)) = stack[index]
symbol = grammar.number2symbol[type_]
suite_nodes.append(pt.Node(symbol, list(nodes)))
# Remove
nodes[:] = []
nodes = suite_nodes
stack[index]
# print('err', token.tok_name[typ], repr(value), start_pos, len(stack), index)
if self._stack_removal(grammar, stack, arcs, index + 1, value, start_pos):
add_token_callback(typ, value, start_pos, prefix)
else:
if typ == INDENT:
# For every deleted INDENT we have to delete a DEDENT as well.
# Otherwise the parser will get into trouble and DEDENT too early.
self._omit_dedent_list.append(self._indent_counter)
else:
error_leaf = pt.ErrorLeaf(tok_name[typ].lower(), value, start_pos, prefix)
stack[-1][2][1].append(error_leaf)
def _stack_removal(self, grammar, stack, arcs, start_index, value, start_pos):
failed_stack = []
found = False
all_nodes = []
for dfa, state, (typ, nodes) in stack[start_index:]:
if nodes:
found = True
if found:
symbol = grammar.number2symbol[typ]
failed_stack.append((symbol, nodes))
all_nodes += nodes
if failed_stack:
stack[start_index - 1][2][1].append(pt.ErrorNode(all_nodes))
stack[start_index:] = []
return failed_stack
def _tokenize(self, tokenizer):
for typ, value, start_pos, prefix in tokenizer:
# print(tokenize.tok_name[typ], repr(value), start_pos, repr(prefix))
if typ == DEDENT:
# We need to count indents, because if we just omit any DEDENT,
# we might omit them in the wrong place.
o = self._omit_dedent_list
if o and o[-1] == self._indent_counter:
o.pop()
continue
self._indent_counter -= 1
elif typ == INDENT:
self._indent_counter += 1
yield typ, value, start_pos, prefix
def __repr__(self):
return "<%s: %s>" % (type(self).__name__, self.module)


@@ -1,638 +0,0 @@
"""
Basically a parser that is faster, because it tries to parse only parts of
the code; if anything changes, it only reparses the changed parts.
It works with a simple diff in the beginning and will try to reuse old parser
fragments.
"""
import re
import difflib
from collections import namedtuple
from jedi._compatibility import use_metaclass
from jedi import settings
from jedi.common import splitlines
from jedi.parser import ParserWithRecovery
from jedi.parser.tree import EndMarker
from jedi.parser.utils import parser_cache
from jedi import debug
from jedi.parser.tokenize import (generate_tokens, NEWLINE, TokenInfo,
ENDMARKER, INDENT, DEDENT)
class CachedFastParser(type):
""" This is a metaclass for caching `FastParser`. """
def __call__(self, grammar, source, module_path=None):
pi = parser_cache.get(module_path, None)
if pi is None or not settings.fast_parser:
return ParserWithRecovery(grammar, source, module_path)
parser = pi.parser
d = DiffParser(parser)
new_lines = splitlines(source, keepends=True)
parser.module = parser._parsed = d.update(new_lines)
return parser
class FastParser(use_metaclass(CachedFastParser)):
pass
def _merge_used_names(base_dict, other_dict):
for key, names in other_dict.items():
base_dict.setdefault(key, []).extend(names)
def _get_last_line(node_or_leaf):
last_leaf = node_or_leaf.last_leaf()
if _ends_with_newline(last_leaf):
return last_leaf.start_pos[0]
else:
return last_leaf.end_pos[0]
def _ends_with_newline(leaf, suffix=''):
if leaf.type == 'error_leaf':
typ = leaf.original_type
else:
typ = leaf.type
return typ == 'newline' or suffix.endswith('\n')
def _flows_finished(grammar, stack):
"""
if, while, for and try might not be finished, because another part might
still be parsed.
"""
for dfa, newstate, (symbol_number, nodes) in stack:
if grammar.number2symbol[symbol_number] in ('if_stmt', 'while_stmt',
'for_stmt', 'try_stmt'):
return False
return True
def suite_or_file_input_is_valid(grammar, stack):
if not _flows_finished(grammar, stack):
return False
for dfa, newstate, (symbol_number, nodes) in reversed(stack):
if grammar.number2symbol[symbol_number] == 'suite':
# If only a newline is in the suite, the suite is not valid yet.
return len(nodes) > 1
# Not reaching a suite means that we're dealing with file_input levels
# where there's no need for a valid statement in it. It can also be empty.
return True
def _is_flow_node(node):
try:
value = node.children[0].value
except AttributeError:
return False
return value in ('if', 'for', 'while', 'try')
class _PositionUpdatingFinished(Exception):
pass
def _update_positions(nodes, line_offset, last_leaf):
for node in nodes:
try:
children = node.children
except AttributeError:
# Is a leaf
node.line += line_offset
if node is last_leaf:
raise _PositionUpdatingFinished
else:
_update_positions(children, line_offset, last_leaf)
class DiffParser(object):
def __init__(self, parser):
self._parser = parser
self._grammar = self._parser._grammar
self._module = parser.get_root_node()
def _reset(self):
self._copy_count = 0
self._parser_count = 0
self._copied_ranges = []
self._new_used_names = {}
self._nodes_stack = _NodesStack(self._module)
def update(self, lines_new):
'''
The algorithm works as follows:
Equal:
- Assure that the start is a newline, otherwise parse until we get
one.
- Copy from parsed_until_line + 1 to max(i2 + 1)
- Make sure that the indentation is correct (e.g. add DEDENT)
- Add old and change positions
Insert:
- Parse from parsed_until_line + 1 to min(j2 + 1), hopefully not
much more.
Returns the new module node.
'''
debug.speed('diff parser start')
self._parser_lines_new = lines_new
self._added_newline = False
if lines_new[-1] != '':
# The Python grammar needs a newline at the end of a file, but for
# everything else we keep working with lines_new here.
self._parser_lines_new = list(lines_new)
self._parser_lines_new[-1] += '\n'
self._added_newline = True
self._reset()
line_length = len(lines_new)
lines_old = splitlines(self._parser.source, keepends=True)
sm = difflib.SequenceMatcher(None, lines_old, self._parser_lines_new)
opcodes = sm.get_opcodes()
debug.speed('diff parser calculated')
debug.dbg('diff: line_lengths old: %s, new: %s' % (len(lines_old), line_length))
if len(opcodes) == 1 and opcodes[0][0] == 'equal':
self._copy_count = 1
return self._module
for operation, i1, i2, j1, j2 in opcodes:
debug.dbg('diff %s old[%s:%s] new[%s:%s]',
operation, i1 + 1, i2, j1 + 1, j2)
if j2 == line_length + int(self._added_newline):
# The empty part after the last newline is not relevant.
j2 -= 1
if operation == 'equal':
line_offset = j1 - i1
self._copy_from_old_parser(line_offset, i2, j2)
elif operation == 'replace':
self._parse(until_line=j2)
elif operation == 'insert':
self._parse(until_line=j2)
else:
assert operation == 'delete'
# With this action all change will finally be applied and we have a
# changed module.
self._nodes_stack.close()
self._cleanup()
if self._added_newline:
self._parser.remove_last_newline()
self._parser.source = ''.join(lines_new)
# Good for debugging.
if debug.debug_function:
self._enable_debugging(lines_old, lines_new)
last_pos = self._module.end_pos[0]
if last_pos != line_length:
current_lines = splitlines(self._module.get_code(), keepends=True)
diff = difflib.unified_diff(current_lines, lines_new)
raise Exception(
"There's an issue (%s != %s) with the diff parser. Please report:\n%s"
% (last_pos, line_length, ''.join(diff))
)
debug.speed('diff parser end')
return self._module
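The opcodes come straight from the standard library; a minimal standalone illustration of what the loop above iterates over:

import difflib

old = ['a = 1\n', 'b = 2\n', 'c = 3\n']
new = ['a = 1\n', 'b = 20\n', 'c = 3\n', 'd = 4\n']
for operation, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
    print(operation, old[i1:i2], new[j1:j2])
# equal   ['a = 1\n']  ['a = 1\n']
# replace ['b = 2\n']  ['b = 20\n']
# equal   ['c = 3\n']  ['c = 3\n']
# insert  []           ['d = 4\n']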
def _enable_debugging(self, lines_old, lines_new):
if self._module.get_code() != ''.join(lines_new):
debug.warning('parser issue:\n%s\n%s', repr(''.join(lines_old)),
repr(''.join(lines_new)))
def _copy_from_old_parser(self, line_offset, until_line_old, until_line_new):
copied_nodes = [None]
while until_line_new > self._nodes_stack.parsed_until_line:
parsed_until_line_old = self._nodes_stack.parsed_until_line - line_offset
line_stmt = self._get_old_line_stmt(parsed_until_line_old + 1)
if line_stmt is None:
# Parse 1 line at least. We don't need more, because we just
# want to get into a state where the old parser has statements
# again that can be copied (e.g. not lines within parentheses).
self._parse(self._nodes_stack.parsed_until_line + 1)
elif not copied_nodes:
# We have copied as much as possible (but definitely not too
# much). Therefore we just parse the rest.
# We might not reach the end, because there's a statement
# that is not finished.
self._parse(until_line_new)
else:
p_children = line_stmt.parent.children
index = p_children.index(line_stmt)
copied_nodes = self._nodes_stack.copy_nodes(
p_children[index:],
until_line_old,
line_offset
)
# Match all the nodes that are in the wanted range.
if copied_nodes:
self._copy_count += 1
from_ = copied_nodes[0].get_start_pos_of_prefix()[0] + line_offset
to = self._nodes_stack.parsed_until_line
self._copied_ranges.append((from_, to))
debug.dbg('diff actually copy %s to %s', from_, to)
def _get_old_line_stmt(self, old_line):
leaf = self._module.get_leaf_for_position((old_line, 0), include_prefixes=True)
if _ends_with_newline(leaf):
leaf = leaf.get_next_leaf()
if leaf.get_start_pos_of_prefix()[0] == old_line:
node = leaf
# TODO use leaf.get_definition one day when that one is working
# well.
while node.parent.type not in ('file_input', 'suite'):
node = node.parent
return node
# Must be on the same line. Otherwise we need to parse that bit.
return None
def _get_before_insertion_node(self):
if self._nodes_stack.is_empty():
return None
line = self._nodes_stack.parsed_until_line + 1
node = self._new_module.last_leaf()
while True:
parent = node.parent
if parent.type in ('suite', 'file_input'):
assert node.end_pos[0] <= line
assert node.end_pos[1] == 0 or '\n' in self._prefix
return node
node = parent
def _parse(self, until_line):
"""
Parses at least until the given line, but might just parse more until a
valid state is reached.
"""
while until_line > self._nodes_stack.parsed_until_line:
node = self._try_parse_part(until_line)
nodes = self._get_children_nodes(node)
#self._insert_nodes(nodes)
self._nodes_stack.add_parsed_nodes(nodes)
debug.dbg(
'parse part %s to %s (to %s in parser)',
nodes[0].get_start_pos_of_prefix()[0],
self._nodes_stack.parsed_until_line,
node.end_pos[0] - 1
)
_merge_used_names(
self._new_used_names,
node.used_names
)
def _get_children_nodes(self, node):
nodes = node.children
first_element = nodes[0]
# TODO this looks very strange...
if first_element.type == 'error_leaf' and \
first_element.original_type == 'indent':
assert False, str(nodes)
return nodes
def _try_parse_part(self, until_line):
"""
Sets up a normal parser that uses a specialized tokenizer to only parse
until a certain position (or a bit longer if the statement hasn't
ended).
"""
self._parser_count += 1
# TODO speed up, shouldn't copy the whole list all the time.
# memoryview?
parsed_until_line = self._nodes_stack.parsed_until_line
lines_after = self._parser_lines_new[parsed_until_line:]
#print('parse_content', parsed_until_line, lines_after, until_line)
tokenizer = self._diff_tokenize(
lines_after,
until_line,
line_offset=parsed_until_line
)
self._active_parser = ParserWithRecovery(
self._grammar,
source='\n',
start_parsing=False
)
return self._active_parser.parse(tokenizer=tokenizer)
def _cleanup(self):
"""Add the used names from the old parser to the new one."""
copied_line_numbers = set()
for l1, l2 in self._copied_ranges:
copied_line_numbers.update(range(l1, l2 + 1))
new_used_names = self._new_used_names
for key, names in self._module.used_names.items():
for name in names:
if name.line in copied_line_numbers:
new_used_names.setdefault(key, []).append(name)
self._module.used_names = new_used_names
def _diff_tokenize(self, lines, until_line, line_offset=0):
is_first_token = True
omitted_first_indent = False
indents = []
lines_iter = iter(lines)
tokens = generate_tokens(lambda: next(lines_iter, ''), use_exact_op_types=True)
stack = self._active_parser.pgen_parser.stack
for typ, string, start_pos, prefix in tokens:
start_pos = start_pos[0] + line_offset, start_pos[1]
if typ == INDENT:
indents.append(start_pos[1])
if is_first_token:
omitted_first_indent = True
# We want to get rid of indents that are only here because
# we only parse part of the file. These indents would only
# get parsed as error leafs, which doesn't make any sense.
is_first_token = False
continue
is_first_token = False
if typ == DEDENT:
indents.pop()
if omitted_first_indent and not indents:
# We are done here, only thing that can come now is an
# endmarker or another dedented code block.
typ, string, start_pos, prefix = next(tokens)
if '\n' in prefix:
prefix = re.sub(r'(?<=\n)[^\n]+$', '', prefix)
else:
prefix = ''
yield TokenInfo(ENDMARKER, '', (start_pos[0] + line_offset, 0), prefix)
break
elif typ == NEWLINE and start_pos[0] >= until_line:
yield TokenInfo(typ, string, start_pos, prefix)
# Check if the parser is actually in a valid suite state.
if suite_or_file_input_is_valid(self._grammar, stack):
start_pos = start_pos[0] + 1, 0
while len(indents) > int(omitted_first_indent):
indents.pop()
yield TokenInfo(DEDENT, '', start_pos, '')
yield TokenInfo(ENDMARKER, '', start_pos, '')
break
else:
continue
yield TokenInfo(typ, string, start_pos, prefix)
class _NodesStackNode(object):
ChildrenGroup = namedtuple('ChildrenGroup', 'children line_offset last_line_offset_leaf')
def __init__(self, tree_node, parent=None):
self.tree_node = tree_node
self.children_groups = []
self.parent = parent
def close(self):
children = []
for children_part, line_offset, last_line_offset_leaf in self.children_groups:
if line_offset != 0:
try:
_update_positions(
children_part, line_offset, last_line_offset_leaf)
except _PositionUpdatingFinished:
pass
children += children_part
self.tree_node.children = children
# Reset the parents
for node in children:
node.parent = self.tree_node
def add(self, children, line_offset=0, last_line_offset_leaf=None):
group = self.ChildrenGroup(children, line_offset, last_line_offset_leaf)
self.children_groups.append(group)
def get_last_line(self, suffix):
if not self.children_groups:
assert not self.parent
return 0
last_leaf = self.children_groups[-1].children[-1].last_leaf()
line = last_leaf.end_pos[0]
# Calculate the line offsets
line += self.children_groups[-1].line_offset
# Newlines end on the next line, which means that they would cover
# the next line. That line is not fully parsed at this point.
if _ends_with_newline(last_leaf, suffix):
line -= 1
line += suffix.count('\n')
return line
class _NodesStack(object):
endmarker_type = 'endmarker'
def __init__(self, module):
# Top of stack
self._tos = self._base_node = _NodesStackNode(module)
self._module = module
self._last_prefix = ''
self.prefix = ''
def is_empty(self):
return not self._base_node.children
@property
def parsed_until_line(self):
return self._tos.get_last_line(self.prefix)
def _get_insertion_node(self, indentation_node):
indentation = indentation_node.start_pos[1]
# find insertion node
node = self._tos
while True:
tree_node = node.tree_node
if tree_node.type == 'suite':
# A suite starts with NEWLINE, ...
node_indentation = tree_node.children[1].start_pos[1]
if indentation >= node_indentation: # Not a Dedent
# We might be at the most outer layer: modules. We
# don't want to depend on the first statement
# having the right indentation.
return node
elif tree_node.type == 'file_input':
return node
node = self._close_tos()
def _close_tos(self):
self._tos.close()
self._tos = self._tos.parent
return self._tos
def add_parsed_nodes(self, tree_nodes):
tree_nodes = self._remove_endmarker(tree_nodes)
if not tree_nodes:
return
assert tree_nodes[0].type != 'newline'
node = self._get_insertion_node(tree_nodes[0])
assert node.tree_node.type in ('suite', 'file_input')
node.add(tree_nodes)
self._update_tos(tree_nodes[-1])
def _remove_endmarker(self, tree_nodes):
"""
Helps cleaning up the tree nodes that get inserted.
"""
last_leaf = tree_nodes[-1].last_leaf()
is_endmarker = last_leaf.type == self.endmarker_type
self._last_prefix = ''
if is_endmarker:
try:
separation = last_leaf.prefix.rindex('\n')
except ValueError:
pass
else:
# Remove the whitespace part of the prefix after a newline.
# That is not relevant if parentheses were opened. Always parse
# until the end of a line.
last_leaf.prefix, self._last_prefix = \
last_leaf.prefix[:separation + 1], last_leaf.prefix[separation + 1:]
first_leaf = tree_nodes[0].first_leaf()
first_leaf.prefix = self.prefix + first_leaf.prefix
self.prefix = ''
if is_endmarker:
self.prefix = last_leaf.prefix
tree_nodes = tree_nodes[:-1]
return tree_nodes
def copy_nodes(self, tree_nodes, until_line, line_offset):
"""
Copies tree nodes from the old parser tree.
Returns the list of tree nodes that were copied.
"""
tos = self._get_insertion_node(tree_nodes[0])
new_nodes, self._tos = self._copy_nodes(tos, tree_nodes, until_line, line_offset)
return new_nodes
def _copy_nodes(self, tos, nodes, until_line, line_offset):
new_nodes = []
new_tos = tos
for node in nodes:
if node.type == 'endmarker':
# Endmarkers just distort all the checks below. Remove them.
break
if node.start_pos[0] > until_line:
break
# TODO this check might take a bit of time for large files. We
# might want to change this to do more intelligent guessing or
# binary search.
if _get_last_line(node) > until_line:
# We can split up functions and classes later.
if node.type in ('classdef', 'funcdef') and node.children[-1].type == 'suite':
new_nodes.append(node)
break
new_nodes.append(node)
if not new_nodes:
return [], tos
last_node = new_nodes[-1]
line_offset_index = -1
if last_node.type in ('classdef', 'funcdef'):
suite = last_node.children[-1]
if suite.type == 'suite':
suite_tos = _NodesStackNode(suite)
# Don't need to pass line_offset here, it's already done by the
# parent.
suite_nodes, recursive_tos = self._copy_nodes(
suite_tos, suite.children, until_line, line_offset)
if len(suite_nodes) < 2:
# A suite only with newline is not valid.
new_nodes.pop()
else:
suite_tos.parent = tos
new_tos = recursive_tos
line_offset_index = -2
elif (new_nodes[-1].type in ('error_leaf', 'error_node') or
_is_flow_node(new_nodes[-1])):
# Error leafs/nodes don't have a defined start/end. Error
# nodes might not end with a newline (e.g. if there's an
# open `(`). Therefore ignore all of them unless they are
# succeeded with valid parser state.
# If we copy flows at the end, they might be continued
# after the copy limit (in the new parser).
# In this while loop we try to remove until we find a newline.
new_nodes.pop()
while new_nodes:
last_node = new_nodes[-1]
if last_node.last_leaf().type == 'newline':
break
new_nodes.pop()
if new_nodes:
try:
last_line_offset_leaf = new_nodes[line_offset_index].last_leaf()
except IndexError:
line_offset = 0
# In this case we don't have to calculate an offset, because
# there's no children to be managed.
last_line_offset_leaf = None
tos.add(new_nodes, line_offset, last_line_offset_leaf)
return new_nodes, new_tos
def _update_tos(self, tree_node):
if tree_node.type in ('suite', 'file_input'):
self._tos = _NodesStackNode(tree_node, self._tos)
self._tos.add(list(tree_node.children))
self._update_tos(tree_node.children[-1])
elif tree_node.type in ('classdef', 'funcdef'):
self._update_tos(tree_node.children[-1])
def close(self):
while self._tos is not None:
self._close_tos()
# Add an endmarker.
try:
last_leaf = self._module.last_leaf()
end_pos = list(last_leaf.end_pos)
except IndexError:
end_pos = [1, 0]
lines = splitlines(self.prefix)
assert len(lines) > 0
if len(lines) == 1:
end_pos[1] += len(lines[0])
else:
end_pos[0] += len(lines) - 1
end_pos[1] = len(lines[-1])
endmarker = EndMarker('', tuple(end_pos), self.prefix + self._last_prefix)
endmarker.parent = self._module
self._module.children.append(endmarker)


@@ -1,152 +0,0 @@
# Grammar for 2to3. This grammar supports Python 2.x and 3.x.
# Note: Changing the grammar specified in this file will most likely
# require corresponding changes in the parser module
# (../Modules/parsermodule.c). If you can't make the changes to
# that module yourself, please co-ordinate the required changes
# with someone who can; ask around on python-dev for help. Fred
# Drake <fdrake@acm.org> will probably be listening there.
# NOTE WELL: You should also follow all the steps listed in PEP 306,
# "How to Change Python's Grammar"
# Start symbols for the grammar:
# file_input is a module or sequence of commands read from an input file;
# single_input is a single interactive statement;
# eval_input is the input for the eval() and input() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
('*' [tname] (',' tname ['=' test])* [',' '**' tname] | '**' tname)
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tname: NAME [':' test]
tfpdef: tname | '(' tfplist ')'
tfplist: tfpdef (',' tfpdef)* [',']
varargslist: ((vfpdef ['=' test] ',')*
('*' [vname] (',' vname ['=' test])* [',' '**' vname] | '**' vname)
| vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vname: NAME
vfpdef: vname | '(' vfplist ')'
vfplist: vfpdef (',' vfpdef)* [',']
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | exec_stmt | assert_stmt)
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal assignments, additional restrictions enforced by the interpreter
print_stmt: 'print' ( [ test (',' test)* [','] ] |
'>>' test [ (',' test)+ [','] ] )
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test [',' test [',' test]]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
exec_stmt: 'exec' expr ['in' test [',' test]]
assert_stmt: 'assert' test [',' test]
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
with_var: 'as' expr
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test [(',' | 'as') test]]
# Edit by David Halter: The stmt is now optional. This reflects how Jedi allows
# classes and functions to be empty, which is beneficial for autocompletion.
suite: simple_stmt | NEWLINE INDENT stmt* DEDENT
# Backward compatibility cruft to support:
# [ x for x in lambda: True, lambda: False if x() ]
# even while also allowing:
# lambda x: 5 if x else 2
# (But not a mix of the two)
testlist_safe: old_test [(',' old_test)+ [',']]
old_test: or_test | old_lambdef
old_lambdef: 'lambda' [varargslist] ':' old_test
test: or_test ['if' or_test 'else' test] | lambdef
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
'`' testlist1 '`' |
NAME | NUMBER | STRING+ | '.' '.' '.')
# Modification by David Halter, remove `testlist_gexp` and `listmaker`
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
lambdef: 'lambda' [varargslist] ':' test
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
# Modification by David Halter, dictsetmaker -> dictorsetmaker (so that it's
# the same as in the 3.4 grammar).
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: (argument ',')* (argument [',']
|'*' test (',' argument)* [',' '**' test]
|'**' test)
argument: test [comp_for] | test '=' test # Really [keyword '='] test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' testlist_safe [comp_iter]
comp_if: 'if' old_test [comp_iter]
testlist1: test (',' test)*
# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME
yield_expr: 'yield' [testlist]


@@ -1,135 +0,0 @@
# Grammar for Python
# Note: Changing the grammar specified in this file will most likely
# require corresponding changes in the parser module
# (../Modules/parsermodule.c). If you can't make the changes to
# that module yourself, please co-ordinate the required changes
# with someone who can; ask around on python-dev for help. Fred
# Drake <fdrake@acm.org> will probably be listening there.
# NOTE WELL: You should also follow all the steps listed in PEP 306,
# "How to Change Python's Grammar"
# Start symbols for the grammar:
# single_input is a single interactive statement;
# file_input is a module or sequence of commands read from an input file;
# eval_input is the input for the eval() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [','
['*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef]]
| '*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef)
tfpdef: NAME [':' test]
varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [','
['*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef]]
| '*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef)
vfpdef: NAME
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal assignments, additional restrictions enforced by the interpreter
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test ['as' NAME]]
# Edit by David Halter: The stmt is now optional. This reflects how Jedi allows
# classes and functions to be empty, which is beneficial for autocompletion.
suite: simple_stmt | NEWLINE INDENT stmt* DEDENT
test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
# <> isn't actually a valid comparison operator in Python. It's here for the
# sake of a __future__ import described in PEP 401
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
(test (comp_for | (',' test)* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: (argument ',')* (argument [',']
|'*' test (',' argument)* [',' '**' test]
|'**' test)
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
argument: test [comp_for] | test '=' test # Really [keyword '='] test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
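Compared to the Python 2 grammar above, this one adds ``yield from`` through the yield_arg rule. A small sketch via parso (again an assumption, since these files predate the split; assumes a release that still ships this grammar version):

import parso

tree = parso.parse("def gen():\n    yield from range(3)\n", version="3.4")
func = next(tree.iter_funcdefs())
print(func.is_generator())   # True -- the yield_expr is found in the suite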

View File

@@ -1,154 +0,0 @@
# Grammar for Python
# Note: Changing the grammar specified in this file will most likely
# require corresponding changes in the parser module
# (../Modules/parsermodule.c). If you can't make the changes to
# that module yourself, please co-ordinate the required changes
# with someone who can; ask around on python-dev for help. Fred
# Drake <fdrake@acm.org> will probably be listening there.
# NOTE WELL: You should also follow all the steps listed at
# https://docs.python.org/devguide/grammar.html
# Start symbols for the grammar:
# single_input is a single interactive statement;
# file_input is a module or sequence of commands read from an input file;
# eval_input is the input for the eval() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
# NOTE: Reinoud Elhorst, using ASYNC/AWAIT keywords instead of tokens
# skipping python3.5 compatibility, in favour of 3.7 solution
async_funcdef: 'async' funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [','
['*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef]]
| '*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef)
tfpdef: NAME [':' test]
varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [','
['*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef]]
| '*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef)
vfpdef: NAME
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal assignments, additional restrictions enforced by the interpreter
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: 'async' (funcdef | with_stmt | for_stmt)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test ['as' NAME]]
# Edit by David Halter: The stmt is now optional. This reflects how Jedi allows
# classes and functions to be empty, which is beneficial for autocompletion.
suite: simple_stmt | NEWLINE INDENT stmt* DEDENT
test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
# <> isn't actually a valid comparison operator in Python. It's here for the
# sake of a __future__ import described in PEP 401 (which really works :-)
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom_expr ['**' factor]
atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( ((test ':' test | '**' expr)
(comp_for | (',' (test ':' test | '**' expr))* [','])) |
((test | star_expr)
(comp_for | (',' (test | star_expr))* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: argument (',' argument)* [',']
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
# "test '=' test" is really "keyword '=' test", but we have no such token.
# These need to be in a single rule to avoid grammar that is ambiguous
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
test '=' test |
'**' test |
'*' test )
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
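The notable additions in this grammar are the '@'/'@=' matrix-multiplication operators (see augassign and term) plus the async/await rules. A hedged parso sketch showing that '@' binds exactly like the other term operators:

import parso

tree = parso.parse("c = a @ b\n", version="3.5")
rhs = tree.children[0].children[0].get_rhs()
print(rhs.type)   # 'term' -- same node type as 'a * b' or 'a // b'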

View File

@@ -1,161 +0,0 @@
# Grammar for Python
# Note: Changing the grammar specified in this file will most likely
# require corresponding changes in the parser module
# (../Modules/parsermodule.c). If you can't make the changes to
# that module yourself, please co-ordinate the required changes
# with someone who can; ask around on python-dev for help. Fred
# Drake <fdrake@acm.org> will probably be listening there.
# NOTE WELL: You should also follow all the steps listed at
# https://docs.python.org/devguide/grammar.html
# Start symbols for the grammar:
# file_input is a module or sequence of commands read from an input file;
# single_input is a single interactive statement;
# eval_input is the input for the eval() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
# NOTE: Francisco Souza/Reinoud Elhorst, using ASYNC/'await' keywords instead of
# tokens, skipping python3.5+ compatibility, in favour of 3.7 solution
async_funcdef: 'async' funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [
'*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [',']]]
| '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [','])
tfpdef: NAME [':' test]
varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [
'*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']]]
| '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']
)
vfpdef: NAME
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
annassign: ':' test ['=' test]
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal and annotated assignments, additional restrictions enforced by the interpreter
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: 'async' (funcdef | with_stmt | for_stmt)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test ['as' NAME]]
# Edit by Francisco Souza/David Halter: The stmt is now optional. This reflects
# how Jedi allows classes and functions to be empty, which is beneficial for
# autocompletion.
suite: simple_stmt | NEWLINE INDENT stmt* DEDENT
test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
# <> isn't actually a valid comparison operator in Python. It's here for the
# sake of a __future__ import described in PEP 401 (which really works :-)
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom_expr ['**' factor]
atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( ((test ':' test | '**' expr)
(comp_for | (',' (test ':' test | '**' expr))* [','])) |
((test | star_expr)
(comp_for | (',' (test | star_expr))* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: argument (',' argument)* [',']
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
# "test '=' test" is really "keyword '=' test", but we have no such token.
# These need to be in a single rule to avoid grammar that is ambiguous
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
test '=' test |
'**' test |
'*' test )
comp_iter: comp_for | comp_if
comp_for: ['async'] 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
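New in this grammar are annassign (variable annotations, PEP 526), trailing commas in typedargslist/varargslist, and the optional 'async' in comp_for. A minimal parso sketch of the annassign node:

import parso

tree = parso.parse("x: int = 1\n", version="3.6")
expr_stmt = tree.children[0].children[0]
print(expr_stmt.children[1].type)   # 'annassign' -- carries ': int = 1'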

View File

@@ -1,8 +0,0 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.

View File

@@ -1,125 +0,0 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.
"""This module defines the data structures used to represent a grammar.
These are a bit arcane because they are derived from the data
structures used by Python's 'pgen' parser generator.
There's also a table here mapping operators to their names in the
token module; the Python tokenize module reports all operators as the
fallback token code OP, but the parser needs the actual token code.
"""
# Python imports
import pickle
class Grammar(object):
"""Pgen parsing tables conversion class.
Once initialized, this class supplies the grammar tables for the
parsing engine implemented by parse.py. The parsing engine
accesses the instance variables directly. The class here does not
provide initialization of the tables; several subclasses exist to
do this (see the conv and pgen modules).
The load() method reads the tables from a pickle file, which is
much faster than the other ways offered by subclasses. The pickle
file is written by calling dump() (after loading the grammar
tables using a subclass). The report() method prints a readable
representation of the tables to stdout, for debugging.
The instance variables are as follows:
symbol2number -- a dict mapping symbol names to numbers. Symbol
numbers are always 256 or higher, to distinguish
them from token numbers, which are between 0 and
255 (inclusive).
number2symbol -- a dict mapping numbers to symbol names;
these two are each other's inverse.
states -- a list of DFAs, where each DFA is a list of
states, each state is a list of arcs, and each
arc is a (i, j) pair where i is a label and j is
a state number. The DFA number is the index into
this list. (This name is slightly confusing.)
Final states are represented by a special arc of
the form (0, j) where j is its own state number.
dfas -- a dict mapping symbol numbers to (DFA, first)
pairs, where DFA is an item from the states list
above, and first is a set of tokens that can
begin this grammar rule (represented by a dict
whose values are always 1).
labels -- a list of (x, y) pairs where x is either a token
number or a symbol number, and y is either None
or a string; the strings are keywords. The label
number is the index in this list; label numbers
are used to mark state transitions (arcs) in the
DFAs.
start -- the number of the grammar's start symbol.
keywords -- a dict mapping keyword strings to arc labels.
tokens -- a dict mapping token numbers to arc labels.
"""
def __init__(self):
self.symbol2number = {}
self.number2symbol = {}
self.states = []
self.dfas = {}
self.labels = [(0, "EMPTY")]
self.keywords = {}
self.tokens = {}
self.symbol2label = {}
self.start = 256
def dump(self, filename):
"""Dump the grammar tables to a pickle file."""
with open(filename, "wb") as f:
pickle.dump(self.__dict__, f, 2)
def load(self, filename):
"""Load the grammar tables from a pickle file."""
with open(filename, "rb") as f:
d = pickle.load(f)
self.__dict__.update(d)
def copy(self):
"""
Copy the grammar.
"""
new = self.__class__()
for dict_attr in ("symbol2number", "number2symbol", "dfas", "keywords",
"tokens", "symbol2label"):
setattr(new, dict_attr, getattr(self, dict_attr).copy())
new.labels = self.labels[:]
new.states = self.states[:]
new.start = self.start
return new
def report(self):
"""Dump the grammar tables to standard output, for debugging."""
from pprint import pprint
print("s2n")
pprint(self.symbol2number)
print("n2s")
pprint(self.number2symbol)
print("states")
pprint(self.states)
print("dfas")
pprint(self.dfas)
print("labels")
pprint(self.labels)
print("start", self.start)
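The dump()/load() pair above is the entire persistence story: the instance __dict__ round-trips through pickle. A minimal sketch (the import path is an assumption based on the imports seen elsewhere in this diff):

from jedi.parser.pgen2 import grammar

g = grammar.Grammar()
g.dump("grammar.pickle")    # pickles symbol2number, dfas, labels, ...

g2 = grammar.Grammar()
g2.load("grammar.pickle")   # restores the same tables
assert g2.start == g.start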

View File

@@ -1,213 +0,0 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.
"""
Parser engine for the grammar tables generated by pgen.
The grammar table must be loaded first.
See Parser/parser.c in the Python distribution for additional info on
how this parsing engine works.
"""
# Local imports
from jedi.parser import tokenize
class InternalParseError(Exception):
"""
Exception to signal the parser is stuck and error recovery didn't help.
Basically this shouldn't happen. It's a sign that something is really
wrong.
"""
def __init__(self, msg, type, value, start_pos):
Exception.__init__(self, "%s: type=%r, value=%r, start_pos=%r" %
(msg, tokenize.tok_name[type], value, start_pos))
self.msg = msg
self.type = type
self.value = value
self.start_pos = start_pos
def token_to_ilabel(grammar, type_, value):
# Map from token to label
if type_ == tokenize.NAME:
# Check for reserved words (keywords)
try:
return grammar.keywords[value]
except KeyError:
pass
try:
return grammar.tokens[type_]
except KeyError:
return None
class PgenParser(object):
"""Parser engine.
The proper usage sequence is:
p = PgenParser(grammar, convert_node, convert_leaf, error_recovery, start)
<for each input token>:
if p.addtoken(...): # parse a token
break
root = p.rootnode # root of abstract syntax tree
A PgenParser instance is set up by its constructor and parses one token sequence.
A Parser instance contains state pertaining to the current token
sequence, and should not be used concurrently by different threads
to parse separate token sequences.
See the tokenize module for how to get input tokens by tokenizing a file
or string.
Parsing is complete when addtoken() returns True; the root of the
abstract syntax tree can then be retrieved from the rootnode
instance variable. When a syntax error occurs, the error_recovery()
callback is invoked; if it cannot recover either, an InternalParseError
is raised and the parser cannot be used further.
"""
def __init__(self, grammar, convert_node, convert_leaf, error_recovery, start):
"""Constructor.
The grammar argument is a grammar.Grammar instance; see the
grammar module for more information.
After construction the parser is ready; feed tokens via parse() or
addtoken().
The convert_node and convert_leaf arguments are functions mapping
concrete syntax tree nodes and leaves to abstract syntax tree nodes.
convert_node is called with the grammar (a grammar.Grammar instance),
the node type and the list of children; convert_leaf is called with
the grammar, the token type, value, prefix and start position. The
syntax tree is converted from the bottom up.
A concrete syntax tree node is a (type, nodes) tuple, where
type is the node type (a token or symbol number) and nodes
is a list of children for symbols, and None for tokens.
An abstract syntax tree node may be anything; this is entirely
up to the converter function.
"""
self.grammar = grammar
self.convert_node = convert_node
self.convert_leaf = convert_leaf
# Each stack entry is a tuple: (dfa, state, node).
# A node is a tuple: (type, children),
# where children is a list of nodes or None
newnode = (start, [])
stackentry = (self.grammar.dfas[start], 0, newnode)
self.stack = [stackentry]
self.rootnode = None
self.error_recovery = error_recovery
def parse(self, tokenizer):
for type_, value, start_pos, prefix in tokenizer:
if self.addtoken(type_, value, start_pos, prefix):
break
else:
# We never broke out -- EOF is too soon -- Unfinished statement.
# However, the error recovery might have added the token again; if
# the stack is empty, we're fine.
if self.stack:
raise InternalParseError("incomplete input", type_, value, start_pos)
return self.rootnode
def addtoken(self, type_, value, start_pos, prefix):
"""Add a token; return True if this is the end of the program."""
ilabel = token_to_ilabel(self.grammar, type_, value)
# Loop until the token is shifted; may raise exceptions
while True:
dfa, state, node = self.stack[-1]
states, first = dfa
arcs = states[state]
# Look for a state with this label
for i, newstate in arcs:
t, v = self.grammar.labels[i]
if ilabel == i:
# Look it up in the list of labels
assert t < 256
# Shift a token; we're done with it
self.shift(type_, value, newstate, prefix, start_pos)
# Pop while we are in an accept-only state
state = newstate
while states[state] == [(0, state)]:
self.pop()
if not self.stack:
# Done parsing!
return True
dfa, state, node = self.stack[-1]
states, first = dfa
# Done with this token
return False
elif t >= 256:
# See if it's a symbol and if we're in its first set
itsdfa = self.grammar.dfas[t]
itsstates, itsfirst = itsdfa
if ilabel in itsfirst:
# Push a symbol
self.push(t, itsdfa, newstate)
break # To continue the outer while loop
else:
if (0, state) in arcs:
# An accepting state, pop it and try something else
self.pop()
if not self.stack:
# Done parsing, but another token is input
raise InternalParseError("too much input", type_, value, start_pos)
else:
self.error_recovery(self.grammar, self.stack, arcs, type_,
value, start_pos, prefix, self.addtoken)
break
def shift(self, type_, value, newstate, prefix, start_pos):
"""Shift a token. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = self.convert_leaf(self.grammar, type_, value, prefix, start_pos)
node[-1].append(newnode)
self.stack[-1] = (dfa, newstate, node)
def push(self, type_, newdfa, newstate):
"""Push a nonterminal. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = (type_, [])
self.stack[-1] = (dfa, newstate, node)
self.stack.append((newdfa, 0, newnode))
def pop(self):
"""Pop a nonterminal. (Internal)"""
popdfa, popstate, (type_, children) = self.stack.pop()
# If there's exactly one child, return that child instead of creating a
# new node. We still create expr_stmt and file_input though, because a
# lot of Jedi depends on its logic.
if len(children) == 1:
newnode = children[0]
else:
newnode = self.convert_node(self.grammar, type_, children)
try:
# Equal to:
# dfa, state, node = self.stack[-1]
# symbol, children = node
self.stack[-1][2][1].append(newnode)
except IndexError:
# Stack is empty, set the rootnode.
self.rootnode = newnode
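Tying it together, a hedged sketch of the usage sequence from the class docstring; the callbacks are illustrative stand-ins for the real ones in jedi's parser, and `grammar` is assumed to come from pgen's generate_grammar():

def convert_leaf(grammar, type_, value, prefix, start_pos):
    return (type_, value)          # keep leaves as plain tuples

def convert_node(grammar, type_, children):
    return (type_, children)

def error_recovery(grammar, stack, arcs, type_, value, start_pos, prefix,
                   add_token_callback):
    raise InternalParseError("no recovery in this sketch",
                             type_, value, start_pos)

parser = PgenParser(grammar, convert_node, convert_leaf, error_recovery,
                    grammar.start)
root = parser.parse(tokenize.source_tokens("x = 1\n"))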

View File

@@ -1,394 +0,0 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.
# Pgen imports
from . import grammar
from jedi.parser import token
from jedi.parser import tokenize
class ParserGenerator(object):
def __init__(self, filename, stream=None):
close_stream = None
if stream is None:
stream = open(filename)
close_stream = stream.close
self.filename = filename
self.stream = stream
self.generator = tokenize.generate_tokens(stream.readline)
self.gettoken() # Initialize lookahead
self.dfas, self.startsymbol = self.parse()
if close_stream is not None:
close_stream()
self.first = {} # map from symbol name to set of tokens
self.addfirstsets()
def make_grammar(self):
c = grammar.Grammar()
names = list(self.dfas.keys())
names.sort()
names.remove(self.startsymbol)
names.insert(0, self.startsymbol)
for name in names:
i = 256 + len(c.symbol2number)
c.symbol2number[name] = i
c.number2symbol[i] = name
for name in names:
dfa = self.dfas[name]
states = []
for state in dfa:
arcs = []
for label, next in state.arcs.items():
arcs.append((self.make_label(c, label), dfa.index(next)))
if state.isfinal:
arcs.append((0, dfa.index(state)))
states.append(arcs)
c.states.append(states)
c.dfas[c.symbol2number[name]] = (states, self.make_first(c, name))
c.start = c.symbol2number[self.startsymbol]
return c
def make_first(self, c, name):
rawfirst = self.first[name]
first = {}
for label in rawfirst:
ilabel = self.make_label(c, label)
##assert ilabel not in first # XXX failed on <> ... !=
first[ilabel] = 1
return first
def make_label(self, c, label):
# XXX Maybe this should be a method on a subclass of converter?
ilabel = len(c.labels)
if label[0].isalpha():
# Either a symbol name or a named token
if label in c.symbol2number:
# A symbol name (a non-terminal)
if label in c.symbol2label:
return c.symbol2label[label]
else:
c.labels.append((c.symbol2number[label], None))
c.symbol2label[label] = ilabel
return ilabel
else:
# A named token (NAME, NUMBER, STRING)
itoken = getattr(token, label, None)
assert isinstance(itoken, int), label
assert itoken in token.tok_name, label
if itoken in c.tokens:
return c.tokens[itoken]
else:
c.labels.append((itoken, None))
c.tokens[itoken] = ilabel
return ilabel
else:
# Either a keyword or an operator
assert label[0] in ('"', "'"), label
value = eval(label)
if value[0].isalpha():
# A keyword
if value in c.keywords:
return c.keywords[value]
else:
c.labels.append((token.NAME, value))
c.keywords[value] = ilabel
return ilabel
else:
# An operator (any non-numeric token)
itoken = token.opmap[value] # Fails if unknown token
if itoken in c.tokens:
return c.tokens[itoken]
else:
c.labels.append((itoken, None))
c.tokens[itoken] = ilabel
return ilabel
def addfirstsets(self):
names = list(self.dfas.keys())
names.sort()
for name in names:
if name not in self.first:
self.calcfirst(name)
#print name, self.first[name].keys()
def calcfirst(self, name):
dfa = self.dfas[name]
self.first[name] = None # dummy to detect left recursion
state = dfa[0]
totalset = {}
overlapcheck = {}
for label, next in state.arcs.items():
if label in self.dfas:
if label in self.first:
fset = self.first[label]
if fset is None:
raise ValueError("recursion for rule %r" % name)
else:
self.calcfirst(label)
fset = self.first[label]
totalset.update(fset)
overlapcheck[label] = fset
else:
totalset[label] = 1
overlapcheck[label] = {label: 1}
inverse = {}
for label, itsfirst in overlapcheck.items():
for symbol in itsfirst:
if symbol in inverse:
raise ValueError("rule %s is ambiguous; %s is in the"
" first sets of %s as well as %s" %
(name, symbol, label, inverse[symbol]))
inverse[symbol] = label
self.first[name] = totalset
def parse(self):
dfas = {}
startsymbol = None
# MSTART: (NEWLINE | RULE)* ENDMARKER
while self.type != token.ENDMARKER:
while self.type == token.NEWLINE:
self.gettoken()
# RULE: NAME ':' RHS NEWLINE
name = self.expect(token.NAME)
self.expect(token.OP, ":")
a, z = self.parse_rhs()
self.expect(token.NEWLINE)
#self.dump_nfa(name, a, z)
dfa = self.make_dfa(a, z)
#self.dump_dfa(name, dfa)
# oldlen = len(dfa)
self.simplify_dfa(dfa)
# newlen = len(dfa)
dfas[name] = dfa
#print name, oldlen, newlen
if startsymbol is None:
startsymbol = name
return dfas, startsymbol
def make_dfa(self, start, finish):
# To turn an NFA into a DFA, we define the states of the DFA
# to correspond to *sets* of states of the NFA. Then do some
# state reduction. Let's represent sets as dicts with 1 for
# values.
assert isinstance(start, NFAState)
assert isinstance(finish, NFAState)
def closure(state):
base = {}
addclosure(state, base)
return base
def addclosure(state, base):
assert isinstance(state, NFAState)
if state in base:
return
base[state] = 1
for label, next in state.arcs:
if label is None:
addclosure(next, base)
states = [DFAState(closure(start), finish)]
for state in states: # NB states grows while we're iterating
arcs = {}
for nfastate in state.nfaset:
for label, next in nfastate.arcs:
if label is not None:
addclosure(next, arcs.setdefault(label, {}))
for label, nfaset in arcs.items():
for st in states:
if st.nfaset == nfaset:
break
else:
st = DFAState(nfaset, finish)
states.append(st)
state.addarc(st, label)
return states # List of DFAState instances; first one is start
def dump_nfa(self, name, start, finish):
print("Dump of NFA for", name)
todo = [start]
for i, state in enumerate(todo):
print(" State", i, state is finish and "(final)" or "")
for label, next in state.arcs:
if next in todo:
j = todo.index(next)
else:
j = len(todo)
todo.append(next)
if label is None:
print(" -> %d" % j)
else:
print(" %s -> %d" % (label, j))
def dump_dfa(self, name, dfa):
print("Dump of DFA for", name)
for i, state in enumerate(dfa):
print(" State", i, state.isfinal and "(final)" or "")
for label, next in state.arcs.items():
print(" %s -> %d" % (label, dfa.index(next)))
def simplify_dfa(self, dfa):
# This is not theoretically optimal, but works well enough.
# Algorithm: repeatedly look for two states that have the same
# set of arcs (same labels pointing to the same nodes) and
# unify them, until things stop changing.
# dfa is a list of DFAState instances
changes = True
while changes:
changes = False
for i, state_i in enumerate(dfa):
for j in range(i + 1, len(dfa)):
state_j = dfa[j]
if state_i == state_j:
#print " unify", i, j
del dfa[j]
for state in dfa:
state.unifystate(state_j, state_i)
changes = True
break
def parse_rhs(self):
# RHS: ALT ('|' ALT)*
a, z = self.parse_alt()
if self.value != "|":
return a, z
else:
aa = NFAState()
zz = NFAState()
aa.addarc(a)
z.addarc(zz)
while self.value == "|":
self.gettoken()
a, z = self.parse_alt()
aa.addarc(a)
z.addarc(zz)
return aa, zz
def parse_alt(self):
# ALT: ITEM+
a, b = self.parse_item()
while (self.value in ("(", "[") or
self.type in (token.NAME, token.STRING)):
c, d = self.parse_item()
b.addarc(c)
b = d
return a, b
def parse_item(self):
# ITEM: '[' RHS ']' | ATOM ['+' | '*']
if self.value == "[":
self.gettoken()
a, z = self.parse_rhs()
self.expect(token.OP, "]")
a.addarc(z)
return a, z
else:
a, z = self.parse_atom()
value = self.value
if value not in ("+", "*"):
return a, z
self.gettoken()
z.addarc(a)
if value == "+":
return a, z
else:
return a, a
def parse_atom(self):
# ATOM: '(' RHS ')' | NAME | STRING
if self.value == "(":
self.gettoken()
a, z = self.parse_rhs()
self.expect(token.OP, ")")
return a, z
elif self.type in (token.NAME, token.STRING):
a = NFAState()
z = NFAState()
a.addarc(z, self.value)
self.gettoken()
return a, z
else:
self.raise_error("expected (...) or NAME or STRING, got %s/%s",
self.type, self.value)
def expect(self, type, value=None):
if self.type != type or (value is not None and self.value != value):
self.raise_error("expected %s/%s, got %s/%s",
type, value, self.type, self.value)
value = self.value
self.gettoken()
return value
def gettoken(self):
tup = next(self.generator)
while tup[0] in (token.COMMENT, token.NL):
tup = next(self.generator)
self.type, self.value, self.begin, prefix = tup
#print tokenize.tok_name[self.type], repr(self.value)
def raise_error(self, msg, *args):
if args:
try:
msg = msg % args
except:
msg = " ".join([msg] + list(map(str, args)))
line = open(self.filename).readlines()[self.begin[0]]
raise SyntaxError(msg, (self.filename, self.begin[0],
self.begin[1], line))
class NFAState(object):
def __init__(self):
self.arcs = [] # list of (label, NFAState) pairs
def addarc(self, next, label=None):
assert label is None or isinstance(label, str)
assert isinstance(next, NFAState)
self.arcs.append((label, next))
class DFAState(object):
def __init__(self, nfaset, final):
assert isinstance(nfaset, dict)
assert isinstance(next(iter(nfaset)), NFAState)
assert isinstance(final, NFAState)
self.nfaset = nfaset
self.isfinal = final in nfaset
self.arcs = {} # map from label to DFAState
def addarc(self, next, label):
assert isinstance(label, str)
assert label not in self.arcs
assert isinstance(next, DFAState)
self.arcs[label] = next
def unifystate(self, old, new):
for label, next in self.arcs.items():
if next is old:
self.arcs[label] = new
def __eq__(self, other):
# Equality test -- ignore the nfaset instance variable
assert isinstance(other, DFAState)
if self.isfinal != other.isfinal:
return False
# Can't just return self.arcs == other.arcs, because that
# would invoke this method recursively, with cycles...
if len(self.arcs) != len(other.arcs):
return False
for label, next in self.arcs.items():
if next is not other.arcs.get(label):
return False
return True
__hash__ = None # For Py3 compatibility.
def generate_grammar(filename="Grammar.txt"):
p = ParserGenerator(filename)
return p.make_grammar()
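In short, ParserGenerator parses the EBNF into NFAs (parse_rhs/parse_alt/parse_item), converts them to DFAs (make_dfa), simplifies them, and emits the Grammar tables. Typical use, assuming a Grammar.txt like the files above is on disk:

grammar = generate_grammar("Grammar.txt")
print(len(grammar.dfas), "rules compiled to DFAs")
grammar.dump("grammar.pickle")   # cache the tables; Grammar.load() restores them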

View File

@@ -1,90 +0,0 @@
from __future__ import absolute_import
from jedi._compatibility import is_py3, is_py35
from token import *
COMMENT = N_TOKENS
tok_name[COMMENT] = 'COMMENT'
N_TOKENS += 1
NL = N_TOKENS
tok_name[NL] = 'NL'
N_TOKENS += 1
if is_py3:
BACKQUOTE = N_TOKENS
tok_name[BACKQUOTE] = 'BACKQUOTE'
N_TOKENS += 1
else:
RARROW = N_TOKENS
tok_name[RARROW] = 'RARROW'
N_TOKENS += 1
ELLIPSIS = N_TOKENS
tok_name[ELLIPSIS] = 'ELLIPSIS'
N_TOKENS += 1
if not is_py35:
ATEQUAL = N_TOKENS
tok_name[ATEQUAL] = 'ATEQUAL'
N_TOKENS += 1
# Map from operator to number (since tokenize doesn't do this)
opmap_raw = """\
( LPAR
) RPAR
[ LSQB
] RSQB
: COLON
, COMMA
; SEMI
+ PLUS
- MINUS
* STAR
/ SLASH
| VBAR
& AMPER
< LESS
> GREATER
= EQUAL
. DOT
% PERCENT
` BACKQUOTE
{ LBRACE
} RBRACE
@ AT
== EQEQUAL
!= NOTEQUAL
<> NOTEQUAL
<= LESSEQUAL
>= GREATEREQUAL
~ TILDE
^ CIRCUMFLEX
<< LEFTSHIFT
>> RIGHTSHIFT
** DOUBLESTAR
+= PLUSEQUAL
-= MINEQUAL
*= STAREQUAL
/= SLASHEQUAL
%= PERCENTEQUAL
&= AMPEREQUAL
|= VBAREQUAL
@= ATEQUAL
^= CIRCUMFLEXEQUAL
<<= LEFTSHIFTEQUAL
>>= RIGHTSHIFTEQUAL
**= DOUBLESTAREQUAL
// DOUBLESLASH
//= DOUBLESLASHEQUAL
-> RARROW
... ELLIPSIS
"""
opmap = {}
for line in opmap_raw.splitlines():
op, name = line.split()
opmap[op] = globals()[name]
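After this loop ``opmap`` resolves operator strings to their exact token numbers, which is what exact_type in the tokenizer relies on. For example:

assert opmap['=='] == EQEQUAL
assert opmap['//='] == DOUBLESLASHEQUAL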

View File

@@ -1,362 +0,0 @@
# -*- coding: utf-8 -*-
"""
This tokenizer has been copied from the ``tokenize.py`` standard library
tokenizer. The reason was simple: The standard library tokenizer fails
if the indentation is not right. The fast parser of jedi however requires
"wrong" indentation.
Basically this is a stripped down version of the standard library module, so
you can read the documentation there. Additionally we included some speed and
memory optimizations here.
"""
from __future__ import absolute_import
import string
import re
from collections import namedtuple
from io import StringIO
import itertools as _itertools
from jedi.parser.token import (tok_name, N_TOKENS, ENDMARKER, STRING, NUMBER, opmap,
NAME, OP, ERRORTOKEN, NEWLINE, INDENT, DEDENT)
from jedi._compatibility import is_py3, py_version
from jedi.common import splitlines
cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)")
if is_py3:
# Python 3 has str.isidentifier() to check if a char is a valid identifier
is_identifier = str.isidentifier
else:
namechars = string.ascii_letters + '_'
is_identifier = lambda s: s in namechars
COMMENT = N_TOKENS
tok_name[COMMENT] = 'COMMENT'
def group(*choices, **kwargs):
capture = kwargs.pop('capture', False) # Python 2, arrghhhhh :(
assert not kwargs
start = '('
if not capture:
start += '?:'
return start + '|'.join(choices) + ')'
def any(*choices):
return group(*choices) + '*'
def maybe(*choices):
return group(*choices) + '?'
# Note: we use unicode matching for names ("\w") but ascii matching for
# number literals.
Whitespace = r'[ \f\t]*'
Comment = r'#[^\r\n]*'
Name = r'\w+'
if py_version >= 36:
Hexnumber = r'0[xX](?:_?[0-9a-fA-F])+'
Binnumber = r'0[bB](?:_?[01])+'
Octnumber = r'0[oO](?:_?[0-7])+'
Decnumber = r'(?:0(?:_?0)*|[1-9](?:_?[0-9])*)'
Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?[0-9](?:_?[0-9])*'
Pointfloat = group(r'[0-9](?:_?[0-9])*\.(?:[0-9](?:_?[0-9])*)?',
r'\.[0-9](?:_?[0-9])*') + maybe(Exponent)
Expfloat = r'[0-9](?:_?[0-9])*' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'[0-9](?:_?[0-9])*[jJ]', Floatnumber + r'[jJ]')
else:
Hexnumber = r'0[xX][0-9a-fA-F]+'
Binnumber = r'0[bB][01]+'
if is_py3:
Octnumber = r'0[oO][0-7]+'
else:
Octnumber = '0[0-7]+'
Decnumber = r'(?:0+|[1-9][0-9]*)'
Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?[0-9]+'
Pointfloat = group(r'[0-9]+\.[0-9]*', r'\.[0-9]+') + maybe(Exponent)
Expfloat = r'[0-9]+' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'[0-9]+[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)
# Return the empty string, plus all of the valid string prefixes.
def _all_string_prefixes():
# The valid string prefixes. Only contain the lower case versions,
# and don't contain any permutations (include 'fr', but not
# 'rf'). The various permutations will be generated.
_valid_string_prefixes = ['b', 'r', 'u', 'br']
if py_version >= 36:
_valid_string_prefixes += ['f', 'fr']
if py_version <= 27:
# TODO this is actually not 100% valid. ur is valid in Python 2.7,
# while ru is not.
_valid_string_prefixes.append('ur')
# if we add binary f-strings, add: ['fb', 'fbr']
result = set([''])
for prefix in _valid_string_prefixes:
for t in _itertools.permutations(prefix):
# create a list with upper and lower versions of each
# character
for u in _itertools.product(*[(c, c.upper()) for c in t]):
result.add(''.join(u))
return result
def _compile(expr):
return re.compile(expr, re.UNICODE)
# Note that since _all_string_prefixes includes the empty string,
# StringPrefix can be the empty string (making it optional).
StringPrefix = group(*_all_string_prefixes())
# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Triple = group(StringPrefix + "'''", StringPrefix + '"""')
# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"!=",
r"//=?", r"->",
r"[+\-*/%&@|^=<>]=?",
r"~")
Bracket = '[][(){}]'
Special = group(r'\r?\n', r'\.\.\.', r'[:;.,@]')
Funny = group(Operator, Bracket, Special)
PlainToken = group(Number, Funny, Name, capture=True)
# First (or only) line of ' or " string.
ContStr = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
group("'", r'\\\r?\n'),
StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n|\Z', Comment, Triple)
PseudoToken = group(Whitespace, capture=True) + \
group(PseudoExtras, Number, Funny, ContStr, Name, capture=True)
# For a given string prefix plus quotes, endpats maps it to a regex
# to match the remainder of that string. _prefix can be empty, for
# a normal single or triple quoted string (with no prefix).
endpats = {}
for _prefix in _all_string_prefixes():
endpats[_prefix + "'"] = _compile(Single)
endpats[_prefix + '"'] = _compile(Double)
endpats[_prefix + "'''"] = _compile(Single3)
endpats[_prefix + '"""'] = _compile(Double3)
# A set of all of the single and triple quoted string prefixes,
# including the opening quotes.
single_quoted = set()
triple_quoted = set()
for t in _all_string_prefixes():
for u in (t + '"', t + "'"):
single_quoted.add(u)
for u in (t + '"""', t + "'''"):
triple_quoted.add(u)
# TODO add with?
ALWAYS_BREAK_TOKENS = (';', 'import', 'class', 'def', 'try', 'except',
'finally', 'while', 'return')
pseudo_token_compiled = _compile(PseudoToken)
class TokenInfo(namedtuple('Token', ['type', 'string', 'start_pos', 'prefix'])):
def __repr__(self):
annotated_type = tok_name[self.type]
return ('TokenInfo(type=%s, string=%r, start=%r, prefix=%r)' %
self._replace(type=annotated_type))
@property
def exact_type(self):
if self.type == OP and self.string in opmap:
return opmap[self.string]
else:
return self.type
@property
def end_pos(self):
lines = splitlines(self.string)
if len(lines) > 1:
return self.start_pos[0] + len(lines) - 1, 0
else:
return self.start_pos[0], self.start_pos[1] + len(self.string)
def source_tokens(source, use_exact_op_types=False):
"""Generate tokens from the source code (string)."""
readline = StringIO(source).readline
return generate_tokens(readline, use_exact_op_types)
def generate_tokens(readline, use_exact_op_types=False):
"""
A heavily modified Python standard library tokenizer.
Additionally to the default information, yields also the prefix of each
token. This idea comes from lib2to3. The prefix contains all information
that is irrelevant for the parser like newlines in parentheses or comments.
"""
paren_level = 0 # count parentheses
indents = [0]
lnum = 0
max = 0
numchars = '0123456789'
contstr = ''
contline = None
# We start with a newline. This makes indent at the first position
# possible. It's not valid Python, but still better than an INDENT in the
# second line (and not in the first). This makes quite a few things in
# Jedi's fast parser possible.
new_line = True
prefix = '' # Should never be required, but here for safety
additional_prefix = ''
while True: # loop over lines in stream
line = readline() # readline returns empty when finished. See StringIO
if not line:
if contstr:
yield TokenInfo(ERRORTOKEN, contstr, contstr_start, prefix)
break
lnum += 1
pos, max = 0, len(line)
if contstr: # continued string
endmatch = endprog.match(line)
if endmatch:
pos = endmatch.end(0)
yield TokenInfo(STRING, contstr + line[:pos], contstr_start, prefix)
contstr = ''
contline = None
else:
contstr = contstr + line
contline = contline + line
continue
while pos < max:
pseudomatch = pseudo_token_compiled.match(line, pos)
if not pseudomatch: # scan for tokens
txt = line[pos]
if line[pos] in '"\'':
# If a literal starts but doesn't end the whole rest of the
# line is an error token.
txt = line[pos:]
if txt.endswith('\n'):
new_line = True
yield TokenInfo(ERRORTOKEN, txt, (lnum, pos), prefix)
break
prefix = additional_prefix + pseudomatch.group(1)
additional_prefix = ''
start, pos = pseudomatch.span(2)
spos = (lnum, start)
token, initial = line[start:pos], line[start]
if new_line and initial not in '\r\n#':
new_line = False
if paren_level == 0:
i = 0
while line[i] == '\f':
i += 1
start -= 1
if start > indents[-1]:
yield TokenInfo(INDENT, '', spos, '')
indents.append(start)
while start < indents[-1]:
yield TokenInfo(DEDENT, '', spos, '')
indents.pop()
if (initial in numchars or # ordinary number
(initial == '.' and token != '.' and token != '...')):
yield TokenInfo(NUMBER, token, spos, prefix)
elif initial in '\r\n':
if not new_line and paren_level == 0:
yield TokenInfo(NEWLINE, token, spos, prefix)
else:
additional_prefix = prefix + token
new_line = True
elif initial == '#': # Comments
assert not token.endswith("\n")
additional_prefix = prefix + token
elif token in triple_quoted:
endprog = endpats[token]
endmatch = endprog.match(line, pos)
if endmatch: # all on one line
pos = endmatch.end(0)
token = line[start:pos]
yield TokenInfo(STRING, token, spos, prefix)
else:
contstr_start = (lnum, start) # multiple lines
contstr = line[start:]
contline = line
break
elif initial in single_quoted or \
token[:2] in single_quoted or \
token[:3] in single_quoted:
if token[-1] == '\n': # continued string
contstr_start = lnum, start
endprog = (endpats.get(initial) or endpats.get(token[1])
or endpats.get(token[2]))
contstr = line[start:]
contline = line
break
else: # ordinary string
yield TokenInfo(STRING, token, spos, prefix)
elif is_identifier(initial): # ordinary name
if token in ALWAYS_BREAK_TOKENS:
paren_level = 0
while True:
indent = indents.pop()
if indent > start:
yield TokenInfo(DEDENT, '', spos, '')
else:
indents.append(indent)
break
yield TokenInfo(NAME, token, spos, prefix)
elif initial == '\\' and line[start:] in ('\\\n', '\\\r\n'): # continued stmt
additional_prefix += prefix + line[start:]
break
else:
if token in '([{':
paren_level += 1
elif token in ')]}':
paren_level -= 1
try:
# This check is needed in any case to check if it's a valid
# operator or just some random unicode character.
exact_type = opmap[token]
except KeyError:
exact_type = typ = ERRORTOKEN
else:
if use_exact_op_types:
typ = exact_type
else:
typ = OP
yield TokenInfo(typ, token, spos, prefix)
if new_line or additional_prefix[-1:] == '\n':
end_pos = lnum + 1, 0
else:
end_pos = lnum, max
# As the last position we just take the maximally possible position. We
# remove -1 for the last new line.
for indent in indents[1:]:
yield TokenInfo(DEDENT, '', end_pos, '')
yield TokenInfo(ENDMARKER, '', end_pos, additional_prefix)
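A short sketch of the generator in action; note that keywords come out as plain NAME tokens (the pgen grammar maps them to keyword labels later):

for tok in source_tokens("if x:\n    pass\n"):
    print(tok)
# NAME 'if', NAME 'x', OP ':', NEWLINE, INDENT, NAME 'pass',
# NEWLINE, DEDENT, ENDMARKER -- each carrying its whitespace prefix.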

File diff suppressed because it is too large

View File

@@ -1,197 +0,0 @@
import inspect
import time
import os
import sys
import json
import hashlib
import gc
import shutil
import pickle
from jedi import settings
from jedi import debug
def underscore_memoization(func):
"""
Decorator for methods::
class A(object):
def x(self):
if self._x:
self._x = 10
return self._x
Becomes::
class A(object):
@underscore_memoization
def x(self):
return 10
A now has an attribute ``_x`` written by this decorator.
"""
name = '_' + func.__name__
def wrapper(self):
try:
return getattr(self, name)
except AttributeError:
result = func(self)
if inspect.isgenerator(result):
result = list(result)
setattr(self, name, result)
return result
return wrapper
# for fast_parser, should not be deleted
parser_cache = {}
class ParserCacheItem(object):
def __init__(self, parser, change_time=None):
self.parser = parser
if change_time is None:
change_time = time.time()
self.change_time = change_time
def load_parser(path):
"""
Returns the parser or None if the cache lookup fails.
"""
p_time = os.path.getmtime(path) if path else None
try:
parser_cache_item = parser_cache[path]
if not path or p_time <= parser_cache_item.change_time:
return parser_cache_item.parser
except KeyError:
if settings.use_filesystem_cache:
return ParserPickling.load_parser(path, p_time)
def save_parser(path, parser, pickling=True):
try:
p_time = None if path is None else os.path.getmtime(path)
except OSError:
p_time = None
pickling = False
item = ParserCacheItem(parser, p_time)
parser_cache[path] = item
if settings.use_filesystem_cache and pickling:
ParserPickling.save_parser(path, item)
class ParserPickling(object):
version = 27
"""
Version number (integer) for file system cache.
Increment this number when there are any incompatible changes in
parser representation classes. For example, the following changes
are regarded as incompatible.
- Class name is changed.
- Class is moved to another module.
- Defined slot of the class is changed.
"""
def __init__(self):
self.__index = None
self.py_tag = 'cpython-%s%s' % sys.version_info[:2]
"""
Short name to distinguish Python implementations and versions.
It's like `sys.implementation.cache_tag` but for Python < 3.3
we generate something similar. See:
http://docs.python.org/3/library/sys.html#sys.implementation
.. todo:: Detect interpreter (e.g., PyPy).
"""
def load_parser(self, path, original_changed_time):
try:
pickle_changed_time = self._index[path]
except KeyError:
return None
if original_changed_time is not None \
and pickle_changed_time < original_changed_time:
# the pickle file is outdated
return None
with open(self._get_hashed_path(path), 'rb') as f:
try:
gc.disable()
parser_cache_item = pickle.load(f)
finally:
gc.enable()
debug.dbg('pickle loaded: %s', path)
parser_cache[path] = parser_cache_item
return parser_cache_item.parser
def save_parser(self, path, parser_cache_item):
self.__index = None
try:
files = self._index
except KeyError:
files = {}
self._index = files
with open(self._get_hashed_path(path), 'wb') as f:
pickle.dump(parser_cache_item, f, pickle.HIGHEST_PROTOCOL)
files[path] = parser_cache_item.change_time
self._flush_index()
@property
def _index(self):
if self.__index is None:
try:
with open(self._get_path('index.json')) as f:
data = json.load(f)
except (IOError, ValueError):
self.__index = {}
else:
# 0 means version is not defined (= always delete cache):
if data.get('version', 0) != self.version:
self.clear_cache()
else:
self.__index = data['index']
return self.__index
def _remove_old_modules(self):
# TODO use
change = False
if change:
self._flush_index(self)
self._index # reload index
def _flush_index(self):
data = {'version': self.version, 'index': self._index}
with open(self._get_path('index.json'), 'w') as f:
json.dump(data, f)
self.__index = None
def clear_cache(self):
shutil.rmtree(self._cache_directory())
self.__index = {}
def _get_hashed_path(self, path):
return self._get_path('%s.pkl' % hashlib.md5(path.encode("utf-8")).hexdigest())
def _get_path(self, file):
dir = self._cache_directory()
if not os.path.exists(dir):
os.makedirs(dir)
return os.path.join(dir, file)
def _cache_directory(self):
return os.path.join(settings.cache_directory, self.py_tag)
# is a singleton
ParserPickling = ParserPickling()
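A small usage sketch of the ``underscore_memoization`` decorator defined at the top of this file (the class name is illustrative):

class Position(object):
    @underscore_memoization
    def line(self):
        print("computed once")
        return 10

p = Position()
p.line()   # prints "computed once" and stores 10 in p._line
p.line()   # answered from the cached _line attribute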

241
jedi/parser_utils.py Normal file
View File

@@ -0,0 +1,241 @@
import textwrap
from inspect import cleandoc
from jedi._compatibility import literal_eval, is_py3
from parso.python import tree
_EXECUTE_NODES = set([
'funcdef', 'classdef', 'import_from', 'import_name', 'test', 'or_test',
'and_test', 'not_test', 'comparison', 'expr', 'xor_expr', 'and_expr',
'shift_expr', 'arith_expr', 'atom_expr', 'term', 'factor', 'power', 'atom'
])
_FLOW_KEYWORDS = (
'try', 'except', 'finally', 'else', 'if', 'elif', 'with', 'for', 'while'
)
def get_executable_nodes(node, last_added=False):
"""
For static analysis.
"""
result = []
typ = node.type
if typ == 'name':
next_leaf = node.get_next_leaf()
if last_added is False and node.parent.type != 'param' and next_leaf != '=':
result.append(node)
elif typ == 'expr_stmt':
# I think evaluating the statement (and possibly returned arrays)
# should be enough for static analysis.
result.append(node)
for child in node.children:
result += get_executable_nodes(child, last_added=True)
elif typ == 'decorator':
# decorator
if node.children[-2] == ')':
node = node.children[-3]
if node != '(':
result += get_executable_nodes(node)
else:
try:
children = node.children
except AttributeError:
pass
else:
if node.type in _EXECUTE_NODES and not last_added:
result.append(node)
for child in children:
result += get_executable_nodes(child, last_added)
return result
def get_comp_fors(comp_for):
yield comp_for
last = comp_for.children[-1]
while True:
if last.type == 'comp_for':
yield last
elif last.type != 'comp_if':
break
last = last.children[-1]
def for_stmt_defines_one_name(for_stmt):
"""
Returns True if only one name is returned: ``for x in y``.
Returns False if the for loop is more complicated: ``for x, z in y``.
:returns: bool
"""
return for_stmt.children[1].type == 'name'
def get_flow_branch_keyword(flow_node, node):
start_pos = node.start_pos
if not (flow_node.start_pos < start_pos <= flow_node.end_pos):
raise ValueError('The node is not part of the flow.')
keyword = None
for i, child in enumerate(flow_node.children):
if start_pos < child.start_pos:
return keyword
first_leaf = child.get_first_leaf()
if first_leaf in _FLOW_KEYWORDS:
keyword = first_leaf
return 0
def get_statement_of_position(node, pos):
for c in node.children:
if c.start_pos <= pos <= c.end_pos:
if c.type not in ('decorated', 'simple_stmt', 'suite') \
and not isinstance(c, (tree.Flow, tree.ClassOrFunc)):
return c
else:
try:
return get_statement_of_position(c, pos)
except AttributeError:
pass # Must be a non-scope
return None
def clean_scope_docstring(scope_node):
""" Returns a cleaned version of the docstring token. """
node = scope_node.get_doc_node()
if node is not None:
# TODO We have to check next leaves until there are no new
# leaves anymore that might be part of the docstring. A
# docstring can also look like this: ``'foo' 'bar'``
# Returns a literal cleaned version of the ``Token``.
cleaned = cleandoc(safe_literal_eval(node.value))
# Since we want the docstr output to be always unicode, just
# force it.
if is_py3 or isinstance(cleaned, unicode):
return cleaned
else:
return unicode(cleaned, 'UTF-8', 'replace')
return ''
def safe_literal_eval(value):
first_two = value[:2].lower()
if first_two[0] == 'f' or first_two in ('fr', 'rf'):
# literal_eval is not able to resolve f literals. We have to do that
# manually, but that is currently not implemented.
return ''
try:
return literal_eval(value)
except SyntaxError:
# It's possible to create syntax errors with literals like rb'' in
# Python 2, because before Python 3.3 there was a stricter definition
# of the order in which string literal prefixes could appear. In that
# case just return an empty string.
return ''
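# Example (illustrative):
#
#     safe_literal_eval('"""foo"""')  # -> 'foo'
#     safe_literal_eval("f'{x}'")     # -> '' (f-literals are skipped)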
def get_call_signature(funcdef, width=72, call_string=None):
"""
Generate the call signature of this function.
:param width: Fold lines if a line is longer than this value.
:type width: int
:param call_string: Override the function name when given.
:type call_string: str
:rtype: str
"""
# Lambdas have no name.
if call_string is None:
if funcdef.type == 'lambdef':
call_string = '<lambda>'
else:
call_string = funcdef.name.value
if funcdef.type == 'lambdef':
p = '(' + ''.join(param.get_code() for param in funcdef.get_params()).strip() + ')'
else:
p = funcdef.children[2].get_code()
code = call_string + p
return '\n'.join(textwrap.wrap(code, width))
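# Example (illustrative):
#
#     import parso
#     funcdef = parso.parse('def foo(a, b=3): pass').children[0]
#     assert get_call_signature(funcdef) == 'foo(a, b=3)'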
def get_doc_with_call_signature(scope_node):
"""
Return the docstring with the call signature prepended.
"""
call_signature = None
if scope_node.type == 'classdef':
for funcdef in scope_node.iter_funcdefs():
if funcdef.name.value == '__init__':
call_signature = \
get_call_signature(funcdef, call_string=scope_node.name.value)
elif scope_node.type in ('funcdef', 'lambdef'):
call_signature = get_call_signature(scope_node)
doc = clean_scope_docstring(scope_node)
if call_signature is None:
return doc
return '%s\n\n%s' % (call_signature, doc)
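# Example (illustrative):
#
#     import parso
#     code = 'def foo(a):\n    """Frobnicate a."""\n'
#     funcdef = parso.parse(code).children[0]
#     assert get_doc_with_call_signature(funcdef) == 'foo(a)\n\nFrobnicate a.'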
def move(node, line_offset):
"""
Shift the start_pos of `node` (and all of its children) by `line_offset` lines.
"""
try:
children = node.children
except AttributeError:
node.line += line_offset
else:
for c in children:
move(c, line_offset)
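# Example (illustrative): shift a freshly parsed tree two lines down.
#
#     import parso
#     module = parso.parse('x = 1\n')
#     move(module, 2)
#     assert module.get_first_leaf().start_pos == (3, 0)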
def get_following_comment_same_line(node):
"""
Returns (as a string) any comment that appears on the same line after
the node, including the leading ``#``.
"""
try:
if node.type == 'for_stmt':
whitespace = node.children[5].get_first_leaf().prefix
elif node.type == 'with_stmt':
whitespace = node.children[3].get_first_leaf().prefix
else:
whitespace = node.get_last_leaf().get_next_leaf().prefix
except AttributeError:
return None
except ValueError:
# TODO in some particular cases, the tree doesn't seem to be linked
# correctly
return None
if "#" not in whitespace:
return None
comment = whitespace[whitespace.index("#"):]
if "\r" in comment:
comment = comment[:comment.index("\r")]
if "\n" in comment:
comment = comment[:comment.index("\n")]
return comment
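# Example (illustrative): the comment ends up in the prefix of the leaf
# that follows the statement.
#
#     import parso
#     stmt = parso.parse('x = 1  # note\n').children[0].children[0]
#     assert get_following_comment_same_line(stmt) == '# note'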
def is_scope(node):
return node.type in ('file_input', 'classdef', 'funcdef', 'lambdef', 'comp_for')
def get_parent_scope(node, include_flows=False):
"""
Returns the scope that contains ``node`` (a function, class, lambda,
comprehension or the module itself).
"""
scope = node.parent
while scope is not None:
if include_flows and isinstance(scope, tree.Flow):
return scope
if is_scope(scope):
break
scope = scope.parent
return scope
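# Example (illustrative):
#
#     import parso
#     module = parso.parse('def f():\n    x = 1\n')
#     x_leaf = module.get_leaf_for_position((2, 4))
#     assert get_parent_scope(x_leaf).type == 'funcdef'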

View File

@@ -15,8 +15,8 @@ following functions (sometimes bug-prone):
import difflib
from jedi import common
from parso import python_bytes_to_unicode, split_lines
from jedi.evaluate import helpers
from jedi.parser import tree as pt
class Refactoring(object):
@@ -29,7 +29,7 @@ class Refactoring(object):
def old_files(self):
dct = {}
for old_path, (new_path, old_l, new_l) in self.change_dct.items():
dct[new_path] = '\n'.join(new_l)
dct[old_path] = '\n'.join(old_l)
return dct
def new_files(self):
@@ -83,7 +83,7 @@ def _rename(names, replace_str):
with open(current_path) as f:
source = f.read()
new_lines = common.splitlines(common.source_to_unicode(source))
new_lines = split_lines(python_bytes_to_unicode(source))
old_lines = new_lines[:]
nr, indent = name.line, name.column
@@ -101,7 +101,7 @@ def extract(script, new_name):
:type source: str
:return: list of changed lines/changed files
"""
new_lines = common.splitlines(common.source_to_unicode(script.source))
new_lines = split_lines(python_bytes_to_unicode(script.source))
old_lines = new_lines[:]
user_stmt = script._parser.user_stmt()
@@ -160,7 +160,7 @@ def inline(script):
"""
:type script: api.Script
"""
new_lines = common.splitlines(common.source_to_unicode(script.source))
new_lines = split_lines(python_bytes_to_unicode(script.source))
dct = {}

View File

@@ -43,32 +43,9 @@ Dynamic stuff
.. autodata:: auto_import_modules
.. _settings-recursion:
Recursions
~~~~~~~~~~
Recursion settings are important if you don't want extremely
recursive Python code to make completion go crazy. First off, there is a
global limit, :data:`max_executions`. This limit is important because it
caps the maximum amount of time a completion may use.
The default values are based on experiments while completing the |jedi| library
itself (inception!); hardly any other Python library uses recursion in a
similarly extreme way. These settings definitely make the completion worse in
some cases, but a completion also has to be fast.
.. autodata:: max_until_execution_unique
.. autodata:: max_function_recursion_level
.. autodata:: max_executions_without_builtins
.. autodata:: max_executions
.. autodata:: scale_call_signatures
Caching
~~~~~~~
.. autodata:: star_import_cache_validity
.. autodata:: call_signatures_validity
@@ -116,7 +93,7 @@ else:
'jedi')
cache_directory = os.path.expanduser(_cache_directory)
"""
The path where all the caches can be found.
The path where the cache is stored.
On Linux, this defaults to ``~/.cache/jedi/``, on OS X to
``~/Library/Caches/Jedi/`` and on Windows to ``%APPDATA%\\Jedi\\Jedi\\``.
@@ -175,55 +152,10 @@ This improves autocompletion for libraries that use ``setattr`` or
``globals()`` modifications a lot.
"""
# ----------------
# recursions
# ----------------
max_until_execution_unique = 50
"""
This limit is probably the most important one, because if it is
exceeded, functions can only be executed once. New functions will still be
executed, while complex recursions that hit the same functions again and
again are ignored.
"""
max_function_recursion_level = 5
"""
`max_function_recursion_level` is more about whether recursions are
stopped in depth or in width. The ratio between this and
`max_until_execution_unique` is important here. It stops a recursion (after
this number of function calls within the recursion) if it was already used
earlier.
"""
max_executions_without_builtins = 200
"""
.. todo:: Document this.
"""
max_executions = 250
"""
An upper bound on the number of executions, which limits the maximum amount
of time a completion may use.
"""
scale_call_signatures = 0.1
"""
Because call_signatures is normally used on every single key hit, it has
to be faster than a normal completion. This is the factor used to scale
down `max_executions` and `max_until_execution_unique`.
"""
# ----------------
# caching validity (time)
# ----------------
star_import_cache_validity = 60.0
"""
In huge packages like numpy, checking all star imports on every completion
might be slow; therefore star imports are cached for a certain time span
(in seconds).
"""
call_signatures_validity = 3.0
"""
Finding function calls might be slow (0.1-0.5s). This is not acceptable for

View File

@@ -11,9 +11,10 @@ import re
import os
import sys
from parso import split_lines
from jedi import Interpreter
from jedi.api.helpers import get_on_completion_name
from jedi import common
READLINE_DEBUG = False
@@ -85,7 +86,7 @@ def setup_readline(namespace_module=__main__):
logging.debug("Start REPL completion: " + repr(text))
interpreter = Interpreter(text, [namespace_module.__dict__])
lines = common.splitlines(text)
lines = split_lines(text)
position = (len(lines), len(lines[-1]))
name = get_on_completion_name(
interpreter._get_module_node(),

View File

@@ -2,7 +2,7 @@
addopts = --doctest-modules
# Ignore broken files in blackbox test directories
norecursedirs = .* docs completion refactor absolute_import namespace_package scripts extensions speed static_analysis not_in_sys_path buildout_project sample_venvs init_extension_module
norecursedirs = .* docs completion refactor absolute_import namespace_package scripts extensions speed static_analysis not_in_sys_path buildout_project sample_venvs init_extension_module simple_import
# Activate `clean_jedi_cache` fixture for all tests. This should be
# fine as long as we are using `clean_jedi_cache` as a session scoped

requirements.txt Normal file
View File

@@ -0,0 +1 @@
parso==0.1.0

View File

@@ -9,15 +9,15 @@ Usage:
Options:
-h --help Show this screen.
-d --debug Enable Jedi internal debugging.
-s <sort> Sort the profile results, e.g. cum, name [default: time].
-s <sort> Sort the profile results, e.g. cumtime, name [default: time].
"""
import cProfile
from docopt import docopt
from jedi.parser import load_grammar
from jedi.parser.python import load_grammar
from jedi.parser.diff import DiffParser
from jedi.parser import ParserWithRecovery
from jedi.parser.python import ParserWithRecovery
from jedi._compatibility import u
from jedi.common import splitlines
import jedi
@@ -26,14 +26,20 @@ import jedi
def run(parser, lines):
diff_parser = DiffParser(parser)
diff_parser.update(lines)
# Make sure used_names is loaded
parser.module.used_names
def main(args):
jedi.set_debug_function(notices=args['--debug'])
if args['--debug']:
jedi.set_debug_function(notices=True)
with open(args['<file>']) as f:
code = f.read()
grammar = load_grammar()
parser = ParserWithRecovery(grammar, u(code))
# Make sure used_names is loaded
parser.module.used_names
code = code + '\na\n' # Add something so the diff parser needs to run.
lines = splitlines(code, keepends=True)

View File

@@ -45,5 +45,5 @@ def main(args):
if __name__ == '__main__':
args = docopt(__doc__)
if args['<code>'] is None:
args['<code>'] = 'import numpy; numpy.array([0])'
args['<code>'] = 'import numpy; numpy.array([0]).'
main(args)

setup.cfg Normal file
View File

@@ -0,0 +1,2 @@
[bdist_wheel]
universal=1

View File

@@ -1,26 +1,29 @@
#!/usr/bin/env python
from __future__ import with_statement
try:
from setuptools import setup
except ImportError:
# Distribute is not actually required to install
from distutils.core import setup
from setuptools import setup
import ast
__AUTHOR__ = 'David Halter'
__AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'
readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
packages = ['jedi', 'jedi.parser', 'jedi.parser.pgen2',
'jedi.evaluate', 'jedi.evaluate.compiled', 'jedi.api']
# Get the version from within jedi. It's defined in exactly one place now.
with open('jedi/__init__.py') as f:
tree = ast.parse(f.read())
version = tree.body[1].value.s
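# (This assumes ``__version__`` is assigned by the second top-level
# statement of jedi/__init__.py.)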
import jedi
readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
packages = ['jedi', 'jedi.evaluate', 'jedi.evaluate.compiled', 'jedi.api']
with open('requirements.txt') as f:
install_requires = f.read().splitlines()
setup(name='jedi',
version=jedi.__version__,
version=version,
description='An autocompletion tool for Python that can be used for text editors.',
author=__AUTHOR__,
author_email=__AUTHOR_EMAIL__,
include_package_data=True,
maintainer=__AUTHOR__,
maintainer_email=__AUTHOR_EMAIL__,
url='https://github.com/davidhalter/jedi',
@@ -28,7 +31,8 @@ setup(name='jedi',
keywords='python completion refactoring vim',
long_description=readme,
packages=packages,
package_data={'jedi': ['evaluate/compiled/fake/*.pym', 'parser/grammar*.txt']},
install_requires=install_requires,
package_data={'jedi': ['evaluate/compiled/fake/*.pym']},
platforms=['any'],
classifiers=[
'Development Status :: 4 - Beta',

View File

@@ -366,8 +366,6 @@ tuple(a)[1]
#? int() str()
tuple(list(set(a)))[1]
#? int()
tuple({1})[0]
#? int()
tuple((1,))[0]
@@ -405,6 +403,12 @@ def test_func():
x
# python >= 2.7
# Set literals are not valid in 2.6.
#? int()
tuple({1})[0]
# python >= 3.3
# -----------------
# PEP 3132 Extended Iterable Unpacking (star unpacking)
# -----------------

View File

@@ -136,24 +136,6 @@ ret(1)[0]
#? str() set()
ret()[0]
# -----------------
# with statements
# -----------------
with open('') as f:
#? ['closed']
f.closed
for line in f:
#? str()
line
with open('') as f1, open('') as f2:
#? ['closed']
f1.closed
#? ['closed']
f2.closed
# -----------------
# global vars
# -----------------
@@ -291,3 +273,23 @@ foo
__file__
#? ['__file__']
__file__
# -----------------
# with statements
# -----------------
with open('') as f:
#? ['closed']
f.closed
for line in f:
#? str()
line
# Nested with statements don't exist in Python 2.6.
# python >= 2.7
with open('') as f1, open('') as f2:
#? ['closed']
f1.closed
#? ['closed']
f2.closed

View File

@@ -493,3 +493,43 @@ B().a
B.b
#? int()
B().b
# -----------------
# With import
# -----------------
from import_tree.classes import Config2, BaseClass
class Config(BaseClass):
"""#884"""
#? Config2()
Config.mode
#? int()
Config.mode2
# -----------------
# Nested class/def/class
# -----------------
class Foo(object):
a = 3
def create_class(self):
class X():
a = self.a
self.b = 3.0
return X
#? int()
Foo().create_class().a
#? float()
Foo().b
class Foo(object):
def comprehension_definition(self):
return [1 for self.b in [1]]
#? int()
Foo().b

View File

@@ -24,3 +24,27 @@ class MyClass:
tuple,
):
return 1
if x:
pass
#? ['else']
else
try:
pass
#? ['except', 'Exception']
except
try:
pass
#? 6 ['except', 'Exception']
except AttributeError:
pass
#? ['finally']
finally
for x in y:
pass
#? ['else']
else

View File

@@ -61,10 +61,6 @@ listen(['' for x in [1]])
#?
([str for x in []])[0]
# with a set literal
#? int()
[a for a in {1, 2, 3}][0]
# -----------------
# nested list comprehensions
# -----------------
@@ -133,6 +129,46 @@ left
#? int()
right
# -----------------
# name resolution in comprehensions.
# -----------------
def x():
"""Should not try to resolve to the if hio, which was a bug."""
#? 22
[a for a in h if hio]
if hio: pass
# -----------------
# slices
# -----------------
#? list()
foo = [x for x in [1, '']][:1]
#? int()
foo[0]
#? str()
foo[1]
# -----------------
# In class
# -----------------
class X():
def __init__(self, bar):
self.bar = bar
def foo(self):
x = [a for a in self.bar][0]
#? int()
x
return x
#? int()
X([1]).foo()
# set/dict comprehensions were introduced in 2.7, therefore:
# python >= 2.7
# -----------------
# dict comprehensions
# -----------------
@@ -174,40 +210,6 @@ d[2]
next(iter({a for a in range(10)}))
# -----------------
# name resolution in comprehensions.
# -----------------
def x():
"""Should not try to resolve to the if hio, which was a bug."""
#? 22
[a for a in h if hio]
if hio: pass
# -----------------
# slices
# -----------------
#? list()
foo = [x for x in [1, '']][:1]
# with a set literal (also doesn't work in 2.6).
#? int()
foo[0]
#? str()
foo[1]
# -----------------
# In class
# -----------------
class X():
def __init__(self, bar):
self.bar = bar
def foo(self):
x = [a for a in self.bar][0]
#? int()
x
return x
#? int()
X([1]).foo()
[a for a in {1, 2, 3}][0]

View File

@@ -304,3 +304,15 @@ class A():
#? int()
A().ret()
# -----------------
# On decorator completions
# -----------------
import abc
#? ['abc']
@abc
#? ['abstractmethod']
@abc.abstractmethod

View File

@@ -49,6 +49,17 @@ def sphinxy2(a, b, x):
#?
sphinxy2()
def sphinxy_param_type_wrapped(a):
"""
:param str a:
Some description wrapped onto the next line with no space after the
colon.
"""
#? str()
a
# local classes -> github #370
class ProgramNode():
pass

View File

@@ -48,18 +48,6 @@ for a in arr:
#? float() str()
list(arr)[10]
# -----------------
# set.add
# -----------------
st = {1.0}
for a in [1,2]:
st.add(a)
st.append('') # lists should not have an influence
st.add # should not cause an exception
st.add()
# -----------------
# list.extend / set.update
# -----------------
@@ -103,15 +91,6 @@ arr2.append('')
arr2[0]
st = {1.0}
st.add(1)
lst = list(st)
lst.append('')
#? float() int() str()
lst[0]
lst = [1]
lst.append(1.0)
s = set(lst)
@@ -304,3 +283,28 @@ def third():
return list(b)
#?
third()[0]
# -----------------
# set.add
# -----------------
# Set literals are not valid in 2.6.
# python >= 2.7
st = {1.0}
for a in [1,2]:
st.add(a)
st.append('') # lists should not have an influence
st.add # should not cause an exception
st.add()
st = {1.0}
st.add(1)
lst = list(st)
lst.append('')
#? float() int() str()
lst[0]

View File

@@ -1,3 +1,9 @@
def x():
return
#? None
x()
def array(first_param):
#? ['first_param']
first_param

View File

@@ -179,6 +179,35 @@ gen().send()
#?
gen()()
# -----------------
# empty yield
# -----------------
def x():
yield
#? None
next(x())
#? gen()
x()
def x():
for i in range(3):
yield
#? None
next(x())
# -----------------
# yield in expression
# -----------------
def x():
a= [(yield 1)]
#? int()
next(x())
# -----------------
# yield from
# -----------------

View File

@@ -184,7 +184,7 @@ param = ClassDef
def ab1(param): pass
#! 9 ['param param']
def ab2(param): pass
#! 11 ['param a=param']
#! 11 ['param = ClassDef']
def ab3(a=param): pass
ab1(ClassDef);ab2(ClassDef);ab3(ClassDef)

View File

@@ -0,0 +1,10 @@
blub = 1
class Config2():
pass
class BaseClass():
mode = Config2()
if isinstance(whaat, int):
mode2 = whaat

View File

@@ -1 +1,3 @@
a = 1.0
from ..random import foobar

View File

@@ -2,3 +2,5 @@
Here because random is also a builtin module.
"""
a = set
foobar = 0

View File

@@ -49,7 +49,7 @@ def scope_nested():
#? float()
import_tree.pkg.mod1.a
#? ['a', '__name__', '__package__', '__file__', '__doc__']
#? ['a', 'foobar', '__name__', '__package__', '__file__', '__doc__']
a = import_tree.pkg.mod1.
import import_tree.random
@@ -283,3 +283,13 @@ def underscore():
# Does that also work for our own module?
#? ['__file__']
__file__
# -----------------
# complex relative imports #784
# -----------------
def relative():
#? ['foobar']
from import_tree.pkg.mod1 import foobar
#? int()
foobar

View File

@@ -65,6 +65,8 @@ class C():
def with_param(self):
return lambda x: x + self.a()
lambd = lambda self: self.foo
#? int()
C().a()
@@ -75,6 +77,11 @@ index = C().with_param()(1)
#? float()
['', 1, 1.0][index]
#? float()
C().lambd()
#? int()
C(1).lambd()
def xy(param):
def ret(a, b):

View File

@@ -43,6 +43,8 @@ from . import some_variable
from . import arrays
#? []
from . import import_tree as ren
#? []
import json as
import os
#? os.path.join
@@ -63,13 +65,13 @@ import datetime.date
#? 21 ['import']
from import_tree.pkg import pkg
#? 49 ['a', '__name__', '__doc__', '__file__', '__package__']
#? 49 ['a', 'foobar', '__name__', '__doc__', '__file__', '__package__']
from import_tree.pkg.mod1 import not_existant, # whitespace before
#? ['a', '__name__', '__doc__', '__file__', '__package__']
#? ['a', 'foobar', '__name__', '__doc__', '__file__', '__package__']
from import_tree.pkg.mod1 import not_existant,
#? 22 ['mod1']
from import_tree.pkg. import mod1
#? 17 ['mod1', 'mod2', 'random', 'pkg', 'rename1', 'rename2', 'recurse_class1', 'recurse_class2', 'invisible_pkg', 'flow_import']
#? 17 ['mod1', 'mod2', 'random', 'pkg', 'rename1', 'rename2', 'classes', 'recurse_class1', 'recurse_class2', 'invisible_pkg', 'flow_import']
from import_tree. import pkg
#? 18 ['pkg']

View File

@@ -158,3 +158,10 @@ Y = int
def just_because_we_can(x: "flo" + "at"):
#? float()
x
def keyword_only(a: str, *, b: str):
#? ['startswith']
a.startswi
#? ['startswith']
b.startswi

View File

@@ -65,6 +65,11 @@ class X(): pass
#? type
type(X)
with open('foo') as f:
for line in f.readlines():
#? str()
line
# -----------------
# enumerate
# -----------------
@@ -222,3 +227,23 @@ z = zipfile.ZipFile("foo")
# It's too slow. So we don't run it at the moment.
##? ['upper']
z.read('name').upper
# -----------------
# contextlib
# -----------------
import contextlib
with contextlib.closing('asd') as string:
#? str()
string
# -----------------
# shlex
# -----------------
# Github issue #929
import shlex
qsplit = shlex.split("foo, ferwerwerw werw werw e")
for part in qsplit:
#? str() None
part

View File

@@ -15,8 +15,8 @@ sys.path.append('a' +* '/thirdparty')
#? ['evaluate']
import evaluate
#? ['Evaluator']
evaluate.Evaluator
#? ['evaluator_function_cache']
evaluate.Evaluator_fu
#? ['jedi_']
import jedi_

View File

@@ -91,19 +91,6 @@ d.items()[0][0]
#? int()
d.items()[0][1]
# -----------------
# set
# -----------------
set_t = {1,2}
#? ['clear', 'copy']
set_t.c
set_t2 = set()
#? ['clear', 'copy']
set_t2.c
# -----------------
# tuples
# -----------------
@@ -125,3 +112,18 @@ tup3.index
tup4 = 1,""
#? ['index']
tup4.index
# -----------------
# set
# -----------------
# Set literals are not valid in 2.6.
# python >= 2.7
set_t = {1,2}
#? ['clear', 'copy']
set_t.c
set_t2 = set()
#? ['clear', 'copy']
set_t2.c

View File

@@ -83,17 +83,18 @@ import module_not_exists
module_not_exists
#< ('rename1', 1,0), (0,24), (3,0), (6,17), ('rename2', 4,5), (10,17), (13,17), ('imports', 72, 16)
#< ('rename1', 1,0), (0,24), (3,0), (6,17), ('rename2', 4,5), (11,17), (14,17), ('imports', 72, 16)
from import_tree import rename1
#< (0,8), ('rename1',3,0), ('rename2',4,20), ('rename2',6,0), (3,32), (7,32), (4,0)
#< (0,8), ('rename1',3,0), ('rename2',4,20), ('rename2',6,0), (3,32), (8,32), (5,0)
rename1.abc
#< (-3,8), ('rename1', 3,0), ('rename2', 4,20), ('rename2', 6,0), (0,32), (4,32), (1,0)
#< (-3,8), ('rename1', 3,0), ('rename2', 4,20), ('rename2', 6,0), (0,32), (5,32), (2,0)
from import_tree.rename1 import abc
#< (-5,8), (-2,32), ('rename1', 3,0), ('rename2', 4,20), ('rename2', 6,0), (0,0), (3,32)
abc
#< 20 ('rename1', 1,0), ('rename2', 4,5), (-10,24), (-7,0), (-4,17), (0,17), (3,17), ('imports', 72, 16)
#< 20 ('rename1', 1,0), ('rename2', 4,5), (-11,24), (-8,0), (-5,17), (0,17), (3,17), ('imports', 72, 16)
from import_tree.rename1 import abc
#< (0, 32),
@@ -294,6 +295,9 @@ x = 32
#< 7 (0,1), (0,7)
[x for x in something]
x = 3
# Not supported syntax in Python 2.6.
# python >= 2.7
#< 1 (0,1), (0,10)
{x:1 for x in something}
#< 10 (0,1), (0,10)

View File

@@ -9,7 +9,6 @@ from . import refactor
import jedi
from jedi.evaluate.analysis import Warning
from jedi import settings
def pytest_addoption(parser):
@@ -123,11 +122,6 @@ class StaticAnalysisCase(object):
@pytest.fixture()
def isolated_jedi_cache(monkeypatch, tmpdir):
"""
Set `jedi.settings.cache_directory` to a temporary directory during test.
Same as `clean_jedi_cache`, but create the temporary directory for
each test case (scope='function').
"""
monkeypatch.setattr(settings, 'cache_directory', str(tmpdir))
def cwd_tmpdir(monkeypatch, tmpdir):
with helpers.set_cwd(tmpdir.dirpath):
yield tmpdir

Some files were not shown because too many files have changed in this diff.